
1 Introduction

Neuropharmacology is an important field that studies the effects of drugs on the nervous system. To prove such effects, drugs are tested on animals before they can be given to humans. The most common test animals are rodents, specifically rats, because their biology is thoroughly understood and they are easy to breed and feed. There are mainly two ways to verify the effects on the nervous system of the test animal: an invasive one, in which the brain chemistry is examined through a surgical procedure that involves sacrificing the animal; and a non-invasive one, which uses behavioral animal models to observe the animal's behaviors and compare them before and after a drug is applied.

One of the most important tests is the Open Field Maze [16] (OFM, OFT). The test apparatus is a square box with a base and four walls, typically between 50 cm and 100 cm per side. A grid is painted on the base of the box to identify the zones where the rodent stays. A session can last around 15 min or up to a few hours. Usually, researchers place the animal in the maze, record a video of the whole session, and then watch the video to identify one or more behaviors, repeating this process as many times as necessary. This introduces a systematic error into the results, and the measures can vary depending on personal interpretation.

Given this problem, one solution is an automatic system that can detect and count the behaviors present during the open field test. The commercial systems that exist are too expensive, putting them out of reach of many who need them. For this reason, many approaches have been proposed to address this need. The next section reviews the work of recent years toward an automatic system for behavior detection and tracking.

2 Related Work

The task of analyzing a rodent's behavior when it is placed in a test like the open field maze has been approached in different ways since the early 80s, when computing capabilities were still weak; combinations of algorithms and electronics were the first attempts reported [6, 7].

More recent approaches can successfully track the rodent in the test [5, 8, 19, 21, 23], but they do not offer behavior detection, which limits the potential results of the open field maze.

There are special cases where controlled lighting conditions are necessary to produce a high contrast between the animal and the scenario so that the rodent can be identified [1, 13], but these conditions are not always possible for researchers in neuroscience to set up.

We found some cases where invasive techniques are used to track the rodent. In the works presented in [2, 4, 11, 14], a surgical implant is placed in the animal to identify and track it. This is not ideal, because the animal is exposed to unusual conditioning that could affect its behavior; moreover, invasive techniques compromise the animal's welfare.

In addition to identifying the rodent, it is important to detect some specific behaviors that commonly occur during the test. For this reason, some works try to identify behaviors using special devices (infrared cameras, touch panels, sensors) or more powerful computers (faster CPUs, GPUs) [8, 9, 12, 15, 22].

Other approaches use depth cameras to identify the rodent's position and also obtain its orientation; although they can analyze more than one rodent, they are not capable of identifying behaviors like spinning or freezing and only detect rearing [10, 18, 20].

The research developed in [8, 15] can successfully identify many behaviors of the rodent in the open field maze, but a high-cost computer is used to achieve these results, and such devices are not accessible to many researchers in the field.

Not all of these systems process the results in real time; the others achieve 30 fps but require a specific frame size, typically \(320\times 240\) pixels [1, 9, 17, 19, 23]. The best approaches that perform both real-time processing and behavior identification are limited to the open field maze [3, 20] and have not been tried in other mazes that could be of interest to researchers.

As we have seen, a system that can identify and track the rodent in real time, and that can also identify behaviors, is needed without the use of a high-cost computer. In this work, we propose a real-time system that can track the rodent efficiently and also detect some specific behaviors present during the execution of the open field test.

3 Methods

To achieve the goal of developing a system that can perform behavior detection and rodent tracking, we propose the methodology described in the next sections.

3.1 System Calibration

The position of the camera and the characteristics of the test box (color, illumination, position, etc.) are the initial problems to solve. The system does not require a specific arena color or a specific camera position; instead, a calibration process is implemented. When the test is ready, before the rat is put inside the box, the corners of the box must be marked manually by clicking on them in the image shown by the system. This calibration removes the need to adjust the camera position to match a specific area. Then, 5 s of camera recording of the scenario allows the system to learn the characteristics of the arena, without the need for a particular box with a special color or illumination.
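A minimal sketch of how this calibration step could look in C++ with OpenCV is shown below. It assumes a 30 fps camera (so 5 s is about 150 frames); the names `onMouse`, `bgMean` and `bgVar` are our own illustration, not the system's exact implementation.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Corners of the box, marked manually by clicking in the image;
// they define the arena for the later processing steps.
static std::vector<cv::Point> corners;

static void onMouse(int event, int x, int y, int, void*) {
    if (event == cv::EVENT_LBUTTONDOWN && corners.size() < 4)
        corners.emplace_back(x, y);  // one click per box corner
}

int main() {
    cv::VideoCapture cap(0);
    cv::namedWindow("calibration");
    cv::setMouseCallback("calibration", onMouse);

    cv::Mat frame, gray;
    // Wait until the four corners of the box have been marked.
    while (corners.size() < 4 && cap.read(frame)) {
        cv::imshow("calibration", frame);
        cv::waitKey(30);
    }

    // Accumulate ~5 s of frames (assumed 30 fps -> 150 frames) to learn
    // the per-pixel background mean and variance of the empty arena.
    cv::Mat sum, sqSum;
    int n = 0;
    for (; n < 150 && cap.read(frame); ++n) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        gray.convertTo(gray, CV_64F);
        if (sum.empty()) {
            sum = cv::Mat::zeros(gray.size(), CV_64F);
            sqSum = sum.clone();
        }
        sum += gray;
        sqSum += gray.mul(gray);
    }
    cv::Mat bgMean = sum / n;                         // per-pixel mean
    cv::Mat bgVar  = sqSum / n - bgMean.mul(bgMean);  // per-pixel variance
    return 0;
}
```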

3.2 Rodent Segmentation and Tracking

Before we can track the rodent, an observable parameter is needed for the Extended Kalman Filter (EKF); the parameter used is the centroid of the rat (ratCentroid). Based on the per-pixel background mean (bgMean) learned during calibration, each pixel of the current frame is analyzed by computing its deviation from the background model and checking whether it falls within about four standard deviations of the background Gaussian, classifying each pixel as background or non-background.

Using Eq. 1, we calculate the centroid of the rat from the segmentation, where the sums run over the segmented (non-background) pixels.

$$\begin{aligned} ratCentroid = \left( \frac{\sum {x}}{totalPixels}, \frac{\sum {y}}{totalPixels} \right) \end{aligned}$$
(1)
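Under our reading of this rule, a sketch of the segmentation and centroid computation could look as follows; `bgMean` and `bgStd` (the square root of the calibration variance) are the per-pixel statistics learned during calibration, and the function name is illustrative.

```cpp
#include <opencv2/opencv.hpp>

// Classify each pixel as background/non-background (within ~4 standard
// deviations of the background model) and compute Eq. 1 on the result.
cv::Point2d segmentAndCentroid(const cv::Mat& grayF,   // CV_64F frame
                               const cv::Mat& bgMean,  // CV_64F mean
                               const cv::Mat& bgStd,   // CV_64F std. dev.
                               cv::Mat& mask)          // CV_8U output
{
    cv::Mat diff;
    cv::absdiff(grayF, bgMean, diff);
    mask = diff > 4.0 * bgStd;  // non-background (rat) pixels

    // Eq. 1: the centroid is the mean coordinate of the segmented pixels.
    double sx = 0, sy = 0;
    int totalPixels = 0;
    for (int y = 0; y < mask.rows; ++y)
        for (int x = 0; x < mask.cols; ++x)
            if (mask.at<uchar>(y, x)) { sx += x; sy += y; ++totalPixels; }
    return totalPixels ? cv::Point2d(sx / totalPixels, sy / totalPixels)
                       : cv::Point2d(-1, -1);  // no rat found
}
```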

3.3 Feature Extraction

As mentioned in the previous section, the EKF is used to extract dynamic information from the rodent. We propose the dynamical model described in Eqs. 2-8; from this model we obtain the position (x, y), velocity (r), orientation (\(\theta \)), acceleration (\(\dot{r}\)) and angular velocity (\(\dot{\theta }\)).

$$\begin{aligned} X_k = (x_k, y_k, r_k, \theta _k, \dot{r}_k, \dot{\theta }_k) \end{aligned}$$
(2)
$$\begin{aligned} x_k = x_{k-1} + \varDelta t \cdot r_{k-1} \cdot \cos ( \theta _{k-1} ) + W_k \end{aligned}$$
(3)
$$\begin{aligned} y_k = y_{k-1} + \varDelta t \cdot r_{k-1} \cdot \sin ( \theta _{k-1} ) + W_k \end{aligned}$$
(4)
$$\begin{aligned} r_k = r_{k-1} + \varDelta t \cdot \dot{r}_{k-1} + W_k \end{aligned}$$
(5)
$$\begin{aligned} \theta _k = \theta _{k-1} + \varDelta t \cdot \dot{\theta }_{k-1} + W_k \end{aligned}$$
(6)
$$\begin{aligned} \dot{r}_k = \dot{r}_{k-1} + W_k \end{aligned}$$
(7)
$$\begin{aligned} \dot{\theta }_k = \dot{\theta }_{k-1} + W_k \end{aligned}$$
(8)

For the prediction step (Eq. 9) we use the model above, and the rodent centroid calculated from the segmentation is used as the observable parameter in the update step (Eq. 10).

$$\begin{aligned} X_k = f(X_{k-1}, U_k, W_k) \end{aligned}$$
(9)
$$\begin{aligned} Z_k = h(X_k, V_k) \end{aligned}$$
(10)
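A minimal sketch of the prediction step under our reconstruction of Eqs. 2-8 is given below; OpenCV's cv::KalmanFilter is linear only, so the nonlinear transition f is written by hand, and the covariance propagation through the Jacobian of f is omitted for brevity.

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// State vector of Eq. 2: (x, y, r, theta, rDot, thetaDot).
using State = cv::Vec<double, 6>;

// Prediction step f of Eq. 9, applying the dynamics of Eqs. 3-8
// (the process noise W_k is handled by the EKF covariance, not added here).
State predictState(const State& X, double dt) {
    State Xp;
    Xp[0] = X[0] + dt * X[2] * std::cos(X[3]);  // x        (Eq. 3)
    Xp[1] = X[1] + dt * X[2] * std::sin(X[3]);  // y        (Eq. 4)
    Xp[2] = X[2] + dt * X[4];                   // r        (Eq. 5)
    Xp[3] = X[3] + dt * X[5];                   // theta    (Eq. 6)
    Xp[4] = X[4];                               // rDot     (Eq. 7)
    Xp[5] = X[5];                               // thetaDot (Eq. 8)
    return Xp;
}

// Observation h of Eq. 10: only the centroid (x, y) is measured.
cv::Vec2d observe(const State& X) { return {X[0], X[1]}; }
```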

We need additional information about the rat's shape: every time the rat moves and exhibits some behaviors, its body shape changes. For example, when the rat is rearing its body stretches, and when it is grooming its body usually shrinks into a circular form. For this reason, we compute its body shape deformations with Principal Component Analysis (PCA). This lets us reduce the data to just two lines that represent the height and width of the rat. With this we can know when the rat's body looks like an ellipse or a circle, giving us information about what the rat is probably doing. Joining all this feature information, we develop rules that can identify the rat's behaviors. Fig. 1 shows the features extracted from the rat.

Fig. 1. Features obtained from the rat.
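As an illustration of this step, the sketch below runs cv::PCA on the coordinates of the segmented pixels; the ratio of the two eigenvalues distinguishes a stretched (ellipse-like) body from a compact (circle-like) one. The elongation measure and its name are our own.

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// PCA over the foreground pixel coordinates: the two principal axes
// approximate the "height" and "width" lines of the rat's body.
double shapeElongation(const cv::Mat& mask) {
    std::vector<cv::Point> pts;
    cv::findNonZero(mask, pts);  // segmented (rat) pixels
    if (pts.size() < 2) return 1.0;

    cv::Mat data(static_cast<int>(pts.size()), 2, CV_64F);
    for (int i = 0; i < data.rows; ++i) {
        data.at<double>(i, 0) = pts[i].x;
        data.at<double>(i, 1) = pts[i].y;
    }
    cv::PCA pca(data, cv::Mat(), cv::PCA::DATA_AS_ROW);
    double major = pca.eigenvalues.at<double>(0);  // variance, major axis
    double minor = pca.eigenvalues.at<double>(1);  // variance, minor axis
    return std::sqrt(major / std::max(minor, 1e-9));  // >> 1: stretched
}
```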

3.4 Behavior Detection

At this point we have identified the rodent and we know its position, velocity, direction (angle) and shape. So far we can track the rodent's motion; the next step is to know what the rodent is doing in every frame. The behaviors required for this test are wall rearing, path distance, walking and freezing.

For wall rearing detection, we generated rules that classify whether the rodent is rearing or not, based on the shape and orientation of the rodent. We observed in the test that when the rat is wall rearing, two important characteristics are present: the body of the rat is beyond a defined limit that we can estimate, and when this occurs its body stretches, so that the principal diagonal of the resulting ellipse is greater than the secondary one. With this analysis we estimate when a rearing is happening; a minimal sketch of such a rule is shown below.
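The thresholds in this sketch are illustrative, not the paper's exact values.

```cpp
#include <opencv2/opencv.hpp>

// Wall rearing: the centroid is outside an inner "safe" area (i.e. near a
// wall) AND the body is clearly stretched (major axis >> minor axis).
bool isWallRearing(const cv::Point2d& centroid,
                   double elongation,           // from shapeElongation()
                   const cv::Rect2d& innerArea) // box minus a wall margin
{
    bool nearWall  = !innerArea.contains(centroid);
    bool stretched = elongation > 1.8;  // illustrative threshold
    return nearWall && stretched;
}
```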

Using the extracted features we can also detect freezing, that is, the absence of movement of the rodent. Using the velocity, we can estimate when the rodent is still and label this behavior as freezing.

We estimate the distance traveled by the rodent using the velocity parameter. The velocity is given as the total number of pixels moved since the previous frame; that is, we do not calculate the velocity in meters per second, but instead how many pixels the rodent moves at every recorded time step. With this measure, and applying a rule based on the known size of the box, we estimate the distance that the rodent has covered during the test.
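The sketch below shows one hedged way to implement these two rules; the thresholds (0.5 px per frame, 15 consecutive frames) and the box dimensions are illustrative only. For example, a 60 cm box spanning 400 px gives 0.15 cm per pixel, so a displacement of 10 px between frames corresponds to about 1.5 cm traveled.

```cpp
// Accumulated motion statistics for freezing and distance estimation.
struct MotionStats {
    double distanceCm  = 0.0;  // total distance traveled
    int    stillFrames = 0;    // consecutive frames with ~no motion
};

// pixelsMoved is the centroid displacement from the previous frame.
void updateMotion(MotionStats& s, double pixelsMoved,
                  double boxWidthCm, double boxWidthPx) {
    const double cmPerPixel = boxWidthCm / boxWidthPx;  // known box size
    s.distanceCm += pixelsMoved * cmPerPixel;
    // Illustrative freezing rule: less than 0.5 px of motion per frame.
    s.stillFrames = (pixelsMoved < 0.5) ? s.stillFrames + 1 : 0;
}

// Freezing is reported after ~0.5 s of stillness (15 frames at 30 fps).
bool isFreezing(const MotionStats& s) { return s.stillFrames >= 15; }
```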

4 Results

We presented the analysis and the proposed solution in the sections above. For the implementation, we used C++ with the OpenCV library for video and image operations (opening, math operations), all programmed under a Linux Ubuntu distribution on a computer with no special characteristics. We have a data set to test the proposed solution; each video was processed with the system, and the result of every implemented algorithm is shown below in order to verify the correct functioning of the system.

Supplementary video: https://youtu.be/6Smkff19r14.

4.1 Segmentation

For our purposes, the first step is the system calibration; next, the extraction of the rodent from the frames is required. By applying the algorithm explained in Sect. 3.2, we can separate the rodent from the rest of the background, and we use the segmentation to calculate the centroid of the rodent. As we can see, even when the tail is not completely segmented (see Fig. 2a), the centroid is positioned correctly compared to when the system preserves the complete tail in the segmentation (see Fig. 2b).

Fig. 2. Segmentation of the rodent.

4.2 Tracking

As we explained earlier, the observable parameter for the EKF is the centroid obtained from segmentation. To estimate the accuracy of the calculated centroid, we compare the data produced by the system with hand-labeled centroid data (see Fig. 3). We calculated the RMSE for the x and y coordinates, obtaining an RMSE of 2.4 pixels for x and 6.82 pixels for y. With this, we make sure that the calculated centroid is good enough for the EKF measurement; we also have to consider that the hand-labeled data is not always at the exact center of the rodent.
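For reference, the RMSE here is the standard measure, computed per coordinate over the N hand-labeled frames:

$$\begin{aligned} \mathrm {RMSE}_x = \sqrt{\frac{1}{N} \sum _{k=1}^{N} (x_k - \hat{x}_k)^2} \end{aligned}$$

where \(x_k\) is the hand-labeled coordinate and \(\hat{x}_k\) the centroid computed by the system, and analogously for y.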

Fig. 3. Comparison plot of the centroid calculated from segmentation and the hand-labeled centroid.

In Fig. 4 we show, for one representative video, the tracking of the rat estimated by the EKF (red circles): the comparison between the original frames and the segmentation, together with the generated tracking plot.

Fig. 4. Image sequence for one video. The first row shows the original frames from the video, the second row shows the segmentation for each frame, and the last row shows the rat's tracked trajectory. (Color figure online)

4.3 Behavior Identification

The rules generated in the previous sections were applied to the data set. An example of the visual result for rearing detection is shown in Fig. 5. Figure 5a shows the original frame from the video. In Fig. 5b we paint a blue oval around the rat every time it performs a wall rearing in the box. In addition, we count every wall rearing and report the total at the end of the process. Another result, shown in Fig. 5c, is the information given by PCA: it is painted as green and blue lines on the rodent segmentation, representing the tendency of the rodent's shape. The last result shown is the bounding box, marked with a blue square. We do not process the whole image; we only work in the area restricted by the bounding box, which speeds up the process.

Fig. 5. Wall rearing. (Color figure online)

We can observe in the video that the camera is not positioned exactly above the box; the camera has an inclination that distorts the box into a trapezoid shape. Additionally, the box is not perfectly square, which increases the distortion effect. Because of this, there are some positions of the rodent that confuse the algorithm and are counted as wall rearing.

4.4 Ethogram Generation

In the previous section, we showed examples of the system's operation on specific frames. Given the amount of data generated for the entire video, the system produces a report specifying the rodent's activity in every frame of the video. This report is called an ethogram and is drawn as a colored graphic, representing each behavior with one color. Figure 6 shows the ethogram for video 2. We can observe from the ethogram that the behavior of the rodent is not constant.

Fig. 6. Ethogram resulting from video 2. (Color figure online)

Blue represents frames where the rodent performs a wall rearing, yellow indicates that the rodent is walking, and orange shows when it is freezing.
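A short sketch of how such an ethogram strip can be rendered, one colored column per frame, is shown below; the enum and colors simply follow the legend above, and the function itself is our illustration rather than the paper's code.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

enum Behavior { WALKING, WALL_REARING, FREEZING };

// One colored column per frame; colors follow the legend (BGR order).
cv::Mat drawEthogram(const std::vector<Behavior>& perFrame, int height = 40) {
    cv::Mat strip(height, static_cast<int>(perFrame.size()), CV_8UC3);
    for (int i = 0; i < strip.cols; ++i) {
        cv::Scalar color;
        switch (perFrame[i]) {
            case WALL_REARING: color = cv::Scalar(255, 0, 0);   break; // blue
            case WALKING:      color = cv::Scalar(0, 255, 255); break; // yellow
            case FREEZING:     color = cv::Scalar(0, 165, 255); break; // orange
        }
        strip.col(i).setTo(color);
    }
    return strip;
}
```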

At the beginning of the test, the rodent is not familiar with the box, so exploratory behavior appears: the rodent needs to sniff (including wall rearing) and travel over the whole box, which is what we find in the first part of the ethogram. After a few minutes, wall rearing is present for longer periods, combined with walking. Once the rat is familiar with the environment, its activity drops drastically; this is observed as freezing (orange) because the rodent's need to explore decreases.

Table 1. Execution time per frame per video. The segmentation, tracking and behaviors columns show the mean time needed to process each task. The complete process column is the mean time needed to process one frame of the video.

4.5 Execution Time

To evaluate how quickly our proposal produces results, we measured the time required by each module. For this test, we divided the complete process into three steps: segmentation, position prediction (tracking) and behavior detection. Table 1 shows the mean times of the main blocks for each video. From the table, we notice that the step that takes the longest is the rodent segmentation, which dominates the complete processing time. Computing the average time needed to process each frame of the video, we show that our proposal runs in real time; moreover, the maximum speed is over 100 Hz. This time is better than most reported in the related work.
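A hedged sketch of how such per-module times can be collected with std::chrono; the stage functions named in the comments are placeholders for the system's actual modules.

```cpp
#include <chrono>

using Clock = std::chrono::steady_clock;

// Milliseconds elapsed since t0.
double msSince(Clock::time_point t0) {
    return std::chrono::duration<double, std::milli>(Clock::now() - t0).count();
}

// Per frame (illustrative):
//   auto t0 = Clock::now(); segment(frame);    segMs   += msSince(t0);
//   auto t1 = Clock::now(); track();           trackMs += msSince(t1);
//   auto t2 = Clock::now(); detectBehaviors(); behMs   += msSince(t2);
// Dividing each accumulator by the frame count gives the means in Table 1.
```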

5 Conclusion and Future Work

In this paper, we have presented a system for rodent tracking and behavior detection. In our proposal we did not change the initial conditions of the test: we worked directly on the videos without any prior information or manual adjustments. Even when the camera position was not ideal, we correctly segmented and identified the rodent. In addition, our system was able to detect behaviors of particular interest in the test, from which an ethogram was also generated: a graph that experts can use to analyze the rodent's behaviors over time and after a drug has been applied to the rodent.

Therefore, we demonstrated that it is possible to perform tracking and behavior identification successfully without any special conditions, and that our proposal runs at high speed, over 100 Hz, without requiring special hardware such as a GPU.

For future work, we propose the use of other classification techniques to detect more behaviors and to compare them with the current results in order to improve behavior detection. We will also extend this work to other mazes, such as the water maze or the elevated plus maze, and detect the corresponding behaviors presented in each test.