The use of computer vision techniques to automate the process of animal observation would appear to be of great benefit to the agricultural industry, yet it has yielded few viable applications. Work such as the tracking of piglets [1], or Schofield's use of a vision system to weigh pigs by exploiting the relationship between the surface area of a pig in the image and its weight [2], gives satisfactory results but does not fully exploit the characteristics of dealing with animals.
The work described in this paper forms part of the Robotic Sheepdog Project, an investigation into animal-interactive robotics. The aim of the project, based at Silsoe Research Institute, is to demonstrate an autonomous robot system that can successfully manipulate a group of animals (in this case, ducks) towards some predetermined goal.
To achieve this, machine vision is employed to pass information on the relative positions of the robot and the ducks to the control system of the autonomous vehicle. Successful robotic control and manipulation of the animals towards their goal relies heavily on a suitable model of how a flock of ducks reacts to a predator [3].
The common method of describing animal flocking motion is through simulation (e.g. Reynolds, 1987 [4]). By applying empirical data, such as the maximum bird speed and the flight distance of the animals, found by studying the animals themselves [5], the simulation can be made to mimic actual behaviour closely. In this paper we describe an alternative approach in which a model of flocking behaviour is extracted automatically from image sequences of the animals moving through their environment.
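For illustration only, a rule-based simulation of this kind typically advances each bird by simple local rules parameterised by such empirical quantities. The following toy sketch is not the specific model of [4]; the parameter names (max_speed, flight_distance) are hypothetical stand-ins for the empirical quantities mentioned above.

```python
import numpy as np

def flock_step(positions, velocities, predator, max_speed=1.0,
               flight_distance=5.0, dt=0.1):
    """One update step of a toy rule-based flock: attraction to the
    flock centre, repulsion from a nearby predator, and an empirical
    speed limit."""
    centroid = positions.mean(axis=0)
    new_velocities = velocities.copy()
    for i, p in enumerate(positions):
        cohesion = centroid - p                      # move towards flock centre
        away = p - predator
        dist = np.linalg.norm(away)
        # Repel only when the predator is within the flight distance.
        repulsion = away / dist if 0.0 < dist < flight_distance else 0.0
        v = velocities[i] + 0.05 * cohesion + 0.5 * repulsion
        speed = np.linalg.norm(v)
        if speed > max_speed:                        # empirical speed limit
            v = v * max_speed / speed
        new_velocities[i] = v
    return positions + dt * new_velocities, new_velocities
```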
In the case of classifying an individual animal, the use of a non-rigid, deformable model to describe the shape of the subject would seem a sensible choice. One such model, the Point Distribution Model (PDM) [6], forms the basis of many successful computer vision applications, for example [7, 8, 9]. Modelling the observed shape change of the animal would be comparable to modelling a human [10].
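For reference, a standard PDM is built by applying principal component analysis to aligned landmark vectors, so that any training shape can be approximated as the mean shape plus a weighted sum of the main modes of variation. The sketch below is illustrative only; the function and variable names are hypothetical and not taken from [6].

```python
import numpy as np

def build_pdm(shapes, num_modes=5):
    """Build a basic Point Distribution Model.

    shapes: (N, 2k) array; each row is a training shape of k landmark
    points flattened as (x1, y1, ..., xk, yk), already aligned
    (e.g. by Procrustes analysis).
    """
    mean_shape = shapes.mean(axis=0)
    deviations = shapes - mean_shape
    cov = np.cov(deviations, rowvar=False)          # covariance of landmarks
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]               # largest variance first
    modes = eigvecs[:, order[:num_modes]]           # main modes of variation
    variances = eigvals[order[:num_modes]]
    return mean_shape, modes, variances

def generate_shape(mean_shape, modes, b):
    """Reconstruct a shape x = x_mean + P b from mode weights b."""
    return mean_shape + modes @ b
```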
However, in this paper we consider the problem of modelling a group of animals, in particular how the shape of the group varies under the influence of an outside body: the robot. The proximity of the predator will affect not only the shape, but also the speed and direction of the group's motion.
The standard PDM is successful at modelling shape; here we need to extend it to include additional parameters such as the flock velocity and the relative position of the robot. Augmenting the PDM in this way raises issues of parameter scaling, which must be overcome to obtain a successful model representation.
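One plausible way to form such augmented training vectors, sketched below with hypothetical names and not necessarily the scheme adopted in this paper, is to concatenate the flock boundary landmarks with the flock velocity and the robot's relative position, rescaling the non-shape blocks so that no single quantity dominates the principal component analysis.

```python
import numpy as np

def augment_training_vectors(shapes, velocities, robot_positions):
    """Concatenate (shape | flock velocity | robot position) per frame,
    rescaling the velocity and robot-position blocks so their total
    variance is comparable to that of the shape block.

    shapes:          (N, 2k) flock boundary landmarks per frame
    velocities:      (N, 2)  flock velocity per frame
    robot_positions: (N, 2)  robot position relative to the flock
    """
    shape_var = shapes.var(axis=0).sum()
    vel_w = np.sqrt(shape_var / max(velocities.var(axis=0).sum(), 1e-12))
    rob_w = np.sqrt(shape_var / max(robot_positions.var(axis=0).sum(), 1e-12))
    augmented = np.hstack([shapes, vel_w * velocities, rob_w * robot_positions])
    return augmented, (vel_w, rob_w)
```

Vectors of this form can then be analysed with the same principal component analysis procedure as a standard shape-only PDM.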
By using training data obtained automatically from image sequences of live animals, the resulting PDM would capture the behavioural traits of the observed group. The resulting model would have an advantage over simulated descriptions of animal flocking motion in that it is based upon observations of the animals themselves rather than on hypothetical rules. Moreover, it provides an opportunity to calibrate flocking simulations and to measure their accuracy.
The rest of this paper describes the construction of the model, based upon extending the standard shape-only PDM.