British Machine Vision Association and Society for Pattern Recognition

OPTIMISATION IN COMPUTER VISION

One Day Technical Meeting, 8th May 1997
at the British Institute of Radiology, 36 Portland Place, London, W1N 4AT.

Chairman: Dr. Majid Mirmehdi (Surrey University)


Multiresolution Hopfield Networks for Vision

R. Wilson, Dept. of Computer Science, University of Warwick, Coventry CV4 7AL

Over the last ten years, approaches to problems in vision based on regularisation have received widespread attention. Among such methods, Hopfield neural networks provide a well-defined iterative procedure, with some relation to biological networks, but have the major drawback of tending to become trapped in local minima. An effective way to overcome this limitation is to use a multiresolution network, in which solutions at coarser scales bias those at finer scales, thereby steering the solution towards the global minimum. The paper will show how these networks can be used for boundary extraction and present an approach to vision based on the notion of a hierarchy of networks.
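As an illustrative sketch only (not the author's network), the Python fragment below shows the basic idea of biasing an asynchronous binary Hopfield relaxation with a coarser-scale solution; the weights W, external field b and upsampled coarse state coarse_bias are hypothetical inputs.

    import numpy as np

    def hopfield_relax(W, b, coarse_bias, beta=0.5, n_iter=5000, seed=0):
        """Asynchronous binary Hopfield relaxation whose external field
        is augmented by a bias towards the coarser-scale solution."""
        rng = np.random.default_rng(seed)
        n = W.shape[0]
        s = rng.choice([-1.0, 1.0], size=n)        # random initial state
        for _ in range(n_iter):
            i = rng.integers(n)                    # pick one unit to update
            field = W[i] @ s + b[i] + beta * coarse_bias[i]
            s[i] = 1.0 if field >= 0 else -1.0     # threshold update
        return s

    # Coarse-to-fine: relax a small network first, upsample its state,
    # and pass it in as coarse_bias when relaxing the finer network.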


Syntactic neural networks, lazy evaluation and constraint satisfaction within computer vision

S. Lucas, L. Du and A. Downton, Department of Electronic Systems Engineering, University of Essex, Colchester CO4 3SQ

A general approach to computer vision is described whereby a special kind of neural network called a syntactic neural net (SNN) is used to acquire a grammar to describe a class of images. Recognition of a particular image is then performed by finding the maximum-likelihood parse of the input image with respect to the inferred grammar. Lazy evaluation is the key to making this process efficient. Results for a postcode recognition problem are interesting in that retrieval (finding the maximum likelihood postcode, given the input image) actually gets faster as the size of the dictionary (set of possible valid codes) increases. The applicability of this approach to other computer vision problems such as face recognition and fingerprint recognition will be discussed.
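The retrieval step can be pictured as best-first search over a dictionary trie, expanded lazily so that most branches are never evaluated; the sketch below is a hypothetical Python illustration of that idea, not the authors' SNN. The per-position character log-likelihoods char_logp and the nested-dict trie are assumed inputs.

    import heapq, itertools, math

    def best_code(char_logp, trie):
        """Maximum-likelihood string admitted by a dictionary trie.
        char_logp[i][c] is the log-likelihood of character c at
        position i; trie is a nested dict with None marking a leaf.
        Branches are only expanded when popped (lazy evaluation)."""
        counter = itertools.count()            # tie-breaker for the heap
        heap = [(0.0, next(counter), 0, trie, "")]
        while heap:
            neg, _, i, node, prefix = heapq.heappop(heap)
            if node is None:                   # first leaf popped is optimal
                return prefix, -neg
            for c, child in node.items():
                if i < len(char_logp) and c in char_logp[i]:
                    heapq.heappush(heap, (neg - char_logp[i][c],
                                          next(counter), i + 1, child, prefix + c))
        return None, -math.inf

    # A larger dictionary tightens the constraints at each position, one
    # intuition for why retrieval can get faster as the dictionary grows.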


Mean Field Theory and Geometric Hashing

Andrew J. Stoddart, Centre for Vision, Signal & Speech Processing, Dept. of Electrical Engineering, University of Surrey, Guildford GU2 5XH

Mean field theory is a powerful annealed optimization theory that can be applied to labelling problems in computer vision. We will show how this technique may be applied in combination with geometric hashing to obtain a fast and powerful optimization method.
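Omitting the geometric hashing stage, a minimal mean field annealing loop for a labelling problem might look like the following Python sketch; unary (site-by-label costs) and pairwise (symmetric label compatibility costs, on a fully connected model) are hypothetical inputs.

    import numpy as np

    def mean_field_anneal(unary, pairwise, T0=4.0, Tmin=0.05, cool=0.9, sweeps=10):
        """Deterministic annealing of mean-field label probabilities q.
        unary is n x L, pairwise is L x L; the temperature T is lowered
        geometrically, sharpening q towards a hard labelling."""
        n, L = unary.shape
        q = np.full((n, L), 1.0 / L)               # uniform initial beliefs
        T = T0
        while T > Tmin:
            for _ in range(sweeps):
                # mean field at each site from all other sites
                field = unary + (q.sum(axis=0) - q) @ pairwise.T
                field -= field.min(axis=1, keepdims=True)  # stabilise exp
                q = np.exp(-field / T)
                q /= q.sum(axis=1, keepdims=True)  # renormalise per site
            T *= cool
        return q.argmax(axis=1)                    # harden at the end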


Relational Matching with Genetic Search

Andrew Cross and Edwin R. Hancock, Department of Computer Science, University of York, York, YO1 5DD

This paper describes a framework for performing relational graph matching using genetic search. There are three novel ingredients to the work. Firstly, we cast the optimisation process into a Bayesian framework by exploiting the recently reported global consistency measure of Wilson and Hancock as a fitness measure. The second novel idea is to realise the crossover process at the level of subgraphs, rather than employing string-based or random crossover. Finally, we accelerate convergence by employing a deterministic hill-climbing process prior to selection. Since we adopt the Bayesian consistency measure as a fitness function, the basic measure of relational distance underpinning the technique is Hamming distance. Our standpoint is that genetic search provides a more attractive means of performing stochastic discrete optimisation on the global consistency measure than alternatives such as simulated annealing. Moreover, the action of the optimisation process is easily understood in terms of its action in the Hamming distance domain. We demonstrate empirically not only that the method possesses polynomial convergence time but also that the convergence rate is more rapid than simulated annealing. We provide some experimental evaluation of the method in the matching of aerial stereograms and evaluate its sensitivity on synthetically generated graphs.
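To make the search concrete, here is a deliberately simplified genetic algorithm over node-label assignments in Python. It uses plain one-point (string-based) crossover where the paper uses subgraph crossover, omits the hill-climbing step, and takes the fitness function (e.g. the Bayesian consistency measure) as a hypothetical input.

    import numpy as np

    def ga_match(fitness, n_nodes, n_labels, pop_size=50, gens=100, seed=0):
        """Genetic search over assignments of one of n_labels labels to
        each of n_nodes graph nodes, maximising fitness(assignment)."""
        rng = np.random.default_rng(seed)
        pop = rng.integers(n_labels, size=(pop_size, n_nodes))
        for _ in range(gens):
            scores = np.array([fitness(ind) for ind in pop], dtype=float)
            probs = scores - scores.min() + 1e-9   # shift weights positive
            probs /= probs.sum()
            pop = pop[rng.choice(pop_size, size=pop_size, p=probs)]  # selection
            for a, b in zip(pop[0::2], pop[1::2]): # one-point crossover
                cut = rng.integers(1, n_nodes)
                a[cut:], b[cut:] = b[cut:].copy(), a[cut:].copy()
            mutate = rng.random(pop.shape) < 0.01  # sparse random mutation
            pop[mutate] = rng.integers(n_labels, size=int(mutate.sum()))
        return pop[np.argmax([fitness(ind) for ind in pop])]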


Model-based Tracking via Predictive Optimisation

Tony Heap, Machine Vision Group, School of Computer Studies, The University of Leeds, Leeds, LS2 9JT

Models of shape play an important role in vision-based object tracking. Existing tracking algorithms generally use some form of local optimisation, assuming that an object's position and shape change smoothly over time. In some situations this assumption is not valid: the trackable shape of an object may change discontinuously, for example when it is the 2D silhouette of a 3D object. We are currently investigating a novel method for modelling temporal shape discontinuities explicitly. Allowable shapes are represented as a union of (learned) bounded regions within a shape space. Discontinuous shape changes are described in terms of transitions between these regions. Transition probabilities are learned from training sequences and stored in a Markov model. In this way we can create 'wormholes' in shape space. Tracking with such models is via an adaptation of Isard and Blake's recently-developed CONDENSATION algorithm. CONDENSATION tracks by propagating a probability distribution in model state space over time, optimising a model fitness function. We substitute Isard and Blake's continuous dynamic model with an adapted Markov process, driven by our new model.
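A single iteration of such a tracker could be sketched as below (Python). The region transition matrix trans and the hooks region_of, sample_in and likelihood stand in for the learned Markov model, the bounded shape-space regions and the image measurement; all are assumptions of this illustration rather than the authors' implementation.

    import numpy as np

    def condensation_step(particles, weights, trans, region_of, sample_in,
                          likelihood, rng):
        """One CONDENSATION iteration with discrete Markov dynamics over
        shape-space regions in place of a continuous dynamic model."""
        n = len(particles)
        # 1. resample particles in proportion to their weights
        picked = particles[rng.choice(n, size=n, p=weights / weights.sum())]
        # 2. predict: possibly jump to another region (a 'wormhole'),
        #    then draw a new shape within the chosen region
        new = np.array([sample_in(rng.choice(len(trans), p=trans[region_of(p)]), p)
                        for p in picked])
        # 3. measure: reweight each particle by its image likelihood
        w = np.array([likelihood(p) for p in new], dtype=float)
        return new, w / w.sum()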


Adaptive Control of a Contour Extraction Algorithm

Gianni Pronzato, Computer Vision Group, Dept. of Computing & Electrical Eng., Heriot-Watt Univ., Edinburgh EH14 4AS

To improve on current low-level contour extraction, we propose the use of adaptive control over the processing chain to optimise the final result. Current work involves establishing performance measures based on edge continuity and localisation, and developing appropriate optimisation methods that enable the system to adjust its free parameters automatically.
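In its simplest form this is a search over the free parameters of the processing chain against the performance measure. A grid-search sketch in Python follows, in which extract (the contour extractor), score (the continuity/localisation measure) and grids (candidate values per parameter) are all hypothetical hooks.

    import itertools

    def tune_parameters(extract, score, grids):
        """Keep the parameter setting whose extracted contours maximise
        the performance measure. grids holds one iterable of candidate
        values per free parameter."""
        best, best_s = None, float("-inf")
        for params in itertools.product(*grids):
            s = score(extract(*params))
            if s > best_s:
                best, best_s = params, s
        return best, best_s

    # e.g. tune_parameters(edge_detector, continuity_score,
    #                      [(0.5, 1.0, 2.0), (50, 100, 150)])
    # where edge_detector and continuity_score are user-supplied.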


Graph-theoretic optimisation for binary line image analysis

S. Marchand-Maillet and Y. Sharaiha, Operational Research and Systems, The Management School, Imperial College, London SW7 2PG

This project is part of a three-year EPSRC-sponsored programme of work, and this presentation will give an overview of the applications of an optimisation approach to line image analysis. We present the development of theoretical and algorithmic tools for the processing of binary digital line images from a graph-theoretic viewpoint. The analogy between a digital image and its grid graph counterpart is first established by mapping the concept of neighbourhood between pixels onto that of adjacency between vertices. In this context, topological and geometrical definitions such as discrete distance and straightness are characterised using well-known graph-theoretic properties. In particular, the problem of distance maps is resolved via this graph-theoretic approach. The skeletonisation problem can then be mapped onto the central path location problem on the grid graph. This discrete location optimisation problem is decomposed into sub-problems, such as junction location and minimal weighted path location, which are then solved using optimal graph-theoretic algorithms. Results of this work are presented for a number of line image applications, including fingerprint and road network analysis. Acknowledgments: This work is supported by the Engineering and Physical Sciences Research Council (EPSRC) under Grant number GR/J85271.
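As one concrete instance of the graph-theoretic view, the distance map of a binary image is exactly a multi-source shortest-path computation on the grid graph. A minimal Python sketch using 4-adjacency (so path lengths measure the city-block distance) is given below.

    import numpy as np
    from collections import deque

    def distance_map(img):
        """Breadth-first search on the 4-connected grid graph, started
        from every background pixel at once: each object pixel receives
        the length of its shortest path to the background."""
        h, w = img.shape
        dist = np.where(img == 0, 0, -1)           # -1 marks 'not reached yet'
        queue = deque(zip(*np.nonzero(img == 0)))  # all background pixels
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and dist[ny, nx] == -1:
                    dist[ny, nx] = dist[y, x] + 1
                    queue.append((ny, nx))
        return dist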


Model Selection and Parameter Optimisation for Segmentation/Tracking Systems Using ROC Analysis

Daniel Alexander, University College London, Gower Street, London WC1E 6BT

The model selection problem is well known in the statistics community: given a set of data and a set of candidate models for that data, how can we decide which model is most suitable? Several "goodness-of-fit" measures have been proposed in the past, for example AIC (Akaike, 1974) and BIC (Schwarz, 1978). However, there is no guarantee that a model selected using any of these criteria will produce the best performance in a real system. We propose a methodology for model selection at the algorithm level, using ROC analysis to characterise the performance of the algorithm, with each of the candidate models, applied to a set of test data. The methodology is used to choose the most effective model from three candidate statistical models of colour data. A statistical snake algorithm uses the models for tracking roads in natural scenes, and it is shown how the methodology concurrently finds optimal internal parameter settings for the snake itself.
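For a two-class decision, the ROC curve is traced by sweeping a threshold over the model's scores on labelled test data, and the area under it gives a single comparable figure per candidate model. The Python snippet below computes this area via the rank (Mann-Whitney) statistic; it is illustrative only, assuming real-valued scores and 0/1 labels as inputs.

    import numpy as np

    def roc_auc(scores, labels):
        """Area under the ROC curve via the Mann-Whitney rank statistic
        (ties ignored); equivalent to sweeping a decision threshold."""
        scores, labels = np.asarray(scores), np.asarray(labels)
        ranks = np.empty(len(scores))
        ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
        pos = labels == 1
        n_pos, n_neg = pos.sum(), (~pos).sum()
        return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

    # Model selection then keeps the candidate whose scores separate the
    # test data best, e.g. max(models, key=lambda m: roc_auc(m(x), y)).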


Recovery of depth information from optical microscope images by constrained deconvolution

C.A. Glasbey, Biomathematics and Statistics Scotland, JCMB, King's Buildings, Edinburgh EH9 3JZ, Scotland

A rigorous new depth-from-focus algorithm is proposed: microscope images at a series of focal planes are jointly deconvolved, subject to the constraint that the recovered specimen is a two-dimensional surface. A quadratic program is used to estimate the point-spread function of the optical system. Then, depth and brightness are estimated at each location on the surface by minimising a sum of squares plus roughness penalty, using a simulated annealing schedule. The algorithm has been implemented in parallel on a Cray-T3D, and the method is illustrated using brightfield optical sections of a diatom.
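The second estimation stage can be read as a generic Metropolis annealing loop over the depth surface. A compact Python sketch follows, where cost(depth) is a hypothetical hook returning the sum-of-squares data term plus the roughness penalty.

    import numpy as np

    def anneal_depth(cost, depth0, T0=1.0, Tmin=1e-3, cool=0.999, seed=0):
        """Simulated annealing over a depth map: perturb one site at a
        time and accept by the Metropolis rule at temperature T."""
        rng = np.random.default_rng(seed)
        depth = depth0.astype(float)
        c, T = cost(depth), T0
        while T > Tmin:
            i = tuple(rng.integers(s) for s in depth.shape)  # random site
            old = depth[i]
            depth[i] = old + rng.normal(scale=T)             # small proposal
            c_new = cost(depth)
            if c_new < c or rng.random() < np.exp((c - c_new) / T):
                c = c_new                                    # accept
            else:
                depth[i] = old                               # reject, restore
            T *= cool
        return depth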


Film Restoration using Soft Morphological Filters

Neal Harvey and Stephen Marshall, Signal Processing Division, Dept. of Electronic & Electrical Engineering, University of Strathclyde, Glasgow, G1 1XW

Corruption or noise is a common problem in any field which makes use of images. A ready market is emerging for digital video recordings of old films and television broadcasts, and it is desirable to remove unwanted noise and corruption from the original material. Film dirt is only really present in the luminance fields of each frame, so to remove this corruption a suitable filter is applied to the luminance field of each frame; it is then necessary to determine the optimal filter for this application. In this paper we consider the class of grey-scale soft morphological filters. Newly developed techniques for the design of soft morphological filters, incorporating genetic algorithms, are applied to the problem. Simple genetic algorithms are employed in the search for the optimum combination of filter parameters subject to specific error criteria, such as the Mean Absolute Error and Mean Square Error. The talk will describe in more detail the genetic algorithms used in the filter optimisation, including how the filter parameters are encoded. Examples of applying these optimisation techniques to the restoration of actual BBC archive film sequences will be shown.
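The encoding and error criterion might be sketched as follows in Python. The 3-field chromosome layout and the apply_filter hook are hypothetical stand-ins for the actual soft morphological filter parameterisation, not the authors' encoding.

    import numpy as np

    def decode(bits):
        """Decode a binary chromosome string into (structuring element
        index, rank k, core repetition weight) -- a made-up 3-field layout."""
        return int(bits[0:2], 2), int(bits[2:5], 2), int(bits[5:8], 2)

    def fitness(bits, apply_filter, clean, noisy):
        """Negative Mean Absolute Error between the filtered corrupted
        luminance field and a clean reference frame (to be maximised)."""
        restored = apply_filter(noisy, *decode(bits))
        return -np.abs(restored.astype(float) - clean.astype(float)).mean()

    # A simple GA then evolves a population of such bit strings, selecting
    # on this fitness, much as in the earlier genetic search sketch.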


M. Mirmehdi