Multiple-class land-cover mapping at the sub-pixel scale using a Hopfield neural network

Research output: Contribution to journal › Journal article › peer-review

Journal publication date: 2001
Journal: International Journal of Applied Earth Observation and Geoinformation
Volume: 3
Issue number: 2
Number of pages: 7
Pages (from-to): 184-190
Publication status: Published
Original language: English

Abstract

Land cover class composition of image pixels can be estimated using soft classification techniques. However, their output provides no indication of how such classes are distributed spatially within the instantaneous field of view represented by the pixel. Robust techniques to provide an improved spatial representation of land cover have yet to be developed. The use of a Hopfield neural network technique to map the spatial distribution of classes reliably, using information on pixel composition determined from soft classification, was investigated in previous papers by Tatem et al. The network converges to a minimum of an energy function defined as a goal and several constraints. The approach involved designing the energy function to produce a ‘best guess’ prediction of the spatial distribution of class components in each pixel. Tatem et al. described the application of the technique to target mapping at the sub-pixel scale, but only for single classes. We now show how this approach can be extended to map multiple classes at the sub-pixel scale by adding new constraints into the energy formulation. The new technique has been applied to simulated SPOT HRV and Landsat TM agricultural imagery to derive accurate estimates of land cover. The results show that this extension of the neural network now represents a simple, efficient tool for mapping land cover and can deliver the requisite results for the analysis of practical remotely sensed imagery at the sub-pixel scale.
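The energy-minimisation idea described in the abstract can be illustrated with a toy sketch. This is not the authors' implementation; the zoom factor `z`, the learning rate, the constraint weight `lam`, and the specific goal/constraint terms are all illustrative assumptions. It divides one coarse pixel into a grid of sub-pixels, assigns each class a layer of sigmoid "neuron" outputs, and descends an energy made of a spatial-clustering goal plus a class-proportion constraint, as the Hopfield formulation does in spirit:

```python
import numpy as np

def subpixel_map(proportions, z=8, iters=500, lr=0.5, lam=2.0, seed=0):
    """Toy sub-pixel mapper for one coarse pixel (illustrative only).

    proportions : per-class area fractions from a soft classification,
                  summing to 1.  z : zoom factor (pixel -> z x z sub-pixels).
    """
    rng = np.random.default_rng(seed)
    k = len(proportions)
    u = rng.normal(0.0, 0.1, size=(k, z, z))       # neuron inputs
    for _ in range(iters):
        v = 1.0 / (1.0 + np.exp(-u))               # sigmoid outputs in (0, 1)
        # Goal term: pull each sub-pixel toward the mean of its
        # 4-neighbours (toroidal), encouraging spatially clustered classes.
        nb = (np.roll(v, 1, 1) + np.roll(v, -1, 1) +
              np.roll(v, 1, 2) + np.roll(v, -1, 2)) / 4.0
        g_goal = v - nb
        # Constraint term: each class's area inside the pixel should
        # match its soft-classification proportion.
        area = v.mean(axis=(1, 2))
        g_prop = (area - np.asarray(proportions))[:, None, None]
        u -= lr * (g_goal + lam * g_prop)          # gradient-descent step
    v = 1.0 / (1.0 + np.exp(-u))
    return v.argmax(axis=0)                        # hard label per sub-pixel

labels = subpixel_map([0.75, 0.25])
print(labels)
```

A real multiple-class formulation would also need the additional constraints the paper introduces (e.g. that sub-pixel class memberships are mutually exclusive across layers); this sketch only shows the goal-plus-constraint energy descent in miniature.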
