Final published version
Licence: CC BY: Creative Commons Attribution 4.0 International License
Research output: Contribution to Journal/Magazine › Journal article › peer-review
V1-based modelling of discrimination between natural scenes within the luminance and isoluminant colour planes. / To, Michelle; Tolhurst, David.
In: Journal of Vision, Vol. 19, No. 1, 9, 16.01.2019, p. 1-19.
TY - JOUR
T1 - V1-based modelling of discrimination between natural scenes within the luminance and isoluminant colour planes
AU - To, Michelle
AU - Tolhurst, David
PY - 2019/1/16
Y1 - 2019/1/16
N2 - We have been developing a computational visual difference predictor model that can predict how human observers rate the perceived magnitude of suprathreshold differences between pairs of full-color naturalistic scenes (To, Lovell, Troscianko, & Tolhurst, 2010). The model is based closely on V1 neurophysiology and has recently been updated to more realistically implement sequential application of nonlinear inhibitions (contrast normalization followed by surround suppression; To, Chirimuuta, & Tolhurst, 2017). The model is based originally on a reliable luminance model (Watson & Solomon, 1997) which we have extended to the red/green and blue/yellow opponent planes, assuming that the three planes (luminance, red/green, and blue/yellow) can be modeled similarly to each other with narrow-band oriented filters. This paper examines whether this may be a false assumption, by decomposing our original full-color stimulus images into monochromatic and isoluminant variants, which observers rate separately and which we model separately. The ratings for the original full-color scenes correlate better with the new ratings for the monochromatic variants than for the isoluminant ones, suggesting that luminance cues carry more weight in observers' ratings to full-color images. The ratings for the original full-color stimuli can be predicted from the new monochromatic and isoluminant rating data by combining them by Minkowski summation with power m = 2.71, consistent with other studies involving feature summation. The model performed well at predicting ratings for monochromatic stimuli, but was weaker for isoluminant stimuli, indicating that mirroring the monochromatic models is not sufficient to model the color planes. We discuss several alternative strategies to improve the color modeling.
AB - We have been developing a computational visual difference predictor model that can predict how human observers rate the perceived magnitude of suprathreshold differences between pairs of full-color naturalistic scenes (To, Lovell, Troscianko, & Tolhurst, 2010). The model is based closely on V1 neurophysiology and has recently been updated to more realistically implement sequential application of nonlinear inhibitions (contrast normalization followed by surround suppression; To, Chirimuuta, & Tolhurst, 2017). The model is based originally on a reliable luminance model (Watson & Solomon, 1997) which we have extended to the red/green and blue/yellow opponent planes, assuming that the three planes (luminance, red/green, and blue/yellow) can be modeled similarly to each other with narrow-band oriented filters. This paper examines whether this may be a false assumption, by decomposing our original full-color stimulus images into monochromatic and isoluminant variants, which observers rate separately and which we model separately. The ratings for the original full-color scenes correlate better with the new ratings for the monochromatic variants than for the isoluminant ones, suggesting that luminance cues carry more weight in observers' ratings to full-color images. The ratings for the original full-color stimuli can be predicted from the new monochromatic and isoluminant rating data by combining them by Minkowski summation with power m = 2.71, consistent with other studies involving feature summation. The model performed well at predicting ratings for monochromatic stimuli, but was weaker for isoluminant stimuli, indicating that mirroring the monochromatic models is not sufficient to model the color planes. We discuss several alternative strategies to improve the color modeling.
KW - Computational Modelling
KW - Visual Discrimination
KW - Natural Scenes
KW - Luminance
KW - Isoluminance
U2 - 10.1167/19.1.9
DO - 10.1167/19.1.9
M3 - Journal article
VL - 19
SP - 1
EP - 19
JO - Journal of Vision
JF - Journal of Vision
SN - 1534-7362
IS - 1
M1 - 9
ER -
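
As an illustrative sketch only (not the authors' code), the Minkowski summation described in the abstract, combining the monochromatic and isoluminant ratings with exponent m = 2.71 to predict the full-colour rating, could be expressed as below; the function name and the example rating values are hypothetical.

```python
def minkowski_sum(mono_rating: float, iso_rating: float, m: float = 2.71) -> float:
    """Combine single-plane difference ratings by Minkowski summation.

    Illustrative sketch of the combination rule described in the abstract:
    predicted full-colour rating = (mono^m + iso^m)^(1/m) with m = 2.71.
    """
    return (mono_rating ** m + iso_rating ** m) ** (1.0 / m)


# Example with made-up rating magnitudes on an arbitrary scale; the larger
# (luminance) term dominates the combined prediction, consistent with the
# abstract's observation that luminance cues carry more weight.
print(minkowski_sum(4.0, 2.5))
```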