Research outputs and teaching resources
The sections below list traditional (e.g., published articles) and non-traditional (e.g., software) research outputs and teaching resources that I have created.
Note that my focus from 2023 onwards is on providing computational and data-focused support across a variety of research domains. Before 2023, my research and teaching were primarily in an area of psychology called ‘perception’, which relates to our ability to gain information about the world through our senses (see this recording of a talk I gave for an overview of some of that research).
Software
-
pylater
- Mannion, D.J., Quiroga, M., Tescari, E., & Anderson, A.
- A Python library for working with LATER ('Linear Approach to Threshold with Ergodic Rate for Reaction Times') models using Bayesian methods in PyMC; an illustrative sketch of this kind of model appears after this list.
- GitHub
- Documentation
-
LATERmodel
- Quiroga, M., Mannion, D.J., Tescari, E., & Anderson, A.
- The LATERmodel R package is an open-source implementation of Roger Carpenter’s Linear Approach to Threshold with Ergodic Rate (LATER) model.
- GitHub
- Documentation
-
pympljstyle
- Mannion, D.J.
- A Python library for specifying and applying journal-specific styles to matplotlib figures; an illustrative sketch of the general approach appears after this list.
- GitHub
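As background to pylater: the standard LATER formulation treats promptness (the reciprocal of reaction time) as normally distributed across trials, with the mean and spread reflecting the rate of rise to a decision threshold. The sketch below is a minimal, illustrative PyMC model of that distribution fitted to simulated data; it is not the pylater API, and the variable names, priors, and data are assumptions for demonstration only.

```python
# Minimal sketch of a LATER-style model in PyMC (illustrative; not the pylater API).
import numpy as np
import pymc as pm

# Simulate stand-in reaction-time data (seconds): under the LATER model,
# promptness (1 / reaction time) is normally distributed across trials.
rng = np.random.default_rng(0)
promptness = rng.normal(loc=5.0, scale=1.0, size=200)
rt_seconds = 1.0 / promptness

with pm.Model() as later_model:
    mu = pm.Normal("mu", mu=5.0, sigma=2.0)        # mean rate of rise to threshold (assumed prior)
    sigma = pm.HalfNormal("sigma", sigma=2.0)      # trial-to-trial variability in rate (assumed prior)
    pm.Normal("obs", mu=mu, sigma=sigma, observed=1.0 / rt_seconds)
    idata = pm.sample()                            # draw posterior samples via MCMC
```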
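The general mechanism that pympljstyle automates can be approximated with matplotlib's own rcParams machinery: express journal requirements (figure width, font size, output resolution) as a parameter dictionary and apply it when plotting. The sketch below is illustrative only; the values are made up and are not taken from any particular journal or from pympljstyle itself.

```python
# Minimal sketch of applying a journal-like style via matplotlib rcParams (illustrative only).
import matplotlib.pyplot as plt

journal_style = {
    "figure.figsize": (3.5, 2.5),   # e.g., a single-column width, in inches
    "font.size": 8,
    "axes.linewidth": 0.5,
    "savefig.dpi": 600,
}

with plt.rc_context(rc=journal_style):
    fig, ax = plt.subplots()
    ax.plot([0, 1, 2], [0, 1, 4], marker="o")
    ax.set_xlabel("Condition")
    ax.set_ylabel("Response")
    fig.savefig("figure.png", bbox_inches="tight")
```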
Teaching resources
-
Interactive image Fourier analysis
- A website for exploring the frequency content of images and the impact of spatial frequency and orientation filtering operations; an illustrative sketch of this kind of filtering appears after this list.
-
Fundamentals of data and coding for psychology
- Set of lessons for advanced psychology undergraduates; taught at UNSW in 2019.
-
Veusz GUI walkthrough
- Video demonstrating the use of the software package Veusz to create and improve a visualisation; created to support research supervision at UNSW in 2019.
-
Colour vision
Neural processing
Motion and measuring perception
- Lessons with interactive demonstrations for psychology undergraduates; taught at UNSW in 2020.
-
Programming for Psychology in Python
Screencasts
- Set of lessons on Python programming for vision science, data analysis, and visualisation; supported teaching at UNSW in 2016.
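The kind of operation the interactive Fourier analysis site demonstrates can be sketched with NumPy: transform an image to the frequency domain, discard frequencies outside a chosen radius, and transform back to see the effect of low-pass filtering. This is an assumed illustration of the concept, not the site's own code; the image and cutoff radius are arbitrary stand-ins.

```python
# Minimal sketch of spatial-frequency filtering of an image (illustrative only).
import numpy as np

image = np.random.rand(256, 256)                 # stand-in for a greyscale image

spectrum = np.fft.fftshift(np.fft.fft2(image))   # centre the zero-frequency component

rows, cols = image.shape
y, x = np.ogrid[:rows, :cols]
radius = np.hypot(y - rows / 2, x - cols / 2)    # distance from the spectrum centre
spectrum[radius > 30] = 0                        # keep only low spatial frequencies

filtered = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
```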
Publications (first, co-senior, or senior author)
-
Peterson, L.M., Kersten, D.J., & Mannion, D.J. (2024) Estimating lighting direction in scenes with multiple objects. Attention, Perception, & Psychophysics, 86, 186–212.
[ Code, data, and computational platform ]
[ Article ]
Abstract
To recover the reflectance and shape of an object in a scene, the human visual system must account for the properties of the light (such as the direction) illuminating the object. Here, we examine the extent to which multiple objects within a scene are utilised to estimate the direction of lighting in a scene. In Experiment 1, we presented participants with rendered scenes that contained 1, 9, or 25 unfamiliar blob-like objects and measured their capacity to discriminate whether a directional light source was left or right of the participants' vantage point. Trends reported for ensemble perception suggest that the number of utilised objects—and, consequently, discrimination sensitivity—would increase with set size. However, we find little indication that increasing the number of objects in a scene increased discrimination sensitivity. In Experiment 2, an equivalent noise analysis was used to measure participants' internal noise and the number of objects used to judge the average light source direction in a scene, finding that participants relied on 1 or 2 objects to make their judgement regardless of whether 9 or 25 objects were present. In Experiment 3, participants completed a shape identification task that required an implicit judgement of light source direction, rather than an explicit judgement as in Experiment 1 and 2. We find that sensitivity for identifying surface shape was comparable for scenes containing 1, 9, and 25 objects. Our results suggest that the visual system relied on a small number of objects to estimate the direction of lighting in our rendered scenes.
-
Tsang, K.Y. & Mannion, D.J. (2022) Relating sight and sound in simulated environments. Multisensory Research, 35(7–8), 589–622.
[ Article ]
[ Video of experiment session ]
[ Code, data, and computational platform ]
Abstract
The auditory signals at the ear can be affected by components arriving both directly from a sound source and indirectly via environmental reverberation. Previous studies have suggested that the perceptual separation of these contributions can be aided by expectations of likely reverberant qualities. Here, we investigated whether vision can provide information about the auditory properties of physical locations that could also be used to develop such expectations. We presented participants with audiovisual stimuli derived from 10 simulated real-world locations via a head-mounted display (HMD; n=44) or a web-based (n=60) delivery method. On each trial, participants viewed a first-person perspective rendering of a location before hearing a spoken utterance that was convolved with an impulse response that was from a location that was either the same as (congruent) or different to (incongruent) the visually-depicted location. We find that audiovisual congruence was associated with an increase in the probability of participants reporting an audiovisual match of about 0.22 (95% credible interval: [0.17, 0.27]), and that participants were more likely to confuse audiovisual pairs as matching if their locations had similar reverberation times. Overall, this study suggests that human perceivers have a capacity to form expectations of reverberation from visual information. Such expectations may be useful for the perceptual challenge of separating sound sources and reverberation from within the signal available at the ear.
-
Libesman, S., Mannion, D.J., & Whitford, T.J. (2020) Seeing the intensity of a sound-producing event modulates the amplitude of the auditory evoked response. Journal of Cognitive Neuroscience, 32(3), 426–434.
[ Article ]
Abstract
An auditory event is often accompanied by characteristic visual information. For example, the sound level produced by a vigorous handclap may be related to the speed of hands as they move toward collision. Here, we tested the hypothesis that visual information about the intensity of auditory signals is capable of altering the subsequent neurophysiological response to auditory stimulation. To do this, we used EEG to measure the response of the human brain (n = 28) to the audiovisual delivery of handclaps. Depictions of a weak handclap were accompanied by auditory handclaps at low (65 dB) and intermediate (72.5 dB) sound levels, whereas depictions of a vigorous handclap were accompanied by auditory handclaps at intermediate (72.5 dB) and high (80 dB) sound levels. The dependent variable was the amplitude of the initial negative component (N1) of the auditory evoked potential. We find that identical clap sounds (intermediate level; 72.5 dB) elicited significantly lower N1 amplitudes when paired with a video of a weak clap, compared with when paired with a video of a vigorous clap. These results demonstrate that intensity predictions can affect the neural responses to auditory stimulation at very early stages (<100 msec) in sensory processing. Furthermore, the established sound-level dependence of auditory N1 amplitude suggests that such effects may serve the functional role of altering auditory responses in accordance with visual inferences. Thus, this study provides evidence that the neurally evoked response to an auditory event results from a combination of a person’s beliefs with incoming auditory input.
-
Peterson, L.M., Kersten, D.J., & Mannion, D.J. (2018) Surface curvature from kinetic depth can affect lightness. Journal of Experimental Psychology: Human Perception & Performance, 44(12), 1856–1864.
[ Article ]
[ PDF (author’s version) ]
Abstract
The light reaching the eye confounds the proportion of light reflected from surfaces in the environment with their illumination. To achieve constancy in perceived surface reflectance (lightness) across variations in illumination, the visual system must infer the relative contribution of reflectance to the incoming luminance signals. Previous studies have shown that contour and stereo cues to surface shape can affect the lightness of sawtooth luminance profiles. Here, we investigated whether cues to surface shape provided solely by motion (via the kinetic depth effect) can similarly influence lightness. Human observers judged the relative brightness of patches contained within abutting surfaces with identical luminance ramps. We found that the reported brightness differences were significantly lower when the kinetic depth effect supported the impression of curved surfaces, compared to similar conditions without the kinetic depth effect. This demonstrates the capacity of the visual system to use shape from motion to “explain away” alternative interpretations of luminance gradients, and supports the cue-invariance of the interaction between shape and lightness.
-
Mannion, D.J., Donkin, C., & Whitford, T.J. (2017) No apparent influence of schizotypy on orientation-dependent contextual modulation of visual contrast detection. PeerJ, 5, e2921. [ Article ]
Abstract
We investigated the relationship between psychometrically-defined schizotypy and the ability to detect a visual target pattern. Target detection is typically impaired by a surrounding pattern (context) with an orientation that is parallel to the target, relative to a surrounding pattern with an orientation that is orthogonal to the target (orientation-dependent contextual modulation). Based on reports that this effect is reduced in those with schizophrenia, we hypothesised that there would be a negative relationship between the relative score on psychometrically-defined schizotypy and the relative effect of orientation-dependent contextual modulation. We measured visual contrast detection thresholds and scores on the Oxford-Liverpool Inventory of Feelings and Experiences (O-LIFE) from a non-clinical sample (N = 100). Contrary to our hypothesis, we find an absence of a monotonic relationship between the relative magnitude of orientation-dependent contextual modulation of visual contrast detection and the relative score on any of the subscales of the O-LIFE. The apparent difference of this result with previous reports on those with schizophrenia suggests that orientation-dependent contextual modulation may be an informative condition in which schizophrenia and psychometrically-defined schizotypy are dissociated. However, further research is also required to clarify the strength of orientation-dependent contextual modulation in those with schizophrenia.
-
Mannion, D.J., Kersten, D.J., & Olman, C.A. (2015) Scene coherence can affect the local response to natural images in human V1. European Journal of Neuroscience, 42(11), 2895–2903.
[ Article ]
[ PDF (author’s version) ]
Abstract
Neurons in primary visual cortex (V1) can be indirectly affected by visual stimulation positioned outside their receptive fields. Although this contextual modulation has been intensely studied, we have little notion of how it manifests with naturalistic stimulation. Here, we investigated how the V1 response to a natural image fragment is affected by spatial context that is consistent or inconsistent with the scene from which it was extracted. Using functional magnetic resonance imaging at 7T, we measured the blood oxygen level-dependent signal in human V1 (n = 8) while participants viewed an array of apertures. Most apertures showed fragments from a single scene, yielding a dominant perceptual interpretation which participants were asked to categorize, and the remaining apertures each showed fragments drawn from a set of 20 scenes. We find that the V1 response was significantly increased for apertures showing image structure that was coherent with the dominant scene relative to the response to the same image structure when it was non-coherent. Additional analyses suggest that this effect was mostly evident for apertures in the periphery of the visual field, that it peaked towards the centre of the aperture, and that it peaked in the middle to superficial regions of the cortical grey matter. These findings suggest that knowledge of typical spatial relationships is embedded in the circuitry of contextual modulation. Such mechanisms, possibly augmented by contributions from attentional factors, serve to increase the local V1 activity under conditions of contextual consistency.
-
Mannion, D.J. (2015) Sensitivity to the visual field origin of natural image patches in human low-level visual cortex. PeerJ, 3, e1038.
[ Article ]
Abstract
Asymmetries in the response to visual patterns in the upper and lower visual fields (above and below the centre of gaze) have been associated with ecological factors relating to the structure of typical visual environments. Here, we investigated whether the content of the upper and lower visual field representations in low-level regions of human visual cortex are specialised for visual patterns that arise from the upper and lower visual fields in natural images. We presented image patches, drawn from above or below the centre of gaze of an observer navigating a natural environment, to either the upper or lower visual fields of human participants (n = 7) while we used functional magnetic resonance imaging (fMRI) to measure the magnitude of evoked activity in the visual areas V1, V2, and V3. We found a significant interaction between the presentation location (upper or lower visual field) and the image patch source location (above or below fixation); the responses to lower visual field presentation were significantly greater for image patches sourced from below than above fixation, while the responses in the upper visual field were not significantly different for image patches sourced from above and below fixation. This finding demonstrates an association between the representation of the lower visual field in human visual cortex and the structure of the visual input that is likely to be encountered below the centre of gaze.
-
Mannion, D.J., Kersten, D.J., & Olman, C.A. (2014) Regions of mid-level human visual cortex sensitive to the global coherence of local image patches. Journal of Cognitive Neuroscience, 26(8), 1764–1774.
[ Article ]
[ PDF (author’s version) ]
Abstract
The global structural arrangement and spatial layout of the visual environment must be derived from the integration of local signals represented in the lower tiers of the visual system. This interaction between the spatially local and global properties of visual stimulation underlies many of our visual capacities, and how this is achieved in the brain is a central question for visual and cognitive neuroscience. Here, we examine the sensitivity of regions of the posterior human brain to the global coordination of spatially displaced naturalistic image patches. We presented observers with image patches in two circular apertures to the left and right of central fixation, with the patches drawn from either the same (coherent condition) or different (noncoherent condition) extended image. Using fMRI at 7T (n = 5), we find that global coherence affected signal amplitude in regions of dorsal mid-level cortex. Furthermore, we find that extensive regions of mid-level visual cortex contained information in their local activity pattern that could discriminate coherent and noncoherent stimuli. These findings indicate that the global coordination of local naturalistic image information has important consequences for the processing in human mid-level visual cortex.
-
Mannion, D.J., Kersten, D.J., & Olman, C.A. (2013) Consequences of polar form coherence for fMRI responses in human visual cortex. NeuroImage, 78, 152–158.
[ Article ]
Abstract
Relevant features in the visual image are often spatially extensive and have complex orientation structure. Our perceptual sensitivity to such spatial form is demonstrated by polar Glass patterns, in which an array of randomly-positioned dot pairs that are each aligned with a particular polar displacement (rotation, for example) yield a salient impression of spatial structure. Such patterns are typically considered to be processed in two main stages: local spatial filtering in low-level visual cortex followed by spatial pooling and complex form selectivity in mid-level visual cortex. However, it remains unclear both whether reciprocal interactions within the cortical hierarchy are involved in polar Glass pattern processing and which mid-level areas identify and communicate polar Glass pattern structure. Here, we used functional magnetic resonance imaging (fMRI) at 7T to infer the magnitude of neural response within human low-level and mid-level visual cortex to polar Glass patterns of varying coherence (proportion of signal elements). The activity within low-level visual areas V1 and V2 was not significantly modulated by polar Glass pattern coherence, while the low-level area V3, dorsal and ventral mid-level areas, and the human MT complex each showed a positive linear coherence response function. The cortical processing of polar Glass patterns thus appears to involve primarily feedforward communication of local signals from V1 and V2, with initial polar form selectivity reached in V3 and distributed to multiple pathways in mid-level visual cortex.
-
Mannion, D.J. & Clifford, C.W.G. (2011) Cortical and behavioral sensitivity to eccentric polar form. Journal of Vision, 11(6):17, 1–9.
[ Article ]
Abstract
Patterns composed of local features aligned relative to polar angle, yielding starbursts, concentric circles, and spirals, can inform the understanding of spatial form perception. Previous studies have shown that starburst and concentric form instantiated in Glass patterns are, relative to spirals, both more readily detected in noise and evoke higher levels of blood-oxygen level-dependent (BOLD) signal, as measured with functional magnetic resonance imaging (fMRI), in the retinotopic cortex. However, such studies have typically presented the polar form at the center of gaze, which confounds the distribution of local orientations relative to fixation with variations in polar form. Here, we measure psychophysical detection thresholds and evoked BOLD signal to Glass patterns of varying polar orientation centered at eccentricity. We find an enhanced behavioral sensitivity to starburst and concentric form, consistent with previous studies. While visual areas V1, V2, V3, V3A/B, and hV4 showed elevated levels of BOLD activity to concentric patterns, V1 and V2 showed little to none of the increased activity to starburst patterns evident in areas V3, V3A/B, and hV4. Such findings demonstrate the anisotropic response of the human visual system to variations in polar form independent of variations in local orientation distributions.
-
Mannion, D.J., McDonald, J.S., & Clifford, C.W.G. (2010) The influence of global form on local orientation anisotropies in human visual cortex. NeuroImage, 52(2), 600–605.
[ Article ]
[ PDF (author’s version) ]
Abstract
Perception of the spatial structure of the environment results from visual system processes which integrate local information to produce global percepts. Here, we investigated whether particular global spatial arrangements evoke greater responses in the human visual system, and how such anisotropies relate to those evident in the responses to the local elements that comprise the global form. We presented observers with Glass patterns; images composed of randomly positioned dot pairings (dipoles) spatially arranged to produce a percept of translational or polar global form. We used functional magnetic resonance imaging (fMRI) to infer the magnitude of neural activity within early retinotopic regions of visual cortex (V1, V2, V3, V3A/B, and hV4) while the angular arrangement of the dipoles was modulated over time to sample the range of orientations. For both translational and polar Glass patterns, V1 showed an increased response to vertical dipole orientations and all visual areas showed a bias towards dipole orientations that were radial to the point of fixation. However, areas V1, V2, V3, and hV4 also demonstrated a bias, only present for polar Glass patterns, towards dipole orientations that were tangential to the point of fixation. This enhanced response to tangential orientations within polar form indicates sensitivity to curvature or more global form characteristics as early as primary visual cortex.
-
Mannion, D.J., McDonald, J.S., & Clifford, C.W.G. (2010) Orientation anisotropies in human visual cortex. Journal of Neurophysiology, 103(6), 3465–3471.
[ Article ]
Abstract
Representing the orientation of features in the visual image is a fundamental operation of the early cortical visual system. The nature of such representations can be informed by considering anisotropic distributions of response across the range of orientations. Here we used functional MRI to study modulations in the cortical activity elicited by observation of a sinusoidal grating that varied in orientation. We report a significant anisotropy in the measured blood-oxygen level-dependent activity within visual areas V1, V2, V3, and V3A/B in which horizontal orientations evoked a reduced response. These visual areas and hV4 showed a further anisotropy in which increased responses were observed for orientations that were radial to the point of fixation. We speculate that the anisotropies in cortical activity may be related to anisotropies in the prevalence and behavioral relevance of orientations in typical natural environments.
-
Mannion, D.J., McDonald, J.S., & Clifford, C.W.G. (2009) Discrimination of the local orientation structure of spiral Glass patterns early in human visual cortex. NeuroImage, 46(2), 511–515.
[ Article ]
[ PDF (author’s version) ]
Abstract
The local orientation structure of a visual image is fundamental to the perception of spatial form. Reports of reliable orientation-selective modulations in the pattern of fMRI activity have demonstrated the potential for investigating the representation of orientation in the human visual cortex. Orientation-selective voxel responses could arise from anisotropies in the preferred orientations of pooled neurons due to the random sampling of the cortical surface. However, it is unclear whether orientation-selective voxel responses reflect biases in the underlying distribution of neuronal orientation preference, such as the demonstrated over-representation of radial orientations (those collinear with fixation). Here, we investigated whether stimuli balanced in their radial components could evoke orientation-selective biases in voxel activity. We attempted to discriminate the sense of spiral Glass patterns (opening anti-clockwise or clockwise), in which the local orientation structure was defined by the placement of paired dots at an orientation offset from the radial. We found that information within the spatial pattern of fMRI responses in each of V1, V2, V3, and V3A/B allowed discrimination of the spiral sense with accuracies significantly above chance. This result demonstrates that orientation-selective voxel responses can arise without the influence of a radial bias. Furthermore, the finding indicates the importance of the early visual areas in representing the local orientation structure for the perception of complex spatial form.
Publications (contributing author)
-
Kruse, A.W., Mannion, D.J., Alenin, A., & Tyo, J.S. (2022) User study comparing linearity and orthogonalization for polarimetric visualizations. IEEE Access, 10, 28308–28321.
[ Article ]
Abstract
Traditionally, polarimetric imaging data is visualized by mapping angle of polarization, degree of polarization, and intensity to hue, saturation, and value coordinates of HSV color space. Due to possible perceptual uniformity issues in HSV, a method based on CAM02-UCS color space has been recently proposed. In this user study, the perceptual uniformity and nonlinear bias of the encoding of the degree of polarization parameter into the chromatic magnitude color channel is modeled by a power-law relationship between stimulus and scale level, and is estimated from responses to paired 2-alternative forced choice questions using Maximum Likelihood Difference Scaling. Estimated exponent and noise parameters for these methods are compared for same-hue and different-hue conditions to determine whether the chromatic magnitude channel can be used to orthogonally encode data parameters independently from the hue channel. Overall, the HSV condition displayed more nonuniformity, more nonlinear bias, and more non-orthogonality than the UCS condition. The results here indicate a lower bound for differences between methods since the intensity was chosen for the “best case” of HSV. These results further support the claim that the chromatic magnitude color channel of a uniform color space can be used to encode a data parameter independently of the hue channel in a multivariate colormapping visualization.
-
Harrison, A.W., Mannion, D.J., Jack, B.N., Griffiths, O., Hughes, G., & Whitford, T.J. (2021) Sensory attenuation is modulated by the contrasting effects of predictability and control. NeuroImage, 237:118103.
[ Article ]
Abstract
Self-generated stimuli have been found to elicit a reduced sensory response compared with externally-generated stimuli. However, much of the literature has not adequately controlled for differences in the temporal predictability and temporal control of stimuli. In two experiments, we compared the N1 (and P2) components of the auditory-evoked potential to self- and externally-generated tones that differed with respect to these two factors. In Experiment 1 (n = 42), we found that increasing temporal predictability reduced N1 amplitude in a manner that may often account for the observed reduction in sensory response to self-generated sounds. We also observed that reducing temporal control over the tones resulted in a reduction in N1 amplitude. The contrasting effects of temporal predictability and temporal control on N1 amplitude meant that sensory attenuation prevailed when controlling for each. Experiment 2 (n = 38) explored the potential effect of selective attention on the results of Experiment 1 by modifying task requirements such that similar levels of attention were allocated to the visual stimuli across conditions. The results of Experiment 2 replicated those of Experiment 1, and suggested that the observed effects of temporal control and sensory attenuation were not driven by differences in attention. Given that self- and externally-generated sensations commonly differ with respect to both temporal predictability and temporal control, findings of the present study may necessitate a re-evaluation of the experimental paradigms used to study sensory attenuation.
-
Patten, M.L., Mannion, D.J., & Clifford, C.W.G. (2017) Correlates of perceptual orientation biases in human primary visual cortex. Journal of Neuroscience, 37(18), 4744–4750.
[ Article ]
Abstract
Vision can be considered as a process of probabilistic inference. In a Bayesian framework, perceptual estimates from sensory information are combined with prior knowledge, with a stronger influence of the prior when the sensory evidence is less certain. Here, we explored the behavioral and neural consequences of manipulating stimulus certainty in the context of orientation processing. First, we asked participants to judge whether a stimulus was oriented closer to vertical or the clockwise primary oblique (45°) for two stimulus types (spatially filtered noise textures and sinusoidal gratings) and three manipulations of certainty (orientation bandwidth, contrast, and duration). We found that participants consistently had a bias toward reporting orientation as closer to 45° during conditions of high certainty and that this bias was reduced when sensory evidence was less certain. Second, we measured event-related fMRI BOLD responses in human primary visual cortex (V1) and manipulated certainty via stimulus contrast (100% vs 3%). We then trained a multivariate classifier on the pattern of responses in V1 to cardinal and primary oblique orientations. We found that the classifier showed a bias toward classifying orientation as oblique at high contrast but categorized a wider range of orientations as cardinal for low-contrast stimuli. Orientation classification based on data from V1 thus paralleled the perceptual biases revealed through the behavioral experiments. This pattern of bias cannot be explained simply by a prior for cardinal orientations.
-
Whitford, T.J., Mitchell, A., & Mannion, D.J. (2017) The ability to tickle oneself is associated with level of psychometric schizotypy in non-clinical individuals. Consciousness and Cognition, 52, 93–103.
[ Article ]
Abstract
A recent study (Lemaitre et al., 2016, Consciousness and Cognition, 41, 64-71) found that non-clinical individuals who scored highly on a psychometric scale of schizotypy were able to tickle themselves. The present study aimed to extend this finding by investigating whether the ability to tickle oneself was associated with level of psychometric schizotypy considered as a continuous variable. One hundred and eleven students completed the Schizotypal Personality Questionnaire (SPQ). A mechanical device delivered tactile stimulation to participants’ palms. The device was operated by the experimenter (External) or the participant (Self). Participants were asked to rate the intensity, ticklishness and pleasantness of the stimulation. A significant association was observed between participants’ tactile self-suppression (External minus Self) and their score on the SPQ. These results suggest that the ability to suppress the tactile consequences of self-generated movements varies across the general population, and maps directly onto the personality dimension of schizotypy.
-
Kam, T.-E., Mannion, D.J., Lee, S.-W., Doerschner, K., & Kersten, D.J. (2015) Human visual cortical responses to specular and matte motion flows. Frontiers in Human Neuroscience, 9:579.
[ Article ]
Abstract
Determining the compositional properties of surfaces in the environment is an important visual capacity. One such property is specular reflectance, which encompasses the range from matte to shiny surfaces. Visual estimation of specular reflectance can be informed by characteristic motion profiles; a surface with a specular reflectance that is difficult to determine while static can be confidently disambiguated when set in motion. Here, we used fMRI to trace the sensitivity of human visual cortex to such motion cues, both with and without photometric cues to specular reflectance. Participants viewed rotating blob-like objects that were rendered as images (photometric) or dots (kinematic) with either matte-consistent or shiny-consistent specular reflectance profiles. We were unable to identify any areas in low and mid-level human visual cortex that responded preferentially to surface specular reflectance from motion. However, univariate and multivariate analyses identified several visual areas (V1, V2, V3, V3A/B, and hMT+) capable of differentiating shiny from matte surface flows. These results indicate that the machinery for extracting kinematic cues is present in human visual cortex, but the areas involved in integrating such information with the photometric cues necessary for surface specular reflectance remain unclear.
-
Matthews, N.L., Todd, J., Mannion, D.J., Finnigan, S., Catts, S., & Michie, P.T. (2013) Impaired processing of binaural temporal cues to auditory scene analysis in schizophrenia. Schizophrenia Research, 146(1–3), 344–348.
[ Article ]
Abstract
It is well established that individuals with schizophrenia demonstrate alterations in auditory perception beginning at the very earliest stages of information processing. However, it is not clear how these impairments in basic information processing translate into high-order cognitive deficits. Auditory scene analysis allows listeners to group auditory information into meaningful objects, and as such provides an important link between low-level auditory processing and higher cognitive abilities. In the present study we investigated whether low-level impairments in the processing of binaural temporal information impact upon auditory scene analysis ability. Binaural temporal processing ability was investigated in 19 individuals with schizophrenia and 19 matched controls. Individuals with schizophrenia showed impaired binaural temporal processing ability on an inter-aural time difference (ITD) discrimination task. In addition, patients demonstrated impairment in two measures of auditory scene analysis. Specifically, patients had reduced ability to use binaural temporal cues to extract signal from noise in a masking level difference paradigm, and to separate the location of a source sound in the presence of an echo in the precedence effect paradigm. These findings demonstrate that individuals with schizophrenia have impairments in the accuracy with which simple binaural temporal information is encoded in the auditory system, and furthermore, this impairment has functional consequences in terms of the use of these cues to extract information in complex auditory environments.
-
McDonald, J.S., Mannion, D.J., & Clifford, C.W.G. (2012) Gain control in the response of human visual cortex to plaids. Journal of Neurophysiology, 107(9), 2570–2580.
[ Article ]
Abstract
A recent intrinsic signal optical imaging study in tree shrew showed, surprisingly, that the population response of V1 to plaid patterns comprising grating components of equal contrast is predicted by the average of the responses to the individual components (MacEvoy SP, Tucker TR, Fitzpatrick D. Nat Neurosci 12: 637-645, 2009). This prompted us to compare responses to plaids and gratings in human visual cortex as a function of contrast and orientation. We found that the functional MRI (fMRI) blood oxygenation level-dependent (BOLD) responses of areas V1–V3 to a plaid comprising superposed grating components of equal contrast are significantly higher than the responses to a single component. Furthermore, the orientation response profile of a plaid is poorly predicted from a linear combination of the responses to its components. Together, these results indicate that the model of MacEvoy et al. (2009) cannot, without modification, account for the fMRI BOLD response to plaids in human visual cortex.
-
Goddard, E., Mannion, D.J., McDonald, J.S., Solomon, S.G., & Clifford, C.W.G. (2011) Colour responsiveness argues against a dorsal component of human V4. Journal of Vision, 11(4):3, 1–21.
[ Article ]
Abstract
The retinotopic organization, position, and functional responsiveness of some early visual cortical areas in human and non-human primates are consistent with their being homologous structures. The organization of other areas remains controversial. A critical debate concerns the potential human homologue of macaque area V4, an area very responsive to colored images: specifically, whether human V4 is divided between ventral and dorsal components, as in the macaque, or whether human V4 is confined to one ventral area. We used fMRI to define these areas retinotopically in human and to test the impact of image color on their responsivity. We found a robust preference for full-color movie segments over a luminance-matched achromatic version in ventral V4 but little or no preference in the vicinity of the putative dorsal counterpart. Contrary to previous reports that visual field coverage in the ventral part of V4 is deficient without the dorsal part, we found that coverage in ventral V4 extended to the lower vertical meridian, including the entire contralateral hemifield. Together these results provide evidence against a dorsal component of human V4. Instead, they are consistent with human V4 being a single, ventral region that is sensitive to the chromatic components of images.
-
McDonald, J.S., Mannion, D.J., Goddard, E., & Clifford, C.W.G. (2010) Orientation-selective chromatic mechanisms in human visual cortex. Journal of Vision, 10(12):34, 1–12.
[ Article ]
Abstract
We used functional magnetic resonance imaging (fMRI) at 3T in human participants to trace the chromatic selectivity of orientation processing through functionally defined regions of visual cortex. Our aim was to identify mechanisms that respond to chromatically defined orientation and to establish whether they are tuned specifically to color or operate in an essentially cue-invariant manner. Using an annular test region surrounded inside and out by an inducing stimulus, we found evidence of sensitivity to orientation defined by red-green (L − M) or blue-yellow (S-cone isolating) chromatic modulations across retinotopic visual cortex and of joint selectivity for color and orientation. The likely mechanisms underlying this selectivity are discussed in terms of orientation-specific lateral interactions and spatial summation within the receptive field.
-
Goddard, E., Mannion, D.J., McDonald, J.S., Solomon, S.G., & Clifford, C.W.G. (2010) Combination of subcortical color channels in human visual cortex. Journal of Vision, 10(5):25, 1–17.
[ Article ]
Abstract
Mechanisms of color vision in cortex have not been as well characterized as those in sub-cortical areas, particularly in humans. We used fMRI in conjunction with univariate and multivariate (pattern) analysis to test for the initial transformation of sub-cortical inputs by human visual cortex. Subjects viewed each of two patterns modulating in color between orange-cyan or lime-magenta. We tested for higher order cortical representations of color capable of discriminating these stimuli, which were designed so that they could not be distinguished by the postulated L − M and S − (L + M) sub-cortical opponent channels. We found differences both in the average response and in the pattern of activity evoked by these two types of stimuli, across a range of early visual areas. This result implies that sub-cortical chromatic channels are recombined early in cortical processing to form novel representations of color. Our results also suggest a cortical bias for lime-magenta over orange-cyan stimuli, when they are matched for cone contrast and the response they would elicit in the L − M and S − (L + M) opponent channels.
-
Clifford, C.W.G., Mannion, D.J., & McDonald, J.S. (2009) Radial biases in the processing of motion and motion-defined contours by human visual cortex. Journal of Neurophysiology, 102(5), 2974–2981.
[ Article ]
Abstract
Luminance gratings reportedly produce a stronger functional magnetic resonance imaging (fMRI) blood oxygen level-dependent (BOLD) signal in those parts of the retinotopic cortical maps where they are oriented radially to the point of fixation. We sought to extend this finding by examining anisotropies in the response of cortical areas V1–V3 to motion-defined contour stimuli. fMRI at 3 Tesla was used to measure the BOLD signal in the visual cortex of six human subjects. Stimuli were composed of strips of spatial white noise texture presented in an annular window. The texture in alternate strips moved in opposite directions (left-right or up-down). The strips themselves were static and tilted 45 degrees left or right from vertical. Comparison with maps of the visual field obtained from phase-encoded retinotopic analysis revealed systematic patterns of radial bias. For motion, a stronger response to horizontal was evident within V1 and along the borders between V2 and V3. For orientation, the response to leftward tilted contours was greater in left dorsal and right ventral V1–V3. Radial bias for the orientation of motion-defined contours analogous to that reported previously for luminance gratings could reflect cue-invariant processing or the operation of distinct mechanisms subject to similar anisotropies in orientation tuning. Radial bias for motion might be related to the phenomenon of “motion streaks,” whereby temporal integration by the visual system introduces oriented blur along the axis of motion. We speculate that the observed forms of radial bias reflect a common underlying anisotropy in the representation of spatiotemporal image structure across the visual field.
Commentaries
-
Clifford, C.W.G. & Mannion, D.J. (2015) Orientation decoding: Sense in spirals? NeuroImage, 110, 219–222.
[ Article ]
Abstract
The orientation of a visual stimulus can be successfully decoded from the multivariate pattern of fMRI activity in human visual cortex. Whether this capacity requires coarse-scale orientation biases is controversial. We and others have advocated the use of spiral stimuli to eliminate a potential coarse-scale bias (the radial bias toward local orientations that are collinear with the centre of gaze) and hence narrow down the potential coarse-scale biases that could contribute to orientation decoding. The usefulness of this strategy is challenged by the computational simulations of Carlson (2014), who reported the ability to successfully decode spirals of opposite sense (opening clockwise or counter-clockwise) from the pooled output of purportedly unbiased orientation filters. Here, we elaborate the mathematical relationship between spirals of opposite sense to confirm that they cannot be discriminated on the basis of the pooled output of unbiased or radially biased orientation filters. We then demonstrate that Carlson’s (2014) reported decoding ability is consistent with the presence of inadvertent biases in the set of orientation filters; biases introduced by their digital implementation and unrelated to the brain’s processing of orientation. These analyses demonstrate that spirals must be processed with an orientation bias other than the radial bias for successful decoding of spiral sense.
- Mannion, D.J. (2012) Navigating the vision science Internet. i-Perception, 3(7), 413. [ Article ]
- Clifford, C.W.G., Mannion, D.J., Seymour, K.J., McDonald, J.S., & Bartels, A. (2011) Are coarse-scale orientation maps really necessary for orientation decoding? E-letter to Journal of Neuroscience. [ Article ]
Unpublished manuscripts
These are manuscripts that have not been peer reviewed, are awaiting peer review, or have been peer reviewed but not accepted for publication.
-
Turnbull, R., Mannion, D.J., Wells, J., Manandhar Shrestha, K., Balogh, A., & Runting, R. (2024) Themeda: Predicting land cover change using deep learning.
[ Preprint ]
[ Code (model) ]
[ Code (pre-processing) ]
[ Data ]
Abstract
Accurately predicting land cover vegetation change could inform a broad range of proactive land management actions. Artificial intelligence, and deep learning in particular, is a promising tool for predicting land cover change because it is highly flexible and can learn complex relationships. This paper introduces Themeda, a neural network model that generates probability distributions of predicted future land cover based on multiple data sources. Our study leverages 22 years of remotely sensed land cover data for the world's largest intact savanna region, complemented by a large variety of spatial and temporal data including rainfall, maximum temperature, elevation, soils, land use, and fire scars. Our spatial cross-validation shows that Themeda achieves 93.4% pixel-wise accuracy in predicting FAO Level 3 land cover labels, improving on the baseline persistence model of 88.36%. The model also closely predicts the distribution of land cover classes for 4000 m x 4000 m regions, giving KL divergence values of 1.64e-03. Results on the test set of years not seen in training also improve on the baseline. Our pre-processing pipeline and prediction model carry considerable potential to be applied to land cover predictions in other regions, and the model's probabilistic land cover predictions could help improve cellular automata and integrated assessment models of land use and land management.
-
Sun, H.-C. & Mannion, D.J. (2021) A comparison of the apparent gloss of rendered objects presented in the lower and upper regions of the visual field.
[ Preprint ]
Abstract
Gloss is an aspect of surface perception that is important for understanding the material properties of the environment. Because a surface can stimulate any region of the visual field during natural viewing, it is of interest to measure the potential influence of visual field asymmetries on perceived gloss—as such asymmetries could make the perception of gloss dependent on the visual field location. Here, our aim was to compare the apparent glossiness of renderings of nondescript objects when positioned in the lower and upper regions of the visual field. In Experiment 1, participants (n=20) evaluated the glossiness of objects presented simultaneously below and above central fixation. Estimates of the specular reflectance required for perceptual gloss equality indicated little effect of the visual field location. In Experiment 2, participants (n=19) compared the magnitude of gloss differences across two pairs of objects in either the lower or the upper visual field. Estimates of the exponent relating specular reflectance to a gloss difference scale and a noise parameter again indicated little effect of the visual field location. Overall, these estimates are consistent with the existence of a high degree of gloss constancy across presentations in the lower and upper visual fields.
-
Libesman, S., Whitford, T.J., & Mannion, D.J. (2018) Loudness judgements are not necessarily affected by visual cues to sound source distance.
[ Preprint ]
Abstract
The level of the auditory signals at the ear depends both on the capacity of the sound source to produce acoustic energy and on the distance of the source from the listener. Loudness constancy requires that our perception of sound level, loudness, corresponds to the source level by remaining invariant to the confounding effects of distance. Here, we assessed the evidence for a potential contribution of vision, via the disambiguation of sound source distance, to loudness constancy. We presented participants with a visual environment, on a computer monitor, which contained a visible loudspeaker at a particular distance and was accompanied by the auditory delivery, via headphones, of an anechoic sound of a particular aural level. We measured the point of subjective loudness equality for sounds associated with loudspeakers at different visually-depicted distances. We report strong evidence that such loudness judgements were closely aligned with the aural level, rather than being affected by the apparent distance of the sound source conveyed visually. Similar results were obtained across variations in sound and environment characteristics. We conclude that the loudness of anechoic sounds is not necessarily affected by indications of the sound source distance as established via vision.