
Seeing and Feeling Motion: Canonical Computations in Vision and Touch

Abstract

While the different sensory modalities are sensitive to different stimulus energies, they are often charged with extracting analogous information about the environment. Neural systems may thus have evolved to implement similar algorithms across modalities to extract behaviorally relevant stimulus information, leading to the notion of a canonical computation. In both vision and touch, information about motion is extracted from a spatiotemporal pattern of activation across a sensory sheet (in the retina and in the skin, respectively), a process that has been extensively studied in both modalities. In this essay, we examine the processing of motion information as it ascends the primate visual and somatosensory neuraxes and conclude that similar computations are implemented in the two sensory systems.

Introduction

The nervous systems of humans and other mammals contain sensory receptors that differ in their sensitivities to different categories of stimuli. In touch, mechanoreceptors embedded in the skin respond to physical deformations of the skin; in vision, photoreceptors in the retina respond to light (Fig 1A and 1B). Although the brain modules for processing different types of inputs are largely distinct, the internal organization of these modules is surprisingly similar. In particular, sensory areas exhibit a topographic organization [1], wherein nearby neurons respond to similar stimulus features. This organization is columnar in the sense that, while neuronal response properties differ along a direction parallel to the cortical surface, they tend to be similar along the perpendicular direction [1]. In mammals, columns span the six layers of neocortex, and the connectivity within and between these layers is similar in most brain regions. These commonalities have led to the notion of a canonical circuit [2,3] that implements canonical computations. In this conception, cortical networks devoted to different sensory modalities differ only in the peripheral receptors that provide them with input and are otherwise identical or at least highly similar [4,5].

Fig 1. (A) Eye with slice of the retina. (B) Fingertip skin with a representation of a Meissner corpuscle (top), Pacinian corpuscle (middle), and Merkel receptor (bottom). (C) Cuneate nucleus (highlighted in purple). (D) Thalamus. Purple: ventral posterior nucleus; orange: lateral geniculate nucleus. (E) Purple: S1 (including Brodmann’s areas 3b, 1, and 2). Orange: V1. Peach: MT. Image credit: Kenzie Green.

https://doi.org/10.1371/journal.pbio.1002271.g001

This is a powerful idea: to the extent that neural circuits perform canonical functions, we may be closer to understanding the brain than we realize. That is, some of the more complex functions performed by sensory systems—face recognition or texture identification, for example—might reflect relatively simple computations, iterated over multiple stages of neural processing in different modalities. Although this idea was proposed long ago on physiological [6,7] and theoretical [8] grounds, there has been little progress in testing it over the ensuing decades [9].

In this essay, we compare sensory processing in vision and touch to assess the degree to which analogous mechanisms are implemented in these modalities to solve analogous problems. To this end, we exploit recent developments that have led to algorithmic descriptions of a key function carried out by both systems, namely the processing of stimulus motion. The development of quantitative models of motion processing has yielded a reasonably clear picture of the computations carried out by the cortex in vision [10,11] and in touch [12]. Moreover, recent advances in statistical modeling have opened up new approaches to identifying and comparing neural computations in high-level sensory structures [13].

We suggest that the brain regions devoted to vision and touch, despite receiving fundamentally different physical inputs, implement many of the same processing strategies. We propose that the identification of canonical computations can be used as a starting point for the development of a quantitative understanding of other brain regions. Such a convergence of ideas has important implications for both basic and applied neuroscience [14].

Motion Processing in the Periphery and Thalamus

A moving object is one that changes position over time within some reference frame. When the reference frame is a receptor surface, the job of the nervous system is to estimate the object’s velocity from the outputs of peripheral receptors. In species such as mice and rabbits, strong velocity selectivity is found in the outputs of individual neurons in the sensory periphery [15]. In contrast, while some direction tuning is observed at the visual [16] and somatosensory periphery [17,18] under some circumstances, it tends to be much weaker than that observed in cortex (Fig 2) [16,19]. This suggests that, in primates, estimates of stimulus velocity are computed more centrally from peripheral signals.

Fig 2. The responses of somatosensory neurons to bars scanned across the skin.

(A) The responses of slowly adapting Type 1 (SA1) afferents are relatively insensitive to scanning direction. (B) The responses of a subpopulation of neurons in S1 are strongly tuned for scanning direction [19].

https://doi.org/10.1371/journal.pbio.1002271.g002

In the primate retina, at least two populations of neurons contribute to motion processing (Table 1). Magnocellular neurons have relatively large receptive fields and respond best to transient stimulus events [20], whereas parvocellular neurons have smaller receptive fields and respond well to slow motion [20]. Similarly, rapidly adapting (RA) and Pacinian (PC) afferents in touch (which innervate Meissner and Pacinian corpuscles, respectively) have larger receptive fields (RFs) and respond to rapid skin deflections, whereas slowly adapting Type 1 (SA1) afferents (associated with Merkel receptors) have small RFs and respond well to slow-moving or stationary stimuli (Fig 1B) [21–23]. Thus, individual afferent classes in both vision and touch exhibit different selectivities for temporal structure in the stimulus. These neurons also have receptive fields that are quite small (with the exception of PC afferents in touch), indicating that they can also signal the position of a stimulus with high accuracy. Selectivity for the temporal and spatial structure of the stimulus is, however, insufficient to establish velocity selectivity, which entails a neural preference for motion in some directions over others, as well as tuning for a specific range of speeds.

Table 1. Types of peripheral receptors that contribute to motion processing.

https://doi.org/10.1371/journal.pbio.1002271.t001

Inputs from visual and somatosensory afferents are relayed to cortex via thalamic nuclei. In touch, there is an intervening synapse in the cuneate nucleus, where cutaneous signals may be processed to more closely match those of retinal ganglion cells, a hypothesis that has yet to be formally tested (Fig 1C and 1D). In the visual system, the lateral geniculate nucleus (LGN) provides the main relay from the periphery to the primary sensory cortex, while the analogous structure for touch is the ventral posterior nucleus (VPN). Although modest directional biases have been observed in the responses of individual LGN neurons to visual motion [24], strong direction selectivity is effectively absent in this thalamic nucleus. The same is likely true for VPN neurons, although this has not been systematically investigated.

Spatiotemporal Processing of Inputs in Primary Visual and Somatosensory Cortices

In primates, robust neuronal selectivity for the direction of visual motion first appears in the primary visual cortex (area V1, Fig 1E). Although different studies have applied different criteria for classifying a cell as direction selective, the typical finding is that roughly 15%–30% of V1 neurons exhibit this property [25–27]. Similarly, robust direction selectivity is found in about 30% of neurons in Brodmann’s area 3b (Fig 1E, Fig 2B) [19], which, along with area 3a, forms the primary somatosensory cortex proper.

Many models have been proposed to account for the emergence of direction selectivity in the primary visual cortex. From a theoretical perspective, the problem is to integrate the outputs of thalamic neurons in such a way as to derive selectivity for motion direction and speed. This approach is conceptually identical to that of the Hubel and Wiesel model of orientation selectivity, with velocity simply being orientation in space-time (Fig 3) [28]. Examination of the structure of spatiotemporal receptive fields in visual and somatosensory cortices thus provides a critical comparison of computation in the two modalities.
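To make the space-time picture concrete, the standard frequency-domain statement of the idea (following the motion energy formulation of [28]) is sketched below in one spatial dimension; nothing in it is specific to vision or touch:

```latex
% A pattern translating rigidly at velocity v along one spatial dimension:
s(x, t) = s_0(x - v t)
% Its Fourier transform over space and time (up to a constant factor):
\hat{s}(\omega_x, \omega_t) = \hat{s}_0(\omega_x)\,\delta(\omega_t + v\,\omega_x)
% All stimulus energy therefore lies on the line \omega_t = -v\,\omega_x,
% so a linear receptive field whose spatiotemporal passband is oriented
% along that line is, by construction, tuned to speed and direction v.
```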

Fig 3. Models of elementary motion selectivity.

(A) The space-time slant model. In this model, direction selectivity is due to the spatiotemporal preferences of the excitatory inputs (red). That is, the outputs of neurons with different receptive field positions and response latencies combine to produce a stronger response for rightward than for leftward motion. (B) The inhibitory “veto” model. In this conception, a moving stimulus activates a suppressive input (blue) if it passes through a certain position in space. Because this suppression is delayed, it arrives simultaneously with the input from an excitatory input (red). The suppression and excitation cancel out, effectively yielding no response to motion in the nonpreferred (here, leftward) direction.

https://doi.org/10.1371/journal.pbio.1002271.g003

Early studies in the visual system revealed two mechanisms that were consistently associated with direction selectivity. The first is a facilitation of a neuron’s response to a stimulus at one spatial position by the previous appearance of another stimulus at a nearby position (Fig 3A). In this scenario, direction selectivity results from an interaction between two or more excitatory inputs, and the preferred direction is determined by the relative positions of the receptive fields of these inputs [28]. The second is a suppression of the response to a stimulus at one position by a stimulus at a different position (Fig 3B). In this case, direction selectivity results from a synaptic mechanism that effectively vetoes responses in the nonpreferred direction [15].

Excitatory receptive field interactions can arise simply from afferent inputs that exhibit different response latencies at different spatial positions [29]. Specifically, integration over the outputs of afferents with suitable spatial positions and response latencies can yield receptive fields that exhibit orientation in space-time (Fig 3A), the angle of which reflects the preferred velocity [28]. Receptive fields with excitatory space-time orientation are found in both V1 [30] and S1 [31].
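The following toy simulation, with invented receptive field positions and latencies, illustrates how such staggered excitatory inputs alone can produce a directional response; it is a minimal sketch of the principle, not a model of any particular neuron:

```python
import numpy as np

n_pos, n_t = 40, 60            # space (positions) and time (ms) grids
positions = [10, 15, 20]       # receptive field centers of three excitatory inputs
latencies = [10, 5, 0]         # response delays (ms): leftmost input is slowest

def moving_pulse(direction):
    """A pulse sweeping across the sensory sheet at 1 position/ms."""
    stim = np.zeros((n_t, n_pos))
    for t in range(n_t):
        x = t if direction == "right" else n_pos - 1 - t
        if 0 <= x < n_pos:
            stim[t, x] = 1.0
    return stim

def peak_drive(stim):
    """Sum the three inputs after their latencies; return the peak drive."""
    drive = np.zeros(n_t)
    for x0, lag in zip(positions, latencies):
        delayed = np.roll(stim[:, x0], lag)   # latency shifts the input in time
        delayed[:lag] = 0.0                   # discard wrap-around
        drive += delayed
    return drive.max()

print("rightward:", peak_drive(moving_pulse("right")))  # inputs coincide -> 3.0
print("leftward: ", peak_drive(moving_pulse("left")))   # inputs dispersed -> 1.0
```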

Evidence for the suppressive mechanism in primate V1 comes from physiological studies that show that the response to a flashed stimulus is reduced when it is preceded by another flashed stimulus at a spatially offset location [32]. The mechanism responsible for this property is a suppressive input that arrives at the neuron with some delay relative to the excitatory inputs. The spatial arrangement of stimuli that generate this interaction is generally consistent with the neuron’s preferred direction. Similarly, direction selectivity in many S1 neurons also relies on a lagged and spatially offset suppressive component [31].
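A similarly minimal sketch of the veto mechanism, again with illustrative numbers: the suppressive input is spatially offset and delayed, so it coincides with, and cancels, the excitation only for motion in the nonpreferred direction:

```python
import numpy as np

n_t = 60                                 # time grid (ms); positions run 0-39
exc_pos, inh_pos, inh_lag = 20, 25, 5    # inhibition offset rightward, delayed 5 ms

def response(direction):
    """Rectified drive to a pulse moving at 1 position/ms."""
    exc = np.zeros(n_t)
    inh = np.zeros(n_t)
    # A pulse crosses position x at time x (rightward) or 39 - x (leftward).
    t_exc = exc_pos if direction == "right" else 39 - exc_pos
    t_inh = inh_pos if direction == "right" else 39 - inh_pos
    exc[t_exc] = 1.0
    inh[t_inh + inh_lag] = 1.0           # suppression arrives after a delay
    return np.maximum(exc - inh, 0.0).sum()

print("rightward:", response("right"))   # excitation unopposed -> 1.0
print("leftward: ", response("left"))    # delayed veto coincides with excitation -> 0.0
```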

In summary, the emergence of direction selectivity in primary visual and somatosensory cortices involves a combination of excitatory and suppressive mechanisms. These are instantiated through integration of thalamic inputs with specific spatial and temporal selectivities. It is important to note that there is nothing inevitable about this result: direction selectivity could be computed in the sensory periphery [15] or by other mechanisms that would yield spatiotemporal receptive fields different from those observed [33].

Hierarchical Motion Processing: Beyond Primary Sensory Cortex

Given the strong selectivity for stimulus orientation in V1 and S1, experimenters typically study motion processing with oriented stimuli that move across the receptive field. While many V1 and S1 neurons exhibit strong direction selectivity to this kind of stimulus, their direction selectivity for stimuli that contain multiple orientations, such as random dot fields [19,34], is much weaker. Both S1 and V1 send projections to cortical areas that are either specialized for motion processing or contain subpopulations of neurons that are, namely the middle temporal (MT or V5) area in vision [35] and Brodmann’s area 1 in touch [19,36]. In contrast to their counterparts in earlier areas, neurons in MT and area 1 exhibit strong direction selectivity for random dot fields [19,27]. This selectivity is thought to arise via integration of inputs from primary cortical neurons with many different orientation preferences [37].

In addition to integrating over orientations, neurons in both MT and area 1 integrate across space. Indeed, individual RFs in area V1 cover a tiny fraction of the visual field [25]; similarly, the majority of RFs in area 3b are smaller than 40 mm², so most of them cover a small fraction of a finger pad [38]. Such small receptive fields are problematic for motion processing, because a small field of view does not generally permit reliable estimates of velocity for larger objects (Fig 4A). In contrast, neurons in both MT and area 1 have considerably larger RFs [39,40].
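Fig 4A depicts the classic "aperture problem": formally, a detector that sees only a featureless contour through a small aperture can recover only the component of velocity along the contour's normal, as the following short derivation shows:

```latex
% An edge with unit normal \hat{n} and unit tangent \hat{t}, translating
% with true velocity v, locally determines only the normal component:
v_\perp = \mathbf{v} \cdot \hat{\mathbf{n}}
% Any velocity of the form
\mathbf{v}' = v_\perp\,\hat{\mathbf{n}} + c\,\hat{\mathbf{t}}, \qquad c \in \mathbb{R}
% produces the same local image sequence, so the true direction is
% ambiguous within a small aperture (red vs. green arrows in Fig 4A).
```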

Fig 4. Limitations on motion processing by neurons with small receptive fields.

(A) A neuron stimulated by an edge drifting through its receptive field, denoted by the circle, can only estimate the component of motion perpendicular to the edge orientation (red arrow), which is not generally the same as the actual direction (green arrows). (B) Small receptive fields compute velocity estimates (orange arrows) that depend on local stimulus structure. An end-stopped receptive field performs feature detection by responding to the end points (top and bottom) of a contour, which provide accurate estimates (green arrows). On the other hand, these receptive fields do not respond to continuous contours (center circle).

https://doi.org/10.1371/journal.pbio.1002271.g004

Consistent with this hierarchical organization of sensory cortices is the view that motion processing requires at least two stages [41]. Theoretically, there are many ways to formulate this two-stage process [10]. One class of models hypothesizes that object velocity is explicitly represented only at the second stage, while a second class of models hypothesizes that an initial estimate is obtained at the first stage, based on local features (Fig 4B). The two models are not mutually exclusive, and a combination of the two mechanisms would likely yield more robust and precise estimates of object velocity [10].

Evidence for the first class of models comes from the observation that some neurons in MT and area 1 appear to estimate object velocity in a manner that is independent of the spatiotemporal structure of the stimulus [19,42], in contrast to earlier stages of visual or tactile processing. For example, in touch, scanned bars yield direction-selective responses in area 3b, while random dot fields do not [19]. A similar phenomenon is obtained with plaid stimuli, which contain edges moving in two or more directions. When the orientations and speeds of the edges are chosen properly, the stimulus appears as one pattern moving in a single direction. While individual neurons in MT and area 1 exhibit selectivity for this pattern motion [19,43,44], such selectivity is generally lacking in earlier areas [19,43].
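The pattern-motion computation attributed to these neurons can be stated compactly as an "intersection of constraints," as formalized in work on plaid perception [43]: each component grating constrains the pattern velocity to a line, and two non-parallel components determine it uniquely. The sketch below solves this system directly; the gratings' angles and speeds are invented for illustration:

```python
import numpy as np

def pattern_velocity(normal_angles_deg, speeds):
    """Solve n_i . v = s_i for the 2-D pattern velocity v."""
    n = np.array([[np.cos(np.radians(a)), np.sin(np.radians(a))]
                  for a in normal_angles_deg])   # unit normals of the gratings
    return np.linalg.solve(n, np.array(speeds, dtype=float))

# Two component gratings drifting at 1 unit/s along normals at +/-45 degrees:
v = pattern_velocity([45.0, -45.0], [1.0, 1.0])
print(v)  # -> [1.414..., 0.]: the plaid moves rightward, faster than either component
```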

In the second class of models, feature extraction in the primary sensory cortex is tailored to facilitate accurate velocity estimation. Theoretically [45], the most informative features are those that contain multiple orientations in small image regions, for example, corners and line intersections. Because these features are defined locally, they can be detected with small receptive fields that exhibit more complex selectivity than a preference for a single orientation. Such selectivity was first noted in V1 by Hubel and Wiesel [46], who found evidence for neurons that responded best to short line segments; the responses of these “end-stopped” neurons were suppressed by extended edges. Similarly, a subpopulation of neurons in area 3b exhibits a receptive field structure that would in principle yield end-stopped responses, although these neurons have not been tested specifically for this property [31,47]. Pack et al. [48] showed that end-stopped neurons could encode motion direction in a manner that was to some degree independent of the spatial configuration of the stimulus. With this mechanism in place, the responses of neurons in MT and area 1 can often be predicted based on a simple average of their inputs from V1 and area 3b, respectively, assuming that input from end-stopped neurons is weighted more heavily than that from motion-selective edge detectors [36,49]. There is thus indirect evidence to support the idea that visual and tactile motion processing benefits from more complex feature extraction at an early stage, and this idea has been incorporated into more recent computational models [36,50–53].
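As a rough illustration of this weighting scheme, the sketch below pools a handful of hypothetical first-stage direction signals by a weighted circular (vector) average, with end-stopped signals given more weight, in the spirit of [36,49]; the weights, angles, and the averaging rule itself are invented for illustration, not taken from those models:

```python
import numpy as np

def pooled_direction(dirs_deg, is_endstopped, w_end=3.0, w_edge=1.0):
    """Weighted circular (vector) average of input direction signals."""
    w = np.where(is_endstopped, w_end, w_edge)
    rad = np.radians(dirs_deg)
    vec = np.array([np.sum(w * np.cos(rad)), np.sum(w * np.sin(rad))])
    return np.degrees(np.arctan2(vec[1], vec[0]))

# Two edge detectors report the component (normal) direction, 90 degrees;
# one end-stopped cell tracking the contour's end points reports the true
# direction, 45 degrees. Heavier end-stopped weighting pulls the pooled
# estimate toward the true direction:
print(pooled_direction(np.array([90.0, 90.0, 45.0]),
                       np.array([False, False, True])))  # ~62.8 degrees
```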

These considerations highlight two key computations that are shared between vision and touch. The first is hierarchical processing: stimulus velocity is computed in stages, and the algorithmic details of the computations at each stage are quite similar across the two modalities. The second is feature detection: the extraction of specific features at one stage facilitates computation at the next stage. Again, there is nothing inevitable about these similarities: theoretical work has shown that alternative approaches can compute velocity very well [37].

Velocity Perception in Vision and Touch

Given the similarity and sophistication of motion processing in higher-order regions of visual and somatosensory cortex, one might ask whether these neuronal populations lead to similar perceptual experiences of motion and can account for observers’ perceptual reports of motion direction. This question can be addressed with plaid stimuli, for which the perceived direction depends heavily on the precise composition of the stimulus, namely the respective motion directions of the component gratings. Psychophysically, this dependence on stimulus composition is similar for tactile and visual plaids [54]. Furthermore, paired psychophysical experiments (in humans) and neurophysiological experiments (in monkeys) have shown that the responses of neurons in higher-level cortical areas account for the perceived direction of plaids across a wide range of conditions in both vision [43,55] and touch [36].

Visual and tactile velocity perception has also been studied with random-dot motion stimuli that are corrupted with random noise. As more noise is added to the stimulus, the task becomes more difficult, and one can then examine the conditions under which neural processing and the perception of directed motion begin to break down. Work in area MT has shown that the sensitivity of individual neurons to visual motion is similar to that of observers [56], suggesting that perceptual decisions about visual motion can be driven by the outputs of a small number of MT neurons. Similarly, in touch, the mean sensitivity to changes in direction of individual direction-selective neurons in area 1 matches that of human observers across all tested conditions [19].
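For readers unfamiliar with these displays, the sketch below generates a noisy random-dot motion stimulus of the kind used in [56]: on each frame, a fraction of the dots (the "coherence") steps in the signal direction while the rest are replotted at random. All parameters here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def step_dots(xy, coherence, direction_deg, speed=0.02):
    """Advance one frame: signal dots step coherently, noise dots are replotted."""
    signal = rng.random(len(xy)) < coherence
    theta = np.radians(direction_deg)
    xy[signal] += speed * np.array([np.cos(theta), np.sin(theta)])
    xy[~signal] = rng.random(((~signal).sum(), 2))  # noise dots: random relocation
    return np.mod(xy, 1.0)                          # wrap within the unit aperture

dots = rng.random((100, 2))                         # 100 dots in a unit square
for _ in range(10):                                 # ten frames at 20% coherence
    dots = step_dots(dots, coherence=0.2, direction_deg=0.0)
```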

Why Are Visual and Tactile Processing So Similar?

In vision and touch, stimulus information is extracted from a spatiotemporal pattern of activation across a sensory sheet, in the retina and the skin, respectively. The peripheral signals from the two systems are analogous, with magnocellular retinal ganglion cells corresponding to RA (and perhaps PC) fibers and parvocellular cells to SA1 fibers. At the earliest stage of cortical processing (V1, area 3b), visual and tactile motion signals are extracted by neurons with a specific spatiotemporal receptive field structure in such a way that their responses are highly dependent on the spatial properties of the stimulus. In the next hierarchical stage, motion representations are relatively independent of stimulus shape, due in part to the extraction of informative features in the primary sensory cortex. The receptive field structure, hierarchical sequence, and feature detection computations appear to be similar across modalities. One might ask, then, why motion processing in vision and touch is so similar.

One possibility is that the statistics of the stimuli that impinge on both systems in everyday life are similar. Indeed, objects have edges that move across the sensory sheet, and the distributions of edge velocities are roughly comparable in the two modalities. As such, it is possible that analogous mechanisms evolved independently, following principles of efficient coding [57]. A related possibility is that both systems evolved from a common receptor type; indeed, there is evidence that basic visual circuitry has been conserved across species over millions of years [58]. In addition, visual motion computations are highly similar between insects and primates [13,29,59], to the extent that models of motion processing in the beetle predict human motion perception with remarkable fidelity [29].

Another possibility is that a fundamental principle guiding the evolution of these two sensory systems is that the resulting sensory representations be expressed in a common language that allows them to be integrated and, when necessary, mutually recalibrated. The integration of visual and tactile representations is well documented, including in motion processing. Indeed, the visual perception of motion has been shown to interact with its tactile counterpart in a variety of behavioral contexts [60–62], which has been interpreted as evidence that these motion representations converge somewhere along the neuraxis [63].

The notion of canonical computation likely extends beyond the motion domain. In fact, the computations described above are similar to those involved in the extraction of shape information, which is also analogous in vision and touch [64,65], and strong analogies between vision and audition can be drawn as well [5]. This convergence of ideas holds great promise for future neuroscience investigations: if the bewildering complexity of sensory cortex can be reduced to a few canonical computations, we can narrow our search for candidate mechanisms. The search for canonical computations may thus dovetail with that for canonical neural circuits and lead to a more integrated view of nervous system function.

References

  1. Mountcastle VB. Modality and topographic properties of single neurons of cat's somatic sensory cortex. J Neurophysiol. 1957;20(4):408–34. pmid:13439410
  2. Lorente de Nó R. Analysis of the activity of the chains of internuncial neurons. J Neurophysiol. 1938;1(3):207–44.
  3. Douglas RJ, Martin KA. A functional microcircuit for cat visual cortex. J Physiol. 1991;440:735–69. pmid:1666655
  4. Yau JM, Pasupathy A, Fitzgerald PJ, Hsiao SS, Connor CE. Analogous intermediate shape coding in vision and touch. Proc Natl Acad Sci U S A. 2009;106(38):16457–62. pmid:19805320
  5. Rauschecker JP. Auditory and visual cortex of primates: a comparison of two sensory systems. Eur J Neurosci. 2015;41(5):579–85. pmid:25728177
  6. Creutzfeldt OD. Generality of the functional structure of the neocortex. Naturwissenschaften. 1977;64(10):507–17. pmid:337161
  7. Mountcastle VB. An organizing principle for cerebral function: the unit model and the distributed system. In: Edelman GM, Mountcastle VB, editors. The Mindful Brain. Cambridge: MIT Press; 1978.
  8. Barlow H. Cerebral cortex as model builder. In: Models of the Visual Cortex. New York: Wiley; 1985.
  9. Marcus G, Marblestone A, Dean T. The atoms of neural computation. Science. 2014;346(6209):551–2. pmid:25359953
  10. Bradley DC, Goyal MS. Velocity computation in the primate visual system. Nat Rev Neurosci. 2008;9(9):686–95. pmid:19143050
  11. Krause MR, Pack CC. Contextual modulation and stimulus selectivity in extrastriate cortex. Vision Res. 2014;104:36–46. pmid:25449337
  12. Pei YC, Bensmaia SJ. The neural basis of tactile motion perception. J Neurophysiol. 2014;112(12):3023–32. pmid:25253479
  13. Mineault PJ, Khawaja FA, Butts DA, Pack CC. Hierarchical processing of complex motion along the primate dorsal visual pathway. Proc Natl Acad Sci U S A. 2012;109(16):E972–80. pmid:22308392
  14. Casanova MF. Canonical circuits of the cerebral cortex as enablers of neuroprosthetics. Front Syst Neurosci. 2013;7:77. pmid:24265606
  15. Barlow HB, Levick WR. The mechanism of directionally selective units in rabbit's retina. J Physiol. 1965;178(3):477–504.
  16. Frechette ES, Sher A, Grivich MI, Petrusca D, Litke AM, Chichilnisky EJ. Fidelity of the ensemble code for visual motion in primate retina. J Neurophysiol. 2005;94(1):119–35. pmid:15625091
  17. Essick GK, Edin BB. Receptor encoding of moving tactile stimuli in humans. II. The mean response of individual low-threshold mechanoreceptors to motion across the receptive field. J Neurosci. 1995;15(1 Pt 2):848–64.
  18. Wheat HE, Salo LM, Goodwin AW. Cutaneous afferents from the monkey's fingers: responses to tangential and normal forces. J Neurophysiol. 2010;103(2):950–61. pmid:19955296
  19. Pei YC, Hsiao SS, Craig JC, Bensmaia SJ. Shape invariant coding of motion direction in somatosensory cortex. PLoS Biol. 2010;8(2):e1000305. pmid:20126380
  20. Xu X, Ichida JM, Allison JD, Boyd JD, Bonds AB, Casagrande VA. A comparison of koniocellular, magnocellular and parvocellular receptive field properties in the lateral geniculate nucleus of the owl monkey (Aotus trivirgatus). J Physiol. 2001;531(Pt 1):203–18.
  21. Johansson RS, Vallbo AB. Spatial properties of the population of mechanoreceptive units in the glabrous skin of the human hand. Brain Res. 1980;184(2):353–66. pmid:7353161
  22. Vega-Bermudez F, Johnson KO. SA1 and RA receptive fields, response variability, and population responses mapped with a probe array. J Neurophysiol. 1999;81(6):2701–10. pmid:10368390
  23. Muniak MA, Ray S, Hsiao SS, Dammann JF, Bensmaia SJ. The neural coding of stimulus intensity: linking the population response of mechanoreceptive afferents with psychophysical behavior. J Neurosci. 2007;27(43):11687–99. pmid:17959811
  24. Xu X, Ichida J, Shostak Y, Bonds AB, Casagrande VA. Are primate lateral geniculate nucleus (LGN) cells really sensitive to orientation or direction? Vis Neurosci. 2002;19(1):97–108. pmid:12180863
  25. Hubel DH, Wiesel TN. Receptive fields and functional architecture of monkey striate cortex. J Physiol. 1968;195(1):215–43. pmid:4966457
  26. Mikami A, Newsome WT, Wurtz RH. Motion selectivity in macaque visual cortex. I. Mechanisms of direction and speed selectivity in extrastriate area MT. J Neurophysiol. 1986;55(6):1308–27. pmid:3016210
  27. Pack CC, Conway BR, Born RT, Livingstone MS. Spatiotemporal structure of nonlinear subunits in macaque visual cortex. J Neurosci. 2006;26(3):893–907. pmid:16421309
  28. Adelson EH, Bergen JR. Spatiotemporal energy models for the perception of motion. J Opt Soc Am A. 1985;2(2):284–99. pmid:3973762
  29. van Santen JP, Sperling G. Elaborated Reichardt detectors. J Opt Soc Am A. 1985;2(2):300–21. pmid:3973763
  30. De Valois RL, Cottaris NP. Inputs to directionally selective simple cells in macaque striate cortex. Proc Natl Acad Sci U S A. 1998;95(24):14488–93. pmid:9826727
  31. DiCarlo JJ, Johnson KO. Spatial and temporal structure of receptive fields in primate somatosensory area 3b: effects of stimulus scanning direction and orientation. J Neurosci. 2000;20(1):495–510.
  32. Livingstone MS. Mechanisms of direction selectivity in macaque V1. Neuron. 1998;20(3):509–26. pmid:9539125
  33. Emerson RC, Bergen JR, Adelson EH. Directionally selective complex cells and the computation of motion energy in cat visual cortex. Vision Res. 1992;32(2):203–18. pmid:1574836
  34. Snowden RJ, Treue S, Andersen RA. The response of neurons in areas V1 and MT of the alert rhesus monkey to moving random dot patterns. Exp Brain Res. 1992;88(2):389–400. pmid:1577111
  35. Dubner R, Zeki SM. Response properties and receptive fields of cells in an anatomically defined region of the superior temporal sulcus in the monkey. Brain Res. 1971;35(2):528–32. pmid:5002708
  36. Pei YC, Hsiao SS, Craig JC, Bensmaia SJ. Neural mechanisms of tactile motion integration in somatosensory cortex. Neuron. 2011;69(3):536–47. pmid:21315263
  37. Simoncelli EP, Heeger DJ. A model of neuronal responses in visual area MT. Vision Res. 1998;38(5):743–61. pmid:9604103
  38. Sripati AP, Yoshioka T, Denchev P, Hsiao SS, Johnson KO. Spatiotemporal receptive fields of peripheral afferents and cortical area 3b and 1 neurons in the primate somatosensory system. J Neurosci. 2006;26(7):2101–14.
  39. Allman JM, Kaas JH. A representation of the visual field in the caudal third of the middle temporal gyrus of the owl monkey (Aotus trivirgatus). Brain Res. 1971;31(1):85–105. pmid:4998922
  40. Gardner EP. Somatosensory cortical mechanisms of feature detection in tactile and kinesthetic discrimination. Can J Physiol Pharmacol. 1988;66(4):439–54. pmid:3139269
  41. Khawaja FA, Tsui JM, Pack CC. Pattern motion selectivity of spiking outputs and local field potentials in macaque visual cortex. J Neurosci. 2009;29(43):13702–9. pmid:19864582
  42. Perrone JA, Thiele A. Speed skills: measuring the visual speed analyzing properties of primate MT neurons. Nat Neurosci. 2001;4(5):526–32. pmid:11319562
  43. Movshon JA, Adelson EH, Gizzi MS, Newsome WT. The analysis of moving visual patterns. In: Chagas C, Gattass R, Gross C, editors. Pattern Recognition Mechanisms. Rome: Vatican Press; 1985. p. 117–51.
  44. Pack CC, Berezovskii VK, Born RT. Dynamic properties of neurons in cortical area MT in alert and anaesthetized macaque monkeys. Nature. 2001;414(6866):905–8. pmid:11780062
  45. Attneave F. Some informational aspects of visual perception. Psychol Rev. 1954;61:183–93. pmid:13167245
  46. Hubel DH, Wiesel TN. Receptive fields and functional architecture in two non-striate visual areas (18 and 19) of the cat. J Neurophysiol. 1965;28:229–89. pmid:14283058
  47. DiCarlo JJ, Johnson KO. Receptive field structure in cortical area 3b of the alert monkey. Behav Brain Res. 2002;135(1–2):167–78. pmid:12356447
  48. Pack CC, Livingstone M, Duffy K, Born RT. End-stopping and the aperture problem: two-dimensional motion signals in macaque V1. Neuron. 2003;39(4):671–80. pmid:12925280
  49. Pack CC, Gartland AJ, Born RT. Integration of contour and terminator signals in visual area MT of alert macaque. J Neurosci. 2004;24(13):3268–80. pmid:15056706
  50. van den Berg AV, Noest AJ. Motion transparency and coherence in plaids: the role of end-stopped cells. Exp Brain Res. 1993;96(3):519–33. pmid:8299753
  51. Tsui JM, Hunter JN, Born RT, Pack CC. The role of V1 surround suppression in MT motion integration. J Neurophysiol. 2010;103(6):3123–38. pmid:20457860
  52. Rust NC, Mante V, Simoncelli EP, Movshon JA. How MT cells analyze the motion of visual patterns. Nat Neurosci. 2006;9(11):1421–31. pmid:17041595
  53. Zetzsche C, Barth E. Fundamental limits of linear filters in the visual processing of two-dimensional signals. Vision Res. 1990;30(7):1111–7. pmid:2392840
  54. Pei YC, Hsiao SS, Bensmaia SJ. The tactile integration of local motion cues is analogous to its visual counterpart. Proc Natl Acad Sci U S A. 2008;105(23):8130–5. pmid:18524953
  55. Khawaja FA, Liu LD, Pack CC. Responses of MST neurons to plaid stimuli. J Neurophysiol. 2013;110(1):63–74. pmid:23596331
  56. Britten KH, Shadlen MN, Newsome WT, Movshon JA. The analysis of visual motion: a comparison of neuronal and psychophysical performance. J Neurosci. 1992;12(12):4745–65. pmid:1464765
  57. Olshausen BA, Field DJ. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature. 1996;381(6583):607–9. pmid:8637596
  58. Sanes JR, Zipursky SL. Design principles of insect and vertebrate visual systems. Neuron. 2010;66(1):15–36. pmid:20399726
  59. Borst A, Helmstaedter M. Common circuit design in fly and mammalian motion vision. Nat Neurosci. 2015;18(8):1067–76. pmid:26120965
  60. Bensmaia SJ, Killebrew JH, Craig JC. Influence of visual motion on tactile motion perception. J Neurophysiol. 2006;96(3):1625–37. pmid:16723415
  61. Konkle T, Wang Q, Hayward V, Moore CI. Motion aftereffects transfer between touch and vision. Curr Biol. 2009;19(9):745–50. pmid:19361996
  62. Blake R, Sobel KV, James TW. Neural synergy between kinetic vision and touch. Psychol Sci. 2004;15(6):397–402. pmid:15147493
  63. Sathian K, Stilla R. Cross-modal plasticity of tactile perception in blindness. Restor Neurol Neurosci. 2010;28(2):271–81. pmid:20404414
  64. Bensmaia SJ, Hsiao SS, Denchev PV, Killebrew JH, Craig JC. The tactile perception of stimulus orientation. Somatosens Mot Res. 2008;25(1):49–59. pmid:18344147
  65. Bensmaia SJ, Denchev PV, Dammann JF 3rd, Craig JC, Hsiao SS. The representation of stimulus orientation in the early stages of somatosensory processing. J Neurosci. 2008;28(3):776–86. pmid:18199777