Abstract
While cognition is possible without perception (e.g. the proverbial ‘brain in a vat’), it is through our perceptual abilities that we can interact with the world. Although we sense our environment through two-dimensional (2-D) arrays of light (or continuous streams of sound energy), what we perceive and think about are three-dimensional (3-D) objects. Consequently, any account of human perception and cognition requires that we explain how humans learn, remember, recognize, manipulate, act on and react to such objects. Perception is the process that allows us to interact with objects in the world, providing both a way to gather information about them and a way to act upon them.
In humans and other primates, the most salient perceptual system is vision. Traditionally, the study of visual processes has been directed at discrete dimensions of visual information, such as thresholds of luminance (e.g. König & Brodhun, 1889; Hecht, Shlaer, & Pirenne, 1942), mechanisms of stereopsis (e.g. Wheatstone, 1838), or color (e.g. Gregory, 1977). Hubel and Wiesel's (1959, 1962) Nobel Prize-winning discovery of orientation-selective cells in the cat brain saw a broadening of such dimensions to include orientation of edges, length, spatial frequency and many other more complex visual attributes. Contemporary developments in these fields have been covered in depth in the previous chapter. In this chapter, however, we discuss a growing body of work that has addressed a different question: how do we see objects? That is, rather than examining the kinds of information gained through our sensory systems, how is this information used to derive knowledge about objects in the world around us? As we shall see, answering this question has proved very difficult. Consider that in low-level vision, a theorist can point to cells in the retina that respond selectively to different wavelengths of light, or to neurons in primary visual cortex that spike most actively when lines of a particular orientation or disparity are presented within their receptive fields; that theorist can then build an account of the manner in which this information (wavelengths, orientations, disparities) is organized by perceptual processes. When thinking about objects, however, there is no simple mapping between neural codes and our phenomenology. Rather, single objects contain vast amounts of information that is somehow organized and recruited in a task-appropriate fashion according to higher-order functional considerations.
| Original language | English |
| --- | --- |
| Title of host publication | Handbook of Cognition |
| Editors | Koen Lamberts, Robert L. Goldstone |
| Publisher | Sage Publications |
| Chapter | 2 |
| Pages | 48-70 |
| Number of pages | 23 |
| ISBN (Electronic) | 9781848608177 |
| ISBN (Print) | 9780761972778 |
| Publication status | Published - 2005 |
| Externally published | Yes |