© 2016 Easy Notecards

Psych 3513 Lecture 10

created 4 years ago by mwilliams037
1

Our senses are processed on largely different pathways,

each providing a unique window to the world.

2

Multimodal processing requires

at least two senses to converge.

3

One example is the superior temporal sulcus (STS), which contains unimodal, bimodal, and trimodal neurons.

Activates during lip reading when visual and auditory input match.

4

The superior colliculus in the midbrain also

performs multimodal processing.

5

The superior colliculus contains neurons that respond preferentially to multimodal input,

firing more strongly than the sum of the unimodal responses alone (superadditivity).
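This "greater than the sum of its parts" response pattern can be illustrated with a toy calculation. All firing rates below are made-up illustrative values, not data from the lecture:

```python
# Toy illustration of multisensory superadditivity in a superior
# colliculus neuron. All firing rates are hypothetical.
visual_only = 8      # spikes/s to the visual stimulus alone (made-up)
auditory_only = 6    # spikes/s to the auditory stimulus alone (made-up)
combined = 30        # spikes/s when both stimuli are presented together (made-up)

# Superadditive: the multimodal response exceeds the sum of the parts.
enhancement = combined - (visual_only + auditory_only)
assert enhancement > 0

print(f"multimodal enhancement: {enhancement} spikes/s above the unimodal sum")
```

A merely additive neuron would show `enhancement == 0`; the superadditive case is what makes these neurons preferential multimodal detectors.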

6

The superior colliculus plays a role in

the coordinated control of the eyes, ears, and head.

7

Sometimes multimodal processing can

get mixed up.

8

On the left, grapheme-color synesthesia gives rise to

letters having colors.

9

On the right, this can actually lead to

pop-out effects that most people do not see.

10

Some synesthetes mix words and taste, such that

“exactly” tastes like eggs and “Derek” tastes like ear-wax.

11

Synesthesia also depends on

attention.

12

Focusing on the global structure gives rise to one color perception,

while focusing on local structure gives another.

13

Color synesthesia can manifest as induced neural activity in

both V4 and STS.

14

We combine our representations of color, shape, form, texture, motion, etc.

And we can recognize objects independent of viewpoint and in many contexts (though context can facilitate object identification).

15

One theory of object recognition posits that we have so-called “grandmother” neurons that

represent high-level objects at the top of a hierarchy (left).

16

Another view is that ensembles of neurons represent

different high-level features of an object (right).

17

Lateral occipital cortex (LOC) responds to both familiar and novel objects,

even when the object's shape is defined by motion (right).

18

Some patients exhibit category-specific associative agnosia with

a bias towards living or non-living objects.

19

It is possible that non-living objects benefit from

sensorimotor information about how they are handled and used.

20

Living objects also share more similar characteristics with one another,

so selective damage might give rise to selective deficits.

21

Farah and McClelland (1991) tested this with a model of property-based semantic representation.

They followed the distinction that semantic memory is both visually and functionally based (with a bias towards visual features).

22

Lesioning visual semantic memory gave rise to

a significant deficit in identifying living objects.
Lesioning functional semantic memory affected only non-living objects, and to a lesser extent.
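The logic of the Farah and McClelland result can be sketched in a toy simulation. This is not their actual connectionist model; it is a minimal illustration under the assumptions that living things depend mostly on visual features, non-living things carry relatively more functional features, and a lesion silently removes a fraction of visual units. All numbers are hypothetical:

```python
import random

random.seed(0)

# Visual features outnumber functional ones, mirroring the visual bias
# in semantic memory. Both counts are made-up illustrative values.
N_VISUAL, N_FUNCTIONAL = 60, 20

def make_item(kind):
    # Living things rely mostly on visual features; non-living things
    # carry relatively more functional (sensorimotor) features.
    p_visual = 0.9 if kind == "living" else 0.5
    visual = [1 if random.random() < p_visual else 0 for _ in range(N_VISUAL)]
    functional = [1 if random.random() < (1 - p_visual) else 0
                  for _ in range(N_FUNCTIONAL)]
    return visual + functional

def lesion_visual(item, severity=0.8):
    # Zero out a random fraction of the *visual* semantic units only.
    return [0 if i < N_VISUAL and random.random() < severity else f
            for i, f in enumerate(item)]

def retained(before, after):
    # Proportion of originally active features that survive the lesion.
    active = sum(before)
    return sum(b and a for b, a in zip(before, after)) / active if active else 0.0

living = [make_item("living") for _ in range(200)]
nonliving = [make_item("non-living") for _ in range(200)]

living_score = sum(retained(it, lesion_visual(it)) for it in living) / len(living)
nonliving_score = sum(retained(it, lesion_visual(it)) for it in nonliving) / len(nonliving)

# A visual lesion degrades living items far more than non-living ones,
# even though knowledge is organized by feature type, not by category.
print(f"living items retain     {living_score:.2f} of their active features")
print(f"non-living items retain {nonliving_score:.2f} of their active features")
```

The point of the sketch is the one on the next card: category-specific deficits fall out of a feature-based organization, without any category labels in the model.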

23

We do not need to conclude that knowledge is organized by category to

explain category-specific deficits.

24

We are strongly biased toward

processing upright faces.

25

Faces do seem to

activate distinct regions.

26

The fusiform gyrus (especially in the right hemisphere, which appears on the left in this image) is

active for faces vs. objects and scrambled faces.

27

EEG can detect differences in

face processing as well (e.g., the face-sensitive N170 component).

28

The right hemisphere seems to play a specific role in

perception of the self.

29

Keenan et al. (2001) anesthetized each hemisphere of the brain in turn and tested whether patients saw a morphed image as themselves or as a famous person.

When the right hemisphere was anesthetized, patients showed a strong bias toward seeing the famous person.

30

Sensory information converges in multimodal processing regions,

including the superior temporal sulcus.

31

Synesthesia occurs when there are

mix-ups in this converged representation.

32

Object representation (i.e., our semantic knowledge) is thought to involve these multimodal processing regions coding for

features or categories of the objects.

33

Face processing seems to be distinct from other forms of object processing,

yet debate lingers over whether this distinctness reflects expertise with faces.