Knowledge Vault 3/21 - G.TEC BCI & Neurotechnology Spring School 2024 - Day 2
Decoding cross-modal information from the brain using intracranial recordings
David Brang, University of Michigan (USA)
[Resume image]

Concept Graph & Resume using Claude 3 Opus | ChatGPT-4 | Llama 3:

graph LR
    classDef main fill:#f9d4d4, font-weight:bold, font-size:14px;
    classDef auditory fill:#d4f9d4, font-weight:bold, font-size:14px;
    classDef visual fill:#d4d4f9, font-weight:bold, font-size:14px;
    classDef speech fill:#f9f9d4, font-weight:bold, font-size:14px;
    classDef methods fill:#f9d4f9, font-weight:bold, font-size:14px;
    A[David Brang] --> B[David's lab: audiovisual interactions, cross-modal perception. 1]
    A --> C[Darkness: sounds aid object detection, localization, identification. 2]
    C --> D[Visual cues restore degraded acoustic speech. 2]
    A --> E[Multisensory neurons: hyper-additivity for aligned visual-auditory stimuli. 3]
    A --> F[Cross-modal sensory expertise sharing, predictive transfer. 4]
    A --> G[Pre-visual sounds enhance detection, increase cortical excitability. 5]
    G --> H[Pre-TMS sounds increase phosphene perception likelihood. 6]
    A --> I[Auditory relay to visual regions: mechanisms, relevance. 7]
    I --> J[Visual cortex rapidly responds to sounds. 8]
    I --> K[Widespread visual areas respond, highest V1, MT+. 9]
    K --> L[Sound-evoked activity spatiotopically aligned, encodes location. 10]
    I --> M[Visual cortex: onset/offset responses, likely cortical inputs. 11]
    I --> N[Sounds modulate visual oscillations, firing. High gamma increases. 12]
    I --> O[Sub-threshold effects, deprivation allows threshold crossing, hallucinations. 13]
    A --> P[Speech perception: visual influence on auditory processing. 14]
    P --> Q[Lip movements provide timing, phoneme constraints, auditory enhancement. 15]
    P --> R[Lipreading activates auditory cortex, STG/STS. fMRI evidence. 16]
    R --> S[Lipreading evokes auditory oscillations, STG high gamma. 17]
    R --> T[Auditory cortex discriminates lipread phonemes predictively. 18]
    R --> U[Similar auditory-visual phoneme confusability in STG. 19]
    P --> V[Visual speech input: predictive oscillations enhance perception. 20]
    A --> W[Abnormal sensory integration in synesthesia, blindness. 21]
    A --> X[Intracranial recordings: epilepsy, tumor patients. Results generalize. 22]
    A --> Y[Attention affects cross-modal timing, may boost effects. 23]
    A --> Z[SVM classification used. Pre-trained networks may help. 24]
    A --> AA[Right-handers mainly studied. Effects in left-handers unknown. 25]
    A --> AB[Native vs non-native speaker differences not compared. 26]
    A --> AC[Closing remarks: summarize findings, express interest in discussion. 27]
    class A,B main;
    class C,D,E,F,G,H,W auditory;
    class I,J,K,L,M,N,O,P,Q,R,S,T,U,V visual;
    class X,Y,Z,AA,AB,AC methods;

Resume:

1.-David's lab at the University of Michigan studies auditory-visual interactions and cross-modal perception using intracranial recordings.

2.-In darkness, sounds help detect, localize and identify objects. Visual cues like lip movements help restore degraded acoustic speech signals.

3.-Multisensory neurons in the superior colliculus show hyper-additive (superadditive) responses when spatially aligned visual and auditory stimuli are combined (see the sketch after this list).

4.-Each sensory modality has expertise it can share cross-modally. Timing differences between senses allow for predictive information transfer.

5.-Sounds presented shortly before visual targets improve visual detection (lowering detection thresholds) and increase visual cortex excitability, as seen in ERPs.

6.-Sounds prior to TMS over occipital cortex make phosphene perception more likely, indicating increased visual cortex excitability.

7.-Studied what auditory information is relayed to visual regions, underlying mechanisms, and behavioral relevance using human intracranial recordings.

8.-Visual cortex responds to sounds within 30-50 ms, even faster than typical visually evoked responses, which begin around 50 ms.

9.-Most visual areas show some sound-evoked activity, highest in V1 and MT+. This is not merely general arousal; micro-saccades were ruled out as the cause.

10.-Lateralized sounds evoke spatiotopically aligned activity in visual cortex: contralateral sounds are preferred over ipsilateral ones, so the responses encode spatial information (see the sketch after this list).

11.-Visual cortex responds to sound onsets and offsets but not ongoing auditory dynamics, unlike auditory cortex. Inputs likely cortical.

12.-Sounds mainly modulate low-frequency oscillations in visual cortex, suppressing firing; high gamma (a proxy for spiking) increases in higher visual areas (see the sketch after this list).

13.-Sounds usually don't evoke visual qualia, as effects are subthreshold. But sensory deprivation may allow threshold crossing and hallucinations.

14.-Studied speech perception - how vision affects auditory speech processing. Visual cues (mouth movements) can disambiguate noisy speech.

15.-Model: Lip movements provide timing cues, constrain phoneme identity, and activate auditory representations to enhance speech intelligibility.

16.-fMRI showed that lipreading activates auditory cortex, with some speechreading information in STG/STS. However, this could reflect imagery rather than online processing.

17.-Intracranial recordings showed lipreading mainly evokes low-frequency oscillations in auditory cortex. Posterior STG also shows high gamma (spiking).

18.-Auditory cortex discriminates lipread phonemes as early as, or earlier than, actual auditory speech, suggesting online predictive processing rather than mere imagery.

19.-Patterns of phoneme confusability are similar for auditory and visual speech in STG, but differ from those in visual areas such as the fusiform gyrus (see the sketch after this list).

20.-Proposes that visual speech input to auditory cortex is mainly carried by low-frequency oscillations and provides predictive cues that enhance speech perception.

21.-Has studied abnormal sensory integration in synesthesia and blindness, suggesting that cross-modal connections exist but are unmasked or strengthened in atypical processing.

22.-Intracranial results come from epilepsy and tumor patients; functional localization and replication across pathologies support generalization to the healthy population.

23.-Attention affects cross-modal timing. Some cross-modal effects may be partially driven by attention-related boosting rather than direct information transfer.

24.-Used mainly SVM classification; the limited data precludes deep learning without overfitting. Networks pre-trained on larger datasets may work well (see the sketch after this list).

25.-Most patients studied are right-handed. Left-handers and atypical language dominance are underrepresented, so effects unknown.

26.-Has not directly compared native vs non-native speakers, though there are likely differences in audiovisual speech perception and lipreading.

27.-Closing remarks - thanking the hosts, summarizing the key findings, and expressing interest in further discussion at future meetings.
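
The following sketch illustrates point 3: a common way to quantify multisensory enhancement (hyper-additivity) from firing rates, in the style used in the multisensory literature. The numbers are invented for demonstration and are not data from the talk.

```python
# Minimal sketch of a multisensory enhancement / superadditivity check.
# All values are illustrative placeholders, not recorded data.

resp_visual = 8.0     # firing rate (spikes/s) to the visual stimulus alone
resp_auditory = 6.0   # firing rate to the auditory stimulus alone
resp_av = 22.0        # firing rate to spatially aligned audiovisual stimulation

# Enhancement relative to the best unisensory response, expressed in percent.
best_unisensory = max(resp_visual, resp_auditory)
enhancement = 100.0 * (resp_av - best_unisensory) / best_unisensory

# Superadditivity: the combined response exceeds the sum of the unisensory parts.
superadditive = resp_av > (resp_visual + resp_auditory)

print(f"enhancement = {enhancement:.0f}%, superadditive = {superadditive}")
```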
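
Point 10 can be made concrete with a simple laterality index over sound-evoked response amplitudes; a positive value indicates a contralateral preference. The electrode values below are hypothetical.

```python
import numpy as np

# Hypothetical sound-evoked response amplitudes for one right-hemisphere
# visual-cortex electrode (arbitrary units); values are placeholders.
resp_contra = np.array([1.9, 2.3, 2.1, 2.6])  # sounds from the left (contralateral) side
resp_ipsi = np.array([1.2, 1.4, 1.1, 1.5])    # sounds from the right (ipsilateral) side

# Laterality index in [-1, 1]; values above 0 indicate a contralateral preference.
li = (resp_contra.mean() - resp_ipsi.mean()) / (resp_contra.mean() + resp_ipsi.mean())
print(f"laterality index = {li:.2f}")
```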
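
For point 12, this is a minimal sketch of one standard way to extract a high-gamma (roughly 70-150 Hz) amplitude envelope, often used as a proxy for local spiking, from a single intracranial channel. The sampling rate, band edges, and synthetic signal are assumptions, not details from the talk.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_envelope(x, fs, band=(70.0, 150.0), order=4):
    """Band-pass one iEEG channel and return its analytic amplitude envelope."""
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    filtered = filtfilt(b, a, x)      # zero-phase band-pass in the high-gamma range
    return np.abs(hilbert(filtered))  # instantaneous amplitude (power proxy)

# Synthetic example: 2 s of noise at 1000 Hz stands in for a real recording.
fs = 1000.0
signal = np.random.randn(int(2 * fs))
hg = high_gamma_envelope(signal, fs)
print(hg.shape, hg.mean())
```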
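
For point 19, one way to compare phoneme confusability across modalities is to correlate the off-diagonal entries of the auditory and lipread confusion matrices; similar off-diagonal structure means the same phonemes tend to be confused in both cases. The matrices below are random placeholders.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Placeholder 10x10 phoneme confusion matrices (rows: presented phoneme,
# columns: decoded phoneme), standing in for auditory vs. lipread decoding in STG.
conf_auditory = rng.random((10, 10))
conf_lipread = rng.random((10, 10))

# Compare only off-diagonal structure (which phonemes are confused with which);
# the diagonal mostly reflects overall decoding accuracy.
off_diag = ~np.eye(10, dtype=bool)
rho, p = spearmanr(conf_auditory[off_diag], conf_lipread[off_diag])
print(f"confusability similarity: rho = {rho:.2f}, p = {p:.3f}")
```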
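
For point 24, this is a sketch of the kind of trial-limited decoding pipeline the summary describes: a regularized linear SVM with feature scaling and stratified cross-validation. The features, labels, and parameter values are placeholders; the actual analyses may have differed.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: 120 trials x 64 features (e.g., per-electrode high-gamma power)
# with 4 stimulus classes; real analyses would use recorded iEEG features.
X = rng.standard_normal((120, 64))
y = rng.integers(0, 4, size=120)

# With few trials, a simple regularized linear model plus stratified k-fold
# cross-validation helps keep overfitting in check.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```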

Knowledge Vault built by David Vivancos 2024