Concept Graph & Resume using Claude 3 Opus | ChatGPT4 | Llama 3:
Resume:
1.-David's lab at the University of Michigan studies auditory-visual interactions and cross-modal perception using intracranial recordings.
2.-In darkness, sounds help detect, localize and identify objects. Visual cues like lip movements help restore degraded acoustic speech signals.
3.-Multisensory responses in superior colliculus neurons show super-additivity, exceeding the sum of the unisensory responses, when spatially aligned visual and auditory stimuli are combined (see the enhancement sketch after this list).
4.-Each sensory modality has expertise it can share cross-modally. Timing differences between senses allow for predictive information transfer.
5.-Sounds presented before visual targets improve visual detection (lowering detection thresholds) and increase visual cortex excitability, as seen in ERPs.
6.-Sounds prior to TMS over occipital cortex make phosphene perception more likely, indicating increased visual cortex excitability.
7.-Studied what auditory information is relayed to visual regions, underlying mechanisms, and behavioral relevance using human intracranial recordings.
8.-Visual cortex responds to sounds within 30-50ms, faster than typical visual responses, which start around 50ms.
9.-Most visual areas show some sound-evoked activity, highest in V1 and MT+. Not just general arousal, as micro-saccades were ruled out.
10.-Lateralized sounds evoke spatiotopically-aligned visual cortex activity - contralateral sounds preferred over ipsilateral. Encodes spatial info.
11.-Visual cortex responds to sound onsets and offsets but not ongoing auditory dynamics, unlike auditory cortex. Inputs likely cortical.
12.-Sounds mainly modulate low-frequency oscillations in visual cortex, suppressing firing. High gamma (a proxy for spiking) increases in higher visual areas (see the band-power sketch after this list).
13.-Sounds usually don't evoke visual qualia, as effects are subthreshold. But sensory deprivation may allow threshold crossing and hallucinations.
14.-Studied speech perception - how vision affects auditory speech processing. Visual cues (mouth movements) can disambiguate noisy speech.
15.-Model: Lip movements provide timing cues, constrain phoneme identity, and activate auditory representations to enhance speech intelligibility.
16.-fMRI showed lipreading activates auditory cortex, with some speechreading information in STG/STS, but this could reflect imagery rather than online processing.
17.-Intracranial recordings showed lipreading mainly evokes low-frequency oscillations in auditory cortex. Posterior STG also shows high gamma (spiking).
18.-Auditory cortex discriminates lipread phonemes as early as or earlier than actual auditory speech. Suggests online predictive processing, not just imagery.
19.-Patterns of phoneme confusability are similar for auditory and visual speech in STG, unlike in visual areas such as the fusiform gyrus (see the confusion-matrix sketch after this list).
20.-Proposes that visual speech input to auditory cortex is mainly carried by low-frequency oscillations and provides predictive cues that enhance speech perception.
21.-Has studied abnormal sensory integration in synesthesia and blindness. Suggests cross-modal connections exist normally but are unmasked/strengthened in atypical processing.
22.-Intracranial results come from epilepsy and tumor patients; functional localization and replication across pathologies support generalization to the healthy population.
23.-Attention affects cross-modal timing. Some cross-modal effects may be partially driven by attention-related boosting rather than direct information transfer.
24.-Used mainly SVM classification; limited data precludes deep learning without overfitting. Networks pre-trained on more data may work well (see the decoding sketch after this list).
25.-Most patients studied are right-handed. Left-handers and atypical language dominance are underrepresented, so effects unknown.
26.-Has not directly compared native vs non-native speakers, though there are likely differences in audiovisual speech perception and lipreading.
27.-Closing remarks - thanking the hosts, summarizing the key findings, and expressing interest in further discussion at future meetings.
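The super-additivity noted in point 3 can be made concrete with a short calculation. The sketch below is illustrative only: it assumes hypothetical per-trial spike counts for auditory-only, visual-only, and combined audiovisual conditions, and uses the standard enhancement index and superadditivity test rather than any formula stated in the talk.

```python
import numpy as np

def multisensory_enhancement(aud, vis, av):
    """Quantify multisensory interaction from per-trial spike counts."""
    best_unisensory = max(aud.mean(), vis.mean())
    # Enhancement index: % gain of the combined response over the
    # best single-modality response.
    enhancement = 100 * (av.mean() - best_unisensory) / best_unisensory
    # Super-additivity: the combined response exceeds the SUM of the
    # two unisensory responses, not just the better one.
    superadditive = av.mean() > (aud.mean() + vis.mean())
    return enhancement, superadditive

# Hypothetical numbers, not data from the talk
rng = np.random.default_rng(0)
aud = rng.poisson(4, 50)   # spikes per trial, sound alone
vis = rng.poisson(5, 50)   # spikes per trial, light alone
av = rng.poisson(12, 50)   # spikes per trial, aligned sound + light
print(multisensory_enhancement(aud, vis, av))
```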
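Points 12 and 17 contrast low-frequency oscillatory power with high-gamma (spiking-related) power. As a rough illustration of how band-limited power is commonly extracted from an intracranial voltage trace, the sketch below band-pass filters a synthetic signal and takes its Hilbert envelope; the band edges, filter order, and sampling rate are assumptions, not the lab's actual pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power(signal, fs, low, high, order=4):
    """Band-pass filter and return the instantaneous power envelope."""
    nyq = fs / 2
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    filtered = filtfilt(b, a, signal)
    return np.abs(hilbert(filtered)) ** 2

fs = 1000                    # Hz, assumed sampling rate
t = np.arange(0, 2, 1 / fs)
# Synthetic trace: a slow oscillation plus noise stands in for iEEG
trace = np.sin(2 * np.pi * 8 * t) + 0.5 * np.random.randn(t.size)

low_freq_power = band_power(trace, fs, 4, 12)      # theta/alpha band
high_gamma_power = band_power(trace, fs, 70, 150)  # high-gamma band
print(low_freq_power.mean(), high_gamma_power.mean())
```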
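Point 19 compares patterns of phoneme confusability across modalities. One plausible way to quantify that similarity is to correlate the off-diagonal entries of two confusion matrices; the matrices and the choice of Spearman correlation below are illustrative assumptions, not the analysis reported in the talk.

```python
import numpy as np
from scipy.stats import spearmanr

def confusion_similarity(cm_a, cm_b):
    """Correlate two confusion matrices, ignoring the diagonal."""
    mask = ~np.eye(cm_a.shape[0], dtype=bool)  # off-diagonal entries only
    rho, p = spearmanr(cm_a[mask], cm_b[mask])
    return rho, p

# Hypothetical 4-phoneme confusion counts (rows: true, cols: decoded)
auditory_cm = np.array([[30, 5, 3, 2],
                        [6, 28, 4, 2],
                        [2, 3, 31, 4],
                        [1, 2, 5, 32]])
visual_cm = np.array([[25, 8, 4, 3],
                      [7, 24, 6, 3],
                      [3, 5, 26, 6],
                      [2, 3, 7, 28]])
print(confusion_similarity(auditory_cm, visual_cm))
```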
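Point 24 says decoding relied mainly on SVM classification because the limited data would make deep networks overfit. Below is a minimal sketch of that kind of cross-validated linear SVM decoding using scikit-learn on synthetic trial-by-electrode features; the real features, labels, and hyperparameters are not specified in the talk.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

# Synthetic stand-in: 80 trials x 40 electrode features, 4 phoneme classes
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 40))
y = np.repeat(np.arange(4), 20)  # 20 trials per class

# A standardized linear SVM is a reasonable choice for small datasets,
# which is the point item 24 makes about avoiding overfitting.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

With random features the accuracy should hover around chance (0.25); applied to real trial features, above-chance cross-validated accuracy is what would indicate decodable phoneme information.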
Knowledge Vault built by David Vivancos 2024