Concept Graph & Resume using Claude 3 Opus | Chat GPT4 | Llama 3:
Resume:
1.- Michele is a PhD student working on bringing brain-computer interface (BCI) technology into gaming.
2.- He defines a neuro game as one whose mechanics are partly or entirely controlled by brain input.
3.- Games have been a driving force in the computer industry over the past 50 years.
4.- The first brain game dates back to the 1970s. BCI games have been used for training, treatment, rehabilitation, accessibility.
5.- Main applications are clinical (accessibility, rehabilitation) and non-clinical (serious gaming for research, entertainment).
6.- A BCI system comprises the user, data processing, feature extraction, classifier training, and a control interface (the game); a pipeline skeleton is sketched after this list.
7.- The games discussed use visually evoked potentials: single-stimulus, steady-state (SSVEP), and code-modulated paradigms (an SSVEP detection sketch follows this list).
8.- Michele developed a Unity interface to easily create BCI games, handling signal processing and classification.
9.- Brain Hockey (Pong-like) and Green Shield (Space Invaders-like) are two games he developed using the single stimulus method.
10.- Games have a training phase to calibrate the classifier to the player's brain signals. UI elements indicate good or bad calibration.
11.- Under the hood, training collects brain data for the target stimuli, trains a classifier, and evaluates it with cross-validation (see the calibration sketch after this list).
12.- In-game, the performance metrics are task accuracy (using the right input for the game's goal) and information transfer rate (ITR; computed in a sketch after this list).
13.- In pilot testing, players achieved 80%+ accuracy on their first try in Green Shield. Scores improved on repeat sessions.
14.- Brain Hockey felt harder to players; roughly 60% accuracy is enough to beat the AI. Scores also improved on additional tries.
15.- To show reliability, Michele scored 100% accuracy on 90 consecutive enemies over 3 rounds in Green Shield.
16.- Classifier probability plots matched the expected target sequence, validating the classification results.
17.- Common pitfalls in neuro games include poor signal quality, too-short calibration, poorly chosen confidence levels, and suboptimal visual design.
18.- Signal quality can be checked pre-game. Poor quality makes the game unreliable.
19.- A minimum of 30 calibration trials is recommended; 60 is ideal. Confidence levels balance precision against speed.
20.- A high confidence threshold gives high precision but slower selections; a low threshold is faster but less precise. The right setting depends on the number of targets (see the decision-rule sketch after this list).
21.- Transparent stimuli and overlapping/moving targets make classification harder and should be avoided.
22.- Design tips: understand BCI constraints, playtest the game yourself and with others, design around game affordances, and choose the right BCI paradigm.
23.- Gamify the calibration phase to make it engaging. BCI and standard inputs can be combined in hybrid control schemes.
24.- Vary the focus time required for classification to adjust the difficulty level.
25.- Current challenges: reducing calibration time through persistent models and pre-training, improving reliability with fewer channels, reducing data waste.
26.- Classification and artifact removal are done on-device. The system uses a modified linear discriminant analysis (LDA) classifier.
27.- Each game session currently trains a new subject-specific model; preserving data across sessions is being explored.
28.- Pre-trained multi-subject models and auto-training during gameplay are avenues to reduce calibration time.
29.- Fatigue is an issue, especially with flashing stimuli. Breaks help. Alternative BCI paradigms like motor imagery are options.
30.- Only occipital channels over the visual cortex are needed for visually evoked BCI games; a minimal rear-head setup suffices (see the channel-selection sketch after this list).
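Item 6 describes the standard BCI loop from user to game. The following is a minimal Python sketch of that loop, with hypothetical, simulated stand-ins for acquisition, preprocessing, feature extraction, classification, and the control interface; it is not Michele's Unity code.

```python
import numpy as np

def acquire_epoch(n_channels=8, n_samples=250):
    """Placeholder for EEG acquisition: here we just simulate one epoch."""
    return np.random.randn(n_channels, n_samples)

def preprocess(epoch):
    """Very rough preprocessing: remove the per-channel mean."""
    return epoch - epoch.mean(axis=1, keepdims=True)

def extract_features(epoch):
    """Toy feature extraction: per-channel signal power."""
    return (epoch ** 2).mean(axis=1)

def classify(features, weights, bias):
    """Linear decision, standing in for the trained classifier."""
    return int(features @ weights + bias > 0)

def send_to_game(command):
    """Control interface: in the real system this would drive the game."""
    print(f"game command: {command}")

# One pass through the loop: user -> processing -> features -> classifier -> game.
epoch = preprocess(acquire_epoch())
features = extract_features(epoch)
weights, bias = np.zeros(features.shape[0]), 0.0   # would come from calibration
send_to_game(classify(features, weights, bias))
```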
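Item 7 lists visually evoked paradigms, including SSVEP, where each on-screen target flickers at its own frequency. One common way to detect the attended target (assumed here as an illustration, not confirmed as the method used in the talk) is to compare spectral power at the candidate flicker frequencies.

```python
import numpy as np
from scipy.signal import welch

def ssvep_target(eeg, fs, flicker_freqs):
    """Pick the flicker frequency with the most power in the occipital signal.

    eeg: 1-D array (one occipital channel or an average of several).
    fs: sampling rate in Hz.
    flicker_freqs: candidate target frequencies in Hz.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * int(fs)))
    scores = [psd[np.argmin(np.abs(freqs - f))] for f in flicker_freqs]
    return int(np.argmax(scores)), scores

# Simulated example: a 12 Hz SSVEP buried in noise should select target index 1.
fs = 250
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(t.size)
target, scores = ssvep_target(eeg, fs, flicker_freqs=[10, 12, 15])
print(target, np.round(scores, 3))
```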
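Items 11, 19, and 26 cover calibration: collect labelled trials per target, train the classifier, and check it with cross-validation. The talk only says "modified LDA"; shrinkage (regularised) LDA is assumed below as a common variant, shown on synthetic data using the 60-trial count from item 19.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic calibration set: 60 trials, 32 features per trial,
# 3 stimulus classes with slightly shifted means.
n_trials, n_features, n_classes = 60, 32, 3
y = np.repeat(np.arange(n_classes), n_trials // n_classes)
X = rng.normal(size=(n_trials, n_features)) + y[:, None] * 0.5

# Shrinkage LDA is a common "modified LDA" for EEG, where trials are few
# and features are many; this is an assumption, not the talk's exact classifier.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")

# 5-fold cross-validation estimates how well calibration will transfer to play.
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

# Fit on all calibration data before the game starts.
clf.fit(X, y)
```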
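Item 12 mentions information transfer rate (ITR). The talk does not spell out the formula; the standard Wolpaw definition is assumed here, combining the number of targets N, the selection accuracy P, and the time per selection T.

```python
import math

def itr_bits_per_minute(n_targets, accuracy, selection_time_s):
    """Wolpaw information transfer rate in bits per minute.

    n_targets: number of selectable targets N (must be >= 2).
    accuracy: probability P of a correct selection (0 < P <= 1).
    selection_time_s: average time per selection in seconds.
    """
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / selection_time_s)

# Example: 3 targets, 80% accuracy (item 13), one selection every 4 seconds.
print(round(itr_bits_per_minute(3, 0.80, 4.0), 2), "bits/min")
```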
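Items 20 and 24 describe trading precision against speed via a confidence level and a focus time. The decision rule below is a sketch of that idea with made-up probabilities standing in for the classifier output; the thresholds, timings, and evidence-accumulation scheme are assumptions, not the talk's implementation.

```python
import numpy as np

def select_target(prob_stream, confidence=0.8, max_focus_s=5.0, step_s=0.5):
    """Accumulate classifier posteriors until one target is confident enough.

    prob_stream: iterable of per-step probability vectors over the targets.
    confidence: threshold; higher = more precise but slower (item 20).
    max_focus_s: give up after this much focus time (item 24 ties this to difficulty).
    Returns (target_index, elapsed_seconds) or (None, elapsed_seconds).
    """
    elapsed, evidence = 0.0, None
    for probs in prob_stream:
        probs = np.asarray(probs, dtype=float)
        evidence = probs if evidence is None else evidence + probs
        posterior = evidence / evidence.sum()
        elapsed += step_s
        if posterior.max() >= confidence:
            return int(posterior.argmax()), elapsed
        if elapsed >= max_focus_s:
            break
    return None, elapsed

# Fake classifier output slowly favouring target 2 out of 3.
stream = [[0.3, 0.3, 0.4], [0.2, 0.3, 0.5], [0.1, 0.2, 0.7], [0.1, 0.1, 0.8]]
print(select_target(stream, confidence=0.6))  # low threshold: picks target 2 after 2.0 s
print(select_target(stream, confidence=0.9))  # high threshold: not confident yet -> (None, 2.0)
```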
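Item 30 notes that visually evoked games only need occipital channels. A trivial sketch of picking those channels from a hypothetical full montage by 10-20 name:

```python
import numpy as np

# Hypothetical 8-channel montage; only the occipital/parieto-occipital channels
# matter for visually evoked paradigms (item 30).
channel_names = ["Fz", "Cz", "Pz", "PO3", "PO4", "O1", "Oz", "O2"]
occipital = ["PO3", "PO4", "O1", "Oz", "O2"]

eeg = np.random.randn(len(channel_names), 1000)           # channels x samples
idx = [channel_names.index(name) for name in occipital]   # indices of rear channels
occipital_eeg = eeg[idx, :]                                # reduced montage

print(occipital_eeg.shape)  # (5, 1000)
```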
Knowledge Vault built by David Vivancos 2024