Concept Graph & Resume using Claude 3 Opus | Chat GPT4 | Gemini Adv | Llama 3:
Resume:
1.-Cynthia Dwork is a renowned computer scientist who uses theoretical computer science to address societal problems.
2.-Algorithms can be unfair due to biased training data, historical bias in labels, and differentially expressive features.
3.-Algorithmic unfairness has significant real-world consequences, such as in child protection services and recidivism prediction.
4.-Group fairness definitions, while popular, often break down under scrutiny; individual fairness, which requires treating similar individuals similarly, holds up better (see the Lipschitz condition sketched after this list).
5.-Ilvento's work approximates the task-specific similarity metric that individual fairness requires, combining queries to human arbiters with learning theory.
6.-Multi-accuracy achieves group fairness simultaneously across intersectional groups defined by a large collection of sets (a post-processing sketch follows the list).
7.-Scoring functions produce probabilities, but the meaning is unclear for non-repeatable events like tumor metastasis.
8.-Calibration in forecasting requires predicted probabilities to match observed frequencies at each predicted value (formal definitions follow the list).
9.-Multi-accuracy only pins down expectations over the predefined sets; without training data or additional constraints, many different scoring functions satisfy it.
10.-Complexity theory suggests considering all efficiently computable sets to capture historically disadvantaged groups.
11.-Multi-accuracy and multi-calibration together aim to capture all task-specific, semantically significant differences.
12.-Data collected is often differentially expressive for advantaged vs. disadvantaged groups.
13.-Ranking underlies many applications like triage, admissions, and affirmative action strategies.
14.-Fair ranking should at minimum rule out obviously unfair outcomes, e.g., every member of one group ranked above every member of another.
15.-Multi-accuracy already prevents such rankings; multi-calibration is stronger still (a short argument follows the list).
16.-Focus should be on what data is collected and measured, as unfairness often lies there.
17.-When positive and negative examples are computationally indistinguishable, the principled prediction is the base rate probability.
18.-Rich multi-calibration may justify treating predictions as pseudo-random "truth" with respect to the defining sets.
19.-Fair representation learning aims to hide sensitive attributes while enabling standard training.
20.-Adversarial approaches censor representations to achieve group fairness notions such as statistical parity (a training-loop sketch follows the list).
21.-Learned censored representations can enable transfer learning to other prediction tasks.
22.-Censoring techniques may identify commonalities across populations for out-of-distribution generalization.
23.-Synthetic data experiments show promise for learning common predictive signal across populations.
24.-Fair algorithms alone cannot fully address societal unfairness.
25.-Breakthroughs in metric learning enable individual fairness.
26.-Multi-calibration emerged as significant for fair scoring, ranking, and understanding individual probabilities.
27.-Representation and data collection are critical factors in algorithmic fairness.
28.-Censored representations offer a path to generalizing across populations.
29.-Achieving truly "superhuman" fairness remains an open challenge.
30.-Much work remains to deeply understand fairness and develop principled, broadly applicable solutions.
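On points 4-5: individual fairness (Dwork et al., "Fairness Through Awareness", 2012) asks that a randomized classifier M map similar individuals to similar distributions over outcomes, relative to a task-specific similarity metric d; Ilvento's contribution is learning an approximation of d from human judgments. The standard Lipschitz condition:

$$
D\big(M(x), M(y)\big) \le d(x, y) \quad \text{for all individuals } x, y,
$$

where $D$ is a suitable distance on output distributions (e.g., total variation).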
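On points 8 and 11: the definitions, in the notation of the multi-calibration literature (Hébert-Johnson et al., 2018), with $f$ the scoring function, $\mathcal{C}$ the collection of (possibly intersecting, efficiently computable) sets, and $\alpha$ a slack parameter; values of $f$ are discretized in the usual way:

$$
\text{calibration:}\quad \Pr[\,y=1 \mid f(x)=v\,] = v \quad \text{for all } v;
$$
$$
\text{multi-accuracy:}\quad \big|\,\mathbb{E}[\,y - f(x) \mid x \in S\,]\big| \le \alpha \quad \text{for all } S \in \mathcal{C};
$$
$$
\text{multi-calibration:}\quad \big|\,\mathbb{E}[\,y - f(x) \mid x \in S,\ f(x)=v\,]\big| \le \alpha \quad \text{for all } S \in \mathcal{C} \text{ and all } v.
$$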
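On points 6 and 9: multi-accuracy can be obtained by boosting-style post-processing of an existing scorer, in the spirit of Hébert-Johnson et al. (2018) and Kim, Ghorbani & Zou (2019). A minimal sketch, assuming the collection of sets is given as explicit boolean masks over a held-out audit set; the function name, step size eta, and stopping threshold are illustrative:

```python
import numpy as np

def multiaccuracy_postprocess(scores, labels, groups, alpha=0.01, eta=0.5, max_iter=1000):
    """Nudge a scorer toward alpha-multi-accuracy over a collection of sets.

    scores : (n,) initial predicted probabilities in [0, 1]
    labels : (n,) binary outcomes on the audit set
    groups : list of (n,) boolean masks, one per set (intersections included)
    """
    f = scores.astype(float).copy()
    for _ in range(max_iter):
        # Residual bias E_S[y - f] for each set in the collection.
        biases = [(labels[g] - f[g]).mean() for g in groups]
        worst = int(np.argmax(np.abs(biases)))
        if abs(biases[worst]) <= alpha:
            break  # every set's mean residual is small: alpha-multi-accurate
        # Shift predictions on the worst set toward its observed mean.
        g = groups[worst]
        f[g] = np.clip(f[g] + eta * biases[worst], 0.0, 1.0)
    return f
```

Each pass finds the set with the largest residual bias and corrects it; once every set's residual is below alpha, the scores are alpha-multi-accurate on the audit data.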
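On points 14-15: a short argument (ignoring the $\alpha$ slack for readability) for why multi-accuracy already rules out the grossly unfair ranking. Suppose groups $A$ and $B$ have equal base rate $p$, multi-accuracy gives $\mathbb{E}_A[f] = \mathbb{E}_B[f] = p$, and the ranking places all of $A$ above all of $B$, i.e., $\min_A f \ge \max_B f$. Then

$$
p = \mathbb{E}_A[f] \;\ge\; \min_A f \;\ge\; \max_B f \;\ge\; \mathbb{E}_B[f] = p,
$$

so every score in both groups collapses to the constant $p$: no non-degenerate multi-accurate scorer can strictly separate the two groups. Multi-calibration is stronger because it constrains residuals at every score level, not just group means.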
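On points 19-22: a minimal sketch of adversarially censored representation learning, in the spirit of Edwards & Storkey (2016) and Madras et al. (2018). An encoder produces a representation z, a task head predicts the label from z, and an adversary tries to recover the sensitive attribute a from z through a gradient-reversal layer, pushing the encoder toward representations from which a cannot be predicted (which drives any downstream classifier toward statistical parity, $\Pr[\hat{y}=1 \mid a=0] \approx \Pr[\hat{y}=1 \mid a=1]$). Layer sizes, the trade-off weight lam, and the training loop are illustrative assumptions, not the talk's architecture:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on the
    backward pass, so the encoder is trained to hurt the adversary."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

encoder   = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
predictor = nn.Linear(8, 1)   # task head: predict the label y from z
adversary = nn.Linear(8, 1)   # tries to recover the sensitive attribute a
opt = torch.optim.Adam(
    [*encoder.parameters(), *predictor.parameters(), *adversary.parameters()],
    lr=1e-3,
)
bce = nn.BCEWithLogitsLoss()

def train_step(x, y, a, lam=1.0):
    """One censoring step: minimize task loss while the reversed gradient
    makes z uninformative about a."""
    z = encoder(x)
    task_loss = bce(predictor(z).squeeze(1), y)
    adv_loss = bce(adversary(GradReverse.apply(z, lam)).squeeze(1), a)
    loss = task_loss + adv_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return task_loss.item(), adv_loss.item()

# Smoke test on random data (16 features, binary label y, binary attribute a).
x = torch.randn(64, 16)
y = torch.randint(0, 2, (64,)).float()
a = torch.randint(0, 2, (64,)).float()
train_step(x, y, a)
```

The censored z can then be handed to standard training pipelines and reused for other prediction tasks (points 19 and 21).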
Knowledge Vault built by David Vivancos 2024