Knowledge Vault 2/51 - ICLR 2014-2023
Cynthia Dwork ICLR 2019 - Invited Talk - Highlights of Recent Developments in Algorithmic Fairness
<Resume Image >

Concept Graph & Resume using Claude 3 Opus | Chat GPT4 | Gemini Adv | Llama 3:

graph LR
classDef cynthia fill:#f9d4d4, font-weight:bold, font-size:14px;
classDef algorithms fill:#d4f9d4, font-weight:bold, font-size:14px;
classDef fairness fill:#d4d4f9, font-weight:bold, font-size:14px;
classDef representation fill:#f9f9d4, font-weight:bold, font-size:14px;
classDef future fill:#f9d4f9, font-weight:bold, font-size:14px;
A[Cynthia Dwork ICLR 2019] --> B[Cynthia Dwork: renowned computer scientist. 1]
A --> C[Algorithms: unfair due to bias, historical issues, features. 2]
C --> D[Algorithmic unfairness: real-world consequences. 3]
A --> E[Group fairness: often fails vs. individual. 4]
E --> F[Ilvento: approximates individual fairness metric. 5]
A --> G[Multi-accuracy: group fairness for intersectional groups. 6]
A --> H[Scoring functions: unclear probability meaning. 7]
H --> I[Calibration: predicted probabilities match frequencies. 8]
G --> J[Multi-accuracy: retains set expectations, varies without data/constraints. 9]
G --> K[Capturing historical disadvantage: consider computable sets. 10]
G --> L[Multi-accuracy & calibration: capture task-specific differences. 11]
C --> M[Data: differentially expressive for advantaged/disadvantaged. 12]
A --> N[Ranking: underlies triage, admissions, affirmative action. 13]
N --> O[Fair ranking: prevent unfair group outcomes. 14]
O --> P[Multi-accuracy & calibration prevent unfair rankings. 15]
A --> Q[Focus on collected data and measurements. 16]
Q --> R[Indistinguishable examples: assign base rate. 17]
A --> S[Rich multi-calibration: predictions as pseudo-random 'truth'. 18]
A --> T[Fair representation: hide sensitive attributes. 19]
T --> U[Adversarial censoring achieves group fairness notions. 20]
T --> V[Censored representations enable transfer learning. 21]
T --> W[Censoring may identify cross-population commonalities. 22]
T --> X[Synthetic data promising for learning common signal. 23]
A --> Y[Fair algorithms alone can't fix societal unfairness. 24]
A --> Z[Metric learning breakthroughs enable individual fairness. 25]
A --> AA[Multi-calibration significant for scoring, ranking, probabilities. 26]
A --> AB[Representation and data collection critical for fairness. 27]
T --> AC[Censored representations generalize across populations. 28]
A --> AD['Superhuman' fairness remains an open challenge. 29]
A --> AE[Much work remains on fairness and principled solutions. 30]
class A,B cynthia;
class C,D,M algorithms;
class E,F,G,I,J,K,L,N,O,P,Q,R,S,Y,Z,AA,AD,AE fairness;
class T,U,V,W,X,AB,AC representation;
class H future;

Resume:

1.-Cynthia Dwork is a renowned computer scientist who uses theoretical computer science to address societal problems.

2.-Algorithms can be unfair due to biased training data, historical bias in labels, and differentially expressive features.

3.-Algorithmic unfairness has significant real-world consequences, such as in child protection services and recidivism prediction.

4.-Group fairness definitions, while popular, often fail under scrutiny compared to individual fairness, which is formalized as a Lipschitz condition (see the note after this list).

5.-Ilvento's work approximates a similarity metric for individual fairness using human knowledge and learning theory.

6.-Multi-accuracy achieves group fairness simultaneously for intersectional groups defined by a large collection of sets.

7.-Scoring functions produce probabilities, but the meaning is unclear for non-repeatable events like tumor metastasis.

8.-Calibration in forecasting requires predicted probabilities to match observed frequencies for each predicted value.

9.-Multi-accuracy retains expectations for predefined sets; solutions vary without training data or additional constraints.

10.-Complexity theory suggests considering all efficiently computable sets to capture historically disadvantaged groups.

11.-Multi-accuracy and multi-calibration together aim to capture all task-specific, semantically significant differences (formal definitions and an auditing sketch appear after this list).

12.-The data collected is often differentially expressive for advantaged vs. disadvantaged groups: features may carry more predictive signal about one group than the other.

13.-Ranking underlies many applications like triage, admissions, and affirmative action strategies.

14.-Fair ranking should prevent obviously unfair outcomes, e.g., all of one group ranked above another.

15.-Multi-accuracy prevents certain unfair rankings; multi-calibration is even stronger (a short argument appears after this list).

16.-Focus should be on what data is collected and measured, as unfairness often lies there.

17.-Computationally indistinguishable positive and negative examples suggest assigning base rate probabilities.

18.-Rich multi-calibration may justify treating predictions as pseudo-random "truth" with respect to the defining sets.

19.-Fair representation learning aims to hide sensitive attributes while enabling standard training.

20.-Adversarial approaches censor representations to achieve group fairness notions like statistical parity (a minimal training sketch appears after this list).

21.-Learned censored representations can enable transfer learning to other prediction tasks.

22.-Censoring techniques may identify commonalities across populations for out-of-distribution generalization.

23.-Synthetic data experiments show promise for learning common predictive signal across populations.

24.-Fair algorithms alone cannot fully address societal unfairness.

25.-Breakthroughs in metric learning enable individual fairness.

26.-Multi-calibration emerged as significant for fair scoring, ranking, and understanding individual probabilities.

27.-Representation and data collection are critical factors in algorithmic fairness.

28.-Censored representations offer a path to generalizing across populations.

29.-Achieving truly "superhuman" fairness remains an open challenge.

30.-Much work remains to deeply understand fairness and develop principled, broadly applicable solutions.
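
Note on points 4-5: the individual-fairness notion referenced here is usually stated as a Lipschitz condition (Dwork et al., 2012, "Fairness Through Awareness"). In the sketch below, M is a classifier mapping individuals to distributions over outcomes, D is the task-specific similarity metric on individuals that Ilvento's work aims to approximate, and d is a distance on distributions; the symbol names are ours, not the talk's.

% Individual fairness: similar individuals receive similar distributions over outcomes.
\[
  d\big(M(x),\, M(y)\big) \;\le\; D(x,\, y) \qquad \text{for all individuals } x, y .
\]

The difficulty emphasized in points 4-5 is where the metric D comes from; the guarantee is only as meaningful as D itself.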
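Note on points 6-11: hedged statements of the underlying definitions, in the standard form used in the multi-accuracy and multi-calibration literature (Hebert-Johnson et al., 2018; Kim et al., 2019); the talk's exact parameterization may differ. Let f be a score function, y in {0,1} the outcome, C a collection of subsets of the population (e.g., intersectional groups), and alpha > 0 an accuracy parameter.

% Calibration (point 8): among individuals receiving score v, the outcome frequency is v.
\[
  \mathbb{E}\big[\, y \mid f(x) = v \,\big] \;\approx\; v \qquad \text{for every value } v \text{ in the range of } f .
\]
% alpha-multi-accuracy (points 6, 9): correct on average within every set in C.
\[
  \big|\, \mathbb{E}\big[\, f(x) - y \mid x \in S \,\big] \,\big| \;\le\; \alpha \qquad \text{for every } S \in \mathcal{C} .
\]
% alpha-multi-calibration (points 11, 18, 26): calibrated within every set in C simultaneously,
% typically required only on (S, v) cells of non-negligible mass.
\[
  \big|\, \mathbb{E}\big[\, y - v \mid x \in S,\; f(x) = v \,\big] \,\big| \;\le\; \alpha \qquad \text{for every } S \in \mathcal{C} \text{ and value } v .
\]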
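To make these definitions concrete, here is a minimal NumPy sketch (not from the talk) that audits a score function for multi-accuracy and multi-calibration violations over a given collection of groups; the group masks, score binning, minimum cell size, and tolerance alpha are illustrative assumptions.

import numpy as np

def audit_multicalibration(scores, labels, groups, n_bins=10, alpha=0.05):
    """Flag groups whose average score or per-bin calibration deviates by more than alpha."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)
    bins = np.minimum((scores * n_bins).astype(int), n_bins - 1)
    report = {}
    for name, mask in groups.items():
        if not mask.any():
            continue
        # Multi-accuracy: mean score vs. mean outcome over the whole group.
        ma_gap = abs(scores[mask].mean() - labels[mask].mean())
        # Multi-calibration: the same comparison inside every (group, score-bin)
        # cell that is large enough to estimate a frequency.
        mc_gap = 0.0
        for b in range(n_bins):
            cell = mask & (bins == b)
            if cell.sum() >= 20:  # illustrative minimum cell size
                mc_gap = max(mc_gap, abs(scores[cell].mean() - labels[cell].mean()))
        report[name] = {"multi_accuracy_gap": ma_gap,
                        "multi_calibration_gap": mc_gap,
                        "flagged": ma_gap > alpha or mc_gap > alpha}
    return report

# Toy usage: two groups; group "B" is deliberately over-scored by 0.15.
rng = np.random.default_rng(0)
n = 20000
g = rng.integers(0, 2, n)                       # group membership
p_true = np.where(g == 0, 0.3, 0.6)             # true positive rates
y = (rng.random(n) < p_true).astype(float)
scores = np.clip(p_true + 0.15 * (g == 1) + 0.05 * rng.standard_normal(n), 0, 1)
print(audit_multicalibration(scores, y, {"A": g == 0, "B": g == 1}))

On this toy data the audit should flag group "B", whose scores overstate its outcome rate, while leaving group "A" unflagged.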
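Note on points 14-15: one way to see why multi-accuracy already rules out the blatant failure of point 14, under the illustrative simplifying assumption (ours, not the summary's) that two groups S and T share the same base rate p and that f is alpha-multi-accurate on {S, T}.

% Group averages pin down the extremes: a maximum is at least the mean, a minimum at most the mean.
\[
  \max_{x \in T} f(x) \;\ge\; \mathbb{E}\big[\, f(x) \mid x \in T \,\big] \;\ge\; p - \alpha ,
  \qquad
  \min_{x \in S} f(x) \;\le\; \mathbb{E}\big[\, f(x) \mid x \in S \,\big] \;\le\; p + \alpha .
\]
% Hence the ranking cannot place all of S above all of T by more than a 2*alpha margin:
\[
  \min_{x \in S} f(x) \;-\; \max_{x \in T} f(x) \;\le\; 2\alpha .
\]

Multi-calibration is stronger because it constrains the score distribution within each group at every score level, not just its group average.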
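Note on points 19-21: a minimal PyTorch sketch of adversarial censoring in the spirit of this line of work (e.g., Edwards & Storkey, 2016; Madras et al., 2018), not the talk's specific construction. An encoder is trained so a task head can predict the label y while an adversary cannot recover the sensitive attribute a from the representation; the architecture sizes, the gradient-reversal trick, the reversal weight, and the synthetic data are all illustrative assumptions.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass."""
    @staticmethod
    def forward(ctx, x, weight):
        ctx.weight = weight
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.weight * grad_out, None

d_in, d_rep = 20, 8
encoder = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, d_rep))
task_head = nn.Linear(d_rep, 1)                                              # predicts the label y
adversary = nn.Sequential(nn.Linear(d_rep, 16), nn.ReLU(), nn.Linear(16, 1)) # tries to predict a
opt = torch.optim.Adam(list(encoder.parameters()) + list(task_head.parameters())
                       + list(adversary.parameters()), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# Toy data in which the sensitive attribute a leaks into the features.
n = 4096
a = torch.randint(0, 2, (n, 1)).float()
y = torch.randint(0, 2, (n, 1)).float()
x = torch.randn(n, d_in) + 1.5 * a + 0.8 * y

for step in range(1000):
    z = encoder(x)
    task_loss = bce(task_head(z), y)
    # The adversary sees z through the gradient-reversal layer: it learns to
    # predict a, while the encoder is pushed to make a unpredictable from z.
    adv_loss = bce(adversary(GradReverse.apply(z, 1.0)), a)
    loss = task_loss + adv_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

Once trained, the censored representation z can be reused for other prediction tasks (point 21), since information about a has been suppressed to the extent the adversary class can detect it.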

Knowledge Vault built by David Vivancos 2024