Knowledge Vault 2/84 - ICLR 2014-2023
Jenny Davis ICLR 2022 - Invited Talk - ‘Affordances’ for Machine Learning
<Resume Image>

Concept Graph & Resume using Claude 3 Opus | ChatGPT 4 | Gemini Advanced | Llama 3:

graph LR
    classDef main fill:#f9d4d4, stroke:#333, stroke-width:2px, font-weight:bold, font-size:14px;
    classDef framework fill:#d4f9d4, stroke:#333, stroke-width:2px, font-weight:bold, font-size:14px;
    classDef ml fill:#d4d4f9, stroke:#333, stroke-width:2px, font-weight:bold, font-size:14px;
    classDef design fill:#f9f9d4, stroke:#333, stroke-width:2px, font-weight:bold, font-size:14px;
    classDef analysis fill:#f9d4f9, stroke:#333, stroke-width:2px, font-weight:bold, font-size:14px;
    A[Jenny Davis ICLR 2022] --> B[Framework analyzes, designs ML systems 28]
    B --> C[Operational model clarifies connections 1]
    B --> D[Framework identifies, observes, reimagines systems 2]
    B --> E[Tool for critical, intentional design 3]
    B --> F[Framework specifies features, outcomes interaction 8]
    B --> G[Framework articulates mechanisms, conditions 11]
    G --> H[Mechanisms: request, demand, encourage, discourage 12]
    B --> I[Pluriverse focus connects design, effects 13]
    A --> J[ML socially embedded defaults not inevitable 29]
    J --> K[Technologies reflect, shape social choices 4]
    J --> L[ML systems learn, change, snowball 5]
    J --> M[ML amplifies inequalities, hard to undo 6]
    J --> N[MPS reproduces racial profiling through ML 21]
    N --> O[MPS demands assumed criminality-appearance links 22]
    N --> P[Some ML systems need dismantling 23]
    J --> Q[Amazon uses ML to govern workers 15]
    Q --> R[Systems allow monitoring, demand compliance 16]
    Q --> S[Reimagining could support worker dignity 17]
    S --> T[Changes: depersonalize tracking, remove rates 18]
    A --> U[Analysis, design linked ML always unfinished 24]
    U --> V[Affordances balance shaping effects, agency 9]
    U --> W[Problems: binary application, universal subject 10]
    U --> X[Audit opportunities, power reimagine equity 14]
    X --> Y[Worker-centric ideas concretize labor movements 19]
    U --> Z[Scrutiny enables interventions, challenges defaults 7]
    A --> AA[Standpoint affects analysis diversity lacking 25]
    AA --> AB[Marginalized perspectives recognize clouded realities 26]
    AB --> AC[Involve communities systems reproduce inequities 27]
    AB --> AD[Center marginalized voices dismantle some applications 30]
    class A main;
    class B,C,D,E,F,G,H,I framework;
    class J,K,L,M,N,O,P,Q,R,S,T ml;
    class U,V,W,X,Y,Z design;
    class AA,AB,AC,AD analysis;

Resume:

1.-The talk discusses how an operational model of technological affordances can clarify connections between technical features and social effects of machine learning systems.

2.-The affordance model, called the Mechanisms and Conditions Framework, can be used to identify, observe, reimagine, rebuild or dismantle ML systems as needed.

3.-It can also be a tool for intentional ground-up design of ML systems from a place of critical analysis.

4.-Technologies are social - they reflect and shape norms, values, patterns and structures. Design choices in ML are social choices.

5.-Machine learning systems learn, develop and change over time. Their effects can snowball, self-reinforce and codify the social worlds they operate in.

6.-ML systems project an air of objectivity, which makes their problems hard to pinpoint and undo. They amplify inequalities by learning from unequal societal data.

7.-Machine learning defaults reinforce the status quo and its margins, but defaults are not inevitable. Scrutiny through systematic frameworks can enable interventions.

8.-The Mechanisms and Conditions Framework of affordance specifies how technical features and social outcomes interact. It was introduced in the speaker's book.

9.-Affordances describe how a technology's features affect its uses and functions, covering both direct utilities and social effects. The term balances technology's shaping effects against human agency.

10.-Two problems limit the analytic value of affordances: 1) binary application (a technology either affords or it doesn't), and 2) presuming a universal subject (the same effects for everyone).

11.-The framework corrects these flaws by articulating the mechanisms (how tech affords) and conditions (for whom, under what circumstances).

12.-Mechanisms describe how technologies request, demand, encourage, discourage, refuse, and allow social action. Conditions cover perception, dexterity, and cultural/institutional legitimacy.
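
As a purely illustrative sketch (not code from the talk), the framework's vocabulary can be written down as a small data structure for recording an affordance audit; every name below is a hypothetical choice made for this example.

```python
from dataclasses import dataclass
from enum import Enum


class Mechanism(Enum):
    """How a technical feature pushes on social action."""
    REQUEST = "request"
    DEMAND = "demand"
    ENCOURAGE = "encourage"
    DISCOURAGE = "discourage"
    REFUSE = "refuse"
    ALLOW = "allow"


class Condition(Enum):
    """For whom, and under what circumstances, the mechanism operates."""
    PERCEPTION = "perception"
    DEXTERITY = "dexterity"
    CULTURAL_INSTITUTIONAL_LEGITIMACY = "legitimacy"


@dataclass
class AffordanceObservation:
    """One row of a hypothetical affordance audit."""
    feature: str          # technical feature under scrutiny
    social_action: str    # social action the feature bears on
    mechanism: Mechanism  # how the feature pushes on that action
    conditions: dict      # Condition -> who is affected, and how


# Example row, drawn from the Amazon fulfillment case in points 15-16.
row = AffordanceObservation(
    feature="per-task scan-rate tracking",
    social_action="continuous bodily and temporal compliance",
    mechanism=Mechanism.DEMAND,
    conditions={
        Condition.CULTURAL_INSTITUTIONAL_LEGITIMACY:
            "workers lack standing to alter the metrics",
    },
)
print(f"'{row.feature}' {row.mechanism.value}s {row.social_action}")
```

A table of such rows is one way an analyst could make the audit described in point 14 explicit and comparable across features.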

13.-The framework assumes no universal design is possible, refocusing on a "pluriverse." It makes systems observable and accountable by connecting design and social effects.

14.-Analysts can use it to audit how systems open/foreclose opportunities and reflect power. Designers can use it to reimagine systems encouraging equity, inclusivity, etc.

15.-Example 1: Amazon fulfillment centers use ML to govern workers in a de-skilling, denigrating way. An affordance analysis clarifies how and for whom.

16.-The systems request task discretization, which allows easy monitoring. The metrics-focused systems demand bodily and temporal compliance, and neither management nor workers have the legitimacy to alter them.

17.-Workers resist in small ways at a cost. The affordances still encourage compliance and control by default. Reimagining could support worker dignity instead.

18.-Specific changes: Depersonalize tracking to items vs individuals. Remove/recalibrate demanding time rates. Require intervention to change schedules. Collect different data, use it more respectfully.
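
To make the first of these changes concrete, here is a minimal, hypothetical sketch of what shifting tracking from individuals to items might look like as a data-model change; the record fields and thresholds are assumptions for illustration, not details from the talk.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class WorkerRateRecord:
    """Default design: tracking attaches to the individual and a demanded time rate."""
    worker_id: str
    items_scanned: int
    window_minutes: int

    def meets_rate(self, required_per_hour: float) -> bool:
        # The rate check is what makes the system *demand* temporal compliance.
        return self.items_scanned / (self.window_minutes / 60) >= required_per_hour


@dataclass
class ItemFlowRecord:
    """Reimagined design: tracking attaches to items and stations, not people."""
    station_id: str
    item_id: str
    scanned_at: datetime  # enough for logistics, with no per-worker scoring


# The same scan event, recorded under each design.
default_view = WorkerRateRecord(worker_id="w-123", items_scanned=180, window_minutes=60)
reimagined_view = ItemFlowRecord(station_id="pack-07", item_id="sku-998",
                                 scanned_at=datetime.now())
print(default_view.meets_rate(200.0), reimagined_view.station_id)
```

The point of the contrast is that the affordance changes with the schema: the first record demands a rate from a named worker, while the second merely allows throughput monitoring at the station level.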

19.-These worker-centric ideas come from labor movements, concretized through affordances. Imagining alternatives sets standards for debate and resistance that corporations must contend with.

20.-Example 2: Massively parallel sequencing (MPS) is a policing tool that infers ancestry, gender, and appearance from crime scene DNA. It constructs racialized suspect pools.

21.-Australia has no regulations on this use of biometric data; police advocated against regulating it. MPS reproduces statistical racial profiling through machine learning.

22.-MPS demands integrating character and physiology, encouraging assumed criminality-appearance links. It discourages community input. The accused lack legitimacy to question it.

23.-MPS needs dismantling, not reconfiguring. Critical affordance analysis highlights when this is the case - not all ML systems should exist.

24.-Two caveats: 1) Analysis and design are linked - scrutiny should aim to remake or dismantle. ML objects are dynamic, always unfinished.

25.-2) Standpoint affects affordance analyses. Computational fields' lack of diversity is a "privilege hazard" - makers can't anticipate harms to people unlike them.

26.-Marginalized perspectives are uniquely valuable for recognizing realities that remain clouded to others. Affordance analysis requires a critical lens and is best applied by diverse analysts.

27.-Design and auditing should bring in affected communities, approach humbly, and be ready to respond when systems inevitably learn to reproduce social inequities.

28.-The talk introduces the Mechanisms and Conditions framework of affordances as a tool to critically analyze and intentionally design machine learning systems.

29.-It argues ML is socially embedded and its design has high stakes. Defaults matter but aren't inevitable. Systematic frameworks can enable productive interventions.

30.-Using affordances to reimagine systems requires centering marginalized voices and accepting the need to sometimes dismantle irredeemable applications. Responsible ML needs diverse critical lenses.

Knowledge Vault built by David Vivancos 2024