Knowledge Vault 2/71 - ICLR 2014-2023
Timnit Gebru ICLR 2021 - Invited Talk - Moving beyond the fairness rhetoric in machine learning

Concept Graph & Resume using Claude 3 Opus | ChatGPT-4 | Gemini Advanced | Llama 3:

graph LR
  classDef social fill:#f9d4d4, font-weight:bold, font-size:14px;
  classDef platforms fill:#d4f9d4, font-weight:bold, font-size:14px;
  classDef groups fill:#d4d4f9, font-weight:bold, font-size:14px;
  classDef issues fill:#f9f9d4, font-weight:bold, font-size:14px;
  classDef solutions fill:#f9d4f9, font-weight:bold, font-size:14px;
  A[Timnit Gebru ICLR 2021] --> B[Social media: uneven data access, worldview development. 1]
  A --> C[Ethiopian, Eritrean journalists migrate to Twitter. 2]
  C --> D[Facebook moderation ineffective for misinformation, propaganda. 2]
  A --> E[Twitter fails to suspend death threat accounts. 3]
  E --> F[Impacts journalists, marginalized groups. 3]
  A --> G[Google Ethical AI highlights platform failures. 4]
  G --> H[Non-English languages, regions neglected. 4]
  A --> I[Moroccan YouTube channel harasses activists without consequences. 5]
  A --> J[Tech companies' insufficient action on misinformation, hate speech. 6]
  J --> K[Focus on English, neglect other languages, regions. 6]
  A --> L[Platform data trains language models, restricts participation. 7]
  A --> M[AI risk analysis must center impacted voices. 8]
  M --> N[Lived experience provides crucial perspective. 8]
  A --> O[AI ethics community must recognize racism, sexism. 9]
  O --> P[Dismissal of work by Black women, marginalized groups. 9]
  A --> Q[Anthropological photo caption exemplifies problematic Western gaze. 10]
  Q --> R[Ethnic scholars discussing own communities provide better alternatives. 10]
  A --> S[Google discredited Dr. Gebru's team after paper review. 11]
  S --> T[Firing shows power dynamics. 11]
  A --> U[Undergrads, student movements push for accountability. 12]
  U --> V[Academics, institutions demand ethical practices from tech partners. 12]
  A --> W[Papers should not be censored by company lawyers. 13]
  W --> X[Conferences should refuse censored work as propaganda. 13]
  A --> Y[Academia lacks true academic freedom for most. 14]
  Y --> Z[Students, postdocs, junior faculty fear retaliation for speaking out. 14]
  A --> AA[Researchers must push back on unethical corporate influence. 15]
  AA --> AB[Physicists did regarding nuclear weapons development work. 15]
  A --> AC[Language models actively harm people, especially impacted regions. 16]
  AC --> AD[Perspectives of impacted communities must be heard. 16]
  A --> AE[AI risk analysis: listen to most impacted first. 17]
  AE --> AF[Don't shut them down. 17]
  A --> AG[Technology shifts power dynamics. 18]
  AG --> AH[Powerful ignore concerns of less powerful. 18]
  AH --> AI[Amplifying marginalized voices crucial for accountability. 18]
  A --> AJ[Institutional change needed to enable individual ethical actions. 19]
  AJ --> AK[Building worker power enables pushback. 19]
  A --> AL[Academia must change incentives to value data work. 20]
  AL --> AM[New conference tracks are positive steps. 20]
  A --> AN[Companies lack incentive without regulation, legal accountability. 21]
  AN --> AO[Massive lawsuits may be necessary. 21]
  A --> AP[Anthropology, history, sociology offer data ethics frameworks. 22]
  AP --> AQ[But lower in 'hierarchy of knowledge'. 22]
  A --> AR[Elevate scholars from these fields into positions of power. 23]
  AR --> AS[So tech industry must heed their input. 23]
  A --> AT[Dr. Gebru excited about Black women-centered research institute. 24]
  AT --> AU[Provides alternative perspective. 24]
  A --> AV[Analyzing apartheid evolution with satellite imagery, despite long process. 25]
  class A,B,C,D,E,F,G,H,I,J,K,L social;
  class M,N,O,P,Q,R,AP,AQ,AR,AS groups;
  class S,T,U,V,W,X,Y,Z,AA,AB,AC,AD issues;
  class AE,AF,AG,AH,AI,AJ,AK,AL,AM,AN,AO,AT,AU,AV solutions;


1.-Social media platforms allow some groups to easily add data and develop worldviews, while other groups face harassment and death threats.

2.-Many Ethiopian and Eritrean journalists have migrated from Facebook to Twitter due to ineffective moderation of misinformation, fake accounts, and propaganda.

3.-Twitter fails to suspend accounts issuing death threats, while suspending accounts of those receiving the threats. This impacts journalists and marginalized groups.

4.-The Ethical AI team at Google aimed to highlight how social media fails marginalized groups in non-English languages outside the U.S.

5.-An example is given of a popular Moroccan YouTube channel that publishes blackmail videos and private documents to harass activists, without consequences from platforms.

6.-Tech companies take insufficient action on misinformation and hate speech, focusing efforts on English while neglecting other languages and regions.

7.-Their platforms and data are used to train large language models, while people in impacted regions cannot freely participate due to harassment.

8.-Work analyzing AI risks and harms must center voices of people most impacted, not just powerful entities. Lived experience provides crucial perspective.

9.-The AI ethics community must recognize how racism and sexism lead to the dismissal of work by Black women and marginalized groups.

10.-A shared anthropological photo caption exemplifies the problematic white/Western gaze; ethnic-studies scholars discussing their own communities provide better alternatives.

11.-Google followed a playbook to attempt discrediting Dr. Gebru's team after their paper passed peer review. Her firing shows power dynamics.

12.-Undergrads and student movements push for accountability. Academics and institutions need to demand ethical practices from tech companies they work with.

13.-Papers should not be edited by company lawyers to censor concerns. Conferences should refuse to review such censored work as propaganda.

14.-Academia lacks true academic freedom for most beyond a privileged few. Students, postdocs and junior faculty fear retaliation for speaking out.

15.-Researchers have a responsibility to push back on unethical corporate influence, like physicists did regarding nuclear weapons development work.

16.-Building large language models actively harms people, especially in regions not benefiting from the technology. Perspectives of impacted communities must be heard.

17.-When analyzing AI risks, the first step is listening to the people most impacted by the negative consequences, not shutting them down.

18.-Technology shifts power dynamics. Currently powerful entities ignore concerns of the less powerful. Amplifying marginalized voices is crucial for accountability.

19.-Institutional change is needed so individuals don't have to sacrifice careers to do the right thing. Building worker power enables pushback.

20.-Academia needs to change incentive structures to properly value vital areas like data-related work. New tracks at conferences are positive steps.

21.-Companies lack incentive to change unethical practices if not sufficiently regulated and held accountable legally. Massive lawsuits may be necessary.

22.-Academic disciplines like anthropology, history, sociology have useful frameworks for analyzing data ethics issues but are lower in the "hierarchy of knowledge."

23.-Elevating scholars from these fields into positions of power, such as in government, is important so that the tech industry is compelled to heed their input.

24.-Dr. Gebru is excited about proposals for a research institute centering needs of Black women and communities, providing an alternative perspective.

25.-Another project she's enthusiastic about analyzes the evolution of spatial apartheid in South Africa using satellite imagery, despite the long process.

Knowledge Vault built by David Vivancos 2024