Concept Graph & Resume using Claude 3 Opus | ChatGPT-4o | Llama 3:
Resume:
1.- Gebru used Google Street View images to predict demographics like education, voting patterns, and income segregation in US cities.
2.- Publicly available data used for predictive analytics can be beneficial but also problematic if not carefully considered.
3.- Crime prediction algorithms trained on biased policing data can exacerbate societal biases and inequality through runaway feedback loops (illustrated by the toy simulation after this list).
4.- Predictive algorithms are currently used in high-stakes scenarios like immigration vetting, but AI tools aren't robust enough for this.
5.- Facebook mistranslated an Arabic "good morning" post, leading to someone's wrongful arrest and showing how costly AI translation errors can be.
6.- Gebru analyzes how facial recognition is used by law enforcement in an unregulated manner, with half of US adults in databases.
7.- Two key questions: should facial recognition be used this way, and are current AI tools accurate enough for high-stakes use.
8.- Gebru and Joy Buolamwini found commercial facial analysis performed worst on darker-skinned females, with error rates approaching random chance for that group (see the disaggregated-evaluation sketch after this list).
9.- This occurred because training datasets were overwhelmingly made up of lighter-skinned males, so they created a more balanced dataset.
10.- Race is an unstable social construct; skin type was used instead as a more meaningful characteristic in their facial analysis research.
11.- Bringing diverse researcher backgrounds is important; as Black women, Gebru and Buolamwini understood the impacts of colorism.
12.- Their paper led to calls for regulation of facial analysis tools and reaction from companies. Lessons included:
13.- Researchers can't ignore societal problems; vulnerable groups are often unfairly targeted by technology.
14.- Groups selling facial analysis tools to law enforcement rarely include vulnerable populations subject to the technology.
15.- Machine learning conferences overwhelmingly lack women and minorities; those developing the technology must represent the world it impacts.
16.- A follow-up study showed Amazon's Rekognition had similar skin-type biases; its lead author had almost left the field due to discrimination before finding Black in AI.
17.- Gebru co-founded Black in AI to address structural issues in the field, though it wasn't her original research focus.
18.- No laws restrict use of AI APIs; the flawed translation system can be used in high-stakes scenarios without oversight.
19.- San Francisco recently banned government use of facial recognition, but comprehensive regulations and standards are lacking.
20.- Other industries have standards/datasheets specifying ideal use cases and limitations; AI datasets and models need similar documentation.
21.- Some AI tools like gender classifiers may be inherently harmful to groups like transgender people and shouldn't exist.
22.- Gebru's team proposed "Datasheets for Datasets" and "Model Cards" to document dataset and model characteristics, biases, and appropriate uses (a minimal model card sketch appears after this list).
23.- Bias enters AI at every stage: problem formulation, data collection, model architecture, deployment impact analysis.
24.- Whether AI "works" depends on "for whom": a gender classifier may "work" by its accuracy metrics yet still harm trans people.
25.- Problems pursued depend on who formulates them; Gebru is analyzing evolution of spatial apartheid in South Africa via satellite imagery.
26.- African colleagues were empowered to drive locally relevant projects like cassava disease monitoring when given resources and agency.
27.- This contrasts with "parachute/helicopter research" where outsiders exploit community data/knowledge without centering their voices or providing reciprocal benefit.
28.- Centering affected communities makes for better, more ethical science than extractive approaches.
29.- As AI is used for social good, impacted communities must have a central voice in the process.
30.- Likewise in AI ethics, voices of those affected by the technology must be at the forefront.
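
The following toy simulation (invented for illustration, not code from the talk) sketches the runaway feedback loop in item 3: patrols are dispatched to whichever district has the most recorded incidents, but incidents are only recorded where patrols go, so a small initial skew in the historical data locks in and keeps growing even though both districts have identical underlying crime rates. District names, rates, and counts are all hypothetical.

```python
# Toy model of a runaway feedback loop in predictive policing (illustrative only).
# Both districts have the SAME true incident rate, but the historical data is
# slightly skewed toward district_A, and the "predictive" policy keeps sending
# patrols where recorded crime is highest.

import random

random.seed(0)

TRUE_RATE = {"district_A": 0.10, "district_B": 0.10}  # identical underlying rates
recorded = {"district_A": 6, "district_B": 4}          # slightly biased historical data
PATROLS_PER_DAY = 20

for day in range(60):
    # dispatch all patrols to the district with the most recorded incidents
    target = max(recorded, key=recorded.get)
    # incidents are only observed (and recorded) where officers are sent
    observed = sum(random.random() < TRUE_RATE[target] for _ in range(PATROLS_PER_DAY))
    recorded[target] += observed

print(recorded)
# district_A's count keeps growing while district_B stays frozen, so the system
# becomes ever more "certain" that district_A is the high-crime area.
```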
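
Items 8-10 hinge on disaggregated evaluation: reporting error rates per intersectional subgroup (here, skin type by gender) rather than a single aggregate number. The sketch below uses invented toy data and assumed column names; it is not the Gender Shades code, only an illustration of how a tolerable overall error rate can hide near-total failure on one subgroup.

```python
# Disaggregated evaluation sketch (toy data, assumed column names).
# A single overall error rate can look fine while one subgroup fails badly.

import pandas as pd

results = pd.DataFrame({
    "skin_type": ["lighter", "lighter", "darker", "darker"] * 3,
    "gender":    ["male", "female"] * 6,
    "correct":   [1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0],  # 1 = prediction was correct
})

overall_error = 1 - results["correct"].mean()
error_by_group = 1 - results.groupby(["skin_type", "gender"])["correct"].mean()

print(f"overall error rate: {overall_error:.2f}")  # 0.25 looks tolerable...
print(error_by_group.rename("error_rate"))         # ...but darker/female is 1.00
```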
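
For item 22, a minimal machine-readable model card might look like the sketch below. The field names loosely follow the sections proposed in the Model Cards and Datasheets papers, but the class and every value here are invented placeholders, not a real model's documentation.

```python
# Minimal model card sketch (hypothetical class and values; fields loosely follow
# the sections proposed in "Model Cards for Model Reporting").

from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    model_details: str
    intended_use: str
    out_of_scope_uses: list[str]
    evaluation_data: str
    disaggregated_metrics: dict[str, float]  # per-subgroup results, not one number
    ethical_considerations: str
    caveats: list[str] = field(default_factory=list)

card = ModelCard(
    model_details="Hypothetical face-attribute classifier, v0.1",
    intended_use="Research on dataset bias; not for identifying individuals",
    out_of_scope_uses=["law enforcement", "surveillance", "immigration vetting"],
    evaluation_data="Benchmark balanced across skin type and gender (hypothetical)",
    disaggregated_metrics={"lighter_male_accuracy": 0.99, "darker_female_accuracy": 0.66},
    ethical_considerations="Known failure modes concentrate on darker-skinned women",
    caveats=["Reported accuracy does not transfer to unconstrained deployment settings"],
)

print(json.dumps(asdict(card), indent=2))  # ship this documentation with the model
```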
Knowledge Vault built by David Vivancos 2024