Knowledge Vault 6/85 - ICML 2023
Learning Fair Representations
Richard Zemel · Yu Wu · Kevin Swersky · Toniann Pitassi · Cynthia Dwork
< Resume Image >

Concept Graph & Resume using Claude 3.5 Sonnet | ChatGPT-4o | Llama 3:

graph LR
    classDef recognition fill:#d4f9d4, font-weight:bold, font-size:14px
    classDef attacks fill:#f9d4d4, font-weight:bold, font-size:14px
    classDef defenses fill:#d4d4f9, font-weight:bold, font-size:14px
    classDef relevance fill:#f9f9d4, font-weight:bold, font-size:14px
    A[Learning Fair Representations] --> B[Test of Time: recognition for impactful papers. 1]
    A --> C[Poisoning attacks: malicious data manipulation. 2]
    A --> D[SVMs: targeted by poisoning attacks. 3]
    A --> E[Adversarial ML: security vulnerabilities focus. 4]
    A --> F[Incremental learning: update SVM without retraining. 5]
    A --> G[Gradient-based attacks: optimize poisoning via gradients. 6]
    C --> H[Bi-level optimization: formalizes poisoning attacks. 7]
    H --> I[Detectability vs. impact: trade-off in attacks. 8]
    I --> J[Robust learning: mitigate poisoning impacts. 9]
    I --> K[Evasion attacks: manipulate test data. 10]
    H --> L[Adversarial examples: small perturbations misclassify. 11]
    C --> M[Model interpretability: understand decision processes. 12]
    M --> N[Deep learning security: vulnerabilities, defenses. 13]
    N --> O[Attack categorization: classify attack types. 14]
    O --> P[Adversarial training: incorporate attacks in training. 15]
    D --> Q[Game-theoretical models: analyze classifier-adversary interactions. 16]
    Q --> R[Targeted poisoning: specific misclassification attacks. 17]
    R --> S[Backdoor attacks: hidden vulnerabilities triggered. 18]
    S --> T[Practical relevance: real-world applicability debate. 19]
    A --> U[Non-ML vulnerabilities: exploit preprocessing, hardware. 20]
    U --> V[In vitro vs. in vivo: controlled vs. real-world attacks. 21]
    V --> W[Adversarial ML future: long-term research impact. 22]
    W --> X[Industrial challenges: practical industry applications. 23]
    X --> Y[Model robustness: stability, reliability improvement. 24]
    A --> Z[Out-of-distribution: identify differing input data. 25]
    Z --> AA[Model maintainability: ease of updates, management. 26]
    AA --> AB[Noisy data: train on imperfect datasets. 27]
    AB --> AC[Practical impact: real-world improvements questioned. 28]
    A --> AD[Collaborative research: importance of teamwork. 29]
    AD --> AE[Evolving threats: realistic attack scenarios. 30]
    class B recognition
    class C,D,E,F,G,H,I,J,K,L,M,N,O,P,Q,R,S,T,U,V,W,X,Y,Z,AA,AB,AC attacks
    class J,K,L defenses
    class AD,AE relevance

Resume:

1.- Test of Time Award: Recognition given to influential papers that have had lasting impact in machine learning research.

2.- Poisoning attacks: Malicious manipulation of training data to compromise the performance of machine learning models.
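
A minimal illustration of the idea, not the award-winning paper's optimized attack: the toy script below, assuming scikit-learn and NumPy, flips the labels of 15% of the training points of a linear SVM (the victim model from point 3) and measures the drop in test accuracy. The poison rate and dataset are arbitrary choices.

```python
# Toy label-flipping poisoning demo (a crude stand-in for optimized poisoning).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clean_acc = SVC(kernel="linear").fit(X_tr, y_tr).score(X_te, y_te)

# Poison 15% of the training set by flipping labels.
y_poisoned = y_tr.copy()
idx = rng.choice(len(y_tr), size=int(0.15 * len(y_tr)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned_acc = SVC(kernel="linear").fit(X_tr, y_poisoned).score(X_te, y_te)
print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```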

3.- Support Vector Machines (SVMs): Popular machine learning algorithm targeted in the award-winning paper on poisoning attacks.

4.- Adversarial machine learning: Research area focusing on security vulnerabilities of machine learning models against malicious attacks.

5.- Incremental learning: Technique for updating SVM models when adding or removing training points without full retraining.
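
In incremental SVM learning of the Cauwenberghs-Poggio style (my attribution of the standard technique, not a quote from the talk), adding or removing a point means adjusting the dual coefficients so that the conditions below stay satisfied for every remaining point; a sketch from the standard SVM dual, so notation may differ from the slides.

```latex
% Margin function for point i, with Q_{ij} = y_i y_j K(x_i, x_j):
g_i = \sum_j Q_{ij}\,\alpha_j + y_i b - 1
% KKT conditions the incremental update must preserve:
g_i > 0 \;\Rightarrow\; \alpha_i = 0, \qquad
g_i = 0 \;\Rightarrow\; 0 \le \alpha_i \le C, \qquad
g_i < 0 \;\Rightarrow\; \alpha_i = C
```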

6.- Gradient-based attacks: Method of optimizing poisoning attacks by following the gradient of the attacker's objective (for example, the validation loss of the retrained model) with respect to the features of the poisoning point.
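
A rough sketch of this idea, replacing the paper's analytic gradient through the SVM solution with a finite-difference estimate (the poison label, step sizes, and number of iterations below are arbitrary assumptions):

```python
# Gradient-ascent poisoning via finite differences (a numerical stand-in for
# differentiating the retrained SVM with respect to the poison point).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=5, random_state=1)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=1)

def val_loss(x_poison, y_poison=0):
    """Retrain with one poison point; return mean hinge loss on validation data."""
    clf = SVC(kernel="linear", C=1.0).fit(np.vstack([X_tr, x_poison]),
                                          np.append(y_tr, y_poison))
    margins = clf.decision_function(X_val) * np.where(y_val == 1, 1, -1)
    return np.maximum(0.0, 1.0 - margins).mean()

x_c = X_tr[y_tr == 1][0].copy()   # seed the attack from an opposite-class point
eps, lr = 1e-2, 0.5               # finite-difference step and ascent rate (arbitrary)
for _ in range(20):
    grad = np.zeros_like(x_c)
    for j in range(len(x_c)):     # central finite difference per feature
        e = np.zeros_like(x_c)
        e[j] = eps
        grad[j] = (val_loss(x_c + e) - val_loss(x_c - e)) / (2 * eps)
    x_c += lr * grad              # ascend: make the validation loss worse
print("validation hinge loss with optimized poison point:", round(val_loss(x_c), 3))
```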

7.- Bi-level optimization: Formalization of poisoning attacks as a two-level optimization problem for finding optimal attack points.
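
One common way to write this problem, a sketch consistent with the summary rather than a formula copied from the paper: the attacker picks a poisoning point to maximize loss on clean validation data, while the inner problem is the learner fitting its parameters on the poisoned training set.

```latex
\max_{x_c}\; L_{\mathrm{val}}\bigl(\theta^{*}(x_c)\bigr)
\quad \text{s.t.} \quad
\theta^{*}(x_c) \in \arg\min_{\theta}\; L_{\mathrm{train}}\bigl(\theta;\, D \cup \{(x_c, y_c)\}\bigr)
```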

8.- Detectability vs. impact: Trade-off between the effectiveness of poisoning attacks and their likelihood of being detected.

9.- Robust learning techniques: Defensive methods to mitigate the impact of poisoning attacks on machine learning models.
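
One simple sanitization-style defense, chosen here for illustration and not claimed to be one from the talk: fit a model, discard the training points it fits worst (likely outliers or flipped labels), and retrain on the rest. The SVM victim and the 10% drop fraction are assumptions.

```python
# Margin-based data sanitization: drop the most suspicious training points, retrain.
import numpy as np
from sklearn.svm import SVC

def sanitize_and_fit(X_tr, y_tr, drop_fraction=0.1):
    base = SVC(kernel="linear").fit(X_tr, y_tr)
    # Signed margin of each training point; very negative = confidently misfit.
    margins = base.decision_function(X_tr) * np.where(y_tr == 1, 1, -1)
    keep = margins.argsort()[int(drop_fraction * len(y_tr)):]  # drop lowest margins
    return SVC(kernel="linear").fit(X_tr[keep], y_tr[keep])
```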

10.- Evasion attacks: Attacks aimed at fooling trained classifiers by manipulating test data to cause misclassification.

11.- Adversarial examples: Small, often imperceptible perturbations to input data that cause misclassification in deep neural networks.
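
The canonical recipe is the fast gradient sign method (FGSM); the sketch below applies it to a scikit-learn logistic regression, where the gradient of the cross-entropy loss with respect to the input has a simple closed form. The linear toy setup and the epsilon value are my assumptions, not details from the talk.

```python
# FGSM-style adversarial example against a logistic-regression classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=2)
clf = LogisticRegression(max_iter=1000).fit(X, y)

x, label = X[0], y[0]
p = clf.predict_proba(x.reshape(1, -1))[0, 1]
# For logistic regression, d(cross-entropy)/dx = (p - y) * w.
grad = (p - label) * clf.coef_[0]
x_adv = x + 0.5 * np.sign(grad)   # epsilon = 0.5 in input units (arbitrary)

print("clean prediction:      ", clf.predict(x.reshape(1, -1))[0])
print("adversarial prediction:", clf.predict(x_adv.reshape(1, -1))[0])
```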

12.- Model interpretability: Efforts to understand and explain the decision-making process of machine learning models.

13.- Security of deep learning: Research on vulnerabilities and defenses for deep neural networks against various types of attacks.

14.- Categorization of attacks: Systematic classification of different attack types in adversarial machine learning.

15.- Adversarial training: Defensive technique incorporating adversarial examples into the training process to improve model robustness.
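
A minimal sketch of the training loop, assuming a NumPy-only logistic regression and FGSM as the inner attack (both chosen here for brevity): every parameter update is taken on inputs perturbed against the current parameters.

```python
# Minimal adversarial-training loop for logistic regression (NumPy only).
import numpy as np
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=20, random_state=3)
w, b = np.zeros(X.shape[1]), 0.0
lr, eps = 0.1, 0.1   # learning rate and perturbation budget (arbitrary)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w        # d(loss)/dx for each example
    X_adv = X + eps * np.sign(grad_x)    # FGSM perturbation of the inputs
    # Take the parameter gradient step on the *adversarial* examples.
    err = sigmoid(X_adv @ w + b) - y
    w -= lr * (X_adv.T @ err) / len(y)
    b -= lr * err.mean()
```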

16.- Game-theoretical models: Frameworks for analyzing interactions between classifiers and adversaries in machine learning security.
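
These interactions are often written as a leader-follower (Stackelberg) problem; the generic form below is a textbook formulation rather than a specific model from the talk, and it mirrors the bi-level poisoning problem of point 7 with the roles of learner and adversary swapped.

```latex
% Learner (leader) commits to \theta; adversary (follower) best-responds with a^{*}(\theta).
\min_{\theta}\; L_{\mathrm{learner}}\bigl(\theta,\, a^{*}(\theta)\bigr)
\quad \text{s.t.} \quad
a^{*}(\theta) \in \arg\max_{a \in \mathcal{A}}\; U_{\mathrm{adv}}(\theta, a)
```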

17.- Targeted poisoning attacks: Attacks aiming to cause specific misclassifications rather than general performance degradation.

18.- Backdoor attacks: Poisoning attacks that insert hidden vulnerabilities activated by specific triggers known only to the attacker.
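
A schematic of how backdoor training data is typically constructed; the trigger shape, patch location, and poison rate below are arbitrary illustrative choices.

```python
# Sketch of backdoor data poisoning: stamp a trigger patch on a few training
# images and relabel them so the trigger, not the content, predicts the target.
import numpy as np

def add_backdoor(images, labels, target_class=0, poison_fraction=0.05, seed=0):
    """images: (N, H, W) float array in [0, 1]; labels: (N,) int array."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_fraction * len(images)), replace=False)
    images[idx, -3:, -3:] = 1.0   # 3x3 white square in the bottom-right corner
    labels[idx] = target_class    # attacker-chosen target label
    return images, labels

# At test time the attacker stamps the same patch on any input to trigger target_class.
```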

19.- Practical relevance: Ongoing debate about the real-world applicability and impact of academic research on adversarial machine learning.

20.- Non-ML vulnerabilities: Exploiting weaknesses in preprocessing or hardware components rather than the ML model itself.

21.- In vitro vs. in vivo attacks: Distinction between attacks demonstrated in controlled settings versus those effective in real-world conditions.

22.- Future of adversarial ML: Uncertainty about the long-term impact and direction of research in adversarial machine learning.

23.- Industrial challenges: Potential applications of adversarial ML techniques to solve practical problems in industry.

24.- Model robustness: Improving the stability and reliability of machine learning models over time and across different conditions.

25.- Out-of-distribution detection: Identifying when input data differs significantly from the training distribution for more reliable predictions.
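
One simple baseline, a common choice rather than necessarily the method discussed: score each input by its maximum softmax probability and flag low-confidence inputs as out-of-distribution. The threshold below is an arbitrary placeholder that would normally be tuned on held-out data.

```python
# Maximum-softmax-probability baseline for out-of-distribution detection.
import numpy as np

def msp_score(logits):
    """Higher score = more in-distribution. logits: (N, num_classes)."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

def flag_ood(logits, threshold=0.7):
    """Flag inputs whose top softmax probability falls below the threshold."""
    return msp_score(logits) < threshold
```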

26.- Model maintainability: Enhancing the ease of updating and managing deployed machine learning models.

27.- Learning from noisy data: Improving model performance when training on incomplete or imperfect labeled datasets.
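
One standard way to formalize this, a generic loss-correction recipe rather than a method attributed to the talk: model label noise with a transition matrix and train against the observed noisy labels through it.

```latex
% T_{ij} = P(\tilde{y} = j \mid y = i): probability clean class i is observed as noisy class j.
p(\tilde{y} = j \mid x) = \sum_{i} T_{ij}\, p_\theta(y = i \mid x),
\qquad
\mathcal{L}_{\mathrm{forward}} = -\log p(\tilde{y} \mid x)
```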

28.- Practical impact: Questioning whether academic research in adversarial ML will lead to meaningful improvements in real-world applications.

29.- Collaborative research: Importance of working with various collaborators and building on others' work in the field.

30.- Evolving threat models: Need to consider realistic attack scenarios and adapt research focus to address practical security concerns.

Knowledge Vault built by David Vivancos 2024