Knowledge Vault 6/75 - ICML 2022
Poisoning Attacks Against Support Vector Machines
Battista Biggio, Blaine Nelson, Pavel Laskov

Concept Graph & Resume using Claude 3.5 Sonnet | ChatGPT-4o | Llama 3:

graph LR
  classDef recognition fill:#f9d4d4, font-weight:bold, font-size:14px
  classDef attacks fill:#d4f9d4, font-weight:bold, font-size:14px
  classDef defenses fill:#d4d4f9, font-weight:bold, font-size:14px
  classDef applications fill:#f9f9d4, font-weight:bold, font-size:14px
  A[Poisoning Attacks Against Support Vector Machines] --> B[Recognition]
  A --> C[Attacks]
  A --> D[Defenses]
  A --> E[Applications]
  B --> B1[Recognition for impactful ICML papers. 1]
  B --> B2[Real-world applicability debate. 13]
  B --> B3[Importance of research teamwork. 22]
  B --> B4[Development of poisoning attacks in history. 21]
  C --> C1[Malicious data manipulation compromising models. 2]
  C --> C2[SVMs targeted by poisoning attacks. 3]
  C --> C3[Study of ML model vulnerabilities. 4]
  C --> C4[Optimizing attack points using gradients. 5]
  C --> C5[Adversarial examples fool trained classifiers. 8]
  C --> C6[Small perturbations causing misclassification. 9]
  D --> D1[Studying ML systems attacks and defenses. 10]
  D --> D2[Improving resilience to distribution shifts. 17]
  D --> D3[Identifying significant test data differences. 18]
  D --> D4[Handling incomplete, inaccurate training data. 20]
  D --> D5[Incorporating adversarial examples in training. 25]
  D --> D6[Using strategic decision-making in security. 26]
  E --> E1[Deploying and maintaining ML systems. 28]
  E --> E2[Reliable measures of model uncertainty. 29]
  E --> E3[Applications of ML security techniques. 30]
  E --> E4[Making ML models more understandable. 19]
  E --> E5[Exploiting microphone nonlinearities in attacks. 14]
  E --> E6[Lab versus real-world attack settings. 15]
  class A,B,B1,B2,B3,B4 recognition
  class C,C1,C2,C3,C4,C5,C6 attacks
  class D,D1,D2,D3,D4,D5,D6 defenses
  class E,E1,E2,E3,E4,E5,E6 applications

Resume:

1.- Test of Time Award: Recognition for impactful papers presented at ICML, with honorable mentions and a main award winner.

2.- Poisoning Attacks: Malicious manipulation of training data to compromise machine learning models' performance (a minimal numerical illustration appears after this list).

3.- Support Vector Machines (SVMs): Popular machine learning algorithm targeted by poisoning attacks in the award-winning paper.

4.- Adversarial Machine Learning: Research field studying vulnerabilities and security of machine learning models against attacks.

5.- Gradient-Based Attacks: Method to optimize attack points using gradients of the model's loss function.

6.- Incremental Learning: Technique to update SVM solutions when adding or removing training points without full retraining.

7.- Bi-level Optimization: Formulation of poisoning attacks as a problem with nested optimization objectives (sketched after this list).

8.- Evasion Attacks: Adversarial examples designed to fool trained classifiers at test time.

9.- Adversarial Examples: Small perturbations to input data that cause misclassification in machine learning models (an evasion sketch appears after this list).

10.- Machine Learning Security: Rapidly growing field studying various attacks and defenses for ML systems.

11.- Targeted Poisoning: Attacks aiming to cause misclassification of specific test points or classes.

12.- Backdoor Attacks: Poisoning that creates hidden vulnerabilities triggered by specific patterns known to the attacker.

13.- Practical Relevance: Ongoing debate about the real-world applicability of academic research on ML security.

14.- Dolphin Attack: Real-world attack exploiting microphone nonlinearities to inject inaudible voice commands.

15.- In Vitro vs. In Vivo: Distinction between attacks demonstrated in controlled lab settings versus real-world operational conditions.

16.- Pasteur's Quadrant: Approach to research focusing on practical problems that require fundamental understanding.

17.- Robustness: Goal of improving ML models' resilience to distribution shifts and reducing need for frequent retraining.

18.- Out-of-Distribution Detection: Identifying when test data differs significantly from training data for more reliable predictions.

19.- Interpretability: Making ML models more understandable and maintainable in deployed settings.

20.- Learning from Noisy Data: Improving ML models' ability to handle incomplete or inaccurate training data.

21.- Historical Context: Tracing the development of poisoning attacks and related concepts in ML security.

22.- Collaboration: Importance of teamwork in research, acknowledged by the award recipient.

23.- Threat Models: Defining realistic attack scenarios and capabilities for meaningful security research.

24.- Model Stealing: Potential threat of extracting a copy of a deployed ML model through queries.

25.- Adversarial Training: Defensive technique incorporating adversarial examples during model training (a schematic training loop appears after this list).

26.- Game Theory: Application of strategic decision-making concepts to model attacker-defender interactions in ML security.

27.- Nash Equilibrium: Stable state in adversarial scenarios where neither attacker nor defender can unilaterally improve (a toy check appears after this list).

28.- Machine Learning Operations (MLOps): Practices for deploying and maintaining ML systems in production environments.

29.- Confidence Estimation: Providing reliable measures of model uncertainty for predictions.

30.- Future Directions: Considering potential applications of ML security techniques beyond direct attack prevention.
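
Illustration for item 2 (poisoning attacks): a minimal sketch, assuming scikit-learn and synthetic two-class data, of how a handful of injected, mislabeled training points can shift an SVM's decision boundary. This is a crude stand-in for the paper's optimized attack points, not the paper's method; all dataset and hyperparameter choices below are arbitrary.

# Illustrative poisoning sketch (not the paper's exact attack):
# inject a few mislabeled points and observe the effect on test accuracy.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X, y = make_blobs(n_samples=400, centers=2, cluster_std=2.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clean_acc = SVC(kernel="linear", C=1.0).fit(X_tr, y_tr).score(X_te, y_te)

# Poison: place a few points near class 1's centroid but label them class 0
# (a crude stand-in for gradient-optimized attack points).
n_poison = 10
centroid_1 = X_tr[y_tr == 1].mean(axis=0)
X_poison = centroid_1 + rng.randn(n_poison, 2) * 0.5
y_poison = np.zeros(n_poison, dtype=int)

X_poisoned = np.vstack([X_tr, X_poison])
y_poisoned = np.concatenate([y_tr, y_poison])
poisoned_acc = SVC(kernel="linear", C=1.0).fit(X_poisoned, y_poisoned).score(X_te, y_te)

print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")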
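
Sketch for items 5-7 (gradient-based attacks, incremental learning, bi-level optimization): in general form, the attacker injects a point (x_c, y_c) and chooses x_c to maximize the learner's loss on clean validation data, while the SVM parameters are themselves the result of training on the poisoned set. Notation here is chosen for illustration (D_tr and D_val are the training and validation sets, \ell the hinge loss, f_{w,b} the SVM decision function):

\[
\max_{x_c}\;\; \sum_{(x_k, y_k)\in \mathcal{D}_{\mathrm{val}}} \ell\bigl(y_k,\, f_{\hat{w},\hat{b}}(x_k)\bigr)
\quad\text{s.t.}\quad
(\hat{w},\hat{b}) \in \arg\min_{w,b}\; \tfrac{1}{2}\lVert w\rVert^{2} + C \sum_{(x_i, y_i)\in \mathcal{D}_{\mathrm{tr}}\cup\{(x_c, y_c)\}} \ell\bigl(y_i,\, f_{w,b}(x_i)\bigr)
\]

The outer objective is the learner's loss on clean validation data; the inner problem is ordinary SVM training on the poisoned set. The gradient-based attack of item 5 ascends the outer objective in x_c, differentiating through the inner solution, and the incremental SVM updates of item 6 keep retraining cheap as x_c moves.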
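
Sketch for items 8-9 (evasion attacks, adversarial examples): one widely used way to craft such perturbations is a fast-gradient-sign-style step, which comes from later work rather than the awarded paper. A minimal NumPy example against a linear classifier, with all numbers and names chosen here:

# FGSM-style evasion sketch for a linear classifier f(x) = w.x + b
# (illustrative; the fast gradient sign method is from later work).
import numpy as np

def fgsm_linear(x, y, w, b, eps):
    """Perturb x by eps in the direction that increases the hinge loss.

    y is in {-1, +1}; the hinge loss is max(0, 1 - y * (w.x + b)), whose
    gradient w.r.t. x is -y * w wherever the margin is violated; here we
    simply take the sign of -y * w as the ascent direction.
    """
    grad = -y * w                      # ascent direction for the loss
    return x + eps * np.sign(grad)     # L-infinity bounded step

# Toy usage: a point correctly classified with margin, then evaded.
w, b = np.array([1.0, -2.0]), 0.5
x, y = np.array([2.0, 0.5]), +1        # f(x) = 1.5 > 0, correct
x_adv = fgsm_linear(x, y, w, b, eps=1.0)
print("clean score:", w @ x + b)       # positive -> class +1
print("adv score:  ", w @ x_adv + b)   # pushed to the wrong side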
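
Sketch for item 25 (adversarial training): the idea is to perturb each batch against the current model and take the gradient step on the perturbed inputs. A schematic, NumPy-only loop for a linear logistic model, with arbitrary synthetic data and hyperparameters (not a specific library's API):

# Schematic adversarial training for a linear logistic model (NumPy only).
# Each step trains on eps-perturbed inputs crafted against the current
# weights; purely illustrative, hyperparameters chosen arbitrarily.
import numpy as np

rng = np.random.RandomState(0)
n, d, eps, lr = 200, 2, 0.2, 0.1
X = rng.randn(n, d) + np.where(rng.rand(n, 1) < 0.5, 2.0, -2.0)
y = (X[:, 0] + X[:, 1] > 0).astype(float)           # labels in {0, 1}

w, b = np.zeros(d), 0.0
for step in range(200):
    # Craft adversarial inputs against the current model (FGSM-style):
    # for the logistic loss, d(loss)/dx = (sigmoid(z) - y) * w.
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))
    X_adv = X + eps * np.sign((p - y)[:, None] * w)

    # Gradient step on the adversarially perturbed batch.
    z_adv = X_adv @ w + b
    p_adv = 1.0 / (1.0 + np.exp(-z_adv))
    w -= lr * (X_adv.T @ (p_adv - y) / n)
    b -= lr * (p_adv - y).mean()

acc = (((X @ w + b) > 0) == (y == 1)).mean()
print("accuracy on clean data after adversarial training:", acc)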
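
Sketch for items 26-27 (game theory, Nash equilibrium): in a finite game, a pure-strategy Nash equilibrium is a cell of the payoff matrix where each player is already best-responding to the other. A toy attacker-versus-defender game with invented payoffs and a brute-force check:

# Brute-force pure-strategy Nash equilibrium check for a toy 2x2
# attacker-vs-defender game (strategy labels and payoffs are invented).
import numpy as np

# Rows: defender strategies, e.g. {no hardening, adversarial training}.
# Columns: attacker strategies, e.g. {evasion, poisoning}.
defender_payoff = np.array([[1, 0],
                            [3, 2]])
attacker_payoff = np.array([[2, 1],
                            [2, 0]])

equilibria = []
for i in range(2):            # defender's choice
    for j in range(2):        # attacker's choice
        defender_ok = defender_payoff[i, j] >= defender_payoff[:, j].max()
        attacker_ok = attacker_payoff[i, j] >= attacker_payoff[i, :].max()
        if defender_ok and attacker_ok:
            equilibria.append((i, j))

print("pure-strategy Nash equilibria (defender, attacker):", equilibria)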

Knowledge Vault built by David Vivancos 2024