Concept Graph & Summary using Claude 3.5 Sonnet | ChatGPT-4o | Llama 3:
Summary:
1.- Dawn Song is the first invited speaker at an event and is welcomed to the podium.
2.- By 2020, over 50 billion Internet of Things devices were expected to be deployed, and many of them inherit vulnerabilities from shared third-party code.
3.- The speaker's recent work used deep learning and neural network-based graph embeddings to measure code similarity for detecting IoT firmware vulnerabilities (see the embedding-similarity sketch after this list).
4.- This vulnerability-detection approach significantly improves accuracy and reduces training and inference time compared to previous methods.
5.- Deep learning can also help develop agents for attack detection and defense, and protect humans from social engineering attacks.
6.- A DARPA project is developing chatbots to detect social engineering attacks in real-time conversations and challenge potential attackers.
7.- Deep learning and reinforcement learning could help automatically verify software correctness and security by training agents to prove theorems.
8.- As AI controls more systems, attackers have increasing incentives to compromise AI and consequences of misuse become more severe.
9.- Attackers can attack AI system integrity to produce incorrect or targeted results, or attack confidentiality to learn sensitive personal information.
10.- To defend, the security of learning systems themselves must improve. Attackers may also misuse AI to find vulnerabilities and devise new attacks.
11.- In self-driving cars, computer vision usually recognizes traffic signs well, but adversarial examples with subtle modifications can cause misclassification (an FGSM-style sketch follows this list).
12.- Experiments show adversarial traffic-sign images keep fooling computer vision across a range of distances and viewing conditions.
13.- Adversarial examples are prevalent in different domains and model types. GAN-generated and spatially transformed examples are particularly realistic.
14.- Over 100 recent papers proposed defenses, but strong adaptive attackers can evade them, so security remains a major AI deployment challenge.
15.- Improving AI security requires addressing issues at the software, learning, and distributed-system levels; the learning and distributed-system levels present unique challenges.
16.- At the learning level, systems must be evaluated on adversarial inputs, not just normal data. Compositional reasoning for non-symbolic programs is needed.
17.- The speaker's work on neural program synthesis uses recursion to enable provable generalization and learns faster than previous approaches.
18.- Program architecture impacts generalization. More work is needed on architectures with strong generalization and security for broader tasks.
19.- Attackers may also try to extract sensitive personal information from AI systems; a language model trained on the Enron email dataset memorized secrets it contained (see the exposure sketch after this list).
20.- Applying differential privacy during training can prevent a language model from memorizing and later exposing sensitive information (a DP-SGD sketch follows this list).
21.- Differential privacy makes a mechanism's outputs on neighboring databases nearly indistinguishable, protecting each individual's contribution; it is useful but needs more general deployment (the formal definition is restated after this list).
22.- The speaker developed methods for easily integrating differential privacy into SQL-like queries and machine learning, with minimal accuracy loss (see the Laplace-mechanism sketch after this list).
23.- Hardware-based secure enclaves can protect against an untrusted computational infrastructure by providing strong isolation, attestation, and encryption.
24.- The Keystone project aims to create an open-source secure enclave design for the RISC-V architecture to enable transparent verification.
25.- The Oasis platform leverages secure computation and differential privacy for privacy-preserving smart contracts and machine learning on blockchain.
26.- An application allows patients to contribute data for privacy-preserving medical research smart contracts, rewarding them while protecting privacy.
27.- Privacy-preserving smart contracts on blockchain platforms like Oasis can enable user-controlled AI agents that provide benefits without compromising privacy.
28.- AI and security intersect in many open challenges around robustness, detecting compromise, privacy preservation, and democratization. A community effort is required.
29.- The first best paper award went to "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples."
30.- The authors discuss how they defeated several proposed defenses to adversarial examples and give advice for evaluating the true robustness of defenses.
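The embedding-based similarity idea in point 3 can be sketched as follows. This is a minimal stand-in, not the speaker's actual system: the tiny propagation network, the random weights, and the 0.9 threshold are all hypothetical.

```python
# Sketch of embedding-based code similarity (point 3). A trained graph
# neural network would produce the embeddings; the mean-aggregation
# network, weights, and threshold below are hypothetical stand-ins.
import numpy as np

def embed(features: np.ndarray, adjacency: np.ndarray,
          weights: np.ndarray, rounds: int = 3) -> np.ndarray:
    """Propagate node features over a control-flow graph, then pool."""
    h = np.tanh(features @ weights)
    for _ in range(rounds):
        h = np.tanh((features + adjacency @ h) @ weights)
    return h.sum(axis=0)  # one fixed-size vector per function

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
W = rng.normal(scale=0.3, size=(8, 8))                 # untrained stand-in
known_vuln = embed(rng.normal(size=(5, 8)), np.eye(5), W)
candidate = embed(rng.normal(size=(6, 8)), np.eye(6), W)
if cosine(known_vuln, candidate) > 0.9:                # hypothetical threshold
    print("candidate function likely reuses the vulnerable code")
```

In a real system the weights come from training on pairs of known-similar functions, so that similar code maps to nearby vectors and a single nearest-neighbor search scans a whole firmware image.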
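For point 11, the fast gradient sign method (FGSM) is one standard way to craft such adversarial examples; the physical stop-sign attacks discussed in the talk used more robust perturbations, so treat this purely as a digital sketch. The classifier `model` and the eps budget are assumptions.

```python
# FGSM sketch for point 11: a bounded pixel perturbation that increases
# the classifier's loss. `model` is any differentiable image classifier
# (assumed here); eps bounds the per-pixel change.
import torch
import torch.nn.functional as F

def fgsm(model: torch.nn.Module, image: torch.Tensor,
         label: torch.Tensor, eps: float = 8 / 255) -> torch.Tensor:
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move each pixel in the direction that raises the loss, then clamp
    # back to the valid [0, 1] image range.
    adv = image + eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

On undefended classifiers, even a visually imperceptible eps of 8/255 typically flips the predicted label.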
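The memorization result in point 19 is typically quantified with an exposure-style metric in the spirit of Carlini et al.'s "The Secret Sharer": rank the planted secret among random decoys by model log-probability. The `seq_log_prob` hook below is a hypothetical interface to the model under audit.

```python
# Exposure-style memorization audit for point 19. A rank near 1 among
# many random decoys means the model memorized the secret.
import math
import random

def exposure(secret: str, decoys: list[str], seq_log_prob) -> float:
    s = seq_log_prob(secret)
    rank = 1 + sum(1 for d in decoys if seq_log_prob(d) > s)
    return math.log2(len(decoys) + 1) - math.log2(rank)

# e.g. rank a planted 9-digit "secret" against 10,000 random decoys:
decoys = [f"{random.randrange(10**9):09d}" for _ in range(10_000)]
```

An exposure close to log2 of the candidate-space size means the secret is essentially fully memorized and extractable.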
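Point 20 in practice usually means DP-SGD: clip each example's gradient to bound its influence, then add Gaussian noise. A minimal NumPy sketch, assuming a generic `per_example_grad` function; the clip norm and noise multiplier are illustrative values.

```python
# DP-SGD sketch for point 20: bound each example's influence by clipping
# its gradient to norm C, then add Gaussian noise scaled to that bound.
# per_example_grad, clip, and sigma are illustrative assumptions.
import numpy as np

def dp_sgd_step(params, batch, per_example_grad, lr=0.1,
                clip=1.0, sigma=1.1, rng=np.random.default_rng()):
    total = np.zeros_like(params)
    for example in batch:
        g = per_example_grad(params, example)
        g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # clip to norm C
        total += g
    total += rng.normal(0.0, sigma * clip, size=params.shape)  # add noise
    return params - lr * total / len(batch)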
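The indistinguishability guarantee in point 21 is standardly written as (epsilon, delta)-differential privacy: for a randomized mechanism M, any neighboring databases D and D' differing in one record, and any set S of outputs,

```latex
\Pr[M(D) \in S] \;\le\; e^{\epsilon} \, \Pr[M(D') \in S] + \delta
```

Smaller epsilon means an observer can barely tell whether any one individual's record was included, which is exactly the protection for individual contributions the summary describes. The overall (epsilon, delta) cost of DP-SGD above follows from privacy accounting across all training steps.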
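One simple instance of point 22 is answering a SQL-like COUNT query with the Laplace mechanism. The speaker's actual integration methods are not detailed in the summary; the epsilon value and schema below are illustrative.

```python
# Laplace-mechanism sketch for point 22: answer a SQL-like COUNT(*)
# query with noise calibrated to its sensitivity, which is 1 because
# adding or removing one person changes a count by at most 1.
import numpy as np

def private_count(rows, predicate, epsilon=0.5,
                  rng=np.random.default_rng()):
    true_count = sum(1 for row in rows if predicate(row))
    return true_count + rng.laplace(0.0, 1.0 / epsilon)

# e.g. SELECT COUNT(*) FROM visits WHERE diagnosis = 'flu':
# private_count(visits, lambda row: row["diagnosis"] == "flu")
```

Noise of scale 1/epsilon suffices because COUNT has sensitivity 1; queries with larger sensitivity need proportionally more noise, which is why accuracy loss can stay minimal for simple aggregates.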
Knowledge Vault built by David Vivancos 2024