ABS: Scanning Neural Networks for Back-Doors by Artificial Brain Stimulation
This paper presents a technique to scan neural network based AI models to determine whether they are trojaned. Pre-trained AI models may contain back-doors that are injected through training or by transforming inner neuron weights. These trojaned models operate normally when regular inputs are provided, but mis-classify to a specific output label when the input is stamped with a special pattern called a trojan trigger. We develop a novel technique that analyzes inner neuron behaviors by determining how output activations change when we introduce different levels of stimulation to a neuron. Neurons that substantially elevate the activation of a particular output label regardless of the provided input are considered potentially compromised. The trojan trigger is then reverse-engineered through an optimization procedure using the stimulation analysis results, to confirm that a neuron is truly compromised. We evaluate our system ABS on 177 trojaned models that are trojaned with various attack methods targeting both the input space and the feature space, with various trojan trigger sizes and shapes, together with 144 benign models trained with different data and initial weight values. These models belong to 7 different model structures and 6 different datasets, including some complex ones such as ImageNet, VGG-Face and ResNet110. Our results show that ABS is highly effective: it achieves over a 90% detection rate in most cases (and 100% in many), when only one input sample is provided for each output label. It substantially out-performs the state-of-the-art technique Neural Cleanse, which requires many input samples and small trojan triggers to achieve good performance.
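The stimulation analysis described above can be sketched in a few lines. This is a minimal illustration on a hypothetical two-layer network with random stand-in weights (the real ABS operates on trained CNNs and additionally reverse-engineers the trigger, which this sketch omits): each hidden neuron is clamped to several elevated values, and we record how much each output logit rises relative to the unstimulated forward pass, averaged over inputs. A neuron that strongly elevates one label across all inputs is a candidate compromised neuron.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny classifier: 16 inputs -> 8 hidden (ReLU) -> 4 labels.
# Random weights are stand-ins; ABS itself scans real pre-trained models.
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 4))

def forward(x, neuron=None, value=None):
    """Forward pass, optionally clamping one hidden neuron to `value`
    (the 'artificial brain stimulation' step)."""
    h = np.maximum(W1.T @ x, 0.0)
    if neuron is not None:
        h = h.copy()
        h[neuron] = value
    return W2.T @ h

def stimulation_analysis(inputs, levels):
    """For each hidden neuron, the average (over inputs) of the largest
    logit elevation (over stimulation levels) it can induce per label."""
    n_hidden, n_labels = W2.shape
    elevation = np.zeros((n_hidden, n_labels))
    for x in inputs:
        base = forward(x)
        for n in range(n_hidden):
            best = np.full(n_labels, -np.inf)
            for v in levels:
                best = np.maximum(best, forward(x, neuron=n, value=v) - base)
            elevation[n] += best
    return elevation / len(inputs)

inputs = [rng.normal(size=16) for _ in range(5)]
elev = stimulation_analysis(inputs, levels=[5.0, 10.0, 20.0])
# The (neuron, label) pair with the largest input-independent elevation
# is the top candidate for a compromised neuron / target label.
suspect_neuron, suspect_label = np.unravel_index(np.argmax(elev), elev.shape)
```

In the full system, candidates found this way are then confirmed by optimizing an input-space (or feature-space) trigger that maximally activates the suspect neuron and checking whether it flips the model's predictions to the suspect label.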
Source:
https://www.cs.purdue.edu/homes/taog/docs/CCS19.pdf
Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
---
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.
Chapters
1. ABS: Scanning Neural Networks for Back-Doors by Artificial Brain Stimulation (00:00:00)
2. ABSTRACT (00:00:17)
3. 1 INTRODUCTION (00:01:37)
4. 2 LEAST-TO-MOST PROMPTING (00:05:38)
5. 3 RESULTS (00:07:41)
85 episodes
All episodes
Introduction to Mechanistic Interpretability 11:45
Illustrating Reinforcement Learning from Human Feedback (RLHF) 22:32
Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback 32:19
Constitutional AI: Harmlessness from AI Feedback 1:01:49
Intro to Brain-Like-AGI Safety 1:02:10
Chinchilla's Wild Implications 24:57
Eliciting Latent Knowledge 1:00:27
Empirical Findings Generalize Surprisingly Far 11:32
Two-Turn Debate Doesn't Help Humans Answer Hard Reading Comprehension Questions 16:39
Least-To-Most Prompting Enables Complex Reasoning in Large Language Models 16:08
ABS: Scanning Neural Networks for Back-Doors by Artificial Brain Stimulation 16:08
Imitative Generalisation (AKA 'Learning the Prior') 18:14