Listen to resources from the AI Safety Fundamentals: Alignment course! https://aisafetyfundamentals.com/alignment
The Into AI Safety podcast aims to make it easier for everyone, regardless of background, to get meaningfully involved with the conversations surrounding the rules and regulations which should govern the research, development, deployment, and use of the technologies encompassed by the term "artificial intelligence" or "AI". For better-formatted show notes, additional resources, and more, go to https://into-ai-safety.github.io. For even more content and community engagement, head over to my Pat ...
Constitutional AI Harmlessness from AI Feedback
1:01:49
This paper explains Anthropic’s constitutional AI approach, which is largely an extension of RLHF but with AIs replacing human demonstrators and human evaluators. Everything in this paper is relevant to this week's learning objectives, and we recommend you read it in its entirety. It summarises limitations with conventional RLHF, explains the const…
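To make the critique-and-revision stage concrete, here is a minimal sketch in Python. The `model` callable, the prompt wording, and the loop over principles are illustrative assumptions, not Anthropic's actual implementation; the later RLAIF stage (where an AI preference model replaces human raters) is only noted in a comment.

```python
# Hedged sketch of the supervised critique-and-revision loop described in the paper.
# `model` is a hypothetical text-completion callable, not Anthropic's API.
def constitutional_revision(model, user_prompt: str, principles: list[str]) -> str:
    response = model(user_prompt)
    for principle in principles:
        critique = model(
            f"Critique the response below using this principle: {principle}\n"
            f"Prompt: {user_prompt}\nResponse: {response}\nCritique:"
        )
        response = model(
            f"Rewrite the response so it addresses the critique.\n"
            f"Critique: {critique}\nOriginal response: {response}\nRevision:"
        )
    # The revised responses become finetuning data; in the second stage an AI
    # preference model (rather than human raters) supplies the RL reward signal.
    return response
```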
Illustrating Reinforcement Learning from Human Feedback (RLHF)
22:32
This more technical article explains the motivations for a system like RLHF, and adds additional concrete details as to how the RLHF approach is applied to neural networks. While reading, consider which parts of the technical implementation correspond to the 'values coach' and 'coherence coach' from the previous video. A podcast by BlueDot Impact. …
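As a rough companion to the article, the sketch below shows the two losses that usually sit at the core of an RLHF pipeline: a pairwise reward-model loss (roughly the "values coach") and a clipped policy-gradient objective (the KL penalty that keeps the "coherence coach" happy is omitted for brevity). Function names and tensor shapes are illustrative assumptions, not the article's code.

```python
# Minimal sketch of two RLHF building blocks; names and shapes are assumptions.
import torch
import torch.nn.functional as F

def reward_model_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry pairwise loss: score the preferred completion above the rejected one."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

def ppo_clip_objective(logp_new: torch.Tensor, logp_old: torch.Tensor,
                       advantages: torch.Tensor, clip_eps: float = 0.2) -> torch.Tensor:
    """Clipped PPO surrogate used to update the policy against the learned reward."""
    ratio = torch.exp(logp_new - logp_old)
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    return -torch.min(ratio * advantages, clipped * advantages).mean()
```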
Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
32:19
Intro to Brain-Like-AGI Safety
1:02:10
(Sections 3.1-3.4, 6.1-6.2, and 7.1-7.5) Suppose we someday build an Artificial General Intelligence algorithm using similar principles of learning and cognition as the human brain. How would we use such an algorithm safely? I will argue that this is an open technical problem, and my goal in this post series is to bring readers with no prior knowle…
Chinchilla’s Wild Implications
24:57
This post is about language model scaling laws, specifically the laws derived in the DeepMind paper that introduced Chinchilla. The paper came out a few months ago, and has been discussed a lot, but some of its implications deserve more explicit notice in my opinion. In particular: Data, not size, is the currently active constraint on language mode…
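For readers who want the arithmetic behind the post, a quick back-of-the-envelope helper is sketched below. The roughly 20-tokens-per-parameter rule and the C ≈ 6ND compute estimate are common approximations of the Chinchilla result, not exact laws.

```python
# Back-of-the-envelope Chinchilla arithmetic; both constants are rough approximations.
def compute_optimal_tokens(n_params: float) -> float:
    """Approximate compute-optimal number of training tokens (~20 per parameter)."""
    return 20.0 * n_params

def training_flops(n_params: float, n_tokens: float) -> float:
    """Standard C ~= 6 * N * D estimate of training compute in FLOPs."""
    return 6.0 * n_params * n_tokens

n = 70e9                             # a 70B-parameter model, Chinchilla-sized
d = compute_optimal_tokens(n)        # ~1.4 trillion tokens
print(f"tokens: {d:.2e}  FLOPs: {training_flops(n, d):.2e}")
```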
We show that the double descent phenomenon occurs in CNNs, ResNets, and transformers: performance first improves, then gets worse, and then improves again with increasing model size, data size, or training time. This effect is often avoided through careful regularization. While this behavior appears to be fairly universal, we don’t yet fully unders…
Eliciting Latent Knowledge
1:00:27
In this post, we’ll present ARC’s approach to an open problem we think is central to aligning powerful machine learning (ML) systems: Suppose we train a model to predict what the future will look like according to cameras and other sensors. We then use planning algorithms to find a sequence of actions that lead to predicted futures that look good t…
It would be very convenient if the individual neurons of artificial neural networks corresponded to cleanly interpretable features of the input. For example, in an “ideal” ImageNet classifier, each neuron would fire only in the presence of a specific visual feature, such as the color red, a left-facing curve, or a dog snout. Empirically, in models …
Discovering Latent Knowledge in Language Models Without Supervision
37:09
Abstract: Existing techniques for training language models can be misaligned with the truth: if we train models with imitation learning, they may reproduce errors that humans make; if we train them to generate text that humans rate highly, they may output errors that human evaluators can't detect. We propose circumventing this issue by directly fin…
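The paper's method, Contrast-Consistent Search (CCS), trains an unsupervised probe whose outputs on a statement and its negation are both consistent and confident. A hedged sketch of that objective is below; variable names are illustrative, and the probe itself (a small map from hidden states to probabilities) is omitted.

```python
# Sketch of the CCS objective: probabilities for "x is true" and "x is false"
# should sum to ~1 (consistency) and not both sit at 0.5 (confidence).
import torch

def ccs_loss(p_pos: torch.Tensor, p_neg: torch.Tensor) -> torch.Tensor:
    consistency = (p_pos - (1.0 - p_neg)) ** 2
    confidence = torch.minimum(p_pos, p_neg) ** 2
    return (consistency + confidence).mean()
```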
Imitative Generalisation (AKA ‘Learning the Prior’)
18:14
This post tries to explain a simplified version of Paul Christiano’s mechanism introduced here, (referred to there as ‘Learning the Prior’) and explain why a mechanism like this potentially addresses some of the safety problems with naïve approaches. First we’ll go through a simple example in a familiar domain, then explain the problems with the ex…
ABS: Scanning Neural Networks for Back-Doors by Artificial Brain Stimulation
16:08
This paper presents a technique to scan neural network based AI models to determine if they are trojaned. Pre-trained AI models may contain back-doors that are injected through training or by transforming inner neuron weights. These trojaned models operate normally when regular inputs are provided, and mis-classify to a specific output label when t…
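The core scanning idea can be caricatured in a few lines: artificially boost one inner neuron at a time across a handful of clean inputs and flag neurons that force every prediction onto the same label. The `run_with_override` callable is a hypothetical hook; the real technique involves considerably more machinery (for example, trigger reverse-engineering).

```python
# Toy sketch of artificial brain stimulation; `run_with_override` is a hypothetical
# hook that runs the model with one neuron's activation replaced by `value`.
from collections import Counter

def scan_neuron(run_with_override, inputs, layer, neuron, boosts=(2.0, 5.0, 10.0)):
    for value in boosts:
        preds = [run_with_override(x, layer, neuron, value) for x in inputs]
        label, hits = Counter(preds).most_common(1)[0]
        if hits == len(inputs):      # every clean input now maps to one label:
            return label, value      # a candidate trojan target label
    return None, None
```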
An Investigation of Model-Free Planning
8:11
The field of reinforcement learning (RL) is facing increasingly challenging domains with combinatorial complexity. For an RL agent to address these challenges, it is essential that it can plan effectively. Prior work has typically utilized an explicit model of the environment, combined with a specific planning algorithm (such as tree search). More …
Gradient Hacking: Definitions and Examples
9:15
Gradient hacking is a hypothesized phenomenon where: A model has knowledge about possible training trajectories which isn’t being used by its training algorithms when choosing updates (such as knowledge about non-local features of its loss landscape which aren’t taken into account by local optimization algorithms). The model uses that knowledge to …
Least-To-Most Prompting Enables Complex Reasoning in Large Language Models
16:08
Chain-of-thought prompting has demonstrated remarkable performance on various natural language reasoning tasks. However, it tends to perform poorly on tasks which require solving problems harder than the exemplars shown in the prompts. To overcome this challenge of easy-to-hard generalization, we propose a novel prompting strategy, least-to-most p…
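A minimal sketch of the two-stage pattern is below. The `llm` callable and the prompt wording are placeholders; the paper itself uses carefully chosen few-shot exemplars rather than bare instructions.

```python
# Sketch of least-to-most prompting: decompose, then solve subproblems in order,
# feeding earlier answers back into the context. `llm(prompt) -> str` is hypothetical.
def least_to_most(llm, question: str) -> str:
    decomposition = llm(
        "Break the following problem into a numbered list of simpler subproblems:\n"
        + question
    )
    subquestions = [line.strip() for line in decomposition.splitlines() if line.strip()]

    context = ""
    for sub in subquestions:
        answer = llm(f"{context}\nQ: {sub}\nA:")
        context += f"\nQ: {sub}\nA: {answer}"
    return llm(f"{context}\nQ: {question}\nA:")
```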
Two-Turn Debate Doesn’t Help Humans Answer Hard Reading Comprehension Questions
16:39
Using hard multiple-choice reading comprehension questions as a testbed, we assess whether presenting humans with arguments for two competing answer options, where one is correct and the other is incorrect, allows human judges to perform more accurately, even when one of the arguments is unreliable and deceptive. If this is helpful, we may be able …
Right now I’m working on finding a good objective to optimize with ML, rather than trying to make sure our models are robustly optimizing that objective. (This is roughly “outer alignment.”) That’s pretty vague, and it’s not obvious whether “find a good objective” is a meaningful goal rather than being inherently confused or sweeping key distinctio…
Empirical Findings Generalize Surprisingly Far
11:32
Previously, I argued that emergent phenomena in machine learning mean that we can’t rely on current trends to predict what the future of ML will be like. In this post, I will argue that despite this, empirical findings often do generalize very far, including across “phase transitions” caused by emergent behavior. This might seem like a contradictio…
Compute Trends Across Three Eras of Machine Learning
13:50
This article explains the key drivers of AI progress and how compute is calculated, and looks at how the amount of compute used to train AI models has increased significantly in recent years. Original text: https://epochai.org/blog/compute-trends Author(s): Jaime Sevilla, Lennart Heim, Anson Ho, Tamay Besiroglu, Marius Hobbhahn, Pablo Vill…
INTERVIEW: Scaling Democracy w/ (Dr.) Igor Krawczuk
2:58:46
The almost Dr. Igor Krawczuk joins me for what is the equivalent of 4 of my previous episodes. We get into all the classics: eugenics, capitalism, philosophical toads... Need I say more? If you're interested in connecting with Igor, head on over to his website, or check out placeholder for thesis (it isn't published yet). Because the full show note…
Worst-Case Thinking in AI Alignment
11:35
Alternative title: “When should you assume that what could go wrong, will go wrong?” Thanks to Mary Phuong and Ryan Greenblatt for helpful suggestions and discussion, and Akash Wasil for some edits. In discussions of AI safety, people often propose the assumption that something goes as badly as possible. Eliezer Yudkowsky in particular has argued f…
Feedback is essential for learning. Whether you’re studying for a test, trying to improve in your work or want to master a difficult skill, you need feedback. The challenge is that feedback can often be hard to get. Worse, if you get bad feedback, you may end up worse than before. Original text: https://www.scotthyoung.com/blog/2019/01/24/how-to-ge…
Public by Default: How We Manage Information Visibility at Get on Board
9:50
I’ve been obsessed with managing information and communications in a remote team since Get on Board started growing. Reducing the bus factor is a primary motivation, but another just as important is diminishing reliance on synchronicity. When what I know is documented and accessible to others, I’m less likely to be a bottleneck for anyone else in…
(In the process of answering an email, I accidentally wrote a tiny essay about writing. I usually spend weeks on an essay. This one took 67 minutes—23 of writing, and 44 of rewriting.) Original text: https://paulgraham.com/writing44.html Author: Paul Graham A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.…
Being the (Pareto) Best in the World
6:46
This introduces the concept of Pareto frontiers. The top comment by Rob Miles also ties it to comparative advantage. While reading, consider what Pareto frontiers your project could place you on. Original text: https://www.lesswrong.com/posts/XvN2QQpKTuEzgkZHY/being-the-pareto-best-in-the-world Author: John Wentworth A podcast by BlueDot Impact. Le…
How to Succeed as an Early-Stage Researcher: The “Lean Startup” Approach
15:16
I am approaching the end of my AI governance PhD, and I’ve spent about 2.5 years as a researcher at FHI. During that time, I’ve learnt a lot about the formula for successful early-career research. This post summarises my advice for people in the first couple of years. Research is really hard, and I want people to avoid the mistakes I’ve made. Origi…
Become a Person who Actually Does Things
5:14
The next four weeks of the course are an opportunity for you to actually build a thing that moves you closer to contributing to AI Alignment, and we're really excited to see what you do! A common failure mode is to think "Oh, I can't actually do X" or to say "Someone else is probably doing Y." You probably can do X, and it's unlikely anyone is doin…
Planning a High-Impact Career: A Summary of Everything You Need to Know in 7 Points
11:02
We took 10 years of research and what we’ve learned from advising 1,000+ people on how to build high-impact careers, compressed that into an eight-week course to create your career plan, and then compressed that into this three-page summary of the main points. (It’s especially aimed at people who want a career that’s both satisfying and has a signi…
This guide is written for people who are considering direct work on technical AI alignment. I expect it to be most useful for people who are not yet working on alignment, and for people who are already familiar with the arguments for working on AI alignment. If you aren’t familiar with the arguments for the importance of AI alignment, you can get a…
Computing Power and the Governance of AI
26:49
This post summarises a new report, “Computing Power and the Governance of Artificial Intelligence.” The full report is a collaboration between nineteen researchers from academia, civil society, and industry. It can be read here. GovAI research blog posts represent the views of their authors, rather than the views of the organisation. Source: https:…
AI Control: Improving Safety Despite Intentional Subversion
20:51
We’ve released a paper, AI Control: Improving Safety Despite Intentional Subversion. This paper explores techniques that prevent AI catastrophes even if AI instances are colluding to subvert the safety techniques. In this post: We summarize the paper; We compare our methodology to the methodology of other safety papers. Source: https://www.alignmen…
Challenges in Evaluating AI Systems
22:33
Most conversations around the societal impacts of artificial intelligence (AI) come down to discussing some quality of an AI system, such as its truthfulness, fairness, potential for misuse, and so on. We are able to talk about these characteristics because we can technically evaluate models for their performance in these areas. But what many peopl…
AI Watermarking Won’t Curb Disinformation
8:05
Generative AI allows people to produce piles upon piles of images and words very quickly. It would be nice if there were some way to reliably distinguish AI-generated content from human-generated content. It would help people avoid endlessly arguing with bots online, or believing what a fake image purports to show. One common proposal is that big c…
Emerging Processes for Frontier AI Safety
18:20
The UK recognises the enormous opportunities that AI can unlock across our economy and our society. However, without appropriate guardrails, such technologies can pose significant risks. The AI Safety Summit will focus on how best to manage the risks from frontier AI such as misuse, loss of control and societal harms. Frontier AI organisations play…
Interpretability in the Wild: A Circuit for Indirect Object Identification in GPT-2 Small
24:48
Research in mechanistic interpretability seeks to explain behaviors of machine learning (ML) models in terms of their internal components. However, most previous work either focuses on simple behaviors in small models or describes complicated behaviors in larger models with broad strokes. In this work, we bridge this gap by presenting an explanatio…
Zoom In: An Introduction to Circuits
44:03
By studying the connections between neurons, we can find meaningful algorithms in the weights of neural networks. Many important transition points in the history of science have been moments when science “zoomed in.” At these points, we develop a visualization or tool that allows us to see the world in a new level of detail, and a new field of scie…
Towards Monosemanticity: Decomposing Language Models With Dictionary Learning
8:53
Using a sparse autoencoder, we extract a large number of interpretable features from a one-layer transformer. Mechanistic interpretability seeks to understand neural networks by breaking them into components that are more easily understood than the whole. By understanding the function of each component, and how they interact, we hope to be able to …
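For orientation, a bare-bones sparse autoencoder of the kind used for dictionary learning is sketched below; the layer sizes, L1 coefficient, and other details are illustrative assumptions rather than the paper's exact architecture.

```python
# Minimal sparse-autoencoder sketch for dictionary learning over model activations.
# Hyperparameters here are placeholders, not the values used in the paper.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 512, d_dict: int = 4096, l1_coeff: float = 1e-3):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model)
        self.l1_coeff = l1_coeff

    def forward(self, acts: torch.Tensor):
        feats = torch.relu(self.encoder(acts))        # overcomplete, sparse features
        recon = self.decoder(feats)
        loss = ((recon - acts) ** 2).mean() + self.l1_coeff * feats.abs().sum(-1).mean()
        return recon, feats, loss
```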
Weak-To-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision
35:05
Widely used alignment techniques, such as reinforcement learning from human feedback (RLHF), rely on the ability of humans to supervise model behavior—for example, to evaluate whether a model faithfully followed instructions or generated safe outputs. However, future superhuman models will behave in complex ways too difficult for humans to reliably…
Can We Scale Human Feedback for Complex AI Tasks?
20:06
Reinforcement learning from human feedback (RLHF) has emerged as a powerful technique for steering large language models (LLMs) toward desired behaviours. However, relying on simple human feedback doesn’t work for tasks that are too complex for humans to accurately judge at the scale needed to train AI models. Scalable oversight techniques attempt …
INTERVIEW: StakeOut.AI w/ Dr. Peter Park (3)
1:42:00
As always, the best things come in 3s: dimensions, musketeers, pyramids, and... 3 installments of my interview with Dr. Peter Park, an AI Existential Safety Post-doctoral Fellow working with Dr. Max Tegmark at MIT. As you may have ascertained from the previous two segments of the interview, Dr. Park cofounded StakeOut.AI along with Harry Luk and on…
INTERVIEW: StakeOut.AI w/ Dr. Peter Park (2)
1:06:23
Join me for round 2 with Dr. Peter Park, an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. Dr. Park was a cofounder of StakeOut.AI, a non-profit focused on making AI go well for humans, along with Harry Luk and one other individual, whose name has been removed due to requirements of her current position. In addition …
UPDATE: Contrary to what I say in this episode, I won't be removing any episodes that are already published from the podcast RSS feed. After getting some advice and reflecting more on my own personal goals, I have decided to shift the direction of the podcast towards accessible content regarding "AI" instead of the show's original focus. I will sti…
INTERVIEW: StakeOut.AI w/ Dr. Peter Park (1)
54:11
Dr. Peter Park is an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. In conjunction with Harry Luk and one other cofounder, he founded StakeOut.AI, a non-profit focused on making AI go well for humans.
00:54 - Intro
03:15 - Dr. Park, x-risk, and AGI
08:55 - StakeOut.AI
12:05 - Governance scorecard
19:34 - Hollywood w…
Take a trip with me through the paper Large Language Models, A Survey, published on February 9th of 2024. All figures and tables mentioned throughout the episode can be found on the Into AI Safety podcast website.
00:36 - Intro and authors
01:50 - My takes and paper structure
04:40 - Getting to LLMs
07:27 - Defining LLMs & emergence
12:12 - Overvie…
FEEDBACK: Applying for Funding w/ Esben Kran
45:13
Esben reviews an application that I would soon submit for Open Philanthropy's Career Transition Funding opportunity. Although I didn't end up receiving the funding, I do think that this episode can be a valuable resource for both others and myself when applying for funding in the future. Head over to Apart Research's website to check out their wo…