
Make AGI Safe

We are a team of researchers dedicated to applied, scalable AI alignment research.

We believe we will see transformative artificial intelligence within our lifetime. In light of AI’s recent progress, we also believe that this AI is likely to derive from modern machine learning architectures and techniques like gradient descent.

But today’s AI models are black boxes: optimized for mathematical objectives only tenuously related to what we actually care about as humans. Powerful language models such as GPT-3 cannot currently be prevented from producing undesired outputs or outright fabrications in response to factual questions. Because we lack a fundamental understanding of the internal mechanisms of current models, we have few guarantees about what they might do when they encounter situations outside their training data, with potentially catastrophic consequences on a global scale.

Learn more about us here.

Our Research Agenda

Making sure future AI systems are interpretable and controllable, and that they produce good outcomes in the real world, is a fundamental part of the alignment problem. Our R&D aims directly at gaining a better understanding of, and ability to control, current AI models.

We aim to conduct both conceptual and applied research that addresses the prosaic alignment problem. This will involve training state-of-the-art models that we will use to study applied research problems such as model interpretability, value alignment, and steerability. On the conceptual side, we aim to build new frames for reasoning about large language models and to investigate meta-level strategies for making good research bets. While we aim to match the state of the art in NLP and surrounding areas, we are committed to avoiding dangerous AI race dynamics.

TEAM

Connor Leahy

CEO

Sid Black

CTO

Gabriel Alfour

COO

Rachel Stockton

Chief of Staff to the CEO

Chris Scammell

Head of Operations

Andrea Miotti

Head of Policy and Governance

Katrina Joslin

Executive Assistant to the CEO and Office Manager

Kyle McDonell

Research Scientist

Laria Reynolds

Research Scientist

Kip Parker

ML Research Engineer

Jacob Merizian

ML Research Engineer

Adam Shimi

Research Scientist

Lee Sharkey

ML Research Engineer

Caelum Forder

ML Ops Engineer

Carlos Guevara

ML Research Engineer

Janko Prester

Web Developer

Myriame Honnay

Executive Advisor

SPECIAL PROGRAMS

Refine is a three-month incubator for conceptual AI alignment research in London, hosted by Conjecture. Alignment research aims to ensure that AI systems are aligned with human interests and values, a difficult problem for which no one currently has a general solution. We expect that AI systems are misaligned with human values by default, and that deploying sufficiently powerful systems without a solution to this problem would be catastrophic. Refine is a fully paid program that helps aspiring independent researchers find, formulate, and get funding for new research bets: ideas promising enough to try out for a few months to see whether they have further potential. The program was developed to assist relentlessly resourceful individuals with diverse research backgrounds who want the support and resources to drive their own ideas forward.

You can learn more about the program here. Applications for the current cohort are closed. You can still submit an application to the incubator (and we encourage you to do so to leave a record of your interest), but we are not accepting any new applications for the first cohort and will not give feedback on your application.