Conjecture

Full-Time On-Site

Product Engineer

As a product engineer at Conjecture, you’ll build the frontend and backend of the tools that accelerate the next era of cognition. Practically, this means building ML APIs alongside our ML engineers, creating easy-to-use front-end interfaces, and talking to customers to understand their pain points and build solutions.

Full-Time On-Site

Unusual Talent

Conjecture aims to capitalize on pooling skills from various specializations. Even if you are not specialized in one of the fields we draw on most (e.g., AI Safety, ML), you can still bring your talent to the team. If your mind is brilliant enough, we assume you can contribute to

Mosaic and Palimpsests: Two Shapes of Research

By Adam Shimi. Introduction. In Old Masters and Young Geniuses, economist-turned-art-data-analyst David Galenson investigates a striking regularity in the careers of painters: art history and markets favor either their early pieces or the complete opposite, their last ones. From this pattern and additional data, Galenson extracts and defends a separation

Epistemological Vigilance for Alignment

By Adam Shimi. Nothing hampers Science and Engineering like unchecked assumptions. As a concrete example of a field ridden with hidden premises, let's look at sociology. Sociologists must deal with the feedback of their object of study (people in social situations), their own social background, as well as the myriad

Productive Mistakes, Not Perfect Answers

By Adam Shimi. I wouldn’t bet on any current alignment proposal. Yet I think that the field is making progress and abounds with interesting opportunities to do even more, giving us a shot. Isn’t there a contradiction? No, because research progress so rarely looks like having a clearly

Cognitive Emulation: A Naive AI Safety Proposal

This is part of the work done at Conjecture. This post has been reviewed before publication as per our infohazard policy. We thank our external reviewers for their comments and feedback. This post serves as a signpost for Conjecture’s new primary safety proposal and research direction, which we call

Christiano (ARC) and GA (Conjecture) Discuss Alignment Cruxes

The following are the summary and transcript of a discussion between Paul Christiano (ARC) and Gabriel Alfour, hereafter GA (Conjecture), which took place on December 11, 2022 on Slack. It was held as part of a series of discussions between Conjecture and people from other organizations in the AGI and

AGI in sight: our look at the game board

From our point of view, we are now in the end-game for AGI, and we (humans) are losing. When we share this with other people, they are reliably surprised. That’s why we believe it is worth writing down our beliefs on this. 1. AGI is happening soon. Significant probability

Come work with us!

Check out our current open positions!