Mark Zuckerberg texts Elon Musk
Welcome to Internal Tech Emails: internal tech industry emails that surface in public records. 🔍
December 13, 2024
Mark Zuckerberg
Quick heads up that Meta sent a letter to the California AG supporting your lawsuit against OpenAI. Someone (not us) leaked the letter and it will be public in the next hour. Wanted to make sure you heard this from me.
Elon Musk
Ok
Mark Zuckerberg
I have an idea to run by you. Not urgent, but let me know if there’s a good time to call in the next few days.
[This document is from Musk v. Altman (2026).]
February 3, 2025
Mark Zuckerberg
Looks like DOGE is making progress. I’ve got our teams on alert to take down content doxxing or threatening the people on your team. Let me know if there’s anything else I can do to help.
Elon Musk
[Reacted ❤️ to “Looks like DOGE is making progress. I’ve got our t…”]
Elon Musk
Are you open to the idea of bidding on the OpenAI IP with me and some others?
Mark Zuckerberg
Want to discuss live?
Elon Musk
[Liked “Want to discuss live?”]
Elon Musk
Will call in the morning
[This document is from Musk v. Altman (2026).]
Previously: 25+ documents from Mark Zuckerberg in the full archive
Previously: 25+ documents from Elon Musk in the full archive
Inside the Tech Emails Library
See how the biggest companies in tech actually operate — in their own words.
The Tech Emails Library is 250+ internal documents pulled from court filings we track year-round. Strategy memos. Board emails. Messages between CEOs and execs at Apple, Google, Meta, Microsoft, OpenAI, Tesla, and more.
Investors use it to understand how leadership actually thinks. Journalists use it for primary sources. Founders and operators use it to study how the biggest companies make decisions.
Upgrade to a paid subscription to unlock access to the full archive.
P.S. Every year, we track hundreds of court cases and review more than 10,000 filings to bring you Internal Tech Emails. New documents are added to the archive as cases unfold.
Anthropic draft memo: "Product visions for early 2025"
Jun 15, 2023
We want to have a concrete and inspiring vision of the transformative role AI can play in society in 2025. This will serve several purposes.
First, it gives us a way of sharing with the broader world the positive transformations we believe could happen with TAI. Architecture decisions we make over the next year or two could shape the longer term relationship that human and machine intelligence have going forward. This could be impactful in the same way that, say, the Von Neumann architecture was for computers, or that Tim Berners-Lee’s proposal for the World Wide Web shaped consumer adoption of the internet.
Second, for the product team, it’s also an exciting point on the horizon to work towards; it will inform medium term decisions about what initiatives to pursue and what technologies to develop to get us there. We’ll want to find clear stepping stones between where we are now and the long term vision, where we get technological proof points and market feedback along the way. Companies like Magic Leap that aim straight for their super-ambitious final destination end up disconnected from reality and burn an obscene amount of money, ultimately falling behind competitors who were able to absorb market feedback and build organizational muscle. SpaceX is probably a great example of doing this right. The founders (yes, there’s more than one!) had the borderline ridiculous goal of achieving human settlement on Mars, but they broke it down into a series of steps that achieved proof points and profit along the way. First they got good at building small, profitable, low cost rockets, then they got good at building medium sized, profitable, low cost rockets, then they figured out how to make rockets reusable to further lower the cost, then they became the world’s largest provider of orbital launch capability with their superior product, then they built the world’s largest rocket and almost got it into space on the first try… and if they can generate enough demand for heavy lift launch capability, then they can mass produce these giant rockets and start sending them to Mars. The long term goal kept the team inspired and focused, and the steps along the way gave the team the skills and the revenue to succeed.
Dario’s vision doc has highlighted (or implied) some key characteristics that a long term product vision should possess:
We need to solve difficult alignment issues, particularly around stability over long term planning horizons as well as minimal supervision (i.e., does the agent start going off course on increasingly long horizon tasks?). Focusing work on products that require breakthroughs in these areas will help ensure positive TAI.
We intend to hit $100M ARR this year and achieve enough market share (30%) to have a significant impact on TAI outcomes. This means we have to be responsive to market needs and pursue the areas of greatest demand.
The Teal model will be significantly more powerful than Lark, but also slower and more expensive, and its market niche will involve taking advantage of its vast raw reasoning power and information processing ability to automate intellectually demanding tasks that Claude and GPT4 cannot accomplish.
There have been three major visions for TAI articulated. Better descriptions of these visions exist elsewhere, but I’ve tried to capture the key points of each here.
Claude the Virtual Employee (AKA Claude the Robot)
Claude is able to take a well-defined role at a company, whether it’s in software development, marketing, operations, or any other part of the team, and just do the job. It would use the same interfaces humans do if need be, or it could directly access APIs to get the work done faster. It would scrupulously document its work, be audited by a mixture of automated and human systems, and check in with its manager on any decisions above a certain level of importance or irreversibility so that the humans in the org are confident it’s doing what they want. The first versions of this virtual employee would probably spike in different areas than its human counterparts – eg Claude the marketer might be able to read, respond to, and summarize thousands of tweets and blog posts in a day, but a human marketing employee is more likely to know whether now is the right time to do a top-to-bottom rebranding of the company, or whether adopting an animal mascot in the logo is the right idea.
Claude the personal assistant (Claude the Cyborg)
Claude is an extension of your will, an AI chief of staff that dramatically amplifies your ability to get things done. If you need to document your code, write a difficult email, research a topic of relevance to a project you’re doing, do your taxes, or confer with your advisers on an issue, Claude makes it happen. You’re free to focus on high level intellectual tasks. There is some overlap with the Virtual Employee here, but the main difference is that this Claude would have far less agency or autonomy to develop its own goal hierarchy over long time horizons. Its goals are your subgoals, and while it might be proactive at your request, by default it’s there as an intelligent tool that you can pick up when you want. This Claude would need to have access to most of your accounts and maintain a deep mental model of everything you know, do, and care about so that it can best model your needs and desires. There are issues of impersonation and trust that are very important to solve. Claude will need to know how to earn your trust by presenting the right information at the right time and checking in before taking actions it needs your input on.
Claude as intelligent org infrastructure
Claude is proactive, living collaboration infrastructure that makes everything in an organization better. Existing collaboration tools like Slack and Asana are passive and don’t take anything more than simple actions on our behalf. AI-first versions of these tools would proactively take steps to make organizations and their employees as effective as possible while still respecting everyone’s individual and collective goals. Humans’ ability to understand what’s happening in large (or even medium-sized, 200-person) organizations is limited. This leads to all the problems we see in larger orgs – siloing, miscommunication, duplication of effort, goal misalignment between teams, factions, execs unaware of what’s actually happening in the org, employees not knowing who to go to in order to get approval for something, etc. Because Claude can consume, understand, and produce information at a much faster rate than humans, Claude is uniquely situated to deal with the information overload that orgs experience. For example, a Claude that could read and understand everything that happens in Slack, Google Docs, and Google Meet could do a wide range of things to help employees be more productive and happier. Since this is a relatively new concept, there are many concrete use cases described at the bottom of the linked doc. Claude the Cyborg and Claude the Robot are both systems that work 1 on 1 with you, whereas this is a system that works 1 on 1 with an entire organization. Claude the Cyborg and Robot differ on the axis of “do you treat it as an intelligent tool, or do you treat it as a standalone entity with its own goals?” as well as “where does Claude operate: your computer or somewhere else?” By comparison, this conception is agnostic to where Claude operates, and instead is opinionated about what Claude focuses on and why: coordination between humans as well as organization and routing of knowledge and insights.
The following table contains some comparisons between the three visions across key points of consideration.
Notes:
Either Claude as Cyborg or Claude as Org Infrastructure can evolve into Claude the Robot with time, with Claude taking on more aggressive levels of autonomy as we start to feel we have the AI alignment capabilities to support it.
Some open questions
Do we go straight for Claude the Robot, forcing us to confront long horizon alignment and alignment with minimal human supervision as early as possible, with the theory that this is the way the world will eventually go, and it’s important that we solve it before the rest of the world does it poorly?
Is Claude the Robot too scary for the general public? It’s harder to tell a compelling vision for it, as it literally replaces jobs. It only seems good if it’s accompanied with a rosy UBI vision. By comparison, both Claude the Cyborg (everyone gets a chief of staff!) and Claude as Org Infrastructure (every org is just better!) have clear compelling stories associated with them.
All PR aside, which of these three visions actually leads to positive TAI? What happens if we pursue one and our competitors pursue a different one?
TAI will probably first emerge inside a large org because it has tremendous financial resources, a huge pile of GPUs, and vast access to information. This might happen at a point in time where Claude the Robot still isn’t feasible yet. Basically, Claude as Org Infrastructure could be an alien intelligence that is incredibly good at figuring out how to get 10,000 people to work together productively and happily given a set of humans providing the vision and longer term insights, but still couldn’t do all the tasks required to be a startup’s first marketing hire. (One comparison would be to the Facebook News Feed algorithm on a good day, which seems to find me consistently good content from 2000 friends if I’ve been away for a couple of days.) If this is true, maybe we should aim at Claude as Org Infrastructure so we’re in a position to influence TAI.
[This document is from Bartz v. Anthropic (2026).]
[Full original document:]