Mark Zuckerberg - Llama 3, $10B Models, Caesar Augustus, & 1 GW Datacenters

Dwarkesh Podcast
18 Apr 2024 · 78:38

Summary

TL;DR: In a thought-provoking interview, Mark Zuckerberg discusses the future of AI with a focus on Meta AI's advancements. He highlights the release of Llama-3, an open-source AI model integrated with Google and Bing for real-time knowledge, emphasizing its capabilities in image generation and natural language processing. Zuckerberg also addresses the challenges of building large-scale data centers, the risks of centralized AI control, and the importance of open-source contributions. He stresses the potential of AI to revolutionize various sectors, including science and healthcare, and shares his vision of AI as a tool that enhances human productivity rather than replacing it. The conversation delves into the implications of AI development, the balance between innovation and safety, and the significance of open-source software in democratizing AI technology.

Takeaways

  • 🤖 The new version of Meta AI, Llama-3, is set to be the most intelligent, freely-available AI assistant, integrating with Google and Bing for real-time knowledge and featuring enhanced creation capabilities like animations and real-time image generation.
  • 🚀 Meta is training multiple versions of the Llama model, including 8 billion and 70 billion parameter models released for the developer community and a 405 billion parameter model still in training, aiming to push the boundaries of AI capabilities.
  • 🌐 The release of Llama-3 is not global but will start in a few countries, with plans for a wider rollout in the coming months, reflecting a strategic approach to introducing advanced AI technologies.
  • 📈 Mark Zuckerberg emphasizes the importance of open-source AI, believing it to be beneficial for the community and for Meta, allowing for broader innovation and a more level playing field in the AI industry.
  • 🛡️ There is a commitment to responsible AI development, with considerations for not releasing certain models if they present irresolvable negative behaviors or risks, highlighting a cautious approach to AI's potential downsides.
  • ⚙️ Meta is investing in custom silicon to improve the efficiency of AI model training and inference, which could significantly reduce costs and improve performance for their AI-driven services.
  • 🌟 Zuckerberg shares his passion for building new things and his belief in the potential of AI to enable creativity and productivity, reflecting his personal drive and the company's mission.
  • 🔮 The potential of AI is compared to the creation of computing itself, suggesting a fundamental shift in how people work and live, with AI becoming an integral part of various industries and aspects of life.
  • 💡 Open-source contributions such as PyTorch and React are considered powerful drivers of innovation whose impact on the world may rival the reach of Meta's social media products.
  • ⚖️ There's a discussion on the balance of power in AI development, with concerns about the risks of having a single entity with disproportionately strong AI capabilities, advocating for a decentralized approach.
  • 🏛️ Zuckerberg draws an analogy between historical shifts in understanding, like the concept of peace under Augustus, and current paradigm shifts in technology and business models, emphasizing the importance of challenging conventional thinking.

Q & A

  • What is the main update to Meta AI that Mark Zuckerberg discusses in the interview?

    -The main update is the rollout of Llama-3, an AI model that is both open source and will power Meta AI. It is considered the most intelligent, freely-available AI assistant at the time of the interview.

  • How does Meta AI integrate with other search engines?

    -Meta AI integrates with Google and Bing for real-time knowledge, making it more prominent across apps like Facebook and Messenger.

  • What new creation features does Meta AI introduce?

    -Meta AI introduces features like animations, where any image can be animated, and real-time high-quality image generation as users type their queries.

  • What are the technical specifications of the Llama-3 model that Mark Zuckerberg finds exciting?

    -Mark Zuckerberg is excited about the Llama-3 model, which includes an 8 billion parameter model and a 70 billion parameter model. There's also a 405 billion parameter model in training.

  • What is the roadmap for future releases of Meta AI?

    -The roadmap includes new releases that will bring multimodality, more multi-linguality, and bigger context windows. There are plans to roll out the 405B model later in the year.

  • How does Mark Zuckerberg perceive the risk of having a few companies controlling closed AI models?

    -He sees it as a significant risk, as it could lead to these companies dictating what others can build, creating a situation similar to the control exerted by Apple over app features.

  • What is the strategy behind Meta's acquisition of GPUs like the H100?

    -The strategy was to ensure they had enough capacity to build something they couldn't foresee on the horizon yet, doubling the order to be prepared for future needs beyond the immediate requirements for Reels and content ranking.

  • Why did Mark Zuckerberg decide not to sell Facebook in 2006 for $1 billion?

    -Mark felt a deep conviction in what they were building and believed that if he sold the company, he would just build another similar one. He also lacked the financial sophistication to engage in the billion-dollar valuation debate.

  • What is the role of Facebook AI Research (FAIR) in the development of Meta's AI?

    -FAIR, established about 10 years prior, has been instrumental in creating innovations that improved Meta's products. It transitioned from a pure research group to a key player in integrating AI into Meta's products, with the creation of the gen AI group.

  • How does Meta plan to approach the development of more advanced AI models like Llama-4?

    -Meta plans to continue training larger models, incorporating more capabilities like reasoning and memory, and focusing on multimodality and emotional understanding. They aim to make AI more integrated into various aspects of their products and services.

  • What are the potential future challenges in scaling AI models?

    -Challenges include physical constraints like energy limitations for training large models, regulatory hurdles for building new power plants and transmission lines, and the balance between open sourcing models and potential risks associated with them.

  • How does Mark Zuckerberg view the future of AI and its impact on society?

    -He sees AI as a fundamental shift, similar to the creation of computing, that will enable new applications and experiences. However, he also acknowledges the need for careful consideration of risks and the importance of a balanced approach to AI development and deployment.

Outlines

00:00

🚀 AI Innovation and Meta AI's New Features

The speaker expresses an inherent drive to continually innovate and build new features, despite challenges from entities like Apple. The conversation introduces Meta AI's latest advancements, highlighting the release of Llama-3, an open-source AI model that integrates with Google and Bing for real-time knowledge. New features include image animation and real-time high-quality image generation based on user queries. The speaker emphasizes Meta AI's commitment to making AI more accessible and enhancing its capabilities across various applications.

05:00

🤖 The Future of AI and Meta's Strategic Investments

The discussion delves into the strategic foresight behind Meta's investment in GPUs for AI model training. The speaker reflects on the importance of capacity planning for unforeseen technological advancements, drawing parallels with past decisions that have shaped the company's direction. The conversation also touches on the speaker's personal philosophy on company valuation and the significance of Facebook AI Research (FAIR) in driving product innovation.

10:01

🧠 AGI and the Evolution of Meta's AI Strategy

The speaker outlines the evolution of Meta's approach to AI, from the inception of FAIR to the current focus on artificial general intelligence (AGI). The importance of coding and reasoning in training AI models is emphasized, highlighting how these capabilities enhance the AI's performance across various domains. The conversation explores the concept of AI as a progressive tool that augments human capabilities rather than replacing them.

15:01

🌐 Multimodal AI and the Future of Interaction

The speaker envisions a future where AI capabilities become more integrated and sophisticated, covering emotional understanding and multimodal interactions. The potential for personalized AI models and the impact of AI on industrial-scale operations are discussed. The conversation also addresses the idea of AI agents representing businesses and creators, and the importance of open-source AI in maintaining a balanced technological landscape.

20:05

📈 Scaling AI Models and Meta's Computational Challenges

The speaker discusses the challenges and strategies related to scaling AI models, including the physical and computational constraints of training large models like Llama-3. The conversation explores the concept of using inference to generate synthetic data for training and the potential for smaller, fine-tuned models to play a significant role in various applications. The speaker also addresses the importance of community contributions in advancing AI technology.

25:06

🌟 The Impact of Open Source on AI and Technology

The speaker reflects on the impact of open-source contributions from Meta, such as PyTorch and React, and their potential long-term significance. The conversation considers whether open-source efforts could have a more profound impact than Meta's social media products, given their widespread use across the internet. The speaker also discusses the future integration of Llama models with custom silicon for more efficient training.

30:07

🤔 Navigating Open Source Risks and Future AI Developments

The speaker addresses concerns about the potential risks of open sourcing powerful AI models, including the possibility of misuse. The conversation focuses on the importance of balancing theoretical risks with practical, everyday harms, and the responsibility to mitigate these risks. The speaker also shares thoughts on the future of AI, including the potential for AI to become a commodified training resource and the economic considerations of open sourcing high-value models.

35:17

🌟 The Value of Focus and Meta's Management Strategy

The speaker discusses the concept of focus as a scarce commodity, especially for large companies, and its importance in driving the company's success. The conversation touches on the challenges of managing multiple projects and the need to maintain a sharp focus on key priorities. The speaker also reflects on the unpredictability of success in technology and the importance of trying new things.

Mindmap

Keywords

💡 AI Assistant

An AI assistant is an artificial intelligence software that performs tasks or services for users, such as answering questions, setting reminders, or providing recommendations. In the script, the development of Meta AI's Llama-3 model is discussed, which is designed to be an intelligent, freely-available AI assistant that integrates with platforms like Facebook and Messenger, allowing users to interact with it through search boxes for real-time queries and responses.

💡 Open Source

Open source refers to a type of software where the source code is made available to the public, allowing anyone to view, use, modify, and distribute the software. The script discusses Meta's decision to release the Llama-3 model as open source, emphasizing the benefits of community contributions and the prevention of a single entity having control over advanced AI capabilities.

💡 Data Center

A data center is a facility that houses a large number of servers, storage systems, and other components connected through a network. The script mentions the construction of data centers with high energy consumption, such as 300 Megawatts or 1 Gigawatt, which are necessary for training large AI models like Llama-3.

💡 Parameter

In the context of AI, a parameter is a variable in a model that the machine learning algorithm can adjust to improve the model's performance. The script discusses different versions of the Llama model with varying numbers of parameters, such as an 8 billion parameter model and a 70 billion parameter model, highlighting the scale and complexity of these AI systems.
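To make the idea concrete, here is a toy illustration (not from the interview): a linear model y = w·x + b has exactly two parameters, and "training" means adjusting them to reduce error — the same principle that scales up to Llama-3's billions of parameters.

```python
import random

# Toy sketch: a linear model y = w*x + b has two parameters, w and b.
# Gradient descent adjusts them to fit the data -- the same principle,
# at vastly smaller scale, behind Llama-3's 8B/70B/405B parameters.
random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(100)]
ys = [3.0 * x + 1.0 for x in xs]       # ground truth: w=3, b=1

w, b = 0.0, 0.0                        # the model's 2 parameters
lr = 0.1
for _ in range(500):                   # plain gradient descent on mean error
    grad_w = sum((w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(w * x + b - y for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 1), round(b, 1))        # converges near 3.0 and 1.0
```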

💡 Multimodality

Multimodality in AI refers to the ability of a system to process and understand information from multiple different modes of input, such as text, images, and video. The script mentions Meta's focus on developing multimodal capabilities in their AI models to enhance their functionality and user interaction.

💡 Benchmark

A benchmark is a standard or point of reference against which things may be compared or assessed. In AI, benchmarks are used to evaluate the performance of models against specific tasks. The script discusses the Llama-3 model's performance on benchmarks, indicating its effectiveness and reasoning capabilities.
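As an illustrative aside, scores like the "85 MMLU" figure mentioned in the interview reduce to plain accuracy over multiple-choice answers. The predictions and answers below are invented placeholders, not real benchmark items:

```python
# Hedged sketch: an MMLU-style score is essentially percent accuracy
# over multiple-choice questions. These letters are invented examples.
def accuracy(predictions, answers):
    correct = sum(p == a for p, a in zip(predictions, answers))
    return 100.0 * correct / len(answers)

preds   = ["B", "C", "A", "D", "B", "C", "A", "A", "D", "B"]
answers = ["B", "C", "A", "A", "B", "C", "A", "B", "D", "B"]
print(accuracy(preds, answers))  # 80.0
```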

💡 Inference

In AI, inference is the process of deriving conclusions or making decisions based on known information. The script talks about the significant role of inference in serving a large user base, as it requires a substantial amount of computational resources to apply the trained AI models to new data or situations.
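A common back-of-the-envelope rule (an outside rule of thumb, not a figure from the interview) is that a dense transformer performs roughly 2 FLOPs per parameter per generated token, which is why serving a large user base dominates compute:

```python
# Rule-of-thumb sketch (not from the interview): a dense model performs
# roughly 2 * parameter_count FLOPs per token at inference time.
def inference_flops(params: float, tokens: float) -> float:
    return 2.0 * params * tokens

# e.g. a 70B-parameter model generating 1 million tokens:
print(inference_flops(70e9, 1e6))  # 1.4e+17 FLOPs
```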

💡 Meta AI

Meta AI refers to the artificial intelligence division within the company Meta (formerly known as Facebook, Inc.). The script discusses the advancements in Meta AI, particularly the release of the Llama-3 model, which is intended to be the most intelligent AI assistant available to the public.

💡 Training Cluster

A training cluster is a group of interconnected computers that work together to train machine learning models. The script mentions the development and scaling of training clusters, which are essential for handling the large-scale computations required to train complex AI models like Llama-3.

💡 Content Risks

Content risks refer to the potential negative outcomes or harms that can arise from the use of AI systems, such as the spread of misinformation, promotion of harmful behavior, or facilitation of violence. The script emphasizes the importance of mitigating content risks associated with AI models, particularly in preventing the use of these models to cause harm to individuals or society.

💡 Economic Constraints

Economic constraints refer to the limitations or restrictions faced by an organization due to financial considerations. The script discusses how economic constraints, such as the cost of GPUs and energy, impact the development and scaling of AI models and data centers.

Highlights

Meta AI is releasing an upgraded model called Llama-3, which is set to be the most intelligent, freely-available AI assistant.

Llama-3 will be available as open source for developers and will also power Meta AI, integrating with Google and Bing for real-time knowledge.

New creation features have been added, including the ability to animate any image and generate high-quality images in real time as you type your query.

Meta AI's new version is initially rolling out in a few countries, with plans for broader availability in the coming weeks and months.

Technically, Llama-3 comes in three versions: an 8 billion parameter model, a 70 billion parameter model released today, and a 405 billion parameter model still in training.

The 70 billion parameter model of Llama-3 has scored highly on benchmarks for math and reasoning, while the 405 billion parameter model is expected to lead in benchmarks upon completion.

Meta has a roadmap for future releases that include multimodality, more multilinguality, and larger context windows.

The decision to invest in GPUs for AI was driven by the need for more capacity to train models for content recommendation in services like Reels.

The capability of showing content from unconnected sources on platforms like Instagram and Facebook represents a significant unlock for user engagement.

The importance of open source in AI development, ensuring a balanced and competitive ecosystem, and the potential risks of concentrated AI power.

The potential for AI to surpass human intelligence in most domains progressively, and the focus on capabilities like emotional understanding and reasoning.

Meta's commitment to addressing the risks of misinformation and the importance of building AI systems to combat adversarial uses.

The vision of AI as a tool that enhances human capabilities rather than replacing them, aiming for increased productivity and creativity.

The significance of the metaverse in enabling realistic digital presence and its potential impact on socializing, working, and various industries.

Mark Zuckerberg's personal drive to continuously build new things and the philosophy behind investing in large-scale projects like AI and the metaverse.

The historical perspective on the development of peace and economy, drawing parallels to modern innovations in tech and the concept of open source.

The potential for custom silicon to revolutionize the training of large AI models and the strategic move to first optimize inference processes.

Transcripts

00:00

That's not even a question for me - whether  we're going to go take a swing at building  

00:03

the next thing. I'm just incapable of not doing  that. There's a bunch of times when we wanted to  

00:08

launch features and then Apple's just like  nope you're not launching that I was like  

00:12

that sucks. Are we set up for that with AI where  you're going to get a handful of companies that  

00:19

run these closed models that are going to be in  control of the apis and therefore are going to be  

00:22

able to tell you what you can build? Then when  you start getting into building a data center  

00:27

that's like 300 Megawatts or 500 Megawatts or a  Gigawatt - just no one has built a single Gigawatt  

00:33

data center yet. From wherever you sit there's  going to be some actor who you don't trust - if  

00:37

they're the ones who have the super strong AI I  think that that's potentially a much bigger risk

00:43

Mark, welcome to the podcast. Thanks for having me. Big fan of your podcast. 

00:47

Thank you, that's very nice of you to say.  Let's start by talking about the releases  

00:52

that will go out when this interview  goes out. Tell me about the models and  

00:57

Meta AI. What’s new and exciting about them? I think the main thing that most people in the  

01:02

world are going to see is the new version of  Meta AI. The most important thing that we're  

01:08

doing is the upgrade to the model. We're  rolling out Llama-3. We're doing it both  

01:12

as open source for the dev community and it is  now going to be powering Meta AI. There's a lot  

01:19

that I'm sure we'll get into around Llama-3,  but I think the bottom line on this is that  

01:24

we think now that Meta AI is the most intelligent,  freely-available AI assistant that people can use.  

01:30

We're also integrating Google  and Bing for real-time knowledge. 

01:34

We're going to make it a lot more prominent across  our apps. At the top of Facebook and Messenger,  

01:42

you'll be able to just use the search box right  there to ask any question. There's a bunch of new  

01:48

creation features that we added that I think are  pretty cool and that I think people will enjoy.  

01:54

I think animations is a good one. You can  basically take any image and just animate it. 

02:00

One that people are going to find pretty wild  is that it now generates high quality images  

02:07

so quickly that it actually generates it as  you're typing and updates it in real time.  

02:12

So you're typing your query and it's honing  in. It’s like “show me a picture of a cow in  

02:21

a field with mountains in the background, eating macadamia nuts, drinking beer” and it's updating

02:29

the image in real time. It's pretty wild. I  think people are going to enjoy that. So I  

02:35

think that's what most people are going to see in  the world. We're rolling that out, not everywhere,  

02:39

but we're starting in a handful of countries and  we'll do more over the coming weeks and months.  

02:46

I think that’s going to be a pretty big deal  and I'm really excited to get that in people's  

02:50

hands. It's a big step forward for Meta AI. But I think if you want to get under the hood  

02:57

a bit, the Llama-3 stuff is obviously the most  technically interesting. We're training three  

03:05

versions: an 8 billion parameter model and a 70  billion, which we're releasing today, and a 405  

03:11

billion dense model, which is still training. So  we're not releasing that today, but I'm pretty  

03:20

excited about how the 8B and the 70B turned out.  They're leading for their scale. We'll release a  

03:31

blog post with all the benchmarks so people can  check it out themselves. Obviously it's open  

03:34

source so people get a chance to play with it. We have a roadmap of new releases coming that  

03:41

are going to bring multimodality, more  multi-linguality, and bigger context  

03:46

windows as well. Hopefully, sometime later in the  year we'll get to roll out the 405B. For where it  

03:59

is right now in training, it is already  at around 85 MMLU and we expect that it's  

04:09

going to have leading benchmarks on a bunch of the  benchmarks. I'm pretty excited about all of that.  

04:14

The 70 billion is great too. We're releasing that  today. It's around 82 MMLU and has leading scores  

04:22

on math and reasoning. I think just getting this  in people's hands is going to be pretty wild. 

04:26

Oh, interesting. That's the first I’m hearing  of it as a benchmark. That's super impressive. 

04:30

The 8 billion is nearly as powerful as the  biggest version of Llama-2 that we released.  

04:38

So the smallest Llama-3 is basically  as powerful as the biggest Llama-2. 

04:43

Before we dig into these models, I want to go  back in time. I'm assuming 2022 is when you  

04:49

started acquiring these H100s, or you can tell me  when. The stock price is getting hammered. People  

04:56

are asking what's happening with all this  capex. People aren't buying the metaverse.  

05:00

Presumably you're spending that capex to get  these H100s. How did you know back then to get the  

05:04

H100s? How did you know that you’d need the GPUs? I think it was because we were working on Reels.  

05:14

We always want to have enough capacity to build  something that we can't quite see on the horizon  

05:23

yet. We got into this position with Reels where we  needed more GPUs to train the models. It was this  

05:31

big evolution for our services. Instead of just  ranking content from people or pages you follow,  

05:41

we made this big push to start recommending what  we call unconnected content, content from people  

05:49

or pages that you're not following. 
 The corpus of content candidates that  

05:56

we could potentially show you expanded from  on the order of thousands to on the order of  

06:01

hundreds of millions. It needed a completely  different infrastructure. We started working  

06:08

on doing that and we were constrained on  the infrastructure in catching up to what  

06:14

TikTok was doing as quickly as we wanted to. I  basically looked at that and I was like “hey,  

06:19

we have to make sure that we're never in this  situation again. So let's order enough GPUs to do  

06:25

what we need to do on Reels and ranking content and feed. But let's also double that.” Again,

06:31

our normal principle is that there's going to be  something on the horizon that we can't see yet. 

06:35

Did you know it would be AI? We thought it was going to be something that  

06:40

had to do with training large models. At the time  I thought it was probably going to be something  

06:44

that had to do with content. It’s just the pattern  matching of running the company, there's always  

06:52

another thing. At that time I was so deep into  trying to get the recommendations working for  

07:00

Reels and other content. That’s just such a big  unlock for Instagram and Facebook now, being  

07:05

able to show people content that's interesting to  them from people that they're not even following. 

07:09

But that ended up being a very good decision  in retrospect. And it came from being behind.  

07:18

It wasn't like “oh, I was so far ahead.” Actually, most of the times where we make

07:25

some decision that ends up seeming good  is because we messed something up before  

07:29

and just didn't want to repeat the mistake. This is a total detour, but I want to ask  

07:32

about this while we're on this. We'll get back  to AI in a second. In 2006 you didn't sell for  

07:37

$1 billion but presumably there's some amount you  would have sold for, right? Did you write down  

07:41

in your head like “I think the actual valuation  of Facebook at the time is this and they're not  

07:45

actually getting the valuation right”? If they’d offered you $5 trillion, of course you would have

07:48

sold. So how did you think about that choice? 
 I think some of these things are just personal.  

07:58

I don't know that at the time I was sophisticated  enough to do that analysis. I had all these people  

08:03

around me who were making all these arguments for  a billion dollars like “here's the revenue that  

08:10

we need to make and here's how big we need to be. It's clearly so many years in the future.” It was

08:16

very far ahead of where we were at the time. I  didn't really have the financial sophistication  

08:23

to really engage with that kind of debate. Deep down I believed in what we were doing.  

08:30

I did some analysis like “what would I do if I  weren’t doing this? Well, I really like building  

08:40

things and I like helping people communicate. I  like understanding what's going on with people and  

08:46

the dynamics between people. So I think if I sold  this company, I'd just go build another company  

08:51

like this and I kind of like the one I have. So why?” I think a lot of the biggest bets that

09:03

people make are often just based on conviction and  values. It's actually usually very hard to do the  

09:12

analyses trying to connect the dots forward. You've had Facebook AI Research for a long  

09:18

time. Now it's become seemingly central to  your company. At what point did making AGI,  

09:26

or however you consider that mission,  become a key priority of what Meta is doing? 

09:33

It's been a big deal for a while. We started  FAIR about 10 years ago. The idea was that,  

09:41

along the way to general intelligence or whatever  you wanna call it, there are going to be all these  

09:48

different innovations and that's going to  just improve everything that we do. So we  

09:52

didn't conceive of it as a product. It was  more of a research group. Over the last 10  

10:00

years it has created a lot of different things  that have improved all of our products. It’s  

10:07

advanced the field and allowed other people in  the field to create things that have improved our  

10:11

products too. I think that that's been great. There's obviously a big change in the last  

10:17

few years with ChatGPT and the diffusion  models around image creation coming out.  

10:24

This is some pretty wild stuff that is  pretty clearly going to affect how people  

10:29

interact with every app that's out there. At that  point we started a second group, the gen AI group,  

10:40

with the goal of bringing that stuff into our  products and building leading foundation models  

10:46

that would power all these different products. 
 When we started doing that the theory initially  

10:54

was that a lot of the stuff we're doing is  pretty social. It's helping people interact  

11:01

with creators, helping people interact with  businesses, helping businesses sell things or  

11:07

do customer support. There’s also basic assistant  functionality, whether it's for our apps or the  

11:13

smart glasses or VR. So it wasn't completely  clear at first that you were going to need full  

11:24

AGI to be able to support those use cases. But in  all these subtle ways, through working on them,  

11:29

I think it's actually become clear that you do.  For example, when we were working on Llama-2,  

11:37

we didn't prioritize coding because people  aren't going to ask Meta AI a lot of coding  

11:42

questions in WhatsApp. Now they will, right? 

11:44

I don't know. I'm not sure that WhatsApp, or  Facebook or Instagram, is the UI where people are  

11:47

going to be doing a lot of coding questions. Maybe  the website, meta.ai, that we’re launching. But  

12:00

the thing that has been a somewhat surprising  result over the last 18 months is that it turns  

12:08

out that coding is important for a lot of domains,  not just coding. Even if people aren't asking  

12:14

coding questions, training the models on coding  helps them become more rigorous in answering the  

12:21

question and helps them reason across a lot of  different types of domains. That's one example  

12:26

where for Llama-3, we really focused on training  it with a lot of coding because that's going  

12:30

to make it better on all these things even if  people aren't asking primarily coding questions. 

12:36

Reasoning is another example. Maybe you want  to chat with a creator or you're a business and  

12:43

you're trying to interact with a customer.  That interaction is not just like “okay,  

12:47

the person sends you a message and you just reply.” It's a multi-step interaction

12:53

where you're trying to think through “how do I accomplish the person's goals?” A lot of times

12:57

when a customer comes, they don't necessarily  know exactly what they're looking for or how  

13:01

to ask their questions. So it's not really the  job of the AI to just respond to the question. 

13:06

You need to kind of think about it  more holistically. It really becomes  

13:09

a reasoning problem. So if someone else solves  reasoning, or makes good advances on reasoning,  

13:14

and we're sitting here with a basic chat bot,  then our product is lame compared to what other  

13:19

people are building. At the end of the day, we  basically realized we've got to solve general  

13:26

intelligence and we just upped the ante and the  investment to make sure that we could do that. 

13:32

So the version of Llama that's going to solve all these use cases for users, is that the

13:41

version that will be powerful enough to replace  a programmer you might have in this building? 

13:46

I just think that all this stuff is  going to be progressive over time. 

13:49

But in the end case: Llama-10. I think that there's a lot baked  

13:55

into that question. I'm not sure that we're  replacing people as much as we’re giving  

14:00

people tools to do more stuff. Is the programmer in this building  

14:03

10x more productive after Llama-10? 
 I would hope more. I don't believe that  

14:09

there's a single threshold of intelligence for  humanity because people have different skills.  

14:14

I think that at some point AI is probably going to  surpass people at most of those things, depending  

14:21

on how powerful the models are. But I think it's  progressive and I don't think AGI is one thing.  

14:29

You're basically adding different capabilities.  Multimodality is a key one that we're focused on  

14:34

now, initially with photos and images and text but  eventually with videos. Because we're so focused  

14:40

on the metaverse, 3D type stuff is important  too. One modality that I'm pretty focused on,  

14:46

that I haven't seen as many other people in the  industry focus on, is emotional understanding. So  

14:54

much of the human brain is just dedicated  to understanding people and understanding  

15:00

expressions and emotions. I think that's  its own whole modality, right? You could  

15:06

say that maybe it's just video or image, but it's  clearly a very specialized version of those two. 

15:10

So there are all these different capabilities  that you want to train the models to focus  

15:17

on, in addition to getting a lot better at  reasoning and memory, which is its own whole  

15:22

thing. I don't think in the future we're going to  be primarily shoving things into a query context  

15:29

window to ask more complicated questions. There  will be different stores of memory or different  

15:35

custom models that are more personalized to  people. These are all just different capabilities.  
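The “different stores of memory” idea mentioned here can be sketched as a retrieval layer that keeps facts outside the prompt and pulls in only the relevant ones per query. This is a toy bag-of-words illustration, not anything Meta has described:

```python
from collections import Counter
import math

class MemoryStore:
    """Toy long-term memory: keep facts outside the prompt window and
    retrieve only the most relevant ones for each query."""

    def __init__(self):
        self.facts = []

    def _vec(self, text):
        return Counter(text.lower().split())

    def _cosine(self, a, b):
        dot = sum(a[t] * b[t] for t in set(a) & set(b))
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def remember(self, fact):
        self.facts.append(fact)

    def recall(self, query, k=2):
        q = self._vec(query)
        ranked = sorted(self.facts,
                        key=lambda f: self._cosine(q, self._vec(f)),
                        reverse=True)
        return ranked[:k]

store = MemoryStore()
store.remember("user prefers metric units")
store.remember("user's favorite topic is rowing")
store.remember("user is allergic to peanuts")

# Build a compact prompt from only the relevant memories,
# instead of shoving everything into the context window.
relevant = store.recall("what units should I use for distances?", k=1)
prompt = "Context: " + "; ".join(relevant) + "\nQuestion: how far is 5 miles?"
```

A real system would use learned embeddings and a vector index, but the shape is the same: memory lives in a store, and the model only ever sees a small retrieved slice of it.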

15:42

Obviously then there’s making them big and small.  We care about both. If you're running something  

15:47

like Meta AI, that's pretty server-based. We also  want it running on smart glasses and there's not  

15:55

a lot of space in smart glasses. So you want to  have something that's very efficient for that. 

16:01

If you're doing $10Bs worth of  inference or even eventually $100Bs,  

16:06

if you're using intelligence at an industrial scale, what is the use case? Is it simulations? 

16:11

Is it the AIs that will be in the metaverse?  What will we be using the data centers for? 

16:19

Our bet is that it's going to basically change  all of the products. I think that there's going  

16:24

to be a kind of Meta AI general assistant  product. I think that that will shift from  

16:32

something that feels more like a chatbot, where  you ask a question and it formulates an answer,  

16:37

to things where you're giving it more complicated  tasks and then it goes away and does them. That's  

16:43

going to take a lot of inference and it's going  to take a lot of compute in other ways too. 

16:48

Then I think interacting with other agents for  other people is going to be a big part of what  

16:56

we do, whether it's for businesses or creators. A  big part of my theory on this is that there's not  

17:02

going to be just one singular AI that you interact  with. Every business is going to want an AI that  

17:09

represents their interests. They're not going to  want to primarily interact with you through an AI  

17:13

that is going to sell their competitors’ products. I think creators is going to be a big one. There  

17:25

are about 200 million creators on our platforms.  They basically all have the pattern where they  

17:31

want to engage their community but they're limited  by the hours in the day. Their community generally  

17:35

wants to engage them, but they don't know that  they're limited by the hours in the day. If  

17:40

you could create something where that creator  can basically own the AI, train it in the way  

17:47

they want, and engage their community, I think  that's going to be super powerful. There's going  

17:55

to be a ton of engagement across all these things. These are just the consumer use cases. My wife and  

18:04

I run our foundation, Chan Zuckerberg Initiative.  We're doing a bunch of stuff on science and  

18:12

there's obviously a lot of AI work that is going  to advance science and healthcare and all these  

18:17

things. So it will end up affecting basically  every area of the products and the economy. 

18:25

You mentioned AI that can just go out and do  something for you that's multi-step. Is that  

18:30

a bigger model? With Llama-4 for example, will  there still be a version that's 70B but you'll  

18:36

just train it on the right data and that will  be super powerful? What does the progression  

18:40

look like? Is it scaling? Is it just the same size  but different banks like you were talking about? 

18:49

I don't know that we know the answer to that. I  think one thing that seems to be a pattern is that  

18:56

you have the Llama model and then you build some  kind of other application specific code around it.  

19:06

Some of it is the fine-tuning for the use case,  but some of it is, for example, logic for how  

19:14

Meta AI should work with tools like Google or Bing  to bring in real-time knowledge. That's not part  

19:21

of the base Llama model. For Llama-2, we had some  of that and it was a little more hand-engineered.  

19:30

Part of our goal for Llama-3 was to bring more  of that into the model itself. For Llama-3,  

19:36

as we start getting into more of these agent-like  behaviors, I think some of that is going to be  

19:41

more hand-engineered. Our goal for Llama-4  will be to bring more of that into the model. 

19:48

At each step along the way you have a sense of  what's going to be possible on the horizon. You  

19:54

start messing with it and hacking around it. I  think that helps you then hone your intuition  

19:59

for what you want to try to train into the next  version of the model itself. That makes it more  

20:04

general because obviously for anything that you're  hand-coding you can unlock some use cases, but  

20:10

it's just inherently brittle and non-general. When you say “into the model itself,” you train it 

21:21

on the thing that you want in the model itself? What do you mean by “into the model itself”? 

21:33

For Llama-2, the tool use was very specific, whereas Llama-3 has much better tool use. We 

21:41

don't have to hand code all the stuff to have  it use Google and go do a search. It can just do  

21:49

that. Similarly for coding and running code and  a bunch of stuff like that. Once you kind of get  

22:00

that capability, then you get a peek at what we  can start doing next. We don't necessarily want  

22:06

to wait until Llama-4 is around to start building  those capabilities, so we can start hacking around  

22:10

it. You do a bunch of hand coding and that  makes the products better, if only for the  

22:16

interim. That helps show the way then of what we  want to build into the next version of the model. 
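The hand-coded tool layer described here can be sketched as a dispatch loop around the model: the model emits a structured tool request, the wrapper executes it and feeds the result back, and plain text is treated as the final answer. Everything below (the stub model, the tool names, the call format) is invented for illustration:

```python
import json

def web_search(query):
    """Stand-in for a real search call (e.g. Google or Bing)."""
    return f"top result for {query!r}"

def run_code(src):
    """Stand-in for a sandboxed code runner. Toy only; never eval untrusted input."""
    return str(eval(src, {"__builtins__": {}}))

TOOLS = {"search": web_search, "python": run_code}

def fake_model(messages):
    """Hypothetical model: requests a tool once, then answers in plain text."""
    if not any(m["role"] == "tool" for m in messages):
        return json.dumps({"tool": "search", "arg": "llama 3 release date"})
    return "final answer using " + messages[-1]["content"]

def agent_loop(user_msg, max_steps=4):
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):
        out = fake_model(messages)
        try:
            call = json.loads(out)      # structured output = tool request
        except json.JSONDecodeError:
            return out                  # plain text = final answer
        result = TOOLS[call["tool"]](call["arg"])
        messages.append({"role": "tool", "content": result})
    return "gave up"

answer = agent_loop("When was Llama 3 released?")
```

The point of the interview passage is exactly this layering: the loop and dispatch start as hand-written scaffolding, and over generations more of that behavior moves into the model's own weights.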

22:21

What is the community fine tune of Llama-3  that you're most excited for? Maybe not the  

22:25

one that will be most useful to you, but the  one you'll just enjoy playing with the most.  

22:29

They fine-tune it on antiquity and  you'll just be talking to Virgil  

22:32

or something. What are you excited about? I think the nature of the stuff is that you  

22:39

get surprised. Any specific thing that I thought  would be valuable, we'd probably be building. I  

22:53

think you'll get distilled versions. I  think you'll get smaller versions. One  

22:58

thing is that I think 8B isn’t quite small  enough for a bunch of use cases. Over time I'd  

23:07

love to get a 1-2B parameter model, or even a 500M  parameter model and see what you can do with that. 

23:18

If with 8B parameters we’re nearly as  powerful as the largest Llama-2 model,  

23:23

then with a billion parameters you should be able  to do something that's interesting, and faster.  

23:28

It’d be good for classification, or a lot of  basic things that people do before understanding  

23:35

the intent of a user query and feeding it  to the most powerful model to hone in on  

23:41

what the prompt should be. I think that's one  thing that maybe the community can help fill  

23:46

in. We're also thinking about getting around to  distilling some of these ourselves but right now  

23:52

the GPUs are pegged training the 405B. 
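The pattern described here, a small cheap model classifying the intent of a query before anything reaches the most powerful model, is essentially a cascade. A minimal sketch, where all the model stubs and routing rules are invented:

```python
def tiny_classifier(query):
    """Stand-in for a small (say ~1B-parameter) intent model: cheap and rough."""
    words = set(query.lower().split())
    if words & {"hi", "hello", "thanks"}:
        return "chitchat"
    if words & {"prove", "derive", "plan", "debug"}:
        return "hard"
    return "simple"

def small_model(query):
    return f"[small model] {query}"

def large_model(query):
    return f"[large model, expensive] {query}"

def route(query):
    """Cascade: only queries the tiny classifier flags as hard pay for the big model."""
    intent = tiny_classifier(query)
    if intent == "chitchat":
        return "Hello! How can I help?"
    if intent == "hard":
        return large_model(query)
    return small_model(query)
```

In practice the classifier would itself be a distilled language model, but the economics are the same: most traffic is cheap, and the 405B-class model only sees the queries that need it.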
 So you have all these GPUs. I think you  

24:00

said 350,000 by the end of the year. 
 That's the whole fleet. We built two,  

24:06

I think 22,000- or 24,000-GPU clusters, which are the single clusters that we have for training the big 

24:13

models, obviously across a lot of the stuff that  we do. A lot of our stuff goes towards training  

24:18

Reels models and Facebook News Feed and Instagram  Feed. Inference is a huge thing for us because we  

24:24

serve a ton of people. Our ratio of inference  compute required to training is probably much  

24:33

higher than most other companies that are doing  this stuff just because of the sheer volume of  

24:37

the community that we're serving. In the material they shared with  

24:41

me before, it was really interesting that you  trained it on more data than is compute optimal  

24:45

just for training. The inference is such a big  deal for you guys, and also for the community,  

24:49

that it makes sense to just have this thing  and have trillions of tokens in there. 

24:53

Although one of the interesting  things about it, even with the 70B,  

24:57

is that we thought it would get more saturated. We  trained it on around 15 trillion tokens. I guess  

25:06

our prediction going in was that it was going  to asymptote more, but even by the end it was  

25:12

still learning. We probably could have fed it more tokens and it would have gotten somewhat better. 
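The surprise that the 70B was “still learning” at around 15 trillion tokens fits the usual power-law picture of loss versus data: the marginal gain per extra token shrinks but never fully stops. A purely illustrative sketch, with every constant invented:

```python
import math

# Illustrative power-law: loss(T) = L0 + A / T**alpha.
# All three constants are made up for the example, not fitted to any real run.
L0, A, alpha = 1.7, 2.0, 0.3

def loss(tokens_trillions):
    return L0 + A / tokens_trillions ** alpha

# Marginal gain per extra trillion tokens shrinks but stays positive,
# which is why "still learning at 15T tokens" is not that surprising.
gain_1_to_2 = loss(1) - loss(2)
gain_14_to_15 = loss(14) - loss(15)
```

Under this kind of curve the decision Zuckerberg describes is an economic one: the loss would keep falling, but at some point the same GPUs buy more progress spent on the next model.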

25:19

At some point you're running a company and you  need to do these meta reasoning questions. Do I  

25:24

want to spend our GPUs on training the 70B model  further? Do we want to get on with it so we can  

25:31

start testing hypotheses for Llama-4? We needed  to make that call and I think we got a reasonable  

25:39

balance for this version of the 70B. There'll  be others in the future, the 70B multimodal one,  

25:45

that'll come over the next period. But that  was fascinating that the architectures at  

25:53

this point can just take so much data. That's really interesting. What does this  

25:57

imply about future models? You mentioned that  the Llama-3 8B is better than the Llama-2 70B. 

26:03

No, no, it's nearly as good.  I don’t want to overstate  

26:06

it. It’s in a similar order of magnitude. Does that mean the Llama-4 70B will be  

26:10

as good as the Llama-3 405B? What  does the future of this look like? 

26:14

This is one of the great questions, right? I think  no one knows. One of the trickiest things in the  

26:22

world to plan around is an exponential  curve. How long does it keep going for?  

26:29

I think it's likely enough that we'll keep going.  I think it’s worth investing the $10Bs or $100B+  

26:37

in building the infrastructure and assuming that  if it keeps going you're going to get some really  

26:43

amazing things that are going to make amazing  products. I don't think anyone in the industry  

26:49

can really tell you that it will continue scaling  at that rate for sure. In general in history,  

26:56

you hit bottlenecks at certain points.  Now there's so much energy on this that  

27:01

maybe those bottlenecks get knocked over pretty  quickly. I think that’s an interesting question.
 

27:08

What does the world look like where there aren't  these bottlenecks? Suppose progress just continues  

27:13

at this pace, which seems plausible.  Zooming out and forgetting about Llamas… 

27:18

Well, there are going to be different bottlenecks.  Over the last few years, I think there was this  

27:28

issue of GPU production. Even companies that had  the money to pay for the GPUs couldn't necessarily  

27:39

get as many as they wanted because there were all  these supply constraints. Now I think that's sort  

27:44

of getting less. So you're seeing a bunch of  companies thinking now about investing a lot  

27:52

of money in building out these things. I think  that that will go on for some period of time.  

28:00

There is a capital question. At what point does  it stop being worth it to put the capital in? 

28:06

I actually think before we hit that, you're  going to run into energy constraints. I don't  

28:14

think anyone's built a gigawatt single training  cluster yet. You run into these things that just  

28:21

end up being slower in the world. Getting energy  permitted is a very heavily regulated government  

28:30

function. You're going from software, which  is somewhat regulated and I'd argue it’s more  

28:37

regulated than a lot of people in the tech  community feel. Obviously it’s different if  

28:42

you're starting a small company, maybe you  feel that less. We interact with different  

28:47

governments and regulators and we have lots  of rules that we need to follow and make sure  

28:53

we do a good job with around the world. But  I think that there's no doubt about energy. 

28:59

If you're talking about building large new  power plants or large build-outs and then  

29:04

building transmission lines that cross other  private or public land, that’s just a heavily  

29:11

regulated thing. You're talking about many  years of lead time. If we wanted to stand up  

29:17

some massive facility, powering that is a very  long-term project. I think people do it but I  

29:31

don't think this is something that can be quite  as magical as just getting to a level of AI,  

29:36

getting a bunch of capital and putting it in, and  then all of a sudden the models are just going to…  

29:42

You do hit different bottlenecks along the way. Is there something, maybe an AI-related project or  

29:47

maybe not, that even a company like Meta doesn't  have the resources for? Something where if your  

29:51

R&D budget or capex budget were 10x what it is  now, then you could pursue it? Something that’s  

29:56

in the back of your mind but with Meta today,  you can't even issue stock or bonds for it?  

30:01

It's just like 10x bigger than your budget? I think energy is one piece. I think we  

30:07

would probably build out bigger clusters than we  currently can if we could get the energy to do it. 

30:18

That's fundamentally money-bottlenecked  in the limit? If you had $1 trillion… 

30:23

I think it’s time. It depends on how far the  exponential curves go. Right now a lot of  

30:36

data centers are on the order of 50MW or 100MW, or a big one might be 150MW. Take a whole 

30:42

data center and fill it up with all the stuff  that you need to do for training and you build  

30:46

the biggest cluster you can. I think a bunch  of companies are running at stuff like that. 

30:53

But when you start getting into building a  data center that's like 300MW or 500MW or 1 GW,  

31:04

no one has built a 1GW data center yet. I think  it will happen. This is only a matter of time but  

31:09

it's not going to be next year. Some of these  things will take some number of years to build  

31:18

out. Just to put this in perspective, I think a  gigawatt would be the size of a meaningful nuclear  

31:31

power plant only going towards training a model. Didn't Amazon do this? They have a 950MW… 

31:39

I'm not exactly sure what they  did. You'd have to ask them. 
 

31:44

But it doesn’t have to be in the  same place, right? If distributed  

31:45

training works, it can be distributed. Well, I think that is a big question, how  

31:49

that's going to work. It seems quite possible that  in the future, more of what we call training for  

31:56

these big models is actually more along the lines  of inference generating synthetic data to then go  

32:05

feed into the model. I don't know what that ratio  is going to be but I consider the generation of  

32:11

synthetic data to be more inference than training  today. Obviously if you're doing it in order  

32:16

to train a model, it's part of the broader  training process. So that's an open question,  

32:24

the balance of that and how that plays out. Would that potentially also be the case with  

32:30

Llama-3, and maybe Llama-4 onwards? As in, you  put this out and if somebody has a ton of compute,  

32:36

then they can just keep making these things  arbitrarily smarter using the models that  

32:37

you've put out. Let’s say there’s some  random country, like Kuwait or the UAE,  

32:43

that has a ton of compute and they can actually  just use Llama-4 to make something much smarter. 

32:52

I do think there are going to be  dynamics like that, but I also think  

32:59

there is a fundamental limitation on the model  architecture. I think like a 70B model that we  

33:13

trained with a Llama-3 architecture can get  better, it can keep going. As I was saying,  

33:18

we felt that if we kept on feeding it more data  or rotated the high value tokens through again,  

33:24

then it would continue getting better. We've  seen a bunch of different companies around  

33:31

the world basically take the Llama-2 70B model  architecture and then build a new model. But it's  

33:41

still the case that when you make a generational  improvement to something like the Llama-3 70B or  

33:46

the Llama-3 405B, there isn’t anything like  that open source today. I think that's a big  

33:54

step function. What people are going to be able to  build on top of that I think can’t go infinitely  

33:59

from there. There can be some optimization in  that until you get to the next step function. 

34:05

Let's zoom out a little bit from specific  models and even the multi-year lead times  

34:11

you would need to get energy approvals and so  on. Big picture, what's happening with AI these  

34:15

next couple of decades? Does it feel like  another technology like the metaverse or  

34:21

social, or does it feel like a fundamentally  different thing in the course of human history? 

34:29

I think it's going to be pretty fundamental. I  think it's going to be more like the creation  

34:34

of computing in the first place. You'll get all  these new apps in the same way as when you got  

34:44

the web or you got mobile phones. People basically  rethought all these experiences as a lot of things  

34:50

that weren't possible before became possible.  So I think that will happen, but I think it's  

34:56

a much lower-level innovation. My sense is  that it's going to be more like people going  

35:01

from not having computers to having computers. It’s very hard to reason about exactly how this  

35:16

goes. In the cosmic scale obviously it'll happen  quickly, over a couple of decades or something.  

35:27

There is some set of people who are afraid of it  really spinning out and going from being somewhat  

35:33

intelligent to extremely intelligent overnight.  I just think that there's all these physical  

35:37

constraints that make that unlikely to happen. I  just don't really see that playing out. I think  

35:45

we'll have time to acclimate a bit. But it will  really change the way that we work and give people  

35:51

all these creative tools to do different things.  I think it's going to really enable people to do  

36:00

the things that they want a lot more. So maybe not overnight, but is it your  

36:05

view that on a cosmic scale we can think of  these milestones in this way? Humans evolved,  

36:09

and then AI happened, and then they went out  into the galaxy. Maybe it takes many decades,  

36:15

maybe it takes a century, but is that the grand  scheme of what's happening right now in history? 
 

36:22

Sorry, in what sense? In the sense that there were  

36:25

other technologies, like computers and even  fire, but the development of AI itself is as  

36:29

significant as humans evolving in the first place. I think that's tricky.
The history of humanity  

36:39

has been people basically thinking that certain  aspects of humanity are really unique in different  

36:50

ways and then coming to grips with the fact that  that's not true, but that humanity is actually  

36:57

still super special. We thought that the earth  was the center of the universe and it's not,  

37:06

but humans are still pretty  awesome and pretty unique, right? 

37:12

I think another bias that people tend  to have is thinking that intelligence  

37:17

is somehow fundamentally connected to life.  It's not actually clear that it is. I don't  

37:32

know that we have a clear enough definition of  consciousness or life to fully interrogate this.  

37:42

There's all this science fiction about creating  intelligence where it starts to take on all these  

37:47

human-like behaviors and things like that. The  current incarnation of all this stuff feels like  

37:54

it's going in a direction where intelligence  can be pretty separated from consciousness,  

37:59

agency, and things like that, which I  think just makes it a super valuable tool. 

38:06

Obviously it's very difficult to predict  what direction this stuff goes in over time,  

38:10

which is why I don't think anyone should be  dogmatic about how they plan to develop it  

38:16

or what they plan to do. You want to look  at it with each release. We're obviously  

38:20

very pro open source, but I haven't committed  to releasing every single thing that we do.  

38:27

I’m basically very inclined to think that  open sourcing is going to be good for the  

38:32

community and also good for us because we'll  benefit from the innovations. If at some point  

38:38

however there's some qualitative change in what  the thing is capable of, and we feel like it's  

38:43

not responsible to open source it, then we  won't. It's all very difficult to predict. 

38:52

What is a kind of specific qualitative change  where you'd be training Llama-5 or Llama-4,  

38:57

and if you see it, it’d make you think “you know what, I'm not sure about open sourcing it”? 

39:05

It's a little hard to answer that in  the abstract because there are negative  

39:09

behaviors that any product can exhibit  where as long as you can mitigate it,  

39:15

it's okay. There are bad things about social media that we work to mitigate. There are bad things about 

39:23

Llama-2 where we spend a lot of time trying  to make sure that it's not like helping people  

39:28

commit violent acts or things like that. That  doesn't mean that it's a kind of autonomous or  

39:34

intelligent agent. It just means that it's learned  a lot about the world and it can answer a set of  

39:38

questions that we think would be unhelpful for it  to answer. I think the question isn't really what  

39:49

behaviors would it show, it's what things would  we not be able to mitigate after it shows that. 

39:59

I think that there's so many ways in which  something can be good or bad that it's hard  

40:03

to actually enumerate them all up front. Look at  what we've had to deal with in social media and  

40:10

the different types of harms. We've basically  gotten to like 18 or 19 categories of harmful  

40:15

things that people do and we've basically built  AI systems to identify what those things are and  

40:23

to make sure that doesn't happen on our network  as much as possible. Over time I think you'll  

40:29

be able to break this down into more of a  taxonomy too. I think this is a thing that  

40:34

we spend time researching as well, because we  want to make sure that we understand that. 
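The taxonomy-of-harms approach described here, a fixed category list with classifiers deciding which categories a piece of content trips, is at heart multi-label classification. The categories and keyword rules below are invented; the interview only says real systems cover roughly 18 or 19 categories:

```python
# Invented mini-taxonomy for illustration; a production system would have
# ~18-19 categories and learned classifiers rather than keyword cues.
TAXONOMY = {
    "violence": {"attack", "weapon", "hurt"},
    "fraud": {"scam", "phishing", "counterfeit"},
    "harassment": {"insult", "threaten", "stalk"},
}

def classify_harms(text):
    """Multi-label: one piece of content can trip several categories at once."""
    words = set(text.lower().split())
    return sorted(cat for cat, cues in TAXONOMY.items() if words & cues)

labels = classify_harms("a phishing scam asking users to attack the server")
```

The structural point survives the toy rules: because labels are independent per category, new harm types can be added to the taxonomy without retraining or rethinking the others.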
 

41:46

It seems to me that it would be a good idea.  I would be disappointed in a future where AI  

41:50

systems aren't broadly deployed and everybody  doesn't have access to them. At the same time,  

41:55

I want to better understand the mitigations.  If the mitigation is the fine-tuning,  

42:00

the whole thing about open weights is that you  can then remove the fine-tuning, which is often  

42:06

superficial on top of these capabilities. If it's  like talking on Slack with a biology researcher…  

42:12

I think models are very far from this. Right  now, they’re like Google search. But if I can  

42:17

show them my Petri dish and they can explain why  my smallpox sample didn’t grow and what to change,  

42:23

how do you mitigate that? Because somebody  can just fine-tune that in there, right? 

42:29

That's true. I think a lot of people will  basically use the off-the-shelf model and some  

42:35

people who have basically bad faith are going to  try to strip out all the bad stuff. So I do think  

42:41

that's an issue. On the flip side, one of the  reasons why I'm philosophically so pro open source  

42:52

is that I do think that a concentration of AI in  the future has the potential to be as dangerous as  

43:02

it being widespread. I think a lot of people think  about the questions of “if we can do this stuff,  

43:08

is it bad for it to be out in the wild and just widely available?” I think another version of 

43:15

this is that it's probably also pretty bad  for one institution to have an AI that is  

43:25

way more powerful than everyone else's AI. There’s one security analogy that I think  

43:31

of. There are so many security holes in so many  different things. If you could travel back in  

43:42

time a year or two years, let's say you just have  one or two years more knowledge of the security  

43:50

holes. You can pretty much hack into any system.  That’s not AI. So it's not that far-fetched to  

43:55

believe that a very intelligent AI probably would  be able to identify some holes and basically  

44:03

be like a human who could go back in time a  year or two and compromise all these systems. 

44:07

So how have we dealt with that as a society?  One big part is open source software that  

44:13

makes it so that when improvements are made to  the software, it doesn't just get stuck in one