The Possibilities of AI [Entire Talk] - Sam Altman (OpenAI)

Stanford eCorner
1 May 2024 · 45:48

Summary

TLDR At Stanford's Entrepreneurial Thought Leaders seminar, Sam Altman, co-founder and CEO of OpenAI, shared his views on AI development, entrepreneurship, and personal growth. Altman discussed his vision for OpenAI, including building artificial general intelligence (AGI) that benefits all of humanity. He also stressed the resilience and self-awareness that entrepreneurs and researchers need in an era of rapid technological change. In addition, Altman spoke about the societal impact of the technology and how to balance innovation with responsibility.

Takeaways

  • 😀 Sam Altman is the co-founder and CEO of OpenAI, a research and deployment lab, founded as a nonprofit, focused on general-purpose artificial intelligence (AI) that benefits all of humanity.
  • 🌟 OpenAI launched the chatbot ChatGPT, which reached 100 million active users within two months of launch, making it the fastest-growing app in history.
  • 🏆 Sam Altman has been named one of Time's 100 most influential people in the world, was Time's CEO of the Year in 2023, and is on the Forbes list of the world's billionaires.
  • 🎓 While studying computer science at Stanford, Sam Altman joined Y Combinator and founded the social mobile app company Loopt, which was later acquired.
  • 🤖 Sam Altman believes this is the best time to start a company since at least the internet, and perhaps in the history of technology, because what you can do with AI gets more remarkable every year.
  • 🚀 If Sam Altman were 19 again, he would go work on AI research, and he would probably choose industry over academia, because industry is where the large amounts of compute are.
  • 💡 Sam Altman encourages people to pursue non-consensus ideas and to trust their own intuition and thought process, even when those ideas are not obvious at first.
  • 🔮 He stressed that although OpenAI's models such as ChatGPT are powerful, there is still plenty of room for improvement, and OpenAI is committed to iterative deployment so it can learn and do better.
  • 🌐 Sam Altman believes equitable global access to AI is very important, and that as models become more powerful, we need to think more carefully about how to deploy them responsibly.
  • 🔑 He noted that as AI advances, we may need to rethink the social contract and pay attention to how society adapts to these changes and how we manage the pace of change.
  • 💡 On AGI, Sam Altman believes we will have more capable systems every year than we have today, but he also cautions that building systems smarter than humans requires more precise definitions and understanding.

Q & A

  • Who is Sam Altman, and what has he accomplished?

    -Sam Altman is the co-founder and CEO of OpenAI, a research and deployment lab, founded as a nonprofit, devoted to general-purpose artificial intelligence (AI) that benefits all of humanity. Sam studied computer science at Stanford and served as president of Y Combinator. He has been named one of Time's 100 most influential people in the world and is on the Forbes list of the world's billionaires.

  • What is the core idea behind OpenAI?

    -OpenAI's core idea is to build artificial general intelligence (AGI) that benefits all of humanity. OpenAI aims to push the frontier of AI research and deploy its technology responsibly so that it has a positive impact on society.

  • How did Sam Altman's time at Stanford shape him?

    -Sam Altman studied computer science at Stanford and took the Entrepreneurial Thought Leaders (ETL) seminar. That experience laid the groundwork for his entrepreneurial path, especially his joining Y Combinator and founding Loopt.

  • How does Sam Altman view the current startup environment?

    -Sam Altman believes this is one of the best times ever to start a company, especially in AI. As the technology advances, he expects far more innovative products and services, and he thinks the opportunities of this era may exceed those of the early internet.

  • What advice does Sam Altman have for founders who want to enter AI?

    -Sam Altman advises founders to pursue their own distinct ideas rather than follow in others' footsteps. He believes the most valuable innovations usually come from non-consensus ideas, so founders should learn to trust their intuition and be willing to try ideas that are not yet widely accepted.

  • How does OpenAI balance R&D costs and social responsibility?

    -OpenAI balances R&D costs and social responsibility through iterative deployment and a tight feedback loop. Sam Altman stresses that even though the costs are high, they are worth it if the work creates far more value for society. OpenAI also promotes equitable access worldwide by offering some services for free.

  • What does Sam Altman predict for the future of AGI?

    -Sam Altman expects AGI to develop gradually, with more capable systems arriving every year. He stresses that although AGI may bring enormous change, everyday life and work in society may remain relatively stable.

  • What steps has OpenAI taken on AI safety and ethics?

    -OpenAI takes a range of measures to keep AI safe and ethical, including red-teaming, external audits, and working closely with society to ensure its technology is deployed responsibly.

  • How does Sam Altman see AI affecting global geopolitics?

    -Sam Altman believes AI could significantly reshape the global political landscape, though he did not say exactly how. He stresses that AI will be a major global technology that may have a profound effect on the balance of power between nations.

  • How does OpenAI handle the relationship between technological progress and social adaptation?

    -OpenAI believes technological progress should go hand in hand with social adaptation. Sam Altman stresses that as models become more powerful, deployment needs to be more carefully iterative, with a tight feedback loop with society, so that the technology can be accepted and absorbed.

  • What advice does Sam Altman have for young founders?

    -Sam Altman advises young founders to pursue their own distinct ideas and to learn to trust their intuition. He believes founders should find their place in society and contribute to it, and he encourages them to show resilience and adaptability in the face of challenges.

Outlines

00:00

🎓 Opening of Stanford's Entrepreneurial Thought Leaders seminar

Stanford's Entrepreneurial Thought Leaders (ETL) seminar is hosted by STVP, the Stanford entrepreneurship engineering center, and BASES, the Business Association of Stanford Entrepreneurial Students. Ravi Belani, a lecturer in Management Science and Engineering and director of the Alchemist Accelerator, welcomed OpenAI co-founder and CEO Sam Altman. Sam Altman has an exceptional record in technology and entrepreneurship, including his role at Y Combinator and founding OpenAI, the nonprofit-rooted research lab behind revolutionary technologies such as the chatbot ChatGPT. Sam Altman's life and career reflect a commitment to transcending what's possible and breaking boundaries.

05:02

🤖 In conversation with Sam Altman: OpenAI's vision and challenges

The conversation with Sam Altman covered OpenAI's mission: building artificial general intelligence (AGI) that benefits all of humanity. Sam shared his view of AI's future, including his expectations for AGI and the challenges OpenAI faces amid the technology's rapid progress. He emphasized the importance of staying iterative and socially responsible in AI research and product development, and described how OpenAI empowers people to build the future by putting powerful tools in their hands.

10:04

🚀 In conversation with Sam Altman: the future and opportunities of AI

Sam Altman spoke about the enormous potential of AI, arguing that now is the best time to start a company, especially in the AI field. He encouraged young founders and students to seize the moment and devote themselves to AI research and startups. Sam also shared how he identifies and pursues non-consensus ideas, stressing the importance of independent thinking and innovation.

15:06

💡 OpenAI's innovation and social impact

Sam Altman discussed how OpenAI creates positive social impact through innovation. He noted OpenAI's rapid growth, particularly ChatGPT reaching 100 million active users in a short time. Sam also mentioned some of his personal honors, including being named one of Time's most influential people. He stressed that while OpenAI pushes technology forward, it is also committed to ensuring the progress benefits all of humanity.

20:08

🌐 Global innovation and the challenge of AI infrastructure

Sam Altman discussed the challenges global innovation faces in AI, particularly how to give people everywhere access to the technology. He stressed the importance of equitable access to AI and noted that OpenAI supports this by offering tools such as ChatGPT for free. He also spoke about the importance of countries building their own AI infrastructure and the role OpenAI might play in helping make that happen.

25:09

🛰️ AI's potential role in space exploration and colonization

Sam Altman explored the role AI might play in future space exploration and colonization. Given how hostile space is to biological life, he thinks robots may be better suited to exploration and settlement. Sam is optimistic about AI in this domain and believes the technology will help humanity make greater progress in space.

30:11

💡 Startups, innovation, and the importance of non-consensus thinking

Sam Altman shared his views on startups and innovation, especially how to identify and pursue non-consensus ideas. He emphasized finding the balance between consensus and non-consensus, and the need for independent thinking and trusting your own intuition when pursuing innovation. Sam also discussed how to drive innovation by working with like-minded people while still thinking for yourself.

35:13

🔋 Future energy demand and the role of renewables

Sam Altman discussed future trends in energy demand and the role of renewable energy. He expects energy demand to grow and expressed optimism about nuclear fusion as a dominant future source of electricity. Sam also mentioned solar plus storage as another possible solution and stressed the importance of cheap energy at global scale.

40:14

🤝 OpenAI's structure and mission

Sam Altman explained OpenAI's unusual structure, in which a nonprofit owns a for-profit company. He discussed how the structure originated and evolved, and how it fits OpenAI's mission and goals. Sam emphasized OpenAI's mission to advance AI research and ensure the technology benefits all of humanity, and noted that OpenAI has adapted to changing circumstances and will keep adjusting its structure in the future.

45:15

🌍 AGI's impact on geopolitics and the global balance of power

Sam Altman explored how AGI might affect global politics and the balance of power. While he offered no definitive answer, he acknowledged that AGI could substantially reshape international relations and the global balance of power. Sam underlined the importance of AI and how it could shape the political landscape of the future.

🤖 AGI self-awareness and responsible deployment

Sam Altman discussed the importance of AGI systems being able to recognize their own uncertainty and communicate it to the outside world. He believes it is very important to build AI systems capable of self-reflection and identifying their own errors. He also stressed taking a responsible approach to deploying AGI, including ensuring the systems are used safely and responsibly.

👥 OpenAI's culture and team spirit

Sam Altman shared OpenAI's culture and team spirit. He emphasized the close collaboration among team members and their commitment to a shared mission. Sam spoke about the resilience of OpenAI's people in the face of challenges and how they work together toward the company's long-term goals. He also discussed building a strong culture through a shared sense of mission and purpose.

🚨 Misuse of AI and near-term challenges

Sam Altman discussed how AI could be misused in the near term and how industry, governments, and individuals can work together to meet those challenges. He emphasized that AI has both positive uses and the potential for abuse, and pointed to the importance of feedback loops and of balancing safety with freedom. Sam also spoke about how society can work together to ensure AI is used to advance human well-being.

🎉 Sam Altman's birthday and a look ahead

On the occasion of his birthday, Sam Altman shared his outlook on the future, particularly his hopes and worries about building AGI smarter than humans. Sam believes that while creating intelligence beyond our own can feel frightening, it is part of human progress and how society advances. He emphasized how society makes each generation more capable by building smarter infrastructure, and he encouraged people to view the development of AI with a positive attitude.

Keywords

💡Entrepreneurial Thought Leaders seminar (ETL)

ETL is Stanford's seminar for aspiring entrepreneurs, brought to you by STVP, the Stanford entrepreneurship engineering center, and BASES, the Business Association of Stanford Entrepreneurial Students. In the video, ETL provides the stage for the talk by Sam Altman, co-founder and CEO of OpenAI, showing that ETL is an important gathering place for entrepreneurs and innovative ideas.

💡OpenAI

OpenAI is the research and deployment company behind well-known products such as the chatbot ChatGPT, DALL·E, and Sora. As co-founder and CEO, Sam Altman is committed to building general-purpose artificial intelligence that benefits all of humanity. OpenAI's rapid growth and contributions to the technology are among the main topics of the video.

💡Y Combinator

Y Combinator is a well-known startup accelerator. Sam Altman joined Y Combinator's inaugural batch while studying at Stanford and later became its president. Y Combinator is mentioned in the video to highlight its importance in the startup ecosystem and its role in Sam's career.

💡Artificial general intelligence (AGI)

AGI refers to artificial intelligence capable of performing any intellectual task. Sam Altman notes that OpenAI's mission is to build AGI. AGI sits at the center of the discussion, which explores its potential impact on society and the economy.

💡Iterative deployment

Iterative deployment is the product development and release strategy Sam Altman describes: ship imperfect products early, then keep learning and improving through a tight feedback loop. OpenAI uses this approach in its product development, especially when rolling out AI models.

💡Compute cost

As AI models grow more complex, as with the releases of GPT-3 and GPT-4, compute costs rise sharply. The video discusses how this cost growth affects AI development and how to balance technical innovation with economic viability.

💡Semiconductor foundries

Sam Altman mentioned OpenAI's possible plans involving semiconductor foundries, which suggests the company is seeking to control more of the AI infrastructure supporting its research and products. The idea is raised in the video and shows how OpenAI is thinking long term about the entire AI ecosystem.

💡Non-consensus thinking

Non-consensus thinking means holding views or ideas that differ from the crowd. Sam Altman encourages founders to develop their own way of thinking and to look for ideas that are not obvious. In the video he stresses the importance of pursuing non-consensus thinking in entrepreneurship and technical innovation.

💡AI ethics

The video discusses the ethics of artificial intelligence, including its impact on society and how to deploy it responsibly. Sam Altman describes OpenAI's efforts to keep the technology from being misused and the importance of building an appropriate regulatory framework.

💡Technology infrastructure

Technology infrastructure is mentioned as one of the key elements supporting AI progress. Sam Altman highlights the importance of data centers, chip design, and new kinds of networks for the future of AI.

💡Human adaptability

Sam Altman expressed concern about how quickly society can adapt to new technology, especially AI. He spoke about the importance of human adaptability, how society adjusts to the rapid change AI brings, and what that adaptation means for the future structure of society.

💡Global innovation

The video raises the idea of global innovation. Sam Altman discusses how AI affects innovation worldwide and how to ensure every region has fair access to AI and can use it to innovate.

💡Renewable energy

Discussing future energy demand, Sam Altman pointed to the importance of renewable energy and predicted that nuclear fusion, or solar plus storage, could become the dominant source of electricity on Earth. This connects to how AI and technological progress can support sustainable development.

💡AI self-awareness

The video discusses whether AI systems can recognize and communicate their own uncertainty or unsafe behavior. Sam Altman believes it is very important to build AI systems capable of self-reflection and error recognition, because this bears on AI safety and reliability.

💡OpenAI's organizational structure

OpenAI's organizational structure is discussed, in particular its model in which a nonprofit owns a for-profit company. Sam Altman explains how the structure took shape gradually as the company grew and its needs changed, and emphasizes the role of mission and adaptability in how the organization has developed.

💡Geopolitical impact of AI

The video explores how artificial intelligence might affect the global political landscape and balance of power. Sam Altman expresses uncertainty about how AI will change international relations, while suggesting it may be more transformative than any other technology.

💡Misuse of AI

Sam Altman voices concern about AI being misused, especially around sensitive issues such as global conflicts and elections. He stresses the responsibility of industry, governments, and individuals to ensure the technology is not used for bad purposes.

💡AI self-improvement

The video touches on AI's capacity for self-improvement. Sam Altman stresses that AI systems become more capable every year, with no sign of that progress stopping, and discusses the profound implications of this growing capability for society and the economy.

Highlights

Sam Altman, co-founder and CEO of OpenAI, described OpenAI's mission: building general-purpose artificial intelligence that benefits all of humanity.

OpenAI launched the chatbot ChatGPT, which reached 100 million active users within two months of launch, making it the fastest-growing app in history.

Sam Altman has been named one of Time's 100 most influential people in the world and was Time's CEO of the Year in 2023.

Sam Altman emphasized that this may be the best time to start a company since at least the internet, and perhaps in the history of technology, because what you can do with AI gets more remarkable every year.

Sam Altman said that if he were 19 again, he would go work on AI research, because it is a field where he could have enormous impact.

Sam Altman spoke about his vision of building very large computers and AI systems, which he sees as a major milestone in the history of human technology.

OpenAI's business model and funding were discussed; Sam Altman said the company cares more about creating value for society than about near-term profits.

Sam Altman discussed the future of AI, predicting increasingly capable systems in the years ahead.

On the potential dangers of AI, Sam Altman said he worries more about subtle dangers we might overlook than about the catastrophic scenarios that are already widely discussed.

Sam Altman shared his views on the role AI might play in space exploration and colonization, suggesting robots may be better suited to working in the harsh environment of space.

Sam Altman discussed how to identify and pursue non-consensus ideas and stressed the importance of trusting your own intuition.

Sam Altman shared his views on AI's role in changing energy demand and his predictions for how energy will be generated in the future.

Sam Altman discussed OpenAI's organizational structure, including its nesting-doll model in which a nonprofit owns a for-profit company.

Sam Altman highlighted the resilience of OpenAI's team and its loyalty to the mission, which he sees as a key cultural strength behind the company's success.

Sam Altman discussed how AI could be misused in the near term and how industry, governments, and individuals can work together to deploy AI responsibly.

Sam Altman shared his views on how AI could change the world's political landscape and balance of power, though he offered no definitive answer.

Sam Altman discussed the importance of AI systems being able to recognize their own uncertainty and communicate it to the outside world.

Sam Altman expressed concern about the prospect of creating entities smarter than humans, but also sees it as a positive force that moves society forward.

Transcripts

00:01

[Music]

00:13

welcome to the entrepreneurial thought

00:15

leader seminar at Stanford

00:21

University this is the Stanford seminar

00:23

for aspiring entrepreneurs ETL is

00:25

brought to you by stvp the Stanford

00:27

entrepreneurship engineering center and

00:29

BASES The Business Association of

00:31

Stanford entrepreneurial students I'm

00:33

Ravi Belani a lecturer in the management

00:35

science and engineering department and

00:36

the director of Alchemist and

00:38

accelerator for Enterprise startups and

00:40

today I have the pleasure of welcoming

00:42

Sam Altman to ETL

00:50

um Sam is the co-founder and CEO of open

00:53

AI open is not a word I would use to

00:55

describe the seats in this class and so

00:57

I think by virtue of that that everybody

00:58

already probably knows open AI but for those

01:00

who don't openai is the research and

01:02

deployment company behind ChatGPT DALL·E

01:05

and Sora um Sam's life is a pattern of

01:08

breaking boundaries and transcending

01:10

what's possible both for himself and for

01:13

the world he grew up in the midwest in

01:15

St Louis came to Stanford took ETL as an

01:19

undergrad um for any and we we held on

01:22

to Stanford or Sam for two years he

01:24

studied computer science and then after

01:26

his sophomore year he joined the

01:27

inaugural class of Y combinator with a

01:29

Social Mobile app company called Loopt

01:32

um that then went on to go raise money

01:33

from Sequoia and others he then dropped

01:36

out of Stanford spent seven years on

01:38

Loopt which got acquired and then he

01:40

rejoined Y combinator in an operational

01:42

role he became the president of Y

01:44

combinator from 2014 to 2019 and then in

01:48

2015 he co-founded OpenAI as a

01:50

nonprofit research lab with the mission

01:52

to build general purpose artificial

01:54

intelligence that benefits all Humanity

01:57

OpenAI has set the record for the

01:58

fastest growing app in history with the

02:01

launch of ChatGPT which grew to 100

02:03

million active users just two months

02:05

after launch Sam was named one of

02:08

Time's 100 most influential people in

02:10

the world he was also named Time's CEO of

02:12

the year in 2023 and he was also most

02:15

recently added to Forbes list of the

02:17

world's billionaires um Sam lives with

02:19

his husband in San Francisco and splits

02:20

his time between San Francisco and Napa

02:22

and he's also a vegetarian and so with

02:24

that please join me in welcoming Sam

02:27

Altman to the stage

02:35

and in full disclosure that was a longer

02:36

introduction than Sam probably would

02:37

have liked um brevity is the soul of wit

02:40

um and so we'll try to make the

02:41

questions more concise but this is this

02:44

is this is also Sam's birth week it's it

02:47

was his birthday on Monday and I

02:49

mentioned that just because I think this

02:50

is an auspicious moment both in terms of

02:52

time you're 39 now and also place you're

02:55

at Stanford in ETL that I would be

02:57

remiss if this wasn't sort of a moment

02:59

of just some reflection and I'm curious

03:01

if you reflect back on when you were

03:03

half a life younger when you were 19 in

03:05

ETL um if there were three words to

03:08

describe what your felt sense was like

03:09

as a Stanford undergrad what would those

03:11

three words be it's always hard

03:13

questions

03:17

um I was like ex uh you want three words

03:20

only okay uh you can you can go more Sam

03:23

you're you're the king of brevity uh

03:25

excited optimistic and curious okay and

03:29

what would be your three words

03:30

now I guess the same which is terrific

03:33

so there's been a constant thread even

03:35

though the world has changed and you

03:37

know a lot has changed in the last 19

03:39

years but that's going to pale in

03:40

comparison what's going to happen in the

03:41

next 19 yeah and so I need to ask you

03:44

for your advice if you were a Stanford

03:46

undergrad today so if you had a Freaky

03:47

Friday moment tomorrow you wake up and

03:49

suddenly you're 19 in inside of Stanford

03:52

undergrad knowing everything you know

03:54

what would you do would you drop be very

03:55

happy um I would feel like I was like

03:58

coming of age at the luckiest time

04:00

um like in several centuries probably I

04:03

think the degree to which the world is

04:05

is going to change and the the

04:07

opportunity to impact that um starting a

04:10

company doing AI research any number of

04:13

things is is like quite remarkable I

04:15

think this is probably the best time to

04:20

start I yeah I think I would say this I

04:22

think this is probably the best time to

04:23

start a companies since uh the internet

04:25

at least and maybe kind of like in the

04:27

history of technology I think with what

04:29

you can do with AI is like going to just

04:33

get more remarkable every year and the

04:35

greatest companies get created at times

04:38

like this the most impactful new

04:40

products get built at times like this so

04:43

um I would feel incredibly lucky uh and

04:46

I would be determined to make the most

04:47

of it and I would go figure out like

04:50

where I wanted to contribute and do it

04:52

and do you have a bias on where would

04:53

you contribute would you want to stay as

04:55

a student um would and if so would you

04:56

major in a certain major giving the pace

04:58

of of change probably I would not stay

05:01

as a student but only cuz like I didn't

05:04

and I think it's like reasonable to

05:05

assume people kind of are going to make

05:06

the same decisions they would make again

05:09

um I think staying as a student is a

05:11

perfectly good thing to do I just I it

05:13

would probably not be what I would have

05:15

picked no this is you this is you so you

05:17

have the Freaky Friday moment it's you

05:18

you're reborn and as a 19-year-old and

05:20

would you

05:22

yeah what I think I would again like I

05:25

think this is not a surprise cuz people

05:27

kind of are going to do what they're

05:28

going to do I think I would go work on

05:31

research and and and where might you do

05:33

that Sam I think I mean obviously I have

05:36

a bias towards open eye but I think

05:37

anywhere I could like do meaningful AI

05:39

research I would be like very thrilled

05:40

about but you'd be agnostic if that's

05:42

Academia or Private Industry

05:46

um I say this with sadness I think I

05:48

would pick

05:50

industry realistically um I think it's I

05:53

think to you kind of need to be the

05:55

place with so much compute M MH okay and

05:59

um if you did join um on the research

06:02

side would you join so we had kazer here

06:04

last week who was a big advocate of not

06:06

being a Founder but actually joining an

06:08

existing companies sort of learn learn

06:09

the chops for the for the students that

06:11

are wrestling with should I start a

06:13

company now at 19 or 20 or should I go

06:15

join another entrepreneurial either

06:17

research lab or Venture what advice

06:19

would you give them well since he gave

06:22

the case to join a company I'll give the

06:24

other one um which is I think you learn

06:28

a lot just starting a company and if

06:29

that's something you want to do at some

06:30

point there's this thing Paul Graham

06:32

says but I think it's like very deeply

06:34

true there's no pre-startup like there

06:36

is Premed you kind of just learn how to

06:38

run a startup by running a startup and

06:40

if if that's what you're pretty sure you

06:42

want to do you may as well jump in and

06:43

do it and so let's say so if somebody

06:45

wants to start a company they want to be

06:46

in AI um what do you think are the

06:48

biggest near-term challenges that you're

06:52

seeing in AI that are the ripest for a

06:54

startup and just to scope that what I

06:56

mean by that are what are the holes that

06:58

you think are the top priority needs for

07:00

open AI that open AI will not solve in

07:03

the next three years um yeah

07:08

so I think this is like a very

07:10

reasonable question to ask in some sense

07:13

but I think it's I'm not going to answer

07:15

it because I think you should

07:19

never take this kind of advice about

07:21

what startup to start ever from anyone

07:24

um I think by the time there's something

07:26

that is like the kind of thing that's

07:29

obvious enough that me or somebody else

07:31

will sit up here and say it it's

07:33

probably like not that great of a

07:34

startup idea and I totally understand

07:37

the impulse and I remember when I was

07:38

just like asking people like what

07:39

startup should I start

07:42

um but I I think like one of the most

07:46

important things I believe about having

07:48

an impactful career is you have to chart

07:50

your own course if if the thing that

07:53

you're thinking about is something that

07:54

someone else is going to do anyway or

07:57

more likely something that a lot of

07:58

people are going to do anyway

08:00

um you should be like somewhat skeptical

08:01

of that and I think a really good muscle

08:04

to build is coming up with the ideas

08:07

that are not the obvious ones to say so

08:09

I don't know what the really important

08:12

idea is that I'm not thinking of right

08:13

now but I'm very sure someone in this

08:15

room does it knows what that answer is

08:18

um and I think learning to trust

08:21

yourself and come up with your own ideas

08:24

and do the very like non-consensus

08:26

things like when we started open AI that

08:27

was an extremely non-consensus thing to

08:30

do and now it's like the very obvious

08:31

thing to do um now I only have the

08:34

obvious ideas CU I'm just like stuck in

08:36

this one frame but I'm sure you all have

08:38

the other

08:38

ones but are there so can I ask it

08:41

another way and I don't know if this is

08:42

fair or not but are what questions then

08:44

are you wrestling with that no one else

08:47

is talking

08:49

about how to build really big computers

08:51

I mean I think other people are talking

08:52

about that but we're probably like

08:54

looking at it through a lens that no one

08:56

else is quite imagining yet um

09:02

I mean we're we're definitely wrestling

09:05

with how we when we make not just like

09:09

grade school or middle schooler level

09:11

intelligence but like PhD level

09:12

intelligence and Beyond the best way to

09:14

put that into a product the best way to

09:16

have a positive impact with that on

09:19

society and people's lives we don't know

09:20

the answer to that yet so I think that's

09:22

like a pretty important thing to figure

09:23

out okay and can we continue on that

09:25

thread then of how to build really big

09:27

computers if that's really what's on

09:28

your mind can you share I know there's

09:30

been a lot of speculation and probably a

09:33

lot of hearsay too about um the

09:35

semiconductor Foundry Endeavor that you

09:38

are reportedly embarking on um can you

09:41

share what would make what what's the

09:43

vision what would make this different

09:45

than it's not just foundies although

09:47

that that's part of it it's like if if

09:50

you believe which we increasingly do at

09:52

this point that AI infrastructure is

09:55

going to be one of the most important

09:57

inputs to the Future this commodity that

09:58

everybody's going to want and that is

10:01

energy data centers chips chip design

10:04

new kinds of networks it's it's how we

10:06

look at that entire ecosystem um and how

10:09

we make a lot more of that and I don't

10:12

think it'll work to just look at one

10:13

piece or another but we we got to do the

10:15

whole thing okay so there's multiple big

10:18

problems yeah um I think like just this

10:21

is the Arc of human technological

10:25

history as we build bigger and more

10:26

complex systems and does it gross so you

10:29

know in terms of just like the compute

10:30

cost uh correct me if I'm wrong but chat

10:33

GPT-3 was I've heard it was $100 million

10:36

to do the model um and it was 100 175

10:41

billion parameters GPT-4 was cost $400

10:44

million with 10x the parameters it was

10:47

almost 4X the cost but 10x the

10:49

parameters correct me adjust me you know

10:52

it I I do know it but I won oh you can

10:54

you're invited to this is Stanford Sam

10:57

okay um uh but the the even if you don't

11:00

want to correct the actual numbers if

11:01

that's directionally correct um does the

11:05

cost do you think keep growing with each

11:07

subsequent yes and does it keep growing

11:12

multiplicatively uh probably I mean and

11:15

so the question then becomes how do we

11:18

how do you capitalize

11:20

that well look I I kind of think

11:26

that giving people really capable tools

11:30

and letting them figure out how they're

11:32

going to use this to build the future is

11:34

a super good thing to do and is super

11:36

valuable and I am super willing to bet

11:39

on the Ingenuity of you all and

11:42

everybody else in the world to figure

11:44

out what to do about this so there is

11:46

probably some more business-minded

11:48

person than me at open AI somewhere that

11:50

is worried about how much we're spending

11:52

um but I kind of

11:53

don't okay so that doesn't cross it so

11:55

you

11:56

know OpenAI is phenomenal ChatGPT is

11:59

phenomenal um everything else all the

12:01

other models are

12:02

phenomenal but you've burned $520

12:05

million of cash last year that doesn't

12:07

concern you in terms of thinking about

12:09

the economic model of how do you

12:11

actually where's going to be the

12:12

monetization source well first of all

12:14

that's nice of you to say but ChatGPT

12:16

is not phenomenal like ChatGPT is like

12:20

mildly embarrassing at best um GPT-4 is

12:24

the dumbest model any of you will ever

12:26

ever have to use again by a lot um but

12:29

you know it's like important to ship

12:31

early and often and we believe in

12:33

iterative deployment like if we go build

12:35

AGI in a basement and then you know the

12:38

world is like kind

12:40

of blissfully walking blindfolded along

12:44

um I don't think that's like I don't

12:46

think that makes us like very good

12:47

neighbors um so I think it's important

12:49

given what we believe is going to happen

12:51

to express our view about what we

12:52

believe is going to happen um but more

12:54

than that the way to do it is to put the

12:56

product in people's hands um

13:00

and let Society co-evolve with the

13:03

technology let Society tell us what it

13:06

collectively and people individually

13:08

want from the technology how to

13:09

productize this in a way that's going to

13:11

be useful um where the model works

13:13

really well where it doesn't work really

13:14

well um give our leaders and

13:17

institutions time to react um give

13:20

people time to figure out how to

13:21

integrate this into their lives to learn

13:23

how to use the tool um sure some of you

13:25

all like cheat on your homework with it

13:27

but some of you all probably do like

13:28

very amazing amazing wonderful things

13:29

with it too um and as each generation

13:32

goes on uh I think that will expand

13:38

and and that means that we ship

13:40

imperfect products um but we we have a

13:43

very tight feedback loop and we learn

13:45

and we get better um and it does kind of

13:49

suck to ship a product that you're

13:50

embarrassed about but it's much better

13:52

than the alternative um and in this case

13:54

in particular where I think we really

13:56

owe it to society to deploy iteratively

14:00

um one thing we've learned is that Ai

14:02

and surprise don't go well together

14:03

people don't want to be surprised people

14:05

want a gradual roll out and the ability

14:07

to influence these systems um that's how

14:10

we're going to do it and there may

14:13

be there could totally be things in the

14:15

future that would change where we' think

14:17

iterative deployment isn't such a good

14:19

strategy um but it does feel like the

14:24

current best approach that we have and I

14:26

think we've gained a lot um from from

14:29

doing this and you know hopefully s the

14:31

larger world has gained something too

14:34

whether we burn 500 million a year or 5

14:38

billion or 50 billion a year I don't

14:40

care I genuinely don't as long as we can

14:43

I think stay on a trajectory where

14:45

eventually we create way more value for

14:47

society than that and as long as we can

14:49

figure out a way to pay the bills like

14:51

we're making AGI it's going to be

14:52

expensive it's totally worth it and so

14:54

and so do you have a I hear you do you

14:56

have a vision in 2030 of what if I say

14:58

you crushed it Sam it's 2030 you crushed

15:01

it what does the world look like to

15:03

you

15:06

um you know maybe in some very important

15:08

ways not that different uh

15:12

like we will be back here there will be

15:15

like a new set of students we'll be

15:17

talking about how startups are really

15:19

important and technology is really cool

15:21

we'll have this new great tool in the

15:23

world it'll

15:25

feel it would feel amazing if we got to