The Possibilities of AI [Entire Talk] - Sam Altman (OpenAI)

Stanford eCorner
1 May 2024 · 45:48

Summary

TLDR: At Stanford's Entrepreneurial Thought Leaders seminar, Sam Altman, co-founder and CEO of OpenAI, shared his views on AI development, entrepreneurship, and personal growth. Altman discussed his vision for OpenAI, including building artificial general intelligence (AGI) that benefits all of humanity. He also stressed the resilience and self-awareness that entrepreneurs and researchers need in an era of rapid technological change, touched on considerations of societal impact, and described how to strike a balance between innovation and responsibility.

Takeaways

  • 😀 Sam Altman is the co-founder and CEO of OpenAI, a research and deployment company founded as a nonprofit research lab, with the mission of building general-purpose artificial intelligence (AI) that benefits all of humanity.
  • 🌟 OpenAI launched the chatbot ChatGPT, which reached 100 million active users within two months of launch, making it the fastest-growing app in history.
  • 🏆 Sam Altman was named one of Time's 100 most influential people in the world and Time's CEO of the Year in 2023, and he has been added to the Forbes list of the world's billionaires.
  • 🎓 While studying computer science at Stanford, Sam Altman joined Y Combinator's inaugural batch and founded Loopt, a social mobile app company that was later acquired.
  • 🤖 Sam Altman believes this is the best time to start a company since at least the internet, and perhaps in the history of technology, because what you can do with AI gets more remarkable every year.
  • 🚀 If he were 19 again, Sam Altman would go into AI research, and would probably choose industry over academia, because industry is where the large amounts of compute are.
  • 💡 Sam Altman encourages people to pursue non-consensus ideas and to trust their own intuition and thought process, even when those ideas are not obvious at first.
  • 🔮 He stressed that although OpenAI's models such as ChatGPT are powerful, they still have plenty of room to improve, and OpenAI is committed to learning and doing better through iterative deployment.
  • 🌐 Sam Altman believes globally equitable access to AI technology matters, and that as models become more capable, we need to think more carefully about deploying them responsibly.
  • 🔑 He noted that as AI develops, we may need to rethink the social contract and pay attention to how society adapts to these changes, and to how we manage the rate of change.
  • 💡 On AGI, Sam Altman believes we will have dramatically more capable systems every year, but he cautions that building systems smarter than humans calls for more precise definitions and understanding.

Q & A

  • Who is Sam Altman, and what has he accomplished?

    -Sam Altman is the co-founder and CEO of OpenAI, a research and deployment company founded as a nonprofit research lab with the mission of building general-purpose artificial intelligence (AI) that benefits all of humanity. Sam studied computer science at Stanford and served as president of Y Combinator. He has been named one of Time's most influential people and is on the Forbes list of the world's billionaires.

  • What is the core idea behind OpenAI?

    -OpenAI's core idea is to build general-purpose artificial intelligence (AGI) that benefits all of humanity. OpenAI aims to push the frontier of AI research and to deploy its technology responsibly, ensuring a positive impact on society.

  • How did Sam Altman's time at Stanford shape him?

    -Sam Altman studied computer science at Stanford and took the Entrepreneurial Thought Leaders (ETL) seminar. That experience laid the groundwork for his path as a founder, especially once he joined Y Combinator and started Loopt.

  • How does Sam Altman view the current startup environment?

    -Sam Altman believes this is one of the best times ever to start a company, especially in AI. As AI advances, he expects a wave of innovative products and services, with perhaps even more opportunity than in the early days of the internet.

  • What advice does Sam Altman have for founders entering AI?

    -Sam Altman advises founders to pursue their own distinctive ideas rather than follow in others' footsteps. The most valuable innovation tends to come from non-consensus ideas, so founders should learn to trust their own intuition and dare to try ideas that are not yet widely accepted.

  • How does OpenAI balance R&D costs and social responsibility?

    -OpenAI balances R&D costs and social responsibility through iterative deployment and tight feedback loops. Sam Altman stresses that even though R&D is expensive, the spending is worth it if it ultimately creates far more value for society. OpenAI also offers some services for free to promote equitable access worldwide.

  • What are Sam Altman's predictions for the future of AGI?

    -Sam Altman expects AGI development to be gradual, with more capable systems arriving every year. He stresses that although AGI may bring enormous change, everyday life and work may remain relatively stable.

  • What measures has OpenAI taken on AI safety and ethics?

    -OpenAI takes a range of measures to keep AI safe and ethical, including red teaming, external audits, and close collaboration with society to ensure responsible deployment of the technology.

  • How does Sam Altman see AI affecting the global political landscape?

    -Sam Altman believes AI may change the global political landscape a great deal, though he does not offer a specific answer as to how. He stresses that AI will be a consequential global technology that could profoundly affect the balance of power between nations.

  • How does OpenAI handle the relationship between technological progress and social adaptation?

    -OpenAI believes technological progress should move in step with social adaptation. Sam Altman stresses that as models grow more capable, deployment needs to become more granular and iterative, with a tight feedback loop with society, so that the technology can be accepted and absorbed.

  • What advice does Sam Altman have for young founders?

    -Sam Altman encourages young founders to pursue their own distinctive ideas and to learn to trust their intuition. Founders should find their place in society and contribute to it, and show resilience and adaptability in the face of challenges.

Outlines

00:00

🎓 Opening of the Stanford Entrepreneurial Thought Leaders seminar

Stanford's Entrepreneurial Thought Leaders (ETL) seminar is brought to you by STVP, the Stanford entrepreneurship engineering center, and BASES, the Business Association of Stanford Entrepreneurial Students. Ravi Belani, a lecturer in the Management Science and Engineering department and director of the Alchemist Accelerator for enterprise startups, welcomed Sam Altman, co-founder and CEO of OpenAI. Sam Altman's record in technology and entrepreneurship includes his roles at Y Combinator and founding OpenAI, the research and deployment company behind ChatGPT and other breakthrough technologies. His life and career reflect a pattern of breaking boundaries and transcending what's possible.

05:02

🤖 In conversation with Sam Altman: OpenAI's vision and challenges

The conversation covered OpenAI's mission of building artificial general intelligence (AGI) that benefits all of humanity. Sam shared his outlook on the future of AI, including his expectations for AGI and the challenges OpenAI faces amid rapid progress. He stressed the importance of iteration and social responsibility in AI research and product development, and described how OpenAI empowers people to build the future by putting capable tools in their hands.

10:04

🚀 In conversation with Sam Altman: AI's future and opportunities

Sam Altman spoke about the enormous potential of AI and argued that this is the best time to start a company, especially in AI. He encouraged young founders and students to seize the moment and go into AI research and startups, and he shared how he thinks about identifying and pursuing non-consensus ideas, stressing independent thinking and innovation.

15:06

💡 OpenAI's innovation and social impact

Sam Altman discussed how OpenAI creates positive social impact through innovation, citing its rapid growth, notably ChatGPT reaching 100 million active users in a short time, and his own recognitions, including being named one of Time's most influential people. He stressed that alongside pushing the technology forward, OpenAI works to ensure that progress benefits all of humanity.

20:08

🌐 Global innovation and the challenge of AI infrastructure

Sam Altman discussed the challenges facing global innovation in AI, especially how to give people everywhere access to AI technology. He stressed the importance of equitable access, noting that OpenAI makes tools like ChatGPT available for free, and he spoke about the importance of countries building their own AI infrastructure and the role OpenAI might play in helping that happen.

25:09

🛰️ AI's potential role in space exploration and colonization

Sam Altman touched on AI's potential role in future space exploration and colonization. Since space is not hospitable to biological life, sending robots to explore and colonize may be the easier path. He is optimistic that AI will help humanity make greater progress in space.

30:11

💡 Startups, innovation, and the importance of non-consensus thinking

Sam Altman shared his views on startups and innovation, especially how to identify and pursue non-consensus ideas. He stressed finding the balance between consensus and non-consensus, the need for independent thinking and trust in one's own intuition, and how to work with like-minded people while still thinking for yourself.

35:13

🔋 The future of energy demand and the role of renewables

Sam Altman discussed future trends in energy demand and the role of renewable energy. He expects demand to grow and is optimistic about fusion eventually dominating electricity generation, with solar plus storage as the other candidate, stressing the importance of globally scalable, very cheap energy.
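
To put his grid numbers in rough perspective, the sketch below just computes the shares implied by his on-stage ballparks (roughly 3,000-4,000 gigawatts of world generating capacity, versus 100 or 1,000 gigawatts for AI). All figures here are his off-the-cuff guesses, not verified data.

# Shares of a hypothetical future grid, using Altman's ballpark numbers only.
for capacity_gw in (3000, 4000):
    for ai_gw in (100, 1000):
        share = ai_gw / (capacity_gw + ai_gw)
        print(f"{ai_gw} GW of AI on a {capacity_gw} GW grid -> {share:.0%} of the total")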

40:14

🤝 OpenAI's organizational structure and mission

Sam Altman explained OpenAI's unusual structure, in which a nonprofit owns a for-profit company. He described how the structure emerged gradually rather than by design, and how it has been adapted to fit OpenAI's mission and goals: pushing AI research forward and ensuring that AI's development benefits all of humanity. He noted that OpenAI keeps adapting to changing circumstances and will continue adjusting the structure in the future.

45:15

🌍 AGI's impact on geopolitics and the global balance of power

Sam Altman considered how AGI might reshape global politics and the balance of power. While he offered no specific answer, he acknowledged that AGI may change international relations and the global balance of power a great deal. He stressed the significance of AI technology and how it may shape the political landscape of the future.

🤖 AGI self-knowledge and responsible deployment

Sam Altman discussed the importance of AGI systems being able to recognize their own uncertainty and communicate it to the outside world. He believes building AI systems capable of introspection and of recognizing errors in their reasoning is very important, along with taking a responsible approach to deploying AGI, including ensuring the systems are used safely and responsibly.

👥 OpenAI's culture and team spirit

Sam Altman described OpenAI's culture and team spirit: close collaboration and loyalty to a shared mission. He spoke about the team's resilience under pressure, how they work together toward the company's long-term goals, and how a deep, shared sense of purpose builds a strong culture.

🚨 AI misuse and near-term challenges

Sam Altman discussed the near-term risk of AI misuse and how industry, governments, and individuals can respond together. He stressed that AI, like any powerful tool, has both upside and the risk of misuse, and he argued for tight feedback loops and a negotiated balance between safety and freedom, with society working together to steer AI toward human flourishing.

🎉 Sam Altman's birthday and a look ahead

On the occasion of his birthday, Sam Altman shared his outlook on the future, including both the hope and the fear around building AGI smarter than humans. He argued that while creating something smarter than ourselves is scary, it is part of how humanity progresses: society builds ever-smarter infrastructure that makes each generation more capable, and he encouraged people to view AI's development in that positive light.

Mindmap

Co-founder and CEO of OpenAI
Computer science major
Member of Y Combinator's inaugural batch
Alumnus of Stanford's ETL seminar
Social mobile app
Raised funding from Sequoia and others
Founder of Loopt
Served from 2014 to 2019
President of Y Combinator
Building general-purpose AI that benefits all of humanity
OpenAI's mission
One of Time's 100 most influential people
Time's 2023 CEO of the Year
Forbes list of the world's billionaires
Personal honors
Sam Altman
ChatGPT, DALL-E, Sora, and other projects
Research and deployment company
ChatGPT's record-setting user growth
Rapid growth
Cash burn of $520 million last year
Economic model
Data centers, chip design, and new kinds of networks
AI infrastructure
Cost and parameters of GPT-3 vs. GPT-4
Iterative deployment of AI models
AI technology development
OpenAI
The best time ever to start a company
AI's enormous potential
Startup timing
Encouraging young people to start companies
Choosing to build in AI
Valuing personal intuition and innovation
Startup advice
Finding points of innovation in AI
Avoiding obvious ideas
Challenges in AI
Startups and technology
Definition and timeline
Profound impact on society
Artificial general intelligence (AGI)
Subtle effects on everyday life
Impact on global geopolitics
AI's potential risks
Ensuring AI's positive impact
Balancing safety and freedom
Responsible AI deployment
AI's future and risks
Breadth and cross-domain ability
Optimism about technology
Sam Altman's personal traits
Needs for achievement, affiliation, and power
Drivers of leadership
Personal strengths and weaknesses
Self-drive and sense of mission
Self-awareness
Personal growth and self-knowledge
Shared goals and loyalty
Resilience under pressure
Sense of mission and team cohesion
A nonprofit combined with a for-profit
Adaptability and continuous improvement
Organizational structure
OpenAI's culture
Stanford Entrepreneurial Thought Leaders seminar

Keywords

💡Entrepreneurial Thought Leaders seminar (ETL)

ETL is Stanford's seminar for aspiring entrepreneurs, brought to you by STVP, the Stanford entrepreneurship engineering center, and BASES, the Business Association of Stanford Entrepreneurial Students. In the video, ETL is the venue for the talk by Sam Altman, co-founder and CEO of OpenAI, underscoring its role as a gathering place for entrepreneurs and new ideas.

💡OpenAI

OpenAI is a research and deployment company behind the well-known chatbot ChatGPT as well as DALL-E and Sora. As co-founder and CEO, Sam Altman is committed to building general-purpose artificial intelligence that benefits all of humanity. OpenAI's rapid growth and contributions to the technology are a central topic of the video.

💡Y Combinator

Y Combinator is a well-known startup accelerator. Sam Altman joined its inaugural batch while studying at Stanford and later became its president. It comes up in the video to highlight its place in the startup ecosystem and its role in Sam's career.

💡Artificial general intelligence (AGI)

AGI refers to AI that can perform any intellectual task. Sam Altman says OpenAI's mission is to build AGI. AGI sits at the core of the discussion, including its potential impact on future society and the economy.

💡Iterative deployment

Iterative deployment is the product development and release strategy Sam Altman describes: ship imperfect products early, then keep learning and improving through a tight feedback loop. OpenAI uses this approach in its product development, especially in rolling out AI models.

💡Compute cost

As AI models grow more complex, as with the releases of GPT-3 and GPT-4, compute costs rise sharply. The video discusses how this cost growth affects AI development and how to balance technical innovation with economic viability.
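
As a rough illustration of that growth, the sketch below fits a simple power law to the figures quoted on stage: roughly $100 million for GPT-3 at 175 billion parameters, and about $400 million for GPT-4 at roughly 10x the parameters. These numbers came from the interviewer and went uncorrected, so both they and the extrapolation are ballpark assumptions, not confirmed figures.

import math

# Interviewer's unconfirmed on-stage figures (Altman declined to correct them).
gpt3_cost, gpt4_cost = 100e6, 400e6  # training cost in dollars
param_ratio = 10                     # GPT-4 claimed to have ~10x GPT-3's parameters

# If cost scales as parameters**alpha, the implied exponent is:
alpha = math.log(gpt4_cost / gpt3_cost) / math.log(param_ratio)
print(f"implied scaling exponent: {alpha:.2f}")  # ~0.60

# Naive extrapolation to a hypothetical further 10x-parameter model:
next_cost = gpt4_cost * param_ratio ** alpha
print(f"extrapolated next-generation training cost: ${next_cost / 1e9:.1f}B")  # ~$1.6B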

💡Semiconductor fabs

Sam Altman's reported interest in semiconductor fabrication is mentioned, suggesting OpenAI may seek to control more of the AI infrastructure stack to support its research and products. The idea signals OpenAI's long-range thinking about the whole AI ecosystem.

💡Non-consensus thinking

Non-consensus thinking means holding views or ideas that differ from the crowd's. Sam Altman encourages founders to develop their own way of thinking and to look for ideas that are not obvious, stressing how important this is for startups and technical innovation.

💡AI ethics

The video discusses the ethics of AI, including its impact on society and how to deploy it responsibly. Sam Altman describes OpenAI's efforts to keep the technology from being misused and the importance of appropriate frameworks for setting the rules.

💡Technology infrastructure

Infrastructure comes up as a key enabler of AI progress. Sam Altman stresses the importance of data centers, chip design, and new kinds of networks to future advances in AI.

💡Human adaptability

Sam Altman voices concern about the rate at which society can adapt to new technology, especially AI. He discusses how society will absorb rapid AI-driven change, including working out a new social contract, and what that adaptation means for future social structures.

💡Global innovation

The video raises global innovation: Sam Altman discusses how AI will shape innovation worldwide and how to ensure that every region gets equitable access to AI technology to innovate with.

💡Renewable energy

Discussing future energy demand, Sam Altman notes the importance of renewable energy and predicts that fusion, or solar plus storage, will most likely become Earth's dominant source of electricity. This ties AI and technological progress to sustainable development.

💡AI self-awareness

The video discusses whether AI systems can recognize and communicate their own uncertainties or insecurities. Sam Altman believes building AI systems capable of introspection and of recognizing errors in their reasoning is important for safety and reliability.

💡OpenAI's organizational structure

OpenAI's structure, in which a nonprofit owns a for-profit company, is discussed. Sam Altman explains how the structure emerged gradually as the company grew and its needs changed, stressing the role of mission and adaptability in how the organization has evolved.

💡AI's geopolitical impact

The video explores how AI may affect the global political landscape and balance of power. Sam Altman expresses uncertainty about exactly how AI will change international relations, while suggesting it may be more transformative than any other technology.

💡AI misuse

Sam Altman raises concerns about AI being misused, especially around global conflicts and elections. He stresses the responsibility of industry, governments, and individuals to keep AI from being used for bad purposes.

💡AI's continual improvement

The video notes that AI systems are becoming dramatically more capable every year, with no sign of that progress stopping. Sam Altman discusses the profound implications of this growing capability for future society and the economy.

Highlights

Sam Altman, co-founder and CEO of OpenAI, described OpenAI's mission: building general-purpose artificial intelligence that benefits all of humanity.

OpenAI launched the chatbot ChatGPT, which reached 100 million active users within two months of launch, the fastest-growing app in history.

Sam Altman was named one of Time's most influential people and Time's 2023 CEO of the Year.

Sam Altman stressed that this may be the best time to start a company since the internet, and perhaps in the history of technology, because what's possible with AI gets more remarkable every year.

Sam Altman said that if he were 19 again he would go into AI research, because it is a field where one can have enormous impact.

Sam Altman described his focus on building very large computers and AI systems, which he sees as a major milestone in the arc of human technological history.

OpenAI's business model and funding were discussed; Sam Altman said they care more about creating value for society than about short-term profits.

Sam Altman discussed the future of AI, predicting dramatically more capable systems in the coming years.

On AI's potential dangers, Sam Altman said he worries more about the subtle harms we might overlook than about the catastrophic events that are already widely discussed.

Sam Altman shared his view of AI's possible role in space exploration and colonization, suggesting robots may be better suited to harsh space environments.

Sam Altman discussed how to identify and pursue non-consensus ideas, stressing the importance of trusting your own intuition.

Sam Altman shared his views on AI's role in changing energy demand and his predictions for how electricity will be generated in the future.

Sam Altman discussed OpenAI's organizational structure, including the Russian-doll arrangement in which a nonprofit owns a for-profit company.

Sam Altman highlighted the OpenAI team's resilience and loyalty to the mission, which he sees as the company's key cultural strength.

Sam Altman discussed the near-term risk of AI misuse and how industry, governments, and individuals can work together to deploy AI responsibly.

Sam Altman shared his thinking on how AI may change the world's political landscape and balance of power, though without offering a specific answer.

Sam Altman discussed the importance of AI systems being able to recognize their own uncertainty and communicate it to the outside world.

Sam Altman acknowledged that the prospect of creating something smarter than humans scares him, while also seeing it as a positive force that moves society forward.

Transcripts

00:01

[Music]

00:13

welcome to the entrepreneurial thought

00:15

leader seminar at Stanford

00:21

University this is the Stanford seminar

00:23

for aspiring entrepreneurs ETL is

00:25

brought to you by stvp the Stanford

00:27

entrepreneurship engineering center and

00:29

basis The Business Association of

00:31

Stanford entrepreneurial students I'm

00:33

Ravi Belani a lecturer in the management

00:35

science and engineering department and

00:36

the director of Alchemist an

00:38

accelerator for Enterprise startups and

00:40

today I have the pleasure of welcoming

00:42

Sam Altman to ETL

00:50

um Sam is the co-founder and CEO of open

00:53

AI open is not a word I would use to

00:55

describe the seats in this class and so

00:57

I think by virtue of that that everybody

00:58

already probably knows open AI but for those

01:00

who don't openai is the research and

01:02

deployment company behind ChatGPT DALL-E

01:05

and Sora um Sam's life is a pattern of

01:08

breaking boundaries and transcending

01:10

what's possible both for himself and for

01:13

the world he grew up in the midwest in

01:15

St Louis came to Stanford took ETL as an

01:19

undergrad um for any and we we held on

01:22

to Stanford or Sam for two years he

01:24

studied computer science and then after

01:26

his sophomore year he joined the

01:27

inaugural class of Y combinator with a

01:29

Social Mobile app company called Loopt

01:32

um that then went on to go raise money

01:33

from Sequoia and others he then dropped

01:36

out of Stanford spent seven years on

01:38

Loopt which got acquired and then he

01:40

rejoined Y combinator in an operational

01:42

role he became the president of Y

01:44

combinator from 2014 to 2019 and then in

01:48

2015 he co-founded open AI as a

01:50

nonprofit research lab with the mission

01:52

to build general purpose artificial

01:54

intelligence that benefits all Humanity

01:57

open AI has set the record for the

01:58

fastest growing app in history with the

02:01

launch of ChatGPT which grew to 100

02:03

million active users just two months

02:05

after launch Sam was named one of

02:08

Time's 100 most influential people in

02:10

the world he was also named Time's CEO of

02:12

the year in 2023 and he was also most

02:15

recently added to Forbes list of the

02:17

world's billionaires um Sam lives with

02:19

his husband in San Francisco and splits

02:20

his time between San Francisco and Napa

02:22

and he's also a vegetarian and so with

02:24

that please join me in welcoming Sam

02:27

Altman to the stage

02:35

and in full disclosure that was a longer

02:36

introduction than Sam probably would

02:37

have liked um brevity is the soul of wit

02:40

um and so we'll try to make the

02:41

questions more concise but this is this

02:44

is this is also Sam's birth week it's it

02:47

was his birthday on Monday and I

02:49

mentioned that just because I think this

02:50

is an auspicious moment both in terms of

02:52

time you're 39 now and also place you're

02:55

at Stanford in ETL that I would be

02:57

remiss if this wasn't sort of a moment

02:59

of just some reflection and I'm curious

03:01

if you reflect back on when you were

03:03

half a lifee younger when you were 19 in

03:05

ETL um if there were three words to

03:08

describe what your felt sense was like

03:09

as a Stanford undergrad what would those

03:11

three words be it's always hard

03:13

questions

03:17

um I was like ex uh you want three words

03:20

only okay uh you can you can go more Sam

03:23

you're you're the king of brevity uh

03:25

excited optimistic and curious okay and

03:29

what would be your three words

03:30

now I guess the same which is terrific

03:33

so there's been a constant thread even

03:35

though the world has changed and you

03:37

know a lot has changed in the last 19

03:39

years but that's going to pale in

03:40

comparison what's going to happen in the

03:41

next 19 yeah and so I need to ask you

03:44

for your advice if you were a Stanford

03:46

undergrad today so if you had a Freaky

03:47

Friday moment tomorrow you wake up and

03:49

suddenly you're 19 in inside of Stanford

03:52

undergrad knowing everything you know

03:54

what would you do would you drop be very

03:55

happy um I would feel like I was like

03:58

coming of age at the luckiest time

04:00

um like in several centuries probably I

04:03

think the degree to which the world is

04:05

is going to change and the the

04:07

opportunity to impact that um starting a

04:10

company doing AI research any number of

04:13

things is is like quite remarkable I

04:15

think this is probably the best time to

04:20

start I yeah I think I would say this I

04:22

think this is probably the best time to

04:23

start a companies since uh the internet

04:25

at least and maybe kind of like in the

04:27

history of technology I think with what

04:29

you can do with AI is like going to just

04:33

get more remarkable every year and the

04:35

greatest companies get created at times

04:38

like this the most impactful new

04:40

products get built at times like this so

04:43

um I would feel incredibly lucky uh and

04:46

I would be determined to make the most

04:47

of it and I would go figure out like

04:50

where I wanted to contribute and do it

04:52

and do you have a bias on where would

04:53

you contribute would you want to stay as

04:55

a student um would and if so would you

04:56

major in a certain major giving the pace

04:58

of of change probably I would not stay

05:01

as a student but only cuz like I didn't

05:04

and I think it's like reasonable to

05:05

assume people kind of are going to make

05:06

the same decisions they would make again

05:09

um I think staying as a student is a

05:11

perfectly good thing to do I just I it

05:13

would probably not be what I would have

05:15

picked no this is you this is you so you

05:17

have the Freaky Friday moment it's you

05:18

you're reborn and as a 19-year-old and

05:20

would you

05:22

yeah what I think I would again like I

05:25

think this is not a surprise cuz people

05:27

kind of are going to do what they're

05:28

going to do I think I would go work on

05:31

research and and and where might you do

05:33

that Sam I think I mean obviously I have

05:36

a bias towards open eye but I think

05:37

anywhere I could like do meaningful AI

05:39

research I would be like very thrilled

05:40

about but you'd be agnostic if that's

05:42

Academia or Private Industry

05:46

um I say this with sadness I think I

05:48

would pick

05:50

industry realistically um I think it's I

05:53

think to you kind of need to be the

05:55

place with so much compute mm-hmm okay and

05:59

um if you did join um on the research

06:02

side would you join so we had kazer here

06:04

last week who was a big advocate of not

06:06

being a Founder but actually joining an

06:08

existing companies sort of learn learn

06:09

the chops for the for the students that

06:11

are wrestling with should I start a

06:13

company now at 19 or 20 or should I go

06:15

join another entrepreneurial either

06:17

research lab or Venture what advice

06:19

would you give them well since he gave

06:22

the case to join a company I'll give the

06:24

other one um which is I think you learn

06:28

a lot just starting a company and if

06:29

that's something you want to do at some

06:30

point there's this thing Paul Graham

06:32

says but I think it's like very deeply

06:34

true there's no pre-startup like there

06:36

is Premed you kind of just learn how to

06:38

run a startup by running a startup and

06:40

if if that's what you're pretty sure you

06:42

want to do you may as well jump in and

06:43

do it and so let's say so if somebody

06:45

wants to start a company they want to be

06:46

in AI um what do you think are the

06:48

biggest near-term challenges that you're

06:52

seeing in AI that are the ripest for a

06:54

startup and just to scope that what I

06:56

mean by that are what are the holes that

06:58

you think are the top priority needs for

07:00

open AI that open AI will not solve in

07:03

the next three years um yeah

07:08

so I think this is like a very

07:10

reasonable question to ask in some sense

07:13

but I think it's I'm not going to answer

07:15

it because I think you should

07:19

never take this kind of advice about

07:21

what startup to start ever from anyone

07:24

um I think by the time there's something

07:26

that is like the kind of thing that's

07:29

obvious enough that me or somebody else

07:31

will sit up here and say it it's

07:33

probably like not that great of a

07:34

startup idea and I totally understand

07:37

the impulse and I remember when I was

07:38

just like asking people like what

07:39

startup should I start

07:42

um but I I think like one of the most

07:46

important things I believe about having

07:48

an impactful career is you have to chart

07:50

your own course if if the thing that

07:53

you're thinking about is something that

07:54

someone else is going to do anyway or

07:57

more likely something that a lot of

07:58

people are going to do anyway

08:00

um you should be like somewhat skeptical

08:01

of that and I think a really good muscle

08:04

to build is coming up with the ideas

08:07

that are not the obvious ones to say so

08:09

I don't know what the really important

08:12

idea is that I'm not thinking of right

08:13

now but I'm very sure someone in this

08:15

room does it knows what that answer is

08:18

um and I think learning to trust

08:21

yourself and come up with your own ideas

08:24

and do the very like non-consensus

08:26

things like when we started open AI that

08:27

was an extremely non-consensus thing to

08:30

do and now it's like the very obvious

08:31

thing to do um now I only have the

08:34

obvious ideas cuz I'm just like stuck in

08:36

this one frame but I'm sure you all have

08:38

the other

08:38

ones but are there so can I ask it

08:41

another way and I don't know if this is

08:42

fair or not but are what questions then

08:44

are you wrestling with that no one else

08:47

is talking

08:49

about how to build really big computers

08:51

I mean I think other people are talking

08:52

about that but we're probably like

08:54

looking at it through a lens that no one

08:56

else is quite imagining yet um

09:02

I mean we're we're definitely wrestling

09:05

with how we when we make not just like

09:09

grade school or middle schooler level

09:11

intelligence but like PhD level

09:12

intelligence and Beyond the best way to

09:14

put that into a product the best way to

09:16

have a positive impact with that on

09:19

society and people's lives we don't know

09:20

the answer to that yet so I think that's

09:22

like a pretty important thing to figure

09:23

out okay and can we continue on that

09:25

thread then of how to build really big

09:27

computers if that's really what's on

09:28

your mind can you share I know there's

09:30

been a lot of speculation and probably a

09:33

lot of here say too about um the

09:35

semiconductor Foundry Endeavor that you

09:38

are reportedly embarking on um can you

09:41

share what would make what what's the

09:43

vision what would make this different

09:45

than it's not just foundies although

09:47

that that's part of it it's like if if

09:50

you believe which we increasingly do at

09:52

this point that AI infrastructure is

09:55

going to be one of the most important

09:57

inputs to the Future this commodity that

09:58

everybody's going to want and that is

10:01

energy data centers chips chip design

10:04

new kinds of networks it's it's how we

10:06

look at that entire ecosystem um and how

10:09

we make a lot more of that and I don't

10:12

think it'll work to just look at one

10:13

piece or another but we we got to do the

10:15

whole thing okay so there's multiple big

10:18

problems yeah um I think like just this

10:21

is the Arc of human technological

10:25

history as we build bigger and more

10:26

complex systems and does it gross so you

10:29

know in terms of just like the compute

10:30

cost uh correct me if I'm wrong but chat

10:33

GPT 3 was I've heard it was $100 million

10:36

to do the model um and it was 100 175

10:41

billion parameters GPT 4 cost $400

10:44

million with 10x the parameters it was

10:47

almost 4X the cost but 10x the

10:49

parameters correct me adjust me you know

10:52

it I I do know it but I won oh you can

10:54

you're invited to this is Stanford Sam

10:57

okay um uh but the the even if you don't

11:00

want to correct the actual numbers if

11:01

that's directionally correct um does the

11:05

cost do you think keep growing with each

11:07

subsequent yes and does it keep growing

11:12

multiplicatively uh probably I mean and

11:15

so the question then becomes how do we

11:18

how do you capitalize

11:20

that well look I I kind of think

11:26

that giving people really capable tools

11:30

and letting them figure out how they're

11:32

going to use this to build the future is

11:34

a super good thing to do and is super

11:36

valuable and I am super willing to bet

11:39

on the Ingenuity of you all and

11:42

everybody else in the world to figure

11:44

out what to do about this so there is

11:46

probably some more business-minded

11:48

person than me at open AI somewhere that

11:50

is worried about how much we're spending

11:52

um but I kind of

11:53

don't okay so that doesn't cross it so

11:55

you

11:56

know open AI is phenomenal chat GPT is

11:59

phenomenal um everything else all the

12:01

other models are

12:02

phenomenal it burned you've burned $520

12:05

million of cash last year that doesn't

12:07

concern you in terms of thinking about

12:09

the economic model of how do you

12:11

actually where's going to be the

12:12

monetization source well first of all

12:14

that's nice of you to say but chat GPT

12:16

is not phenomenal like chat GPT is like

12:20

mildly embarrassing at best um GPT-4 is

12:24

the dumbest model any of you will ever

12:26

ever have to use again by a lot um but

12:29

you know it's like important to ship

12:31

early and often and we believe in

12:33

iterative deployment like if we go build

12:35

AGI in a basement and then you know the

12:38

world is like kind

12:40

of blissfully walking blindfolded along

12:44

um I don't think that's like I don't

12:46

think that makes us like very good

12:47

neighbors um so I think it's important

12:49

given what we believe is going to happen

12:51

to express our view about what we

12:52

believe is going to happen um but more

12:54

than that the way to do it is to put the

12:56

product in people's hands um

13:00

and let Society co-evolve with the

13:03

technology let Society tell us what it

13:06

collectively and people individually

13:08

want from the technology how to

13:09

productize this in a way that's going to

13:11

be useful um where the model works

13:13

really well where it doesn't work really

13:14

well um give our leaders and

13:17

institutions time to react um give

13:20

people time to figure out how to

13:21

integrate this into their lives to learn

13:23

how to use the tool um sure some of you

13:25

all like cheat on your homework with it

13:27

but some of you all probably do like

13:28

very amazing amazing wonderful things

13:29

with it too um and as each generation

13:32

goes on uh I think that will expand

13:38

and and that means that we ship

13:40

imperfect products um but we we have a

13:43

very tight feedback loop and we learn

13:45

and we get better um and it does kind of

13:49

suck to ship a product that you're

13:50

embarrassed about but it's much better

13:52

than the alternative um and in this case

13:54

in particular where I think we really

13:56

owe it to society to deploy iteratively

14:00

um one thing we've learned is that Ai

14:02

and surprise don't go well together

14:03

people don't want to be surprised people

14:05

want a gradual roll out and the ability

14:07

to influence these systems um that's how

14:10

we're going to do it and there may

14:13

be there could totally be things in the

14:15

future that would change where we' think

14:17

iterative deployment isn't such a good

14:19

strategy um but it does feel like the

14:24

current best approach that we have and I

14:26

think we've gained a lot um from from

14:29

doing this and you know hopefully s the

14:31

larger world has gained something too

14:34

whether we burn 500 million a year or 5

14:38

billion or 50 billion a year I don't

14:40

care I genuinely don't as long as we can

14:43

I think stay on a trajectory where

14:45

eventually we create way more value for

14:47

society than that and as long as we can

14:49

figure out a way to pay the bills like

14:51

we're making AGI it's going to be

14:52

expensive it's totally worth it and so

14:54

and so do you have a I hear you do you

14:56

have a vision in 2030 of what if I say

14:58

you crushed it Sam it's 2030 you crushed

15:01

it what does the world look like to

15:03

you

15:06

um you know maybe in some very important

15:08

ways not that different uh

15:12

like we will be back here there will be

15:15

like a new set of students we'll be

15:17

talking about how startups are really

15:19

important and technology is really cool

15:21

we'll have this new great tool in the

15:23

world it'll

15:25

feel it would feel amazing if we got to

15:27

teleport forward six years today and

15:30

have this thing that was

15:31

like smarter than humans in many

15:34

subjects and could do these complicated

15:36

tasks for us and um you know like we

15:40

could have these like complicated

15:41

program written or this research done or

15:43

this business

15:44

started uh and yet like the Sun keeps

15:48

Rising the like people keep having their

15:50

human dramas life goes on so sort of

15:53

like super different in some sense that

15:55

we now have like abundant intelligence

15:58

at our fingertips

16:00

and then in some other sense like not

16:01

different at all okay and you mentioned

16:04

artificial general intellig AGI

16:05

artificial general intelligence and in

16:07

in a previous interview you you define

16:09

that as software that could mimic the

16:10

median competence of a or the competence

16:12

of a median human for tasks yeah um can

16:16

you give me is there time if you had to

16:18

do a best guess of when you think or

16:20

arrange you feel like that's going to

16:21

happen I think we need a more precise

16:23

definition of AGI for the timing

16:26

question um because at at this point

16:29

even with like the definition you just

16:30

gave which is a reasonable one there's

16:32

that's your I'm I'm I'm parroting back what

16:34

you um said in an interview well that's

16:36

good cuz I'm going to criticize myself

16:37

okay um it's it's it's it's too loose of

16:41

a definition there's too much room for

16:42

misinterpretation in there um to I think

16:45

be really useful or get at what people

16:47

really want like I kind of think what

16:50

people want to know when they say like

16:52

what's the timeline to AGI is like when

16:55

is the world going to be super different

16:57

when is the rate of change going to get

16:58

super high when is the way the economy

17:00

Works going to be really different like

17:01

when does my life change

17:05

and that for a bunch of reasons may be

17:08

very different than we think like I can

17:10

totally imagine a world where we build

17:13

PhD level intelligence in any area and

17:17

you know we can make researchers way

17:18

more productive maybe we can even do

17:20

some autonomous research and in some

17:22

sense

17:24

like that sounds like it should change

17:26

the world a lot and I can imagine that

17:28

we do that and then we can detect no

17:32

change in global GDP growth for like

17:34

years afterwards something like that um

17:37

which is very strange to think about and

17:38

it was not my original intuition of how

17:40

this was all going to go so I don't know

17:43

how to give a precise timeline of when

17:45

we get to the Milestone people care

17:46

about but when we get to systems that

17:49

are way more capable than we have right

17:52

now one year and every year after and

17:56

that I think is the important point so

17:57

I've given up on trying to give the AGI

17:59

timeline but I think every year for the

18:03

next many we have dramatically more

18:05

capable systems every year um I want to

18:07

ask about the dangers of of AGI um and

18:10

gang I know there's tons of questions

18:11

for Sam in a few moments I'll be turning

18:13

it up so start start thinking about your

18:15

questions um a big focus on Stanford

18:17

right now is ethics and um can we talk

18:20

about you know how you perceive the

18:21

dangers of AGI and specifically do you

18:24

think the biggest Danger from AGI is

18:26

going to come from a cataclysmic event

18:27

which you know makes all the papers or

18:29

is it going to be more subtle and

18:31

pernicious sort of like you know like

18:33

how everybody has ADD right now from you

18:35

know using TikTok um is it are you more

18:37

concerned about the subtle dangers or

18:39

the cataclysmic dangers um or neither

18:42

I'm more concerned about the subtle

18:43

dangers because I think we're more

18:45

likely to overlook those the cataclysmic

18:47

dangers uh a lot of people talk about

18:50

and a lot of people think about and I

18:52

don't want to minimize those I think

18:53

they're really serious and a real thing

18:57

um but I think we at least know to look

19:01

out for that and spend a lot of effort

19:03

um the example you gave of everybody

19:05

getting ADD from TikTok or whatever I

19:07

don't think we knew to look out for and

19:10

that that's a really hard the the

19:13

unknown unknowns are really hard and so

19:15

I'd worry more about those although I

19:16

worry about both and are they unknown

19:18

unknowns are there any that you can name

19:19

that you're particularly worried about

19:21

well then I would kind of they'd be

19:22

unknown unknown um you can

19:27

I I am am worried just about so so even

19:31

though I think in the short term things

19:32

change less than we think as with other

19:35

major Technologies in the long term I

19:37

think they change more than we think and

19:40

I am worried about what rate Society can

19:43

adapt to something so new and how long

19:47

it'll take us to figure out the new

19:48

social contract versus how long we get

19:50

to do it um I'm worried about that okay

19:54

um I'm going to I'm going to open up so

19:55

I want to ask you a question about one

19:56

of the key things that we're now trying

19:58

to in

19:59

into the curriculum as things change so

20:01

rapidly is resilience that's really good

20:04

and and you

20:05

know and the Cornerstone of resilience

20:08

uh is is self-awareness and so and I'm

20:11

wondering um if you feel that you're

20:14

pretty self-aware of your driving

20:16

motivations as you are embarking on this

20:19

journey so first of all I think um I

20:23

believe resilience can be taught uh I

20:25

believe it has long been one of the most

20:27

important life skills um and in the

20:29

future I think in the over the next

20:31

couple of decades I think resilience and

20:33

adaptability will be more important

20:36

than they've been in a very long time so uh I

20:39

think that's really great um on the

20:42

self-awareness

20:44

question I think I'm self aware but I

20:48

think like everybody thinks they're

20:50

self-aware and whether I am or not is

20:52

sort of like hard to say from the inside

20:54

and can I ask you sort of the questions

20:55

that we ask in our intro classes on self

20:57

awareness sure it's like the Peter Drucker

20:59

framework so what do you think your

21:01

greatest strengths are

21:04

Sam

21:07

uh I think I'm not great at many things

21:10

but I'm good at a lot of things and I

21:12

think breadth has become an underrated

21:15

thing in the world everyone gets like

21:17

hyper specialized so if you're good at

21:19

a lot of things you can seek connections

21:21

across them um I think you can then kind

21:25

of come up with the ideas that are

21:26

different than everybody else has or

21:28

that sort of experts in one area have

21:30

and what are your most dangerous

21:32

weaknesses

21:36

um most dangerous that's an interesting

21:39

framework for it

21:41

uh I think I have like a general bias to

21:45

be too Pro technology just cuz I'm

21:47

curious and I want to see where it goes

21:49

and I believe that technology is on the

21:50

whole a net good thing but I think that

21:54

is a worldview that has overall served

21:56

me and others well and thus got like a

21:58

lot of positive

22:00

reinforcement and is not always true and

22:03

when it's not been true has been like

22:05

pretty bad for a lot of people and then

22:07

Harvard psychologist David McClelland has

22:09

this framework that all leaders are

22:10

driven by one of three Primal needs a

22:13

need for affiliation which is a need to

22:15

be liked a need for achievement and a

22:17

need for power if you had to rank list

22:19

those what would be

22:22

yours I think at various times in my

22:24

career all of those I think there are these

22:26

like levels that people go through

22:29

um at this point I feel driven by like

22:32

wanting to do something useful and

22:34

interesting okay and I definitely had

22:37

like the money and the power and the

22:38

status phases okay and then where were

22:40

you when you most last felt most like

22:45

yourself I I

22:48

always and then one last question and

22:50

what are you most excited about with

22:51

chat GPT five that's coming out that uh

22:55

people

22:56

don't what are you what are you most

22:57

excited about with the future of chat GPT that

22:59

we're all going to see

23:01

uh I don't know yet um I I mean I this

23:05

this sounds like a cop out answer but I

23:07

think the most important thing about GPT-5

23:09

or whatever we call that is just that

23:11

it's going to be smarter and this sounds

23:13

like a Dodge but I think that's like

23:17

among the most remarkable facts in human

23:19

history that we can just do something

23:21

and we can say right now with a high

23:23

degree of scientific certainty GPT 5 is

23:25

going to be smarter than a lot smarter

23:26

than GPT 4 GPT 6 going to be a lot

23:28

smarter than GPT 5 and we are not near

23:30

the top of this curve and we kind of

23:32

know what know what to do and this is

23:34

not like it's going to get better in one

23:35

area this is not like we're going to you

23:37

know it's not that it's always going to

23:39

get better at this eval or this subject

23:41

or this modality it's just going to be

23:43

smarter in the general

23:45

sense and I think the gravity of that

23:48

statement is still like underrated okay

23:50

that's great Sam guys Sam is really here

23:52

for you he wants to answer your question

23:54

so we're going to open it up hello um

23:57

thank you so much for joining joining us

23:59

uh I'm a junior here at Stanford I sort

24:01

of wanted to talk to you about

24:02

responsible deployment of AGI so as as

24:05

you guys could continually inch closer

24:07

to that how do you plan to deploy that

24:10

responsibly AI uh at open AI uh you know

24:13

to prevent uh you know stifling human

24:15

Innovation and continue to Spur that so

24:19

I'm actually not worried at all about

24:20

stifling of human Innovation I I really

24:22

deeply believe that people will just

24:24

surprise us on the upside with better

24:26

tools I think all of history suggest

24:28

that if you give people more leverage

24:30

they do more amazing things and that's

24:32

kind of like we all get to benefit from

24:34

that that's just kind of great I am

24:37

though increasingly worried about how

24:39

we're going to do this all responsibly I

24:41

think as the models get more capable we

24:42

have a higher and higher bar we do a lot

24:44

of things like uh red teaming and

24:47

external Audits and I think those are

24:48

all really good but I think as the

24:51

models get more capable we'll have to

24:53

deploy even more iteratively have an

24:55

even tighter feedback loop on looking at

24:58

how they're used and where they work and

24:59

where they don't work and this this

25:01

world that we used to do where we can

25:02

release a major model update every

25:04

couple of years we probably have to find

25:07

ways to like increase the granularity on

25:09

that and deploy more iteratively than we

25:11

have in the past and it's not super

25:13

obvious to us yet how to do that but I

25:16

think that'll be key to responsible

25:17

deployment and also the way we kind of

25:21

have all of the stakeholders negotiate

25:24

what the rules of AI need to be uh

25:27

that's going to get more complex over

25:28

time too thank you next question where

25:32

here you mentioned before that there's a

25:34

growing need for larger and larger

25:36

computers and faster computers however

25:38

many parts of the world don't have the

25:40

infrastructure to build those data

25:41

centers or those large computers how do

25:44

you see um Global Innovation being

25:46

impacted by that so two parts to that

25:49

one

25:50

um no matter where the computers are

25:52

built I think Global and Equitable

25:56

access to use the computers for training

25:57

as well inference is super important um

26:01

one of the things that's like very core to

26:02

our mission is that we make chat GPT

26:05

available for free to as many people as

26:07

want to use it with the exception of

26:08

certain countries where we either can't

26:10

or don't for a good reason want to

26:12

operate um how we think about making

26:14

training compute more available to the

26:16

world is is uh going to become

26:18

increasingly important I I do think we

26:21

get to a world where we sort of think

26:23

about it as a human right to get access

26:24

to a certain amount of compute and we

26:26

got to figure out how to like distribute

26:28

that to people all around the world um

26:30

there's a second thing though which is I

26:32

think countries are going to

26:34

increasingly realize the importance of

26:36

having their own AI infrastructure and

26:38

we want to figure out a way and we're

26:40

now spending a lot of time traveling

26:41

around the world to build them in uh the

26:44

many countries that'll want to build

26:45

these and I hope we can play some small

26:47

role there in helping that happen terrific

26:50

thank

26:51

you U my question was what role do you

26:55

envision for AI in the future of like

26:57

space exploration or like

26:59

colonization um I think space is like

27:02

not that hospitable for biological life

27:05

obviously and so if we can send the

27:07

robots that seems

27:16

easier hey Sam so my question is for a

27:19

lot of the founders in the room and I'm

27:21

going to give you the question and then

27:23

I'm going to explain why I think it's

27:25

complicated um so my question is about

27:28

how you know an idea is

27:30

non-consensus and the reason I think

27:32

it's complicated is cu it's easy to

27:34

overthink um I think today even yourself

27:37

says AI is the place to start a company

27:40

I think that's pretty

27:42

consensus maybe rightfully so it's an

27:44

inflection point I think it's hard to

27:47

know if idea is non-consensus depending

27:50

on the group that you're talking about

27:52

the general public has a different view

27:54

of tech from The Tech Community and even

27:57

Tech Elites have a different point of

27:58

view from the tech community so I was

28:01

wondering how you verify that your idea

28:03

is non-consensus enough to

28:07

pursue um I mean first of all what you

28:11

really want is to be right being

28:13

contrarian and wrong still is wrong and

28:15

if you predicted like 17 out of the last

28:17

two recessions you probably were

28:20

contrarian for the two you got right

28:22

probably not even necessarily um but you

28:24

were wrong 15 other times and and

28:28

and so I think it's easy to get too

28:30

excited about being contrarian and and

28:33

again like the most important thing to

28:35

be right and the group is usually right

28:39

but where the most value is um is when

28:42

you are contrarian and

28:45

right

28:47

and and that doesn't always happen in

28:50

like sort of a zero one kind of way like

28:54

everybody in the room can agree that AI

28:57

is the right place to start the company

28:59

and if one person in the room figures

29:00

out the right company to start and then

29:02

successfully executes on that and

29:03

everybody else thinks ah that wasn't the

29:05

best thing you could do that's what

29:07

matters so it's okay to kind of like go

29:11

with conventional wisdom when it's right

29:13

and then find the area where you have

29:14

some unique Insight in terms of how to

29:17

do that um I do think surrounding

29:21

yourself with the right peer group is

29:23

really important and finding original

29:24

thinkers uh is important but there is

29:28

part of this where you kind of have to

29:30

do it Solo or at least part of it Solo

29:33

or with a few other people who are like

29:35

you know going to be your co-founders or

29:36

whatever

29:38

um and I think by the time you're too

29:41

far in the like how can I find the right

29:43

peer group you're somehow in the wrong

29:45

framework already um so like learning to

29:48

trust yourself and your own intuition

29:51

and your own thought process which gets

29:53

much easier over time no one no matter

29:55

what they said they say I think is like

29:57

truly great at this this when they're

29:58

just starting out you because like you

30:02

kind of just haven't built the muscle

30:03

and like all of your Social pressure and

30:07

all of like the evolutionary pressure

30:09

that produced you was against that so

30:11

it's it's something that like you get

30:12

better at over time and and and don't

30:15

hold yourself to too high of a standard

30:16

too early on

30:19

it Hi Sam um I'm curious to know what

30:22

your predictions are for how energy

30:24

demand will change in the coming decades

30:26

and how we achieve a future where

30:28

renewable energy sources are one cent per

30:29

kilowatt

30:31

hour

30:32

um I mean it will go up for sure well

30:36

not for sure you can come up with all

30:37

these weird ways in which

30:39

like these depressing futures where

30:42

it doesn't go up I would like it to go

30:43

up a lot I hope that we hold ourselves

30:46

to a high enough standard where it does

30:47

go up I I I forget exactly what the kind

30:50

of world's electrical gener generating

30:53

capacity is right now but let's say it's

30:54

like 3,000 4,000 gigawatts something like

30:57

that even if we add another 100 gigawatts

31:00

for AI it doesn't materially change it

31:02

that much but it changes it some and if

31:06

we start at a thousand gigawatts for AI someday it

31:08

does that's a material change but there

31:10

are a lot of other things that we want

31:11

to do and energy does seem to correlate

31:14

quite a lot with quality of life we can

31:16

deliver for people

31:18

um my guess is that Fusion eventually

31:21

dominates electrical generation on Earth

31:24

um I think it should be the cheapest

31:25

most abundant most reliable densest

31:27

source

31:28

I could could be wrong with that and it

31:30

could be solar Plus Storage um and you

31:33

know my guess most likely is it's going

31:35

to be 80/20 one way or the other and

31:37

there'll be some cases where one of

31:38

those is better than the other but uh

31:42

those kind of seem like the the two bets

31:43

for like really global scale one cent

31:46

per kilowatt hour

31:51

energy Hi Sam I have a question it's

31:54

about open AI what happened last

31:56

year so what's the lesson you learn cuz

31:59

you talk about resilience so what's the

32:01

lesson you learn from leaving that company

32:04

and now coming back and what what made

32:06

you coming back because Microsoft also

32:09

gave you offer like can you share more

32:11

um I mean the best lesson I learned was

32:14

that uh we had an incredible team that

32:17

totally could have run the company

32:18

without me and did did for a couple of

32:20

days

32:22

um and you never and also that the team

32:26

was super resilient like we knew that a

32:29

CRA some crazy things and probably more

32:31

crazy things will happen to us between

32:33

here and AGI um as different parts of

32:37

the world have stronger and stronger

32:40

emotional reactions and the stakes keep

32:41

ratcheting up and you know I thought

32:45

that the team would do well under a lot

32:46

of pressure but you never really know

32:49

until you get to run the experiment and

32:50

we got to run the experiment and I

32:52

learned that the team was super

32:54

resilient and like ready to kind of run

32:56

the company um in terms of why I came

32:59

back you know I originally when the so

33:02

it was like the next morning the board

33:04

called me and like what do you think

33:05

about coming back and I was like no um

33:07

I'm mad um

33:11

and and then I thought about it and I

33:13

realized just like how much I loved open

33:14

AI um how much I loved the people the C

33:17

the culture we had built uh the mission

33:19

and I kind of like wanted to finish it

33:21

all

33:23

together you you you emotionally I just

33:25

want to this is obviously a really

33:26

sensitive and one of one of oh it's it's

33:29

not but was I imagine that was okay well

33:32

then can we talk about the structure

33:33

about it because this Russian doll

33:35

structure of the open AI where you have

33:38

the nonprofit owning the for-profit um

33:40

you know when we're we're trying to

33:41

teach principle-driven entrepreneurship we got

33:43

here we got to the structure gradually

33:46

um it's not what I would go back and

33:47

pick if we could do it all over again

33:49

but we didn't think we were going to

33:50

have a product when we started we were

33:52

just going to be like a AI research lab

33:54

wasn't even clear we had no idea about a

33:56

language model or an API or chat GPT so

33:59

if if you're going to start a company

34:01

you got to have like some theory that

34:03

you're going to sell a product someday

34:04

and we didn't think we were going to we

34:06

didn't realize we're were going to need

34:07

so much money for compute we didn't

34:08

realize we were going to like have this

34:09

nice business um so what was your

34:11

intention when you started it we just

34:13

wanted to like push AI research forward

34:15

we thought that and I know this gets

34:17

back to motivations but that's the pure

34:18

motivation there's no motivation around

34:21

making money or or power I cannot

34:24

overstate how foreign of a concept like

34:28

I mean for you personally not for open

34:30

AI but you you weren't starting well I

34:32

had already made a lot of money so it

34:33

was not like a big I mean I I like I

34:36

don't want to like claim some like moral

34:38

Purity here it was just like that was

34:41

the driver of my life okay

34:44

because there's this so and the reason

34:46

why I'm asking is just you know when

34:47

we're teaching about principle driven

34:48

entrepreneurship here you can you can

34:49

understand principles inferred from

34:51

organizational structures when the

34:52

United States was set up the

34:54

architecture of governance is the

34:55

Constitution it's got three branches of

34:58

government all these checks and balances

35:00

and you can infer certain principles

35:02

that you know there's a skepticism on

35:04

centralizing power that you know things

35:06

will move slowly it's hard to get things

35:08

to change but it'll be very very

35:10

stable if you you know not to parot

35:13

Billy eish but if you look at the open

35:14

AI structure and you think what was that

35:16

made for um it's a you have a like your

35:18

near hundred billion dollar valuation

35:20

and you've got a very very limited board

35:22

that's a nonprofit board which is

35:24

supposed to look after it's it's its

35:26

fiduciary duties to the again it's not

35:28

what we would have done if we knew then

35:30

what we know now but you don't get to

35:31

like play Life In Reverse and you have

35:34

to just like adapt there's a mission we

35:36

really cared about we thought we thought

35:38

AI was going to be really important we

35:39

thought we had an algorithm that learned

35:42

we knew it got better with scale we

35:43

didn't know how predictably it got

35:44

better with scale and we wanted to push

35:46

on this we thought this was like going

35:47

to be a very important thing in human

35:50

history and we didn't get everything

35:52

right but we were right on the big stuff

35:54

and our mission hasn't changed and we've

35:56

adapted the structure as we go and will

35:57

adapt it more in the future um but you

36:00

know like you

36:04

don't like life is not a problem set um

36:08

you don't get to like solve everything

36:09

really nicely all at once it doesn't

36:11

work quite like it works in the

36:12

classroom as you're doing it and my

36:14

advice is just like trust yourself to

36:16

adapt as you go it'll be a little bit

36:18

messy but you can do it and I just asked

36:20

this because of the significance of open

36:21

AI um you have a you have a board which

36:23

is all supposed to be independent

36:25

financially so that they're making these

36:26

decisions as a nonprofit thinking about

36:29

the stakeholder their stakeholder that

36:30

they are fiduciary of isn't the

36:32

shareholders it's Humanity um

36:34

everybody's independent there's no

36:36

Financial incentive that anybody has

36:38

that's on the board including yourself

36:40

with open AI um well Greg was I okay

36:43

first of all I think making money is a

36:44

good thing I think capitalism is a good

36:46

thing um my co-founders on the board

36:48

have had uh financial interest and I've

36:50

never once seen them not take the

36:52

gravity of the mission seriously um but

36:56

you know we've put a structure in place

36:58

that we think is a way to get um

37:02

incentives aligned and I do believe

37:03

incentives are superpowers but I'm sure

37:06

we'll evolve it more over time and I

37:08

think that's good not bad and with open

37:09

AI the new fund you're not you don't get

37:11

any carry in that and you're not

37:12

following on investments onto those okay

37:15

okay okay thank you we can keep talking

37:16

about this I I I know you want to go

37:18

back to students I do too so we'll go

37:19

we'll keep we'll keep going to the

37:20

students how do you expect that AGI will

37:23

change geopolitics and the balance of

37:24

power in the world um like maybe more

37:29

than any

37:30

other technology um I don't I I think

37:34

about that so much and I have such a

37:37

hard time saying what it's actually

37:38

going to do um I or or maybe more

37:42

accurately I have such a hard time

37:44

saying what it won't do and we were

37:46

talking earlier about how it's like not

37:47

going to CH maybe it won't change

37:48

day-to-day life that much but the

37:50

balance of power in the world it feels

37:53

like it does change a lot but I don't

37:55

have a deep answer of exactly how

37:58

thanks so much um I was wondering sorry

38:02

I was wondering in the deployment of

38:03

like general intelligence and also

38:05

responsible AI how much do you think is

38:08

it necessary that AI systems are somehow

38:12

capable of recognizing their own

38:14

insecurities or like uncertainties and

38:16

actually communicating them to the

38:18

outside world I I always get nervous

38:21

anthropomorphizing AI too much because I

38:23

think it like can lead to a bunch of

38:25

weird oversights but if we say like how

38:28

much can AI recognize its own

38:31

flaws uh I think that's very important

38:34

to build and right now and the ability

38:38

to like recognize an error in reasoning

38:41

um and have some sort of like

38:43

introspection ability like that that

38:46

that seems to me like really important

38:47

to

38:51

pursue hey s thank you for giving us

38:54

some of your time today and coming to

38:55

speak from the outside looking in we we

38:57

all hear about the culture and together

38:59

togetherness of open AI in addition to

39:00

the intensity and speed of what you guys

39:02

work at clearly seen from chat GPT and

39:05

all your breakthroughs and also in when

39:07

you were temporarily removed from the

39:08

company by the board and how all the all

39:10

of your employees tweeted open AI is

39:11

nothing without its people what would

39:13

you say is the reason behind this is it

39:15

the binding mission to achieve AGI or

39:16

something even deeper what is pushing

39:18

the culture every

39:19

day I think it is the shared Mission um

39:22

I mean I think people like like each

39:23

other and we feel like we've you know

39:25

we're in the trenches together doing

39:26

this really hard thing um

39:30

but I think it really is like deep sense

39:33

of purpose and loyalty to the mission

39:36

and when you can create that I think it

39:39

is like the strongest force for Success

39:42

at any start at least that I've seen

39:43

among startups um and you know we try to

39:47

like select for that and people we hire

39:49

but even people who come in not really

39:51

believing that AGI is going to be such a

39:54

big deal and that getting it right is so

39:55

important tend to believe it after the

39:56

first three months or whatever and so

39:58

that's like that's a very powerful

40:00

cultural force that we have

40:03

thanks um currently there are a lot of

40:06

concerns about the misuse of AI in the

40:08

immediate term with issues like Global

40:10

conflicts and the election coming up

40:12

what do you think can be done by the

40:14

industry governments and honestly People

40:16

Like Us in the immediate term especially

40:18

with very strong open- Source

40:22

models one thing that I think is

40:25

important is not to pretend like this

40:27

technology or any other technology is

40:29

all good um I believe that AI will be

40:32

very net good tremendously net good um

40:36

but I think like with any other tool

40:40

um it'll be misused like you can do

40:43

great things with a hammer and you can

40:45

like kill people with a hammer um I

40:48

don't think that absolves us or you all

40:50

or Society from um trying to mitigate

40:55

the bad as much as we can and maximize

40:56

the good

40:58

but I do think it's important to realize

41:02

that with any sufficiently powerful Tool

41:06

uh you do put Power in the hands of tool

41:09

users or you make some decisions that

41:12

constrain what people in society can do

41:15

I think we have a voice in that I think

41:17

you all have a voice on that I think the

41:19

governments and our elected

41:20

representatives in Democratic process

41:21

processes have the loudest voice in

41:24

that but we're not going to get this

41:26

perfectly right like we Society are not

41:28

going to get this perfectly right

41:31

and a tight feedback loop I think is the

41:34

best way to get it closest to right um

41:37

and the way that that balance gets

41:39

negotiated of safety versus freedom and

41:42

autonomy um I think it's like worth

41:44

studying that with previous Technologies

41:47

and we'll do the best we can here we

41:49

Society will do the best we can

41:51

here um gang actually I've got to cut it

41:54

sorry I know um I'm wanting to be very

41:56

sensitive to time I know the the

41:58

interest far exceeds the time and the

42:00

love for Sam um Sam I know it is your

42:03

birthday I don't know if you can indulge

42:04

us because I know there's a lot of love

42:05

for you so I wonder if we can all just

42:07

sing Happy Birthday no no no please no

42:09

we want to make you very uncomfortable

42:11

one more question I'd much rather do one

42:13

more

42:14

question this is less interesting to you

42:17

thank you we can you can do one more

42:18

question

42:20

quickly day dear

42:23

Sam happy birthday to you

42:27

20 seconds of awkwardness is there a

42:29

burner question somebody who's got a

42:30

real burner and we only have 30 seconds

42:32

so make it

42:34

short um hi I wanted to ask if the

42:38

prospect of making something smarter

42:41

than any human could possibly be scares

42:44

you it of course does and I think it

42:47

would be like really weird and uh a bad

42:50

sign if it didn't scare me um humans

42:54

have gotten dramatically smarter and

42:56

more capable over time you are

42:59

dramatically more capable than your

43:02

great great grandparents and there's

43:05

almost no biological drift over that

43:07

period like sure you eat a little bit

43:08

better and you got better healthcare um

43:11

maybe you eat worse I don't know um but

43:14

that's not the main reason you're more

43:16

capable um you are more capable because

43:20

the infrastructure of

43:22

society is way smarter and way more

43:25

capable than any human and and through

43:27

that it made you Society people that

43:30

came before you um made you uh the

43:34

internet the iPhone a huge amount of

43:37

knowledge available at your fingertips

43:39

and you can do things that your

43:41

predecessors would find absolutely

43:44

breathtaking

43:47

um Society is far smarter than you now

43:50

um Society is an AGI as far as you can

43:52

tell and the

43:57

the way that that happened was not any

43:59

individual's brain but the space between

44:01

all of us that scaffolding that we build

44:03

up um and contribute to Brick by Brick

44:08

step by step uh and then we use to go to

44:11

far greater Heights for the people that

44:13

come after us um things that are smarter

44:16

than us will contribute to that same

44:18

scaffolding um you will

44:21

have your children will have tools

44:23

available that you didn't um and that

44:25

scaffolding will have gotten built up to

44:28

Greater Heights

44:32

and that's always a little bit scary um

44:35

but I think it's like more way more good

44:38

than bad and people will do better

44:40

things and solve more problems and the

44:42

people of the future will be able to use

44:45

these new tools and the new scaffolding

44:47

that these new tools contribute to um if

44:49

you think about a world that has um AI

44:54

making a bunch of scientific discovery

44:56

what happens to that scientific progress

44:58

is it just gets added to the scaffolding

45:00

and then your kids can do new things

45:02

with it or you in 10 years can do new

45:03

things with it um but the way it's going

45:07

to feel to people uh I think is not that

45:10

there is this like much smarter entity

45:14

uh because we're much smarter in some

45:17

sense than the great great great

45:19

grandparents are more capable at least

45:21

um but that any individual person can

45:23

just do

45:25

more on that we're going to end it so

45:27

let's give Sam a round of applause

45:35

[Music]