Nvidia 2024 AI Event: Everything Revealed in 16 Minutes

CNET
18 Mar 2024 · 16:00

Summary

TL;DR: The transcript introduces Blackwell, a revolutionary computing platform with 208 billion transistors, designed for the generative AI era. It features a unique architecture that allows two dies to function as one, with 10 terabytes per second of data transfer between them. The platform is set to transform AI computing with its memory-coherent design and content token generation in the new FP4 format. NVIDIA's partnerships with major companies like AWS, Google, and Microsoft are highlighted, emphasizing the development of AI factories and digital twins for various industries. The transcript also discusses the Jetson Thor robotics chip and the potential of AI in robotics, exemplified by Disney's BDX robots.

Takeaways

  • 🚀 **Blackwell Platform Introduction**: Blackwell is a revolutionary computing platform that is not just a GPU, but a significant advancement in chip technology, marking a new era for AI and computing.
  • 🔗 **Chip Integration**: The Blackwell chip features a unique design where two dies are integrated so seamlessly that they operate as one, with 10 terabytes per second of data transfer between them, eliminating memory locality and cache issues.
  • 🌐 **Compatibility with Hopper**: Blackwell is designed to be form-fit function compatible with Hopper, allowing for a seamless transition from one system to another, which is crucial given the widespread installations of Hoppers globally.
  • 💡 **Memory Coherence**: One of the key features of the Blackwell chip is memory coherence, which allows multiple computing units to work together as if they were a single entity, enhancing efficiency and performance.
  • 🌟 **Content Token Generation**: A significant part of the Blackwell platform is its capability for content token generation, a format known as FP4, which is essential for the generative AI era.
  • 🔄 **NVLink Switch**: Nvidia introduced the NVLink switch chip with 50 billion transistors, almost the size of Hopper itself, featuring four NVLinks, each capable of 1.8 terabytes per second of data transfer, facilitating high-speed communication between GPUs.
  • 🔌 **System Design**: The Blackwell system design is groundbreaking, allowing for an exaflops AI system in a single rack, which is a testament to the power and efficiency of the platform.
  • 🤖 **AI and Robotics**: Nvidia is integrating AI and robotics through projects like the Jetson Thor robotics chip and Isaac Lab for training humanoid robots, showcasing the company's commitment to advancing AI-powered robotics.
  • 🛠 **AI Foundry Partnerships**: Nvidia AI Foundry is collaborating with major companies like SAP, Cohesity, Snowflake, and NetApp to build AI solutions, emphasizing the company's role as an AI foundry that helps other industries integrate AI into their operations.
  • 🌐 **Omniverse and Digital Twins**: Nvidia's Omniverse is a digital twin simulation engine that lets AI agents learn to navigate complex industrial spaces. The OVX computer that runs it is hosted in the Azure Cloud, highlighting the potential for increased productivity and accurate data exchange across departments.
  • 🎉 **Innovation and Future Prospects**: The script emphasizes Nvidia's continuous innovation in computing and AI, with the introduction of new technologies like Blackwell and the Jetson Thor chip, setting the stage for future advancements in AI and robotics.
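The FP4 takeaway above can be made concrete with a toy example. The sketch below quantizes floats to the common e2m1-style 4-bit grid (1 sign, 2 exponent, 1 mantissa bit); the grid and the per-tensor scaling scheme are illustrative assumptions, not Nvidia's actual FP4 implementation.

```python
# Illustrative 4-bit float (e2m1-style) quantization -- NOT Nvidia's FP4.
FP4_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]  # positive e2m1 values

def quantize_fp4(values):
    """Snap each float to the nearest representable FP4 magnitude,
    using a per-tensor scale so the largest magnitude lands on 6.0."""
    scale = max(abs(v) for v in values) / 6.0 or 1.0  # avoid zero scale
    out = []
    for v in values:
        mag = min(FP4_GRID, key=lambda g: abs(abs(v) / scale - g))
        out.append((mag if v >= 0 else -mag) * scale)
    return out, scale

weights = [0.8, -0.31, 0.1, -1.2]
quantized, scale = quantize_fp4(weights)  # each value snaps to the 4-bit grid
```

Sixteen representable values per number is what makes 4-bit token generation so cheap; the cost is the coarse rounding visible above (-0.31 snaps to -0.3).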

Q & A

  • What is the significance of the Blackwell platform mentioned in the transcript?

    -Blackwell is a revolutionary computing platform that changes the traditional concept of GPUs. It features 208 billion transistors and enables two dies to function as one chip, eliminating memory locality and cache issues, thus operating as a single, giant chip.

  • How does the Hopper-compatible version of Blackwell relate to existing Hopper installations?

    -One version of Blackwell is designed to be form-, fit-, and function-compatible with existing Hopper installations. This means that one can simply slide out an existing Hopper and push in Blackwell, which is an efficient process given the identical infrastructure, design, power requirements, and software.

  • What is the role of the NVLink switch chip in the Blackwell system?

    -The NVLink switch chip is an integral part of the Blackwell system, featuring 50 billion transistors. It allows every single GPU to communicate with every other GPU at full speed simultaneously, which is crucial for building high-performance, memory-coherent systems.
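The bandwidth figures quoted in the talk (10 TB/s between the two dies, four links at 1.8 TB/s each on the switch chip) can be put side by side with a little arithmetic. The aggregate below is a naive sum that ignores switch topology and protocol overhead.

```python
# Back-of-envelope comparison of the bandwidth figures quoted in the talk.
DIE_TO_DIE_TBPS = 10.0   # TB/s between Blackwell's two dies
LINK_TBPS = 1.8          # TB/s per link on the switch chip
LINKS_PER_SWITCH = 4     # links per switch chip

switch_aggregate_tbps = LINK_TBPS * LINKS_PER_SWITCH   # naive total per switch
on_package_advantage = DIE_TO_DIE_TBPS / LINK_TBPS     # die-to-die vs one link

print(f"Switch aggregate: {switch_aggregate_tbps:.1f} TB/s")
print(f"Die-to-die is {on_package_advantage:.1f}x one switch link")
```

The gap (roughly 5.6x) is why the two dies can behave as one chip while inter-GPU traffic still needs a dedicated switch fabric.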

  • How does Nvidia's partnership with companies like AWS, Google, and Oracle enhance the AI ecosystem?

    -Nvidia's partnerships with major tech companies like AWS, Google, and Oracle involve accelerating AI services, optimizing data processing, and integrating Nvidia's technologies into their platforms. These collaborations expand the reach of Nvidia's GPUs and AI capabilities, fostering innovation and efficiency across various industries.

  • What is the purpose of the Nvidia inference microservice (NIMS)?

    -The Nvidia inference microservice (NIMS) is designed to simplify the deployment and use of pre-trained AI models across Nvidia's extensive install base. It includes optimization for various GPU configurations and is accessible through easy-to-use APIs, allowing users to run AI models in various environments, from the cloud to their own data centers.
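As an illustration of the "easy-to-use APIs" mentioned above, here is a hypothetical sketch of how a client might talk to a locally deployed NIM. It assumes an OpenAI-style /v1/chat/completions interface; the base URL and model name are placeholders, not documented values.

```python
import json
import urllib.request

def build_chat_request(base_url, model, prompt):
    """Build an HTTP request for an OpenAI-style /v1/chat/completions
    endpoint (assumed interface; URL and model are placeholders)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("http://localhost:8000", "example-model", "Hello")
# urllib.request.urlopen(req) would send this to the hypothetical local NIM.
```

The same request shape would work whether the microservice runs in the cloud or in a private data center, which is the portability point the answer makes.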

  • How does the AI Foundry concept work in the context of Nvidia's collaborations?

    -The AI Foundry concept involves Nvidia working closely with industry partners to build and optimize AI solutions. Nvidia provides the infrastructure, software, and expertise to create tailored AI applications, much like a foundry manufactures chips. This collaborative approach enables companies to leverage Nvidia's AI capabilities to enhance their own products and services.

  • What is the role of Omniverse in Nvidia's vision for the future?

    -Omniverse serves as a digital twinning platform that represents the physical world in a virtual space, enabling AI agents and robots to learn and navigate complex environments. It is integral to Nvidia's strategy for advancing industries by allowing for sophisticated simulations and collaborative workflows that enhance productivity and innovation.

  • How does the Jetson Thor robotics chip contribute to the development of AI-powered robotics?

    -The Jetson Thor robotics chip is designed to provide the computational power needed for advanced AI-powered robotics. It enables robots to learn from human demonstrations and execute complex tasks, emulating human movements, which is crucial for the next generation of intelligent, interactive robots.

  • What is the significance of the Project Groot and how does it relate to humanoid robot learning?

    -Project Groot is a general-purpose foundation model for humanoid robot learning. It takes multimodal instructions and past interactions as input, producing the next action for the robot to execute. This project represents a significant step in the development of AI models that can assist robots in learning tasks and movements similar to humans, thus advancing the field of AI-powered robotics.
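The input/output contract described above (multimodal instructions plus past interactions in, next action out) can be sketched as a control loop. The Policy class below is a hypothetical stand-in for the foundation model, not the real Groot interface.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """Hypothetical stand-in for a Groot-style model: it conditions on
    the instruction, the current observation, and interaction history."""
    history: list = field(default_factory=list)

    def next_action(self, instruction, observation):
        # A real model would run inference here; we just record the
        # interaction and emit a placeholder action.
        self.history.append((instruction, observation))
        return {"action": f"step-{len(self.history)}"}

policy = Policy()
for t in range(3):                            # one simulated 3-step episode
    obs = {"camera": f"frame-{t}"}            # placeholder observation
    act = policy.next_action("pick up the cup", obs)
```

The point of the loop is that each action depends on the accumulated history, which is what lets the model refine behavior from a handful of demonstrations.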

  • How is the collaboration between Nvidia and Disney Research showcased in the transcript?

    -The collaboration between Nvidia and Disney Research is showcased through the BDX robots. These robots are powered by Nvidia's Jetson platform and were trained in Isaac Sim, demonstrating the practical application of Nvidia's AI and robotics technologies in creating interactive and intelligent robotic systems.

Outlines

00:00

🚀 Introduction to Blackwell and Its Impact on Computing

The paragraph introduces Blackwell, a revolutionary computing platform that redefines traditional GPU architecture. Blackwell contains 208 billion transistors and enables 10 terabytes per second of data transfer between its two dies, effectively making them act as one giant chip. The discussion extends to sliding Blackwell into existing Hopper systems, an efficient process because the infrastructure is identical. The paragraph also touches on the creation of a processor for the generative AI era, emphasizing the importance of content token generation in the FP4 format, and the development of an additional chip, the NVLink switch, with 50 billion transistors for high-speed inter-GPU communication. The speaker expresses the urgency of advancing computing technology, even though the current pace is already impressive.

05:00

🤖 Collaborations and AI Integration with Major Companies

This paragraph discusses the partnerships and collaborations with major tech companies to optimize and accelerate various aspects of their services and products. It mentions the integration of AI with Google's and GCP's initiatives, Oracle's involvement with Nvidia DGX Cloud, and Microsoft's wide-ranging partnership with Nvidia, including the acceleration of services in Microsoft Azure. The paragraph also highlights the Nvidia ecosystem's integration with Azure, including Nvidia Omniverse and Nvidia Healthcare. Furthermore, it introduces the Nvidia inference microservice, also known as NIMS, and explains its optimization for different GPU configurations and its availability for download. The speaker positions Nvidia as an AI Foundry, offering services similar to TSMC's role in chip manufacturing, and announces collaborations with companies like SAP, Cohesity, Snowflake, NetApp, and Dell to build AI factories and service co-pilots.

10:00

🌐 Nvidia's Omniverse and Robotics Initiatives

The paragraph focuses on Nvidia's Omniverse, a digital twin simulation engine for robotics, and the OVX computer that runs it, hosted on Azure Cloud. It emphasizes the productivity gains from connecting everything in a digital twin environment, where different departments operate on the same data. The announcement of Omniverse Cloud streaming to The Vision Pro is highlighted, which integrates various design tools into Omniverse. The paragraph also introduces Nvidia Project Groot, a foundation model for humanoid robot learning, and Isaac Lab, an application for training robots on Omniverse Isaac Sim. The new Jetson Thor robotics chips are mentioned, designed to power the future of AI-powered robotics, and the speaker shares excitement for the project 'General Robotics 003', showcasing the intersection of computer graphics, physics, and artificial intelligence.

15:02

🎉 Unveiling the Blackwell Chip and Future Outlook

In the final paragraph, the speaker unveils the Blackwell chip, describing it as an amazing processor and a marvel of system design. The paragraph reiterates the significance of Blackwell in the context of GPU evolution and positions it as a symbol of the future of computing. The speaker's enthusiasm for the technology is evident, and the paragraph concludes on a forward-looking note, emphasizing the transformative potential of Blackwell on the industry.

Keywords

💡Blackwell

Blackwell refers to the new platform introduced in the transcript, a significant technological advancement. It is described as having 208 billion transistors and handling 10 terabytes of data per second between its two dies (chips), bridging them so seamlessly that they operate as one giant chip. This innovation is a central theme of the video, showcasing a leap forward in computing capabilities.

💡GPUs

GPUs, or Graphics Processing Units, are specialized electronic chips that handle the complex tasks of rendering images and video for the computer graphics industry. In the context of the video, the speaker clarifies that while the company does make GPUs, the traditional form of these units has changed with the advent of the Blackwell platform. GPUs are integral to the discussion as they represent the core technology being advanced.

💡Hopper

Hopper is Nvidia's previous GPU architecture, referenced in the transcript as a revolutionary technology that changed the world. It serves as the benchmark against which Blackwell is compared, and the mention of a Hopper-compatible version underscores that the transition from Hopper to Blackwell is a significant, drop-in upgrade.

💡Memory Coherence

Memory coherence in computing refers to the consistency of data across multiple processing units. In the context of the video, it is highlighted as a key feature of the Blackwell platform, which allows the two sides of the chip to operate without any memory locality issues or cache problems. This means that data is synchronized across the chip, ensuring efficient and high-speed data processing.

💡NVLink Switch

The NVLink switch is introduced as an impressive component with 50 billion transistors, almost the size of Hopper by itself. It is equipped with four NVLinks, each capable of transferring data at 1.8 terabytes per second. This high-speed networking chip is designed to enable full-speed communication between GPUs, which is crucial for building powerful and efficient computing systems.

💡DGX

DGX is Nvidia's line of integrated AI supercomputing systems, designed for high-performance computing and deep learning applications. In the video, a DGX system is described as delivering an exaflops of AI computing in a single rack, placing it among the most powerful machines on the planet. The DGX system is used to demonstrate the capabilities of the Blackwell platform and its compatibility with existing high-performance computing infrastructure.

💡AI Foundry

AI Foundry is a term used in the video to describe a comprehensive service that Nvidia provides for building and deploying AI solutions. It encompasses a range of offerings, including NIMs (Nvidia inference microservices), the Nemo microservice, and DGX Cloud. The AI Foundry is likened to a manufacturing process for AI, where Nvidia takes big ideas and turns them into practical, optimized AI solutions for various industries.

💡Omniverse

Omniverse is a digital twin platform mentioned in the video, designed to simulate and represent the physical world in a digital space. This platform is used for various applications, including robotics, where it provides a virtual environment for AI agents and robots to learn and navigate. The Omniverse is hosted in the Azure Cloud, and its integration with other tools and systems is highlighted as a key aspect of future productivity and collaboration.

💡Jetson Thor

Jetson Thor is a robotics chip mentioned in the video, designed for the future of AI-powered robotics. It is part of the building blocks for the next generation of AI-driven robots, indicating a significant advancement in the capabilities and potential applications of robotics technology. The Jetson Thor chip is likely to be used in conjunction with software like Isaac Sim and Project Groot to enable robots to learn and perform tasks.

💡Digital Twin

A digital twin is a virtual representation of a physical entity or system. In the context of the video, it is used to describe the simulation engine that represents the world digitally, allowing for AI agents and robots to learn and navigate in a virtual environment before being deployed in the real world. Digital twins are crucial for training and evaluating AI systems in complex industrial spaces, as they provide a safe and controlled environment for testing.

Highlights

Arrival at a developers conference with a focus on science, algorithms, computer architecture, and mathematics.

Introduction of the Blackwell platform, which is a significant advancement in chip technology.

Hopper, the revolutionary chip that preceded Blackwell, changed the world of computing.

The Blackwell chip features a unique design where two dies are connected in a way that they function as one, with 10 terabytes of data transfer per second.

The Blackwell chip is form fit function compatible with Hopper, allowing for seamless integration into existing systems.

The development of a new processor for the generative AI era, emphasizing content token generation with a new format called FP4.

Computing speed is advancing at an incredible rate, yet the industry still seeks faster solutions.

Introduction of the NVLink switch chip with 50 billion transistors, capable of 1.8 terabytes per second of data transfer per link and integrated computation.

The creation of a system where every GPU can communicate with every other GPU at full speed simultaneously.

The unveiling of a new DGX system, an exaflops AI system in a single rack.

Partnerships with major companies like AWS, Google, and Microsoft to integrate and optimize AI services and systems.

Nvidia's role as an AI Foundry, providing comprehensive solutions for AI development and deployment.

Collaborations with SAP, ServiceNow, Cohesity, Snowflake, and NetApp to build AI-powered co-pilots and virtual assistants.

The importance of the Omniverse platform for creating digital twins and enabling AI agents to navigate complex industrial spaces.

The development of Isaac Lab, a robot learning application, and the new Jetson Thor robotics chips.

Nvidia's Project Groot, a general-purpose foundation model for humanoid robot learning.

The intersection of computer graphics, physics, and artificial intelligence at Nvidia.

Disney's BDX robots showcasing the capabilities of Jetson-powered AI in action.

Transcripts

00:01

I hope you realize this is not a concert. You have arrived at a developers conference. There will be a lot of science described: algorithms, computer architecture, mathematics. Blackwell is not a chip. Blackwell is the name of a platform. People think we make GPUs, and we do, but GPUs don't look the way they used to. This is Hopper. Hopper changed the world. This is Blackwell. It's okay, Hopper.

00:52

208 billion transistors. And so you could see, I can see, there's a small line between two dies. This is the first time two dies have abutted like this together in such a way that the two dies think it's one chip. There's 10 terabytes of data between it, 10 terabytes per second, so that these two sides of the Blackwell chip have no clue which side they're on. There's no memory locality issues, no cache issues. It's just one giant chip. And it goes into two types of systems.

01:29

The first one is form fit function compatible to Hopper, and so you slide out a Hopper and you push in Blackwell. That's the reason why one of the challenges of ramping is going to be so efficient. There are installations of Hoppers all over the world, and they could be, you know, the same infrastructure, same design; the power, the electricity, the thermals, the software, identical. Push it right back. And so this is a Hopper version for the current HGX configuration, and this is what the other one, the second one, looks like. Now, this is a prototype board. This is a fully functioning board, and I'll just be careful here. This right here is, I don't know, $10 billion.

02:21

The second one's five. It gets cheaper after that, so any customers in the audience, it's okay. The Grace CPU has a super fast chip-to-chip link. What's amazing is this computer is the first of its kind where this much computation, first of all, fits into this small of a place. Second, it's memory coherent. They feel like they're just one big happy family working on one application together. We created a processor for the generative AI era, and one of the most important parts of it is content token generation. We call it, this format, FP4. The rate at which we're advancing computing is insane, and it's still not fast enough, so we built another chip.

03:13

This chip is just an incredible chip. We call it the NVLink switch. It's 50 billion transistors. It's almost the size of Hopper all by itself. This switch chip has four NVLinks in it, each 1.8 terabytes per second, and it has computation in it, as I mentioned. What is this chip for? If we were to build such a chip, we can have every single GPU talk to every other GPU at full speed at the same time. You can build a system that looks like this.

04:03

Now, this system is kind of insane. This is one DGX. This is what a DGX looks like now. Just so you know, there are only a couple, two, three exaflops machines on the planet as we speak, and so this is an exaflops AI system in one single rack. I want to thank some partners that are joining us in this. AWS is gearing up for Blackwell. They're going to build the first GPU with secure AI. They're building out a 222-exaflops system. We're CUDA-accelerating SageMaker AI, we're CUDA-accelerating Bedrock AI. Amazon Robotics is working with us using Nvidia Omniverse and Isaac Sim. AWS Health has Nvidia Health integrated into it. So AWS has really leaned into accelerated computing.

05:00

Google is gearing up for Blackwell. GCP already has A100s, H100s, T4s, L4s, a whole fleet of Nvidia CUDA GPUs, and they recently announced the Gemma model that runs across all of it. We're working to optimize and accelerate every aspect of GCP: Dataproc for data processing, the data processing engine, JAX, XLA, Vertex AI, and MuJoCo for robotics. So we're working with Google and GCP across a whole bunch of initiatives. Oracle is gearing up for Blackwell. Oracle is a great partner of ours for Nvidia DGX Cloud, and we're also working together to accelerate something that's really important to a lot of companies: Oracle Database.

05:43

Microsoft is accelerating, and Microsoft is gearing up for Blackwell. Microsoft and Nvidia have a wide-ranging partnership. We're accelerating all kinds of services: when you chat, obviously, and the AI services that are in Microsoft Azure, it's very, very likely Nvidia's in the back doing the inference and the token generation. They built the largest Nvidia InfiniBand supercomputer, basically a digital twin of ours, or a physical twin of ours. We're bringing the Nvidia ecosystem to Azure: Nvidia DGX Cloud to Azure, Nvidia Omniverse is now hosted in Azure, Nvidia Healthcare is in Azure, and all of it is deeply integrated and deeply connected with Microsoft Fabric.

06:24

A NIM, it's a pre-trained model, so it's pretty clever, and it is packaged and optimized to run across Nvidia's install base, which is very, very large. What's inside it is incredible. You have all these pre-trained state-of-the-art open source models. They could be open source, they could be from one of our partners, or they could be created by us. It is packaged up with all of its dependencies: CUDA, the right version; cuDNN, the right version; TensorRT-LLM, distributing across the multiple GPUs; Triton Inference Server; all completely packaged together. It's optimized depending on whether you have a single GPU, multi-GPU, or multi-node of GPUs. It's optimized for that, and it's connected up with APIs that are simple to use.

07:14

These packages, incredible bodies of software, will be optimized and packaged, and we'll put it on a website, and you can download it, you could take it with you. You could run it in any cloud, you could run it in your own data center, you can run it in workstations if it fits. And all you have to do is come to ai.nvidia.com. We call it Nvidia inference microservice, but inside the company we all call it NIMs.

07:40

We have a service called Nemo microservice that helps you curate the data, preparing the data so that you could teach, onboard, this AI. You fine-tune them, and then you guardrail it. You can even evaluate the answer, evaluate its performance, against other examples. And so we are effectively an AI foundry. We will do for you and the industry on AI what TSMC does for us building chips. We go to TSMC with our big ideas, they manufacture, and we take it with us. Exactly the same thing here: AI Foundry, and the three pillars are the NIMs, Nemo microservice, and DGX Cloud.

08:21

We're announcing that Nvidia AI Foundry is working with some of the world's great companies. SAP generates 87% of the world's global commerce. Basically, the world runs on SAP; we run on SAP. Nvidia and SAP are building SAP Joule copilots using Nvidia Nemo and DGX Cloud. ServiceNow: 80, 85% of the world's Fortune 500 companies run their people and customer service operations on ServiceNow, and they're using Nvidia AI Foundry to build ServiceNow Assist virtual assistants.

08:55

Cohesity backs up the world's data. They're sitting on a gold mine of data: hundreds of exabytes of data, over 10,000 companies. Nvidia AI Foundry is working with them, helping them build their Gaia generative AI agent. Snowflake is a company that stores the world's digital warehouse in the cloud and serves over three billion queries a day for 10,000 enterprise customers. Snowflake is working with Nvidia AI Foundry to build copilots with Nvidia Nemo and NIMs. NetApp: nearly half of the files in the world are stored on prem on NetApp. Nvidia AI Foundry is helping them build chatbots and copilots, like those vector databases and retrievers, with Nvidia Nemo and NIMs.

09:47

And we have a great partnership with Dell. Everybody who is building these chatbots and generative AI, when you're ready to run it, you're going to need an AI factory, and nobody is better at building end-to-end systems of very large scale for the enterprise than Dell. And so anybody, any company, every company will need to build AI factories, and it turns out that Michael is here. He's happy to take your order.

10:17

We need a simulation engine that represents the world digitally for the robot, so that the robot has a gym to go learn how to be a robot. We call that virtual world Omniverse, and the computer that runs Omniverse is called OVX, and OVX, the computer itself, is hosted in the Azure cloud. The future of heavy industries starts as a digital twin. The AI agents helping robots, workers, and infrastructure navigate unpredictable events in complex industrial spaces will be built and evaluated first in sophisticated digital twins.

10:55

Once you connect everything together, it's insane how much productivity you can get, and it's just really, really wonderful. All of a sudden everybody's operating on the same ground truth. You don't have to exchange data and convert data, make mistakes. Everybody is working on the same ground truth, from the design department to the art department, the architecture department, all the way to the engineering and even the marketing department.

11:19

Today we're announcing that Omniverse Cloud streams to the Vision Pro, and it is very, very strange that you walk around virtual doors when I was getting out of that car, and everybody does it. It is really, really quite amazing. Vision Pro, connected to Omniverse, portals you into Omniverse, and because all of these CAD tools and all these different design tools are now integrated and connected to Omniverse, you can have this type of workflow. Really incredible.

12:02

This is Nvidia Project Groot, a general-purpose foundation model for humanoid robot learning. The Groot model takes multimodal instructions and past interactions as input and produces the next action for the robot to execute. We developed Isaac Lab, a robot learning application, to train Groot on Omniverse Isaac Sim, and we scale out with Osmo, a new compute orchestration service that coordinates workflows across DGX systems for training and OVX systems for simulation. The Groot model will enable a robot to learn from a handful of human demonstrations so it can help with everyday tasks, and emulate human movement just by observing us. All this incredible intelligence is powered by the new Jetson Thor robotics chips, designed for Groot, built for the future. With Isaac Lab, Osmo, and Groot, we're providing the building blocks for the next generation of AI-powered robotics.

13:18

[Applause] [Music]

13:30

About the same size. The soul of Nvidia: the intersection of computer graphics, physics, artificial intelligence. It all came to bear at this moment. The name of that project: General Robotics 003. I know, super good, super good. Well, I think we have some special guests, do we?

14:10

[Music] Hey guys. So I understand you guys are powered by Jetson. They're powered by Jetson, little Jetson robotics computers inside. They learned to walk in Isaac Sim. Ladies and gentlemen, this is Orange, and this is the famous Green. They are the BDX robots of Disney. Amazing, Disney Research.

14:51

Come on, you guys, let's wrap up. Let's go. Five things. Where are you going? I sit right here. Don't be afraid. Come here, Green. Hurry up. What are you saying? No, it's not time to eat. It's not time to eat. I'll give you a snack in a moment. Let me finish up real quick. Come on, Green. Hurry up. Stop wasting time.

15:34

This is what we announce to you today. This is Blackwell. This is the platform. Amazing, amazing processors, NVLink switches, networking systems, and the system design is a miracle. This is Blackwell, and this, to me, is what a GPU looks like in my mind.
