NVIDIA Is On a Different Planet

Gamers Nexus
21 Mar 2024, 31:44

Summary

TL;DR: Nvidia's GTC event unveiled the Blackwell GPU, emphasizing the company's shift from a gaming focus to AI and data center dominance. The Blackwell architecture, which combines two large chiplets into a single GPU, promises significant advancements in chip-to-chip communication and multi-chip modules. Nvidia also introduced NVLink and NVLink Switch improvements for data center connectivity and a new AI foundation model for humanoid robots, highlighting its commitment to pushing the boundaries of AI technology.

Takeaways

  • 🚀 Nvidia unveiled its Blackwell GPU at the GTC event, marking a significant advancement in AI and gaming technology.
  • 📈 Nvidia's growth in the AI market is impacting its consumer and gaming sectors, with the company now functioning as a major AI Foundry.
  • 🔗 The Blackwell GPU combines two large dies into a single GPU solution, improving chip-to-chip communication and reducing latency.
  • 🧠 Nvidia's focus on AI extends to humanoid robotics with Project Groot, showcasing a future where AI-powered robots could perform complex tasks.
  • 🤖 The introduction of Nvidia's Thor and its multimodal AI models like Groot indicates a shift towards AI integration in various industries.
  • 🌐 Nvidia's NVLink and NVLink Switch technologies aim to improve data center communication, with the new Blackwell GPUs offering increased bandwidth.
  • 💡 Nvidia's RAS engine is designed for proactive hardware health monitoring and maintenance, potentially reducing downtime in data centers.
  • 📊 Nvidia's NIM (Nvidia Inference Microservices) is a suite of pre-trained AI models for businesses, emphasizing data utilization and IP ownership.
  • 🔄 Multi-chip modules are highlighted as the future of high-end silicon, with Nvidia's Blackwell architecture being a notable example of this trend.
  • 🎮 Despite the technical focus, gaming was barely discussed during the event, though the impact of Nvidia's AI advancements on gaming is expected to be significant.
  • 🌐 Nvidia's market dominance is evident, with its AI and data center segments driving significant revenue and influencing the direction of the GPU market.

Q & A

  • What was the main focus of Nvidia's GTC event?

    -The main focus of Nvidia's GTC event was the unveiling of its Blackwell GPU and discussing its advancements in AI technology, multi-chip modules, and communication hardware solutions like NVLink and NVLink Switch.

  • How has Nvidia's position in the market changed over the years?

    -Nvidia has transitioned from being primarily a gaming company to a dominant player in the AI market, with its products now being used in some of the biggest ventures by companies like OpenAI, Google, and Amazon.

  • What is the significance of the Blackwell GPU for Nvidia?

    -The Blackwell GPU represents a significant technological leap for Nvidia, especially in AI workloads. It combines two large dies into a single GPU solution, offering improved chip-to-chip communication and potentially setting the stage for future consumer products.

  • What are the implications of Nvidia's advancements in chip-to-chip communication?

    -Improvements in chip-to-chip communication, such as those introduced with the Blackwell GPU, can lead to more efficient and high-performing multi-chip modules. This could result in better yields for fabrication, potentially lower costs, and the ability to handle larger data transfers crucial for AI and data center applications.
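
The yield argument can be made concrete with the standard Poisson defect-density model. The numbers below are purely illustrative assumptions, not Nvidia's or TSMC's actual figures:

```python
import math

def die_yield(area_cm2, defects_per_cm2):
    """Poisson model: probability a die has zero defects."""
    return math.exp(-defects_per_cm2 * area_cm2)

D0 = 0.2  # hypothetical defect density, defects per cm^2

# One big die vs. two chiplets of half the area (known-good-die pairing).
big_area, small_area = 8.0, 4.0
y_big = die_yield(big_area, D0)      # ~20% of big dies are defect-free
y_small = die_yield(small_area, D0)  # ~45% of half-size dies are defect-free

# Silicon spent per good product: with chiplets you only discard the
# bad half, not the whole die, so wasted wafer area shrinks.
cost_big = big_area / y_big                  # ~39.6 cm^2 per good die
cost_chiplets = 2 * (small_area / y_small)   # ~17.8 cm^2 per good pair
assert cost_chiplets < cost_big
```

Under this toy model, the two-chiplet approach needs less than half the wafer area per shippable package, which is the "better yields, potentially lower costs" claim in quantitative form.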

  • How does Nvidia's AI technology impact the gaming market?

    -While Nvidia has emphasized its AI capabilities, its advancements also have implications for the gaming market. The company's influence in game development and feature inclusion is significant, and its GPUs are often designed to support the latest gaming technologies.

  • What is the role of NVLink and NVLink Switch in Nvidia's announcements?

    -NVLink and NVLink Switch are communication solutions that Nvidia announced to improve the bandwidth and connectivity between GPUs. This is particularly important for data centers and multi-GPU deployments, where high-speed communication is essential for performance.
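
As a back-of-the-envelope illustration of why the link matters, here is a sketch comparing the 900 GB/s chip-to-chip NVLink figure quoted later in the video against nominal PCIe 5.0 x16 bandwidth (~64 GB/s); the 100 GB payload is an arbitrary example, not a figure from the video:

```python
# Rough time to move 100 GB of model weights between processors.
# 900 GB/s is the NVLink chip-to-chip figure cited in the video;
# 64 GB/s is the theoretical PCIe 5.0 x16 rate; 100 GB is arbitrary.
payload_gb = 100

nvlink_gbps = 900   # GB/s, NVLink chip-to-chip
pcie5_gbps = 64     # GB/s, PCIe 5.0 x16 (theoretical peak)

t_nvlink = payload_gb / nvlink_gbps   # ~0.11 s
t_pcie = payload_gb / pcie5_gbps      # ~1.56 s

print(f"NVLink: {t_nvlink:.2f} s, PCIe 5.0 x16: {t_pcie:.2f} s "
      f"({t_pcie / t_nvlink:.0f}x slower)")
```

Real transfers never hit theoretical peaks, but the order-of-magnitude gap is why inter-die and inter-GPU links, not compute, are framed as the limiting factor.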

  • What is Nvidia's strategy with its new inference platform, NIM?

    -NIM is a platform of pre-trained AI models designed for businesses to perform various tasks such as data processing, training, and retrieval. It is CUDA-based, meaning it can run on any platform with Nvidia GPUs, and allows businesses to retain full ownership and control over their intellectual property.

  • How do Nvidia's Project Groot and the Thor platform contribute to the development of humanoid robots?

    -Project Groot is a general-purpose foundation model for humanoid robots, and the Thor platform is designed to run multimodal AI models like Groot. Thor has a Blackwell-based GPU with 800 teraflops of FP8 performance and a built-in functional safety processor, making it suitable for AI-powered robotics applications.

  • What is the significance of the multi-chip module approach for the future of high-end silicon?

    -The multi-chip module approach is considered the future of high-end silicon as it allows for higher yields and potentially lower costs. It also enables better performance by overcoming limitations in communication links between different pieces of silicon, which is crucial for complex AI and data center applications.

  • How does Nvidia's market position affect its competitors, Intel and AMD?

    -Nvidia's dominant market position influences trends and game development, forcing competitors like Intel and AMD to keep up. Nvidia's substantial revenue from AI and data center segments gives it significant power in the GPU market, which can impact the pricing and development of consumer GPUs.

  • What is the potential impact of Nvidia reallocating assets from AI to gaming?

    -If Nvidia reallocates resources from its successful AI segment to gaming, it could further widen the gap between itself and its competitors in terms of performance, features, and market share. This could lead to Nvidia driving more trends and developments in game technology.

Outlines

00:00

🚀 Nvidia's GTC Event and the Unveiling of Blackwell GPU

The Nvidia GTC event showcased the unveiling of the Blackwell GPU, marking a significant advancement in GPU technology. The presentation highlighted Nvidia's shift from being primarily a gaming company to a major player in the AI market. The event emphasized the importance of multi-chip modules and the challenges of chip-to-chip communication, showcasing Nvidia's innovations in this area. The discussion also touched on the impact of Nvidia's growth on the consumer and gaming markets, and the company's role in the AI sector.

05:01

🤖 Advancements in AI and Multi-Chip Technologies

This paragraph delves into the technical aspects of Nvidia's advancements, particularly in AI and multi-chip technologies. It discusses the potential of multi-chip modules to increase yields and reduce costs for consumers, as well as the focus on improving chip-to-chip communication. The summary also mentions mainstream news coverage's garbled reference to a "B1000" (not an actual product name) as the expected first multi-die product, and the significance of Nvidia's Blackwell architecture. The paragraph highlights the coverage's claims about democratizing computing and the anticipation surrounding the impact of these technologies on both the enterprise and consumer markets.

10:02

🌐 Nvidia's Positioning in the AI and Data Center Markets

Nvidia's strategic positioning in the AI and data center markets is the focus of this paragraph. It discusses the company's branding as an AI foundry and the potential for its technology to influence consumer parts. The summary covers Nvidia's multi-chip module technology, the Blackwell GPU's impressive execution, and the implications for the future of high-end silicon. It also touches on the company's partnerships and the concept of digital twins, which are digital representations of real workspaces used for training robotic solutions.

15:03

🔍 Nvidia's Blackwell GPU and Its Impact on Software Development

The Blackwell GPU's impact on software development and its seamless integration as a single package solution is the central theme of this paragraph. The summary explains how Nvidia has worked to minimize the challenges of chip-to-chip communication, allowing the Blackwell GPU to behave like a monolithic silicon chip. It details the technical specifications of the Blackwell GPU, including its transistor count and memory bandwidth, and discusses the potential for the technology to be integrated into future consumer products.
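
The generation-over-generation jump described above can be checked with the figures quoted in the video itself (208 billion transistors from two 104-billion-transistor dies, versus 80 billion for the H100):

```python
# Generation-over-generation figures as quoted in the video.
h100_transistors = 80e9
blackwell_transistors = 2 * 104e9   # two 104B-transistor dies per package

blackwell_mem_gb = 192              # HBM3E capacity, per the video
blackwell_mem_bw_tbps = 8           # TB/s memory bandwidth, per the video

ratio = blackwell_transistors / h100_transistors
print(f"Blackwell packs {ratio:.1f}x the transistors of the H100 "
      f"({blackwell_transistors/1e9:.0f}B vs {h100_transistors/1e9:.0f}B)")
```

That 2.6x transistor jump is achieved without a reticle-size breakthrough precisely because the two dies present themselves to software as one GPU.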

20:06

🤖 AI and Robotics: Nvidia's Project Groot and NIM

This paragraph focuses on Nvidia's venture into AI-powered robotics with Project Groot and the introduction of NIM, a suite of pre-trained AI models for businesses. The summary covers the potential applications of humanoid robots in various industries and the cultural appeal of AI-driven robotics. It also discusses the capabilities of the Thor platform, which supports multimodal AI models and has a built-in safety processor. The paragraph highlights Nvidia's efforts to create a foundation model for humanoid robots that can understand human language and navigate the world.

25:08

💡 Market Dynamics and the Future of Consumer GPUs

The final paragraph discusses the market dynamics in the GPU industry and the potential future of consumer GPUs. The summary explores Nvidia's dominant position and its influence on game development and features. It also considers the roles of AMD and Intel in the market and their pursuit of AI technology. The discussion touches on the challenges of providing affordable entry points into the gaming market and the potential for multichip modules to become more prevalent in consumer GPUs.

Keywords

💡Nvidia

Nvidia is a multinational technology company known for its graphics processing units (GPUs) and artificial intelligence (AI) technologies. In the video, Nvidia is discussed in the context of its advancements in GPU technology, particularly with the unveiling of the Blackwell GPU at the GTC event, and its significant role in the AI market.

💡Blackwell GPU

The Blackwell GPU is a new graphics processing unit developed by Nvidia, which is expected to be a significant technological breakthrough. It is described as a multi-chip module that combines two large chiplets into a single GPU solution, offering improved performance for AI applications.

💡AI

Artificial Intelligence (AI) refers to the development of computer systems that can perform tasks typically requiring human intelligence, such as learning, reasoning, problem-solving, and perception. In the context of the video, AI is a key market for Nvidia, with the company's GPUs being used in various AI applications and the new Blackwell GPU being particularly aimed at AI workloads.

💡Multi-chip modules

Multi-chip modules (MCMs) are electronic circuits that consist of multiple chips or 'chiplets' integrated into a single functional unit. In the video, this technology is significant as it allows for the creation of powerful GPUs like the Blackwell, which combines two large chiplets to act as a single GPU, improving performance and efficiency.

💡Chiplets

Chiplets are smaller, modular semiconductor chips that can be combined to form a larger system on a chip (SoC). They are used to increase the yield, reduce costs, and improve the performance of integrated circuits. In the context of the video, chiplets are a crucial component of the Blackwell GPU's design, allowing for high-density integration and efficient communication between the chips.

💡PCI Express

PCI Express (Peripheral Component Interconnect Express) is a high-speed serial expansion bus standard for connecting a computer to one or more peripheral devices. It is used for various components such as graphics cards, SSDs, and networking cards. In the video, PCI Express is mentioned in the context of the technical specifications of Nvidia's GPUs.

💡NVLink and NVLink Switch

NVLink and NVLink Switch are Nvidia technologies related to high-speed communication links and switches between GPUs. These technologies are designed to increase the bandwidth and efficiency of data transfer between GPUs in multi-GPU configurations or within data centers.

💡Digital Twin

A digital twin is a virtual representation of a physical entity or system, used for simulation, analysis, and control. In the context of the video, digital twins are part of Nvidia's technology portfolio, used for training AI models, particularly in the domain of robotics and automation.

💡AI Foundry

An AI foundry is a term that refers to a company or facility that specializes in the design, development, and manufacturing of AI technologies, including hardware and software. In the video, Nvidia is described as branding itself as an AI foundry, indicating its focus on providing comprehensive AI solutions.

💡Project Groot

Project Groot is an initiative by Nvidia that involves the development of a general-purpose AI foundation model for humanoid robots. The project aims to create robots that can understand human language, sense, and navigate the world autonomously.

💡NIM

NIM (Nvidia Inference Microservices) are pre-trained AI models provided by Nvidia for businesses to perform various tasks such as data processing, training, and generation. These models are CUDA-based, meaning they can run on any platform where Nvidia GPUs are present, offering businesses the ability to retain full ownership and control over their intellectual property.

Highlights

Nvidia's GTC event unveiled the Blackwell GPU, marking a significant advancement in GPU technology.

Nvidia's shift from being primarily a gaming company to focusing on AI and data center markets has been profound.

The Blackwell GPU is Nvidia's first multi-die flagship, splitting a large design across two chiplets that act as one GPU.

Nvidia's growth to unfathomable heights in the AI market is impacting its consumer and gaming market behaviors.

The Blackwell architecture announcement was the main focus of Nvidia's GTC event, showcasing its capabilities and interconnects.

Nvidia's advancements in chip-to-chip communication, such as NVLink and NVLink Switch, are set to be crucial for future high-performance computing.

The Blackwell GPU combines two large dies into a single package, potentially offering a significant leap over the Hopper architecture for AI workloads.

Nvidia's project Groot introduces a general-purpose foundation model for humanoid robots, aiming to navigate and interact with the world autonomously.

The Nvidia Thor system, with a Blackwell-based GPU and 800 teraflops of FP8 capability, is designed to run multimodal AI models.

Nvidia's NIM offering is a selection of pre-trained AI models for businesses, allowing them to retain full ownership and control over their intellectual property.

Nvidia's RAS engine is designed for monitoring hardware health and identifying potential downtime before it happens.

The NVLink and NVLink Switch technologies from Nvidia aim to improve communication between GPUs within data centers.

Mainstream coverage framed Nvidia's AI push as "democratizing computing," a claim the hosts treated with skepticism.

The Blackwell GPU supports up to 192 GB of HBM3E memory, offering a significant increase in memory bandwidth for data-intensive tasks.

Nvidia's GTC event also covered the importance of digital twins in software, where companies use digital representations of real workspaces for training robotic solutions.

Nvidia's branding shift positions it as an AI foundry, with the expectation that its technology will influence consumer parts in the future.

The discussion covered Nvidia's Blackwell GPU and its implications for the future of high-end silicon and consumer multi-chip modules.

The video closes with commentary on the technology sphere and the absurdity of the current moment in the industry, particularly in relation to these advancements.

Transcripts

00:00

there's so many companies that would

00:01

like to build they're sitting on gold

00:03

mines gold mine gold mine it's a gold

00:05

mine gold mine gold mine and we've got

00:07

the

00:10

pickaxes nvidia's GTC event saw the

00:12

unveil of its Blackwell GPU and uh

00:15

generally speaking as far as Nvidia

00:16

presentations go this one was fairly

00:18

well put together there were still some

00:20

memeable quotes God I love

00:23

Nvidia if you take away all of my

00:26

friends it's okay

00:28

Hopper

00:34

you're you're very

00:36

good good good

00:39

boy well good girl PCI Express on the

00:44

bottom on on uh

00:48

your which one is M and your left one of

00:51

them it doesn't matter but M as a side

00:54

Jensen was absolutely right when he said

00:55

that Nvidia is not a gaming company

00:57

anymore and it's clear why companies

00:58

like OpenAI and Google and Amazon get a

01:01

little bit nervous when considering they

01:03

have functionally a sole GPU source for

01:05

some of their biggest Ventures that they

01:07

working on right now and Nvidia at this

01:09

point has grown to actually unfathomable

01:12

Heights it's insane to think that this

01:15

was basically a a largely gaming company

01:19

up until more recent years it always had

01:21

professional and workstation and data

01:22

center was growing but gaming was the

01:25

bulk of a lot of nvidia's revenue for a

01:28

long time and that's changed and it's

01:30

clear and how it performs in the AI

01:32

Market will impact how it behaves in the

01:35

consumer and the gaming markets but at

01:37

this point yeah we We Knew by the

01:40

numbers that Nvidia was gigantic it

01:42

didn't really sink in though until I

01:46

made myself sit through this from

01:48

mainstream news coverage Nvidia still is

01:50

a center of the universe uh huge

01:52

performance upgrade now and I had to

01:54

Google what a a petaflop was but

01:56

please please stop he'll timly

01:58

democratized Computing give code that

02:00

for or Java or python or whatever else

02:02

the vast majority of us never learned

02:04

making us all Hostage to the autocratic

02:06

computer class he's busting up that club

02:09

what but I think that what they a lot of

02:10

people are wanting to hear about is the

02:12

debut of What's called the

02:14

b1000 that's not that's not the name

02:17

that's it's not even the technical part

02:20

but it's expected to be the first what's

02:21

called a multi-dye product basically

02:24

larger Tech Designs put into really

02:26

small uh they're called chiplets sounds

02:29

really uh kind of cute in a way what you

02:33

said software they're also uh yeah

02:36

talking about Enterprise digital there

02:39

there was more than just the the

02:41

Blackwell

02:42

U that new technology that was uh

02:46

introduced wasn't there what else that's

02:47

right uh and actually guy's name is

02:50

David Harold Blackwell uh a

02:53

mathematician it wasn't uh Richard

02:56

Blackwell the fashionista but um just

02:59

just just shut up just please

03:03

shut like a host to a parasite gaming

03:05

has finally done something productive in

03:08

the eyes of the massive conglomerate of

03:10

non-technical media as they scramble to

03:13

tell everyone that bigger number better

03:16

and uh try to understand literally

03:19

anything about the stock they're pumping

03:20

they they don't understand what it is or

03:23

why it exists but they know that money

03:25

goes in and money comes out and so you

03:28

can speak to it in English and it would

03:30

directly generate USD you do have to

03:32

wonder though if the engineers watching

03:34

this who designed and developed all the

03:36

breakthroughs are pained by their work

03:39

being boiled down

03:41

into make investor more money now please

03:44

but the cause for all this as you would

03:45

expect was AI so we're be talking about

03:47

some of that today uh the technology is

03:49

really interesting that Nvidia discussed

03:51

some of the biggest takeaways for us

03:53

were advancements in uh chip to chip

03:56

communication multi-chip modules and

03:59

components like NVLink or NVLink

04:00

switch where uh the actual communication

04:04

link between the different pieces of

04:06

silicon starts become the biggest

04:08

limiting factor it already was but uh

04:10

that's going to be one of the main areas

04:12

and additionally we're going to be

04:13

spending a good amount of time on just

04:14

commentary because it's we live in an

04:17

absurd world right now and at least in

04:20

the technology sphere and it deserves

04:22

some some discussion some commentary

04:24

about that too so we'll space that

04:26

throughout and in the conclusion we'll

04:27

get more into uh our thoughts on it okay

04:30

let's get started before that this video

04:32

is brought to you by Thermaltake and

04:33

the tower 300 the tower 300 is a full-on

04:36

showcase PC case built to present the

04:38

computer straight on with its angled

04:40

tempered glass windows or on a unique

04:43

mounting stand to show off the build in

04:45

new ways the tower 300 has a layout that

04:47

positions the GPU fans against the mesh

04:49

panel with ventilation on the opposite

04:51

side for liquid coolers and CPUs there's

04:54

also two included 140 mm fans up top

04:56

the panels use a quick access tooless

04:58

system to be quickly popped in and out

05:00

for maintenance and you can learn more

05:02

at the link in the description below so

05:04

whether or not you're into all of the AI

05:06

discussion this is still an interesting

05:09

uh set of

05:10

technological breakthroughs or at least

05:12

just Technologies to talk about because

05:14

some of it will come into consumer

05:16

multi- chip modules are definitely the

05:18

future of large silicon uh making it

05:21

more higher yields for fabrication

05:23

hopefully lower cost that gets at least

05:25

partially passed to consumers but I have

05:26

some thoughts on that we'll talk about

05:28

later but generally speaking speak for

05:30

AI that's kind of what got all the buzz

05:32

and despite being a relatively technical

05:34

conference and relatively technically uh

05:36

dense keynote as far as Nvidia Keynotes

05:38

go they knew a lot of Financial and

05:41

investment firms and eyes were watching

05:43

this one and so there was some appeal to

05:46

some of the chaos that those

05:47

organizations like to observe making us

05:49

all Hostage to the autocratic computer

05:51

class moving on let's start with a

05:52

summary of the two hours of Nvidia

05:54

announcements is a lot less fluff this

05:55

time than they've typically had there

05:56

was still some fluff okay so 3,000 lb

05:59

ton and a half so it's not quite an

06:02

elephant four

06:08

elephants one

06:13

GPU and for our European viewers that's

06:16

actually an imperial measurement it's

06:17

pretty common here we use the weight of

06:19

elephants to compare things for example

06:21

one of our mod mats weighs about

06:24

0.00

06:27

4165 of an African bush elephant adult

06:31

as fast as possible pretending to

06:33

Blackwell Nvidia made these

06:34

announcements the Blackwell

06:35

architectural announcement took most of

06:36

the focus Nvidia discussed its two

06:38

largest possible dies acting as a

06:40

single GPU Nvidia showed the bring-up

06:42

board that Jensen only half jokingly uh

06:46

noted as being a $10 billion cost to build

06:48

for testing and relating to this it

06:50

spent some time on the various

06:52

interconnects and communication Hardware

06:54

required to facilitate chip-to-chip

06:56

transfer NVLink Switch is probably one

06:58

of the most important announcements of its

07:00

presentation its Quantum InfiniBand

07:02

switch was another of the communications

07:03

announcements a lot of time was also

07:05

spent on varying configurations of

07:07

Blackwell like multi-GPU deployments and

07:10

servers of varying sizes for data

07:11

centers DGX was another of those

07:14

discussed outside of these announcements

07:17

Nvidia showed some humanoid robotics

07:19

programs such as project Groot giving us

07:22

some Tony Stark Vibes including

07:24

showcasing these awfully cute robots

07:26

that were definitely intended to assuage

07:29

all concerns about the future downfall

07:31

of society from

07:33

Terminators five things where you

07:38

going I sit right

07:44

here Don't Be Afraid come here green

07:47

hurry

07:50

up what are you

07:53

saying as always the quickest way to get

07:55

people to accept something they are

07:56

frightened of is by making it cute and

07:59

now after watching that my biggest

08:01

concern is actually if they'll allow

08:03

green to keep its job it was stage

08:05

fright it's new it still learning

08:08

doesn't deserve to lose its job and its

08:11

livelihood over that one small flub on

08:14

stage and wait a

08:17

minute it's working and during all of

08:20

this robotics discussion they also spent

08:21

time speaking of various Partnerships

08:23

and digital twins software is a huge

08:26

aspect of this uh companies are using

08:28

digital representation of their real

08:30

workspaces to train robotic Solutions

08:32

and a modernized take on Automation and

08:36

the general theme was that Nvidia is

08:37

branding itself differently now and so

08:40

we are

08:41

effectively an AI Foundry but our

08:43

estimation is that this technology will

08:45

still work its way into consumer Parts

08:47

in some capacity or another multi-chip

08:50

solutions are clearly the future for

08:51

high-end silicon AMD has done well to

08:53

get there first in a big way but Nvidia

08:55

published its own multi-chip module

08:57

white paper many years ago and it's been

08:59

working on this for about as long

09:01

as AMD AMD went multi-chip with its

09:04

consumer GP products the RX 7000 series

09:06

Nvidia has now done the same with

09:08

Blackwell but with a more impressive

09:10

execution of the chip-to-chip

09:12

communication which is maybe made easier

09:14

by the fact that companies spend

09:15

millions of dollars on these looking at

09:17

the Blackwell silicon held up by Jensen

09:20

during the keynote despite obviously

09:22

limited sort of quality of footage at

09:24

this vantage point we think we can see

09:26

the split centrally located as described

09:28

with additional splits for the HBM you

09:31

can see those dividing lines in this

09:33

shot so now we're getting into recapping

09:34

the Blackwell Hardware side of things

09:36

Blackwell combines two of what Nvidia

09:38

calls the largest possible dies into

09:41

basically a single GPU solution or a

09:44

single package solution at least in

09:46

combination with on-package memory and uh Jensen

09:49

described Blackwell as being unaware of

09:53

its connection between the two separate

09:55

dies and this is sort of the most

09:57

important aspect of this because

10:00

as described at least on stage this

10:02

would imply that the Silicon would

10:05

behave like normal monolithic silicon

10:07

where it doesn't need special

10:09

programming considerations made by the

10:11

software developers by uh those using

10:13

the chip to work around things like

10:15

chip-to-chip communication uh chip-to-chip

10:18

latency like you see just for example in

10:20

the consumer world on Ryzen CPUs where

10:22

uh Crossing from one CCX to another

10:25

imposes all kinds of new challenges to

10:27

deal with and there's not really been a

10:29

great way to deal with those if you're

10:31

heavily affected by it other than trying

10:33

to isolate the work onto a single piece

10:35

of silicon in that specific example but

10:38

Nvidia says that it has worked around

10:40

this so the total combined solution is a

10:43

208 billion transistor GPU or more

10:47

accurately a combination of two pieces

10:49

of silicon that are each 104 billion

10:52

transistors for context the H100 has 80

10:55

billion transistors that's the previous

10:57

one but they're still selling it they

10:59

still have orders backlogged to fulfill

11:01

Nvidia had various claims of how many

11:03

X's better than previous Solutions the

11:06

new Blackwell solution would be with

11:08

those multipliers ranging depending on

11:10

how many gpus are used what Precision it

11:12

is if it's related to the power uh or

11:15

whatever but at the end of the day

11:16

Blackwell appears to be a big jump over

11:19

Hopper for AI workloads there was no

11:21

real mention of gaming but it's likely

11:23

that gaming gets a derivative somewhere

11:26

it's likely in the 50 Series we'll see

11:27

Blackwell unless Nvidia pulls a Volta

11:29

and skips but that seems unlikely given

11:32

the current rumors Blackwell supports up

11:34

to 192 GB of HBM3E memory or depending

11:38

on which slide you reference HMB3E

11:41

memory the good news is that that error

11:44

means that the slides are still done by

11:45

a human at Nvidia the bad news is that

11:49

we don't know how much longer they're

11:50

going to be done by a human at Nvidia

11:52

Blackwell has 8 terabytes per second of

11:54

memory bandwidth as defined in this

11:56

image and as for derivative

11:58

configurations or Alternatives of this

12:00

the Grace Blackwell combination uses a

12:03

Blackwell GPU solution and Grace CPUs

12:06

which is an Nvidia arm collaboration

12:08

previously launched these combinations

12:10

create a full CPU and GPU product with

12:12

varying counts of gpus and CPUs present

12:14

and Nvidia noted that the two Blackwell

12:16

gpus and one Grace CPU configuration

12:19

would host 384 GB of HBM3E 72 Arm

12:23

Neoverse V2 CPU cores and it has 900 GB

12:26

per second of nvlink bandwidth chip to

12:28

chip Nvidia says its GB200 so-called

12:32

super chip has 16 terabytes per second of

12:34

high bandwidth memory 3.6 terabytes per

12:37

second of NVLink bandwidth and 40

12:40

petaflops of AI performance depending on how

12:43

charitably you define that and turning

12:45

this into a Blackwell compute node

12:47

pushes that up to 1.7 terabytes of

12:50

HBM3E which is an obscene amount of memory

12:54

uh 32 TB pers second of memory bandwidth

12:56

and liquid cooling most of the

12:58

discussion Beyond this point focused on

13:00

various Communications Hardware

13:02

Solutions including both inip and

13:05

intranode or data center Solutions we

13:08

previously met with Sam Naffziger from

13:10

AMD who is an excellent engineer great

13:13

uh communicator and presenter uh

13:15

engineer is actually kind of

13:16

underselling his position at AMD he's

13:19

considered a fellow which is basically

13:20

the highest technical rank you can get

13:22

there uh but anyway we talked with him

13:24

previously about AMD moving to multichip

13:26

for RX 7000 although it's a different

13:28

prod product it was a different era a

13:30

different target market a lot of the key

13:33

challenges are the same for what Nvidia

13:35

faced and what AMD was facing and

13:37

ultimately those challenges largely uh

13:40

Center on the principle of if running

13:43

multiple pieces of silicon obviously

13:45

they can only be as fast as the literal

13:47

link connecting them the reason for

13:49

bringing this up though is probably

13:50

because a lot of you have either

13:51

forgotten that discussion or never saw

13:53

it uh and it's very relevant here so

13:55

this is going to be a short clip from

13:57

our prior interview with AMD talking

14:00

about some of the chipto chip

14:02

Communications and uh chiplet um

14:05

interconnect and fabric limitations so

14:08

that we all get some good context for

14:10

what Nvidia is facing as well the

14:12

"The bandwidth requirements are just so much higher with GPUs, because we're distributing all of this work, you know, terabytes of data. We loved the chiplet concept. We knew that the wire counts were just too high in graphics to replay what we did on CPUs. And so we were scratching our heads: how can we get significant benefit? We were aware of those scaling curves that I showed, and the observation was that there actually is a pretty clean boundary between the Infinity Cache and out, and we recognized that these things didn't need 5-nanometer; they were just fine for the product, and in 6-nanometer we were hardly spending any power, and the GDDR6 itself doesn't benefit at all from process technology. So that's where we came up with the idea: we already have these GDDR6 interfaces in N6 technology, like I talked about with the cost of porting, and all the engineers, and we already had that, and we could just split it off into its own little die. And you can see the results: we were spending 520mm² here; we increased our compute unit count by 20% and added a bunch of new capability, so this thing would be pushing 550 to 600mm² or something, but we shrank it down to 300."
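Those die-size numbers map neatly onto a standard defect-yield argument. As a hedged illustration (our own sketch with a made-up defect density, not AMD's math), a simple Poisson yield model shows why a ~300mm² compute die plus small memory dies can out-yield a hypothetical ~550mm² monolithic part:

```python
import math

def poisson_yield(area_mm2: float, defect_density_per_cm2: float = 0.1) -> float:
    """Fraction of good dies under a simple Poisson defect model.
    The defect density is illustrative, not a real foundry figure."""
    area_cm2 = area_mm2 / 100.0
    return math.exp(-area_cm2 * defect_density_per_cm2)

mono = poisson_yield(550)   # hypothetical monolithic die at the larger size
gcd = poisson_yield(300)    # the ~300 mm^2 graphics compute die
mcd = poisson_yield(37)     # one small memory/cache die, roughly MCD-sized

print(f"monolithic 550 mm^2: {mono:.1%}")
print(f"chiplet GCD 300 mm^2: {gcd:.1%}")
print(f"single MCD 37 mm^2: {mcd:.1%}")
```

The assembled package still needs all its dies to be good, but the small dies individually yield far better, and the expensive-node silicon area drops, which is the economic core of the chiplet pitch.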

The rest of that discussion was also great; you should check it out if you haven't seen it. We'll link it below. It's in our engineering discussions playlist, where we've also recently had Nvidia on for a latency discussion and Intel on to talk about how drivers work, driver optimization, all that stuff, all linked below. All right, so the key challenges: getting the amount of tiny wires connecting the chips to fit without losing performance, the cost of using that real estate for them, making sure that although there's a benefit from yields you're not causing new problems, and then the speed itself being the number one issue.

As for AMD, it was able to solve these issues well enough that it segmented the components of the GPU into MCDs and GCDs, so it didn't really split the compute; it split parts of the memory subsystem out. That's a lot different from what Nvidia is doing. Nvidia is going the next step with Blackwell, which is a much more expensive part with a totally different use case than we see with RX 7000, although AMD has its own Instinct MI cards as well, which we've talked about with Wendell in the past. Either way, Nvidia is going a step further here, and it actually appears to be just two literal, complete Blackwell dies next to each other that behave as one GPU, if what Jensen Huang is saying is accurate:

"There's a small line between two dies. This is the first time two dies have abutted like this together in such a way that the two dies think it's one chip. There's 10 terabytes of data between them, 10 terabytes per second, so that these two sides of the Blackwell chip have no clue which side they're on. There's no memory locality issues, no cache issues. It's just one giant chip."

So that's the big difference between AMD's consumer design we previously saw and what Nvidia's doing here. From what Nvidia's Catanzaro said on Twitter, our understanding is that the 10TB/s link theoretically makes all the silicon appear uniform to software. We're not experts in this field or in programming, but if this means there's no need to write special code for managing and scheduling work beyond what would normally be done anyway for a GPU, then that's a critical step toward encouraging in-socket functional upgrades for data centers, faster adoption, things like that.
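To make that "appears uniform to software" point concrete, here's a toy sketch of our own (not Nvidia code, and deliberately simplified): when dies are software-visible, code has to partition work and combine partial results; when the hardware presents one logical device, the same job is a single call.

```python
# Toy model: summing a workload across two dies.
# Case 1: software-visible dies -> code must partition and combine.
# Case 2: hardware presents one logical device -> one call, no partitioning.

data = list(range(1_000))

def run_on_die(chunk):
    # Stand-in for launching work on one die and returning its result.
    return sum(chunk)

# Software-managed: split the data, launch per die, gather partials.
half = len(data) // 2
partials = [run_on_die(data[:half]), run_on_die(data[half:])]
explicit_result = sum(partials)

# Hardware-unified: the scheduler below the API does the splitting.
unified_result = run_on_die(data)

assert explicit_result == unified_result  # same answer, very different code burden
```

The real win is that existing single-GPU kernels and schedulers would not need locality-aware rewrites, which is exactly the adoption argument being made.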

The next biggest challenge after this is getting each individual GPU to speak with the other GPUs in the same rack or data center. This is handled by a lot of components, including the actual, literal physical wires connecting things as you scale into true data center size; but to us, again as people who aren't part of the data center world, the most seemingly important piece is the NVLink and NVLink Switch improvements that Nvidia announced as well. Nvidia's NVLink generation 5 solution supports up to 576 GPUs concurrently. In a technical brief, Nvidia said this of its new NVLink switch: "While the new NVLink in Blackwell GPUs also uses two high-speed differential pairs in each direction to form a single link, as in the Hopper GPU, Nvidia Blackwell doubles the effective bandwidth per link to 50GB/s in each direction. Blackwell GPUs include 18 fifth-generation NVLink links to provide 1.8TB/s total bandwidth, 900GB/s in each direction." Nvidia's technical brief also noted that this is over 14 times the bandwidth of PCIe Gen 5.
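Those quoted numbers are internally consistent, which is easy to sanity-check. The PCIe 5.0 x16 figure of roughly 64GB/s per direction is our assumption for the comparison, not something stated in the brief:

```python
links = 18                    # fifth-gen NVLink links per Blackwell GPU
per_link_per_dir_gbps = 50    # GB/s in each direction, per Nvidia's brief

per_direction = links * per_link_per_dir_gbps   # 900 GB/s per direction
total = per_direction * 2                       # 1.8 TB/s aggregate

pcie5_x16_per_dir = 64        # GB/s, approximate raw PCIe 5.0 x16 bandwidth

print(per_direction, total)                          # 900 1800
print(round(per_direction / pcie5_x16_per_dir, 1))   # 14.1
```

So 18 links times 50GB/s gives the 900GB/s per-direction figure, 1.8TB/s aggregate, and about 14x a PCIe 5.0 x16 slot, matching the "over 14 times" claim.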

Just quickly, without spending a ton of time here: Nvidia also highlighted a diagnostics component to all of this, which I actually thought was really cool. It's the reliability, availability, and serviceability engine, which they're calling RAS, for monitoring hardware health and identifying potential downtime before it happens. This is one of the things we talk about a lot internally: we produce so much data logging all the tests we run, but one of the biggest challenges is that it's difficult to do something with that data. We have systems in place for the charts we make, but there's a lot more we could do with it; it's just that you need a system, as in a computer, to start identifying those things for you in order to really leverage it. RAS does that from a serviceability, maintenance, and uptime standpoint, but they also talked about this alongside their NIMs.
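The RAS idea of mining telemetry to flag failures before they happen can be sketched in miniature. This is our own illustrative example, nothing like Nvidia's implementation: flag any sensor reading that drifts several standard deviations from its recent baseline.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=10, threshold=3.0):
    """Return indices whose reading deviates more than `threshold` sigmas
    from the trailing window's mean (a simple z-score check)."""
    flagged = []
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Stable temperature telemetry with one sudden excursion at index 15.
temps = [60.0, 60.2, 59.9, 60.1, 60.0, 60.3, 59.8, 60.1, 60.2, 60.0,
         60.1, 59.9, 60.2, 60.0, 60.1, 75.0, 60.0, 60.2]
print(flag_anomalies(temps))  # flags index 15 (the 75.0 spike)
```

A production system would use far richer signals (ECC error rates, link retries, fan curves) and smarter models, but the "computer watching the logs for you" principle is the same.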

Nvidia's new NIM inference tool is a lot less flashy than the humanoid robots and bleeding-edge GPUs, but it's probably one of the more immediately useful announcements from a business standpoint. It's intended to be a selection of pre-trained AI models that businesses can use for a number of tasks, including data processing, training LLMs, and retrieval-augmented generation, or RAG. Great acronym. NIMs are CUDA-based, so they can be run basically anywhere Nvidia GPUs live: cloud, on-premise servers, local workstations, whatever. Businesses will also be able to retain full ownership and control over their intellectual property, something that's hugely important when considering adding AI to any workflow. A great example of this is Nvidia's own ChipNeMo, an LLM that the company uses internally to work with its own vast documentation on chip design: stuff that can't get out and is incredibly useful. NIMs will be able to interface with various platforms like ServiceNow and CrowdStrike, or with custom internal systems. There are a lot of businesses that generate oceans of data that could help them identify issues, optimizations, or trends in general, but may not have had a good idea of what to do with that data or how to draw patterns from it.
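Retrieval-augmented generation itself is simple to sketch. Here's a hedged toy version of ours (real NIM deployments would use embedding models and a vector database, not word overlap): retrieve the most relevant internal document, then prepend it to the prompt the LLM would receive.

```python
def retrieve(query: str, docs: list[str]) -> str:
    """Pick the doc sharing the most words with the query (toy scoring)."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str, docs: list[str]) -> str:
    """RAG in one step: ground the model's answer in retrieved context."""
    context = retrieve(query, docs)
    return f"Context: {context}\nQuestion: {query}\nAnswer using only the context."

# Hypothetical internal docs standing in for a company knowledge base.
docs = [
    "NVLink 5 provides 1.8 TB/s of total bandwidth per Blackwell GPU.",
    "The company cafeteria serves tacos on Tuesdays.",
]
prompt = build_prompt("What bandwidth does NVLink 5 provide?", docs)
print(prompt.splitlines()[0])  # the NVLink doc is selected as context
```

This is why "oceans of data" matter: the retrieval corpus is the company's own documentation, which is exactly the IP-ownership point being made.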

Jensen Huang said this during the GTC presentation: "The enterprise IT industry is sitting on a gold mine. It's a gold mine because they have so much understanding of the way work is done. They have all these amazing tools that have been created over the years, and they're sitting on a lot of data. If they could take that gold mine and turn it into co-pilots, these co-pilots could help us do things." And again, we're back to: if data is the gold mine, then Nvidia in this gold rush is selling the pickaxes, or maybe something like the Bagger 288 excavator in the case of the DGX. Are you impressed with my excavation knowledge?

Nvidia also announced its Project GR00T, with zeros, which is a legally significant distinction. This is described as a "general-purpose foundation model for humanoid robots," their quote. So that's right: it's robots, coming soon. Not the vacuum kind, but the Will Smith kind, and it's going to smack us in the face. Autonomous or AI-powered robotics extend to a lot of practical applications in vehicles, industrial machines, and warehouse jobs, just to name a few; however, the sci-fi appeal of pseudo-intelligent humanoid robots is a part of our collective culture that even mainstream media has been picking up on, or has been trying to:

"He announced at the end of the show what they call GR00T, which is basically a new type of AI architecture, a foundation model for humanoid-like robots. So you can think, not exactly Terminator or Bender from Futurama, but..." That's the classic study in a false dichotomy: your two options are a killing machine or an inappropriate, alcoholic robot with a penchant for gambling. "Yeah, well, I'm going to go build my own theme park, with blackjack and hook..."

GR00T as a project is both software and hardware, and stands for Generalist Robot 00 Technology. As a sum of parts, Nvidia wants GR00T machines to be able to understand human language, sense and navigate the world around themselves, and carry out arbitrary tasks, like getting GPUs out of the oven, which is obviously an extremely common occurrence over at Nvidia these days. The training part of the stack starts with video so the model can sort of intuitively understand physics, then moves to accurately modeling the robots in virtual space with Nvidia's Omniverse digital twin tech in order to teach the model how to move around and interact with terrain, objects, or other robots that will eventually rise up to rule us all.
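The "gym" concept of learning in fast simulation before touching real hardware can be illustrated with a deliberately tiny sketch. This is our own toy, nowhere near Omniverse scale: a single policy parameter is tuned against a simulated objective over many cheap virtual trials instead of slow, breakable physical ones.

```python
import random

random.seed(0)

def simulate(step_size: float) -> float:
    """Toy 'physics': cost of reaching a target at distance 1.0
    in 20 fixed-size steps. No real robot required."""
    position = 0.0
    for _ in range(20):
        position += step_size
    return abs(1.0 - position)  # remaining distance to the target

# Random-search "training" across a thousand virtual trials,
# each essentially free compared to a real-world attempt.
best = min((random.uniform(0.0, 0.2) for _ in range(1000)), key=simulate)
print(f"learned step size: {best:.4f}")  # ideal is 1.0 / 20 = 0.05
```

Real robot training swaps random search for reinforcement learning and the toy objective for full rigid-body simulation, but the economics are the same: millions of virtual attempts per real-world one.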

Jensen described this as a gym: here, the model can learn the basics far quicker than in the real world. That one blue robot is actually really good at knocking others down the stairs, we noticed. Not sure if that's going into the training data, but if it does, it's just the first step, and there is a plethora of methods to dispatch humans. I just saw all the mainstream outlets talking about how the world's going to end, and I kind of felt left out. The hardware is Nvidia's Thor SoC, which has a Blackwell-based GPU with 800 teraflops of FP8 capability on board. Jetson Thor will run multimodal AI models like GR00T and has a built-in functional safety processor that will definitely never malfunction.

We mostly cover gaming hardware here, so back to the commentary and some of the fun discussion and thought experiments. In this instance, the major breakthroughs that are, at least to me, the most personally interesting are those in multi-chip design, where we have plenty of evidence that when it works, it works phenomenally well for value. It's kind of a question of whether Nvidia is going to be a company that feels it's in a position where it needs to propose good value, like AMD was when it was up against Intel, getting slaughtered by ancient Skylake silicon rehashed year after year. The answer is probably no: Nvidia is not in that position, so it's a different world for them, and we're not sure if this is going to happen for the RTX 5000 series. There's no actual news yet; there are some rumors out there, but no real, hard news. Nvidia is clearly, however, moving in the direction of multi-chip modules. They published a white paper on this a long time ago, possibly more than five years ago now, so this has been known; it's just been a question of whether they can get the implementation working in a competitive and performant way, where they feel it's worth flipping the switch on moving away from monolithic for consumer.

Now, AMD has shown at least technical success with its RX 7000 series. Those cards are relatively competitive, especially in rasterization; ray tracing is sometimes a different story, depending on the game, and they do get slaughtered in, say, Cyberpunk with really heavy RT workloads. Either way, the point is that that particular issue may have existed monolithic or multi-chip, and AMD has shown that multi-chip can work for consumer. So, whether or not AMD gets its Zen moment for GPUs as well, Nvidia right now remains a scary beast. It is very powerful, and it's a ginormous company. That quote from the roundup earlier, where one outlet said this is "Nvidia's world," is actually somewhat true; they're not totally wrong.

Intel and AMD are almost certainly motivated to stay in GPUs for AI purposes, just like Nvidia is, in that they chase money. That's the job of publicly traded, gigantic corporations like these: AI is money, they're going to chase it, and the fallout of that is consumer GPUs. The difference, maybe, is that Nvidia has less need to provide cheap entry points into the gaming market. As we've seen, they haven't really done anything affordable lower than a 4060 in the modern lineup, and they don't have as much motive: there's not really a lot of reason to go chase the smaller dollar amounts when they can get $1,600 to $2,000 for a 4090, at least at retail assembled cost, and then tens or hundreds of thousands of dollars for AI and data center parts. So that's where AMD and Intel will remain at least immediately critical: keeping that part of the market healthy and alive, making it possible to build PCs for a reasonable price without it ballooning out of control to the point where everyone eventually gets snowed and gaslit into thinking this is just what GPUs should cost.

Now that GPUs have reached such high average selling prices, and those prices have sustained a little better (from the companies' perspective) than previously, it seems unlikely that they would just bring them back down. In other words, if people are used to spending $1,000 on a GPU, then even if costs can come down in some respects with advancements, like Ryzen-style chiplets that work at a GPU level in the future, and in spite of those costs maybe being more controllable for the manufacturer with better yields, they might still try to find a way to sell you a $1,000 GPU. If you're buying them, if you're showing interest in them, that's how they're going to respond, unless there's significant competition where they start undercutting each other. That would obviously be ideal for everyone here, but that's not the world we live in right now: market share is largely Nvidia's no matter which market you look at in the GPU world and no matter how you define GPUs.

On the positive side, our hope is that seeing some of what they're doing with Blackwell and multi-chip means there's a pathway to get multi-chip into consumer GPUs in bigger ways than it is now. Again, Ryzen sits as this enormous success story of making relatively affordable parts without actually losing money on them, and that's been the critical component of it. It's just that we're missing the one other key aspect of Ryzen's success, which was that AMD had no other options when it launched Ryzen: it did not have the option to be expensive, whereas Nvidia does. So that's pretty different.

One thing's clear, though: Nvidia is a behemoth. It operates on a completely different planet from most of the other companies in this industry. If it reallocates any amount of the assets it's gaining from AI into gaming, that is going to further widen the gulf between it and its competitors in all aspects of gaming GPUs. Part of this you see materialize in market share, where Nvidia is able to drive trends in actual game development and the features included in games because it has captured so much of the market: it can convince developers that, hey, if you put this in, statistically most of the people who play your game will be able to use it on our GPUs. That forces Intel and AMD into a position where they're basically perpetually trying to keep up, and that's a tough position to fight from when the one they're trying to keep up with has insane revenue from segments outside of the battle they're fighting in gaming.

So anyway, AMD and Intel aren't slouches. Despite whatever issues both companies have with their GPUs, we've seen that they're also making progress; they're clearly staffed with capable engineers and are also pursuing AI, so some of that technology will work its way into consumer as well. And AMD again stands as a good example showing that even when a company has a near-complete monopoly over a segment (talking about Intel here, many years ago), if it gets complacent, or just has a series of stumbles like Intel had with 10nm not shipping for what felt like forever, it can lose that position faster than it gained it. That's kind of scary for the companies, too; it just seems like Nvidia is maybe a little more aware of that than Intel was at the time.

So that's it for this one. Hopefully you got some value out of it, even if it was just the entertainment of watching me be baffled by massive media conglomerates, some of the largest and most established companies in the world with the longest histories of reporting on news, failing to get even the name of the thing they're reporting on right. But that's okay: we'll keep making YouTube videos. Thanks for watching; subscribe for more, and go to store.gamersnexus.net to help us out directly if you want to help fund our discussion of the absurdity of it all. We'll see you all next time.
