Intel is Gunning for NVIDIA

Gamers Nexus
12 Apr 2024 · 25:29

Summary

TLDR: Intel made significant announcements at its Vision 2024 event, focusing on AI advancements with the launch of the Gaudi 3 accelerator and Xeon 6 CPUs. The company is aiming to compete with Nvidia's dominance in the AI market by promoting open solutions and AI-optimized Ethernet. Intel's strategy marks a shift toward AI computing and the development of non-proprietary technologies, with an emphasis on total cost of ownership and performance in AI tasks. The event also touched on software developments and Intel's approach to the AI market, indicating a competitive landscape in the tech industry's new AI era.

Takeaways

  • 🚀 Intel unveiled its Gaudi 3 AI accelerator and new CPUs at the Vision 2024 event, focusing on AI advancements.
  • 🎯 Both Intel and Google announced plans for AI accelerators, aiming to compete with Nvidia's dominance in the AI market.
  • 🤖 Intel claims the Gaudi 3 accelerator trains large language models 40% faster, and at 50% better power efficiency, than Nvidia's H100 in some scenarios.
  • 🔧 Intel's strategy includes pushing for more open solutions, such as relying on Ethernet instead of NVLink, contrasting with Nvidia's proprietary approaches.
  • 🌐 Nvidia has been expanding into the software side, creating an ecosystem around its AI offerings, which is a concern for competitors.
  • 📈 Intel's CEO, Pat Gelsinger, highlighted the shift away from proprietary CUDA models and positioned Intel as an alternative in the AI training market.
  • 🔌 The Gaudi 3 accelerator uses a multi-die package with HBM2E memory and 200 Gb Ethernet connections for high-speed interconnect.
  • 💡 Intel's new Xeon 6 CPUs, including Sierra Forest and Granite Rapids, aim to improve performance per watt and cater to various market segments.
  • 📱 Intel's mobile client CPUs, code-named Lunar Lake, are part of their vision for an AI-focused personal computing revolution.
  • 🛠️ Intel addressed the software stack, emphasizing curated pipelines and services like RAG (Retrieval-Augmented Generation) for business applications.

Q & A

  • What was Intel's main focus during the Vision 2024 keynote event?

    -Intel's main focus during the Vision 2024 keynote event was on AI and its advancements in AI accelerators, specifically the new Gaudi 3 accelerator and Xeon 6 CPUs.

  • How does Intel's Gaudi 3 accelerator compare to Nvidia's H100 in terms of performance and power efficiency?

    -Intel claims that the Gaudi 3 accelerator is 40% faster in time to train and 50% more power efficient than Nvidia's H100 in large language models, in specific workload scenarios.

  • What is the significance of Intel's approach to using Ethernet instead of NVLink in their AI accelerators?

    -Intel's approach to using Ethernet instead of NVLink aims to promote more open solutions and challenge Nvidia's proprietary technology, potentially fostering collaboration among non-Nvidia companies in the AI market.

  • What are some of the new AI applications and functions mentioned by Intel for their upcoming CPUs?

    -Intel mentioned Microsoft Copilot, Adobe Sensei, Zoom, and Teams as applications gaining new AI functions on its upcoming CPUs.

  • How does the new Xeon 6 CPU from Intel differ from the previous generation in terms of performance and efficiency?

    -The new Xeon 6, in its Sierra Forest variant, is built on the Intel 3 process node and is composed entirely of efficiency cores, leading to a quoted 2.4x improvement in performance per watt versus the previous generation.

  • What is Intel's strategy in targeting the manufacturing and logistics sector with their AI solutions?

    -Intel is targeting the manufacturing and logistics sector by promoting the use of AI and deep learning for robotics, automation, and advanced logistics, aiming to bring their AI solutions to traditionally non-consumer sectors of high-end computational needs.

  • What is the significance of Intel's mention of Retrieval-Augmented Generation (RAG) in the software stack discussion?

    -RAG is significant because it allows pre-trained LLMs to reference or know about proprietary or sensitive data without needing to be trained with it, which could be useful in business settings for tasks like data citation and information retrieval.

  • How does the competition among Intel, AMD, and Nvidia in the AI market reflect on their strategies and goals?

    -The competition among these companies indicates that they see the AI market as a massive opportunity for growth and are aggressively trying to capture market share from each other, with a focus on developing both hardware and software solutions to establish themselves in the new computational landscape.

  • What is the significance of the naming convention for Intel's new CPUs, such as Sierra Forest and Granite Rapids?

    -The naming convention, which features names related to water and natural landscapes, reflects Intel's theme for their CPU codenames and likely aims to create a memorable and consistent branding for their products.

  • How does the transcript suggest Intel's current positioning in the AI market compared to Nvidia?

    -The transcript suggests that Intel acknowledges it is not yet the market leader in the AI space and that they are working on catching up to Nvidia, with a focus on providing open solutions and collaborating with other companies to challenge Nvidia's dominance.

Outlines

00:00

🚀 Intel's Vision 2024 Keynote and AI Focus

Intel's Vision 2024 event highlighted the company's shift in focus toward AI and its commitment to competing with Nvidia in the AI market. The keynote characterized the past decade of technology advancements as 'boring' and hinted at Intel's new approach, including the development of AI accelerators, while Google announced accelerator plans of its own. The event emphasized the importance of marketing and the challenge to Nvidia's dominance, particularly in the software ecosystem. Intel's CEO, Pat Gelsinger, criticized proprietary CUDA models and positioned Intel as a proponent of open solutions like Ethernet, contrasting with Nvidia's NVLink.

05:00

🌐 Targeting Nvidia's H100 and Ethernet Developments

The second section delved into Intel's direct challenge to Nvidia's H100, positioning its Gaudi 3 parts as a competitive alternative. It discussed the industry's anticipation for hardware that could alleviate the backorder issues surrounding the H100. Intel's strategy involves not only competing for the top spot but also addressing the needs of companies unable to secure Nvidia's products. The segment also highlighted Intel's push for Ethernet as an alternative to NVLink, aiming to revolutionize AI fabrics and networking with AI-optimized Ethernet solutions.

10:02

🔧 Architectural Insights and Performance Claims

This section provided an in-depth look at the technical specifications of Intel's Gaudi 3 accelerator, including its multi-die approach and the inclusion of tensor cores, matrix math engines, and SRAM cache. It also touched on the memory capacity and bandwidth, as well as the performance metrics compared to Nvidia's H100. Intel's claims of faster training times and power efficiency were scrutinized, with a focus on the hardware's potential impact on the AI market and the company's efforts to reach manufacturing clients.

15:04

💡 New CPU Announcements and AI Integration

Intel's software and hardware advancements were further explored, with a focus on the new Xeon 6 CPUs, including Sierra Forest and Granite Rapids. The section highlighted the performance improvements and energy efficiency of these CPUs, as well as their potential to reduce server racks. Intel's mobile client CPUs, code-named Lunar Lake, were introduced as part of the AI PC revolution, promising significant AI performance boosts. The segment also mentioned the third-generation NPU and the code name Panther Lake, emphasizing Intel's commitment to integrating AI capabilities across its product lineup.

20:05

🌟 Software Stack and AI Strategy

The final section discussed Intel's approach to the software stack, aiming to provide curated pipelines similar to Nvidia's NIM inference microservices. Intel's focus on the business applications of AI, such as Retrieval-Augmented Generation (RAG), was highlighted, showcasing its potential in practical scenarios. The segment closed with a reflection on Intel's positioning in the AI market, acknowledging the need for time to catch up to Nvidia and the competitive landscape among tech giants in the emerging AI space.

25:06

🎥 Upcoming Reviews and Consumer Hardware News

The video script concluded with a teaser for upcoming content, including reviews of ITX and ATX cases and coolers, emphasizing the excitement around new product evaluations. The paragraph also encouraged viewers to subscribe for more updates and check out recent interviews with various companies, wrapping up the event with a promise of continued consumer hardware news coverage.

Keywords

💡AI Accelerators

AI Accelerators are specialized hardware designed to speed up artificial intelligence tasks such as deep learning and machine learning. In the context of the video, companies like Intel and Google are developing AI accelerators to compete with Nvidia's dominance in the market. An example from the script is Intel's new Gaudi 3 accelerator, which is compared to Nvidia's H100 for training large language models (LLMs).

💡Nvidia

Nvidia is a company known for its graphics processing units (GPUs) and AI technology. The video discusses Nvidia's strong position in the AI market and how competitors like Intel are aiming to challenge this dominance. Nvidia's H100 is highlighted as a benchmark for training LLMs, and there's mention of Nvidia's proprietary CUDA models and the expansion into software, creating an ecosystem around their products.

💡Gaudi 3 Accelerator

The Gaudi 3 Accelerator is a product developed by Intel as part of their strategy to compete in the AI market. It is an AI chip designed to perform LLM-related generative AI tasks faster and more power-efficiently than Nvidia's H100, according to Intel's claims. The Gaudi 3 is part of Intel's push to offer alternatives to Nvidia's solutions and is a key announcement from Intel's Vision 2024 event.
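The transcript later quotes both single-accelerator and eight-accelerator "universal baseboard" figures for Gaudi 3. As a quick sanity check (a small sketch, not official tooling; it only uses the numbers quoted in the video), the baseboard figures scale almost linearly from the single chip:

```python
# Sanity-check Intel's quoted Gaudi 3 baseboard figures by scaling
# the single-accelerator numbers from the keynote by 8.
ACCELERATORS_PER_BASEBOARD = 8

single_chip = {
    "fp8_tflops": 1835,   # FP8 throughput per accelerator (TFLOPS)
    "hbm_gb": 128,        # HBM2E capacity per accelerator (GB)
    "hbm_tbps": 3.7,      # HBM2E bandwidth per accelerator (TB/s)
}

baseboard = {k: v * ACCELERATORS_PER_BASEBOARD for k, v in single_chip.items()}

print(baseboard["fp8_tflops"] / 1000)  # ~14.68 PFLOPS; Intel quotes 14.6
print(baseboard["hbm_gb"])             # 1024 GB, i.e. "over 1 TB"
print(baseboard["hbm_tbps"])           # 29.6 TB/s, matching the quoted figure
```

The small gap between 14.68 and the quoted 14.6 petaflops suggests Intel simply rounded down; the memory and bandwidth figures match exactly.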

💡Open Solutions

Open Solutions refer to non-proprietary and standardized technologies that can be used by multiple parties without restrictions. In the video, Intel is promoting open solutions like Ethernet instead of Nvidia's NVLink, aiming to create a more accessible and collaborative environment for AI development. This approach is also seen with AMD's marketing strategy for their GPUs, highlighting a move away from proprietary models towards more open industry standards.

💡Ethernet

Ethernet is a widely used technology for local area networking. In the context of the video, Intel is pushing for the development and use of Ethernet as an alternative to Nvidia's NVLink for high-speed interconnects in AI hardware. Intel's vision is to create AI-optimized Ethernet solutions that can support the growing demands of AI fabrics and networking for the future, challenging Nvidia's dominance and offering more open standards.

💡Xeon 6 CPU

The Xeon 6 CPU is a new processor from Intel that is part of their strategy to enhance AI capabilities in computing. Specifically, the Sierra Forest version is built on the Intel 3 process node and is said to offer a 2.4x improvement in performance per watt over the previous generation. This CPU is an example of Intel's efforts to improve efficiency and performance in their hardware to better compete in the AI market.

💡AI PCs

AI PCs refer to personal computers with enhanced capabilities for artificial intelligence tasks. Intel discusses the concept of an AI PC as a revolution in personal computing, emphasizing the increased integration of AI functions in everyday computing devices. The Lunar Lake CPUs, with their improved AI performance, are an example of how Intel aims to bring AI capabilities to the consumer level.

💡Software Stack

The software stack refers to the complete collection of software components and layers needed to run an application or system. In the video, Intel's focus on the software side of AI includes providing curated pipelines similar to Nvidia's NIM inference microservices. The software stack is crucial for the effective operation of AI accelerators and the development of AI applications.

💡Retrieval Augmented Generation (RAG)

Retrieval Augmented Generation (RAG) is a machine learning technique that combines the capabilities of retrieving relevant data with the generation of new content. In the video, Intel highlights RAG as a promising business application, allowing pre-trained models to reference or know about proprietary or sensitive data without needing to be trained on it. An example given is using RAG in identifying parts information related to a contamination incident.
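The idea can be sketched in a few lines. This is a toy illustration, not Intel's pipeline: the documents, the bag-of-words scoring, and all names here are invented, and a real system would use embedding search and an actual LLM call where the comment indicates.

```python
# Toy RAG sketch: retrieve the most relevant private documents for a query
# and prepend them to the prompt, so a pre-trained model can cite data
# it was never trained on.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Inject the retrieved context ahead of the question.
    context = retrieve(query, docs)
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"

# Internal documents the base model has never seen (made up for illustration):
docs = [
    "Part 7741 was flagged in the March contamination incident.",
    "Quarterly shipping volumes rose 12 percent in Q1.",
]
prompt = build_prompt("which part was involved in the contamination incident", docs)
# The prompt now carries the relevant internal record; an LLM call would go here.
```

The point of the pattern, as described in the video, is that the sensitive record reaches the model only at query time, inside the prompt, so the model's weights never have to be trained on it.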

💡Market Share

Market share refers to the percentage of the total market that a company or product holds. The video discusses the competitive efforts of companies like Intel, AMD, and Google to gain market share in the AI industry, challenging Nvidia's dominance. The focus on AI accelerators, CPUs, and software solutions is part of these companies' strategies to capture a larger portion of the AI market.

💡Industry Ecosystem

An industry ecosystem is a network of interconnected organizations and components that create a larger, functioning whole within a specific sector. In the context of the video, Nvidia has created an ecosystem around its products, including hardware and software. Intel's strategy to promote open solutions and standards aims to challenge this ecosystem and create a more collaborative environment for AI development.

Highlights

Intel's Vision 2024 event showcased new advancements in AI technology, emphasizing the company's shift from previous hardware improvements to a focus on AI.

Intel's CEO Pat Gelsinger compared the past decade's progress in technology to a more boring pace, highlighting the company's new direction in AI.

Intel and Google both announced plans for AI accelerators, indicating a competitive market and a challenge to Nvidia's dominance in the AI sector.

Intel's Gaudi 3 accelerator was a major announcement, aiming to compete directly with Nvidia's H100 in terms of performance and power efficiency.

Intel's strategy includes focusing on open solutions and non-proprietary technologies, contrasting with Nvidia's CUDA models and ecosystem.

The Gaudi 3 accelerator features a multi-die approach and a significant amount of memory and bandwidth, designed to enhance AI tasks.

Intel's Xeon 6 CPU announcements highlighted the company's first volume part made with the Intel 3 process node, aiming for improved performance per watt.

Intel's mobile client CPUs, code-named Lunar Lake, are expected to bring three times the AI performance of the current Core Ultra lineup.

Intel's focus on AI in the software stack was evident, with an emphasis on curated pipelines and services comparable to Nvidia's NIM.

Intel's approach to AI includes the use of Retrieval-Augmented Generation (RAG), which allows pre-trained models to reference proprietary data without needing to be trained on it.

Intel acknowledged that it will take time to catch up to Nvidia, showing a more humble and realistic approach to its position in the AI market.

The AI market is seen as a massive opportunity by companies like Intel, Google, and Amazon, leading to a land grab and competitive aggression in this new space.

Intel's announcements suggest an attempt to secure a position as the second-place contender in the AI market, reflecting a strategic shift in the company's direction.

The competitive landscape in AI is leading to potential industry consolidation, with companies looking to acquire or merge to strengthen their positions.

Intel's focus on AI is not limited to high-end solutions but also targets sectors like manufacturing and logistics, aiming to revolutionize these industries with AI and automation.

The event highlighted the importance of interconnects and IO solutions in AI hardware, with Intel pushing for more open standards like Ethernet.

Intel's strategy in the AI market is to provide hardware and software solutions that are accessible and affordable, comparing itself to the availability of Waffle House at 3:00 a.m.

Transcripts

00:00

Intel blasted on stage at its Vision 2024 event with some news. "Papa's little baby here, man, look at this big boy." "Yeah, and you know, I sort of looked at the prior decade as sort of boring. You know, we made PCIe Gen a little bit faster, we made DDR the next increment, we kicked up the thermal envelope a little, you know, we added a few more queues, and we said, okay, it's a good chip. Boring." Definitely a different approach from Nvidia. Let's see if they're excited about anything. "Gaudi... [Applause] [Music] ...3!" You know, huge advancements. Okay, still a different approach, but that's better. I am the sound effect. But what we really care about is the marketing music behind it. You might remember Nvidia's brainwashing music previously. [music clips] And we're concerned about whether Intel has an answer to that. "In the township of Quo, where many folks lacked vision, chief geek Patch and sales guy Shell were shopkeepers on a mission. They thought what Quo needed was a spark of inspiration, so they whipped up a batch of woohoo chips they knew would cause a sensation. Business exploded with lines out the door." What the...

01:39

Intel just got done with its Vision 2024 keynote event, and you can guess what they were talking about: AI, AI, AI, AI, AI. But Intel is absolutely taking this seriously, and actually, so is Google. Both companies announced their plans for AI accelerators, and both of them appear to be aiming at Nvidia, to try and knock them down a few pegs, if not maybe later eventually try and take the crown. Nvidia has been asserting absolute and total dominance over the AI market for some time now and has been expanding more into the software side of things, which is probably cause for concern for its competitors, because they start to create this ecosystem where, if you want AI anything, you're attached somehow to Nvidia. And Intel CEO Pat Gelsinger took some direct shots in the Vision 2024 keynote: "The industry is quickly moving away from proprietary CUDA models, and Gaudi is the only benchmarked alternative to the Nvidia H100 for training LLMs." It's actually not the first time that Gelsinger made comments about CUDA having maybe a limited lifespan. Back in around 2008 or so, before he was CEO at Intel, he had a comment about CUDA being an interesting footnote in the annals of history. So, looking forward to seeing how Google auto-transcribes that word. We'll find out; I'm not going to change it, so whatever it comes up with, we'll leave it there. Maybe it'll be like when Nvidia's auto transcript from the keynote said to connect "anwers" to the Omniverse digital twin. Fun fact: they changed it after our video. I was sad.

03:22

Before that, this video is brought to you by Lian Li and the O11D EVO RGB case. The O11D EVO RGB is an updated entry to the famed lineup, retaining heavy support for fan mounts, drive mount locations, and flexibility on component mounting, such as two options for the power supply. The O11D EVO RGB's dual-chamber approach aims to maximize cable storage on the backside to streamline cable management, coupling this with a unique vertical GPU mount to showcase the most expensive part in most systems. Learn more at the link in the description below.

03:52

Regardless of whether you're using any of this hardware today, or you're even in that part of the industry, you will probably be using it in some form at some point. That's either going to be directly, as all of this stuff comes down to consumer (it does tend to go that way, just less powerful); we see NPUs in the CPUs today, which are AI accelerators built into hardware that are meant for accelerating these so-called AI tasks. And even if you're not going to use that stuff, consumption via services online is probably the next likely route. So we haven't seen Intel, AMD, and Nvidia be this amped up and aggressively competitive in quite some time. It's clear that they see a huge money opportunity here, and they're all going for a land grab in what they see as a new world. Actually, there are some new markets here too: not just software and selling those solutions to companies, but also some talk of manufacturing, targeting things that would traditionally be seen as not a consumer of this ultra high-end type of accelerator, CPU, GPU, whatever it may be.

04:53

So anyway, Intel's event this year debuted its Gaudi 3 accelerator and also some new CPUs, so those are the primary two announcements. They also talked about software, they talked about Ethernet as an alternative to NVLink and pushing some development in the Ethernet space, and Intel directly targeted Nvidia's H100 as a head-to-head comparison for performance on the hardware side. That's kind of big news, because the H100 has been backordered into oblivion. There's a big opportunity there for any other hardware vendor to strike for companies that either can't wait in line because they need it now, or companies that aren't big enough to get a spot high up enough in the line. Intel will now have Blackwell to compete with as well with its Gaudi 3 parts, and Intel seems to be aware (we'll talk about this later in this video) that they are not going to be the number one immediately, but they are still trying to line up as competition. Google and AMD are ramping up their efforts this year as well in the same space, and we expect to see more from AMD at Computex this year.

05:52

Now, for Gaudi 3: it was joined by Xeon 6 announcements. The Xeon 6. It lacks a prime number; you might have noticed that from the six, and that to us is concerning, because Intel has threes and fives and sevens. If Intel has depleted its prime number supply, it's over. I guess there's the nine. The Vision 2024 keynote with CEO Pat Gelsinger wasn't as information-dense as Nvidia's GTC keynote, but there was still a lot of good detail for us to cover, along with our impressions on Intel's general positioning strategy in the exploding AI compute market. And because gaming and crypto have gotten too boring for them to talk about (boring), AI is the newest craze. We'll see to what extent it sticks around in this capacity, but that's what everyone's talking about now, and at least some of these things can be deployed for non-deep-learning or machine learning use cases as well.

06:50

But everybody is gunning for Nvidia right now. Currently, they have a combination of leading hardware and software solutions, but also a massive head start on some in-house designed solutions, like IO. NVLink, for example, was huge for Nvidia in this year's Blackwell keynote, and the company also went multi-chip with its approach to silicon. Intel is countering with a push for more open solutions, including relying on Ethernet instead of NVLink, and it's taking an angle that's actually somewhat familiar from AMD's approach to marketing and developing GPUs, which is that both companies are promoting a level of openness and focusing marketing on non-proprietary solutions. That is clearly aimed at Nvidia, or at least we think so. It may be a sort of enemy-of-my-enemy type of situation for these companies.

07:41

One of the biggest advancements for Nvidia, again based on our exposure to that side of the market, has been the multiple-silicon approach to Blackwell and also the NVLink solution. We talked about that in our Nvidia coverage, giving some opinions and some of the detail about the news from that event. But Jensen's modern definition of a GPU has changed: "Somebody used to say, you know, you guys make GPUs. And we do, but this is what a GPU looks like to me. When somebody says GPU, I see this." And that's going to be a consideration for all of its competitors, because regardless of what you call it, it is actually not just the literal GPU silicon anymore that matters. Jensen Huang was absolutely right in that sense, because these devices need to talk to each other at speeds that don't hamstring them significantly, and that's where these IO solutions and interconnects come in. That's something that Intel talked about in its Vision keynote: "Other critical things that customers are saying is, boy, how do I stitch these together? Well, there's some proprietary solutions available. We also see that the Ultra Ethernet Consortium and the work that we're driving is standing up to fill this scale-up and scale-out networking domain. And through UEC, Intel is revolutionizing, building on Ethernet networking for AI fabrics for the future, and we'll be introducing an array of AI-optimized Ethernet solutions. This lineup will include cutting-edge AI NIC cards that will be delivering standard NIC solutions. It will also include AI chiplets that we'll enable for our customers and partners in Intel Foundry."

09:25

We're not experts in this space of AI; we talk about gaming hardware. This stuff's cool to us. It's fun to cover because it's interesting technologically, it's fun to study and read about, and some of it does come to consumer. But as far as we understand this market, again coming from our perspective of non-experts in this so-called AI era, it seems that the approach to Ethernet, or some open standard here, would be critical for all these non-Nvidia companies to in some form collaborate on, because it may need to be a strength-in-numbers effort against the behemoth that is Nvidia now. If they all go off and develop their own standards and proprietary solutions, then they'll lack that focused effort in numbers. But ultimately, what we took away from Intel's keynote was that if you are a company with big compute or AI needs, you go to Intel for the same reason that you go to Waffle House at 3:00 a.m.: they're open and they're cheap. It's not an insult, by the way. If I weren't a fan of 3 a.m. Waffle House, why would I inexplicably have this sign that I probably shouldn't have? The answer is there's a security company founded by ex-Waffle House staff. That's how I knew they were good at security.

10:52

Moving on: CEO Pat Gelsinger hyped how beneficial open platforms like PyTorch are and repeatedly stated the value proposition and lower total cost of ownership, or TCO, of its solution versus the proprietary and expensive competition, the competition again of course being Nvidia. Everyone already knows which company is number one right now in the compute market. It's not even a question; you pull up Nvidia's stock numbers and it becomes pretty clear. What is uncertain, though, is which company is going to be number two in the longer term, because everybody is trying to get into this, even companies that aren't traditional silicon manufacturers like AMD, Intel, and Nvidia. We don't think Intel or AMD would admit that what's going on here is a fight for second place, but that's kind of what's happening right now. You have to get there before fighting for first. But that doesn't mean Intel isn't trying to win the top spot.

11:46

According to Gelsinger and Intel, the new Gaudi 3 accelerator chip can do LLM-related generative AI tasks at what they say, in at least some scenarios, is 40 to 50% faster than Nvidia's H100. We don't have a way to validate that; that's not the kind of testing we do. But that's the claim.

12:05

Intel is also trying to reach manufacturing clients, which we thought was pretty interesting, because this is a trend we've seen from Nvidia as well (Nvidia kind of kicked this off): targeting places like factories, warehouses, and facilities that you might not traditionally associate with these super intensive computational needs, maybe Amazon aside for their logistical efforts, and companies like that. But they are both targeting this sector, which opens up probably some new client base that they might not have had before. We think that's due to a recognition that, in this instance, you could start using things like AI, deep learning, machine learning, whatever, to fuel things like robotics (like Amazon does, using robots in the workforce with some level of AI and automation mixed together to pick and place or pack the right products), and it can definitely also be used for more advanced logistics, like routing for all of the various delivery vehicles and the various modes of transport involved between point A and point B. This is a quote; Intel was direct in its outreach to that sector. Here's what they said: "Normally, right, you know, remember manufacturing, supply chains, those kinds of things: the developers get the cool PCs, and then the mediocre ones, you know, go to the white-collar workers, and then the crap ones, okay, we'll give those to the manufacturing. No! Call your head of IT. It is time for a PC AI refresh, because use cases like this show extraordinary value for fleet upgrades."

13:40

Now, there's more to all this than just hardware, of course, so they also talked about software, but we'll start with the Gaudi 3 accelerator. This was Intel's biggest announcement for the event, as far as we're concerned anyway, and technically Supermicro accidentally leaked it early on stage: "We're also showing a real Gaudi 3 system, the world's first Gaudi 3 system." "New! I haven't even announced it yet." "We're a little bit ahead, okay. I encourage everybody to go to our booth, just check it out." Genuinely a really good save, though.

14:13

The package has 16 total silicon tiles, or chiplets. The two largest, in the middle, contain 64 tensor cores, eight matrix math engines, a media engine, and 96 MB of SRAM cache. There's also IO, which is made up of 24 200 Gb Ethernet connections along the left and 16 PCIe Gen 5 lanes on the right. The top and bottom have links for the HBM2E memory, which is the eight medium-sized tiles that you see along the edges there. The total memory capacity for Gaudi 3 is 128 GB with 3.7 TB per second of bandwidth, and it'll be available as a PCIe add-in card with a 600 W TDP. So that's, again, the big difference from Nvidia: not using NVLink, of course.

15:01

For performance, a single Gaudi 3 is listed at 1,835 teraflops of FP8 performance, scaling up to a universal baseboard, which, as we understand it, is what would go inside a compute server with eight accelerators, bringing that up to 14.6 petaflops of FP8. It'll also have over 1 TB of HBM2E with 29.6 terabytes per second of bandwidth and 9.6 terabytes per second of bidirectional networking. Just a quick note here as well: it appears that Gelsinger might have misspoken on stage. He said that the memory is HBM3E; all of the slides and the white paper, though, indicate HBM2E, and Nvidia made a similar transpositional mistake with HBM in its presentation, so maybe it's just an HBM thing. And Intel is excited about it: "Man, look at this big boy. Yeah, I always bring Big Brother along with me." Nothing says "giant mega-corporation hosts AI keynote" quite like the phrase "Big Brother."

15:58

Most of the performance metrics that Intel provided were listed as comparisons to Nvidia's H100, or just sort of standalone. Intel claims Gaudi 3 is 40% faster in time to train and 50% more power efficient than Nvidia's H100 in large language models. That's in a specific workload, though.