Degenerative AI… The recent failures of "artificial intelligence" tech

Fireship
30 May 2024 · 05:25

Summary

TL;DR: In a heated debate, Elon Musk and AI expert Yann LeCun clash over the definition of science and the future of AGI, with Musk predicting its arrival by next year and LeCun expressing skepticism. The Code Report covers this and other AI-related topics, highlighting the financial struggles of Stability AI and the difficulty of competing with centralized AI. It also critiques Google's AI search advice, Meta's data-hungry AI training, and the impracticality of the Humane Pin and Rabbit R1. The report concludes with disappointment over OpenAI's GPT-5 announcement, suggesting that the hype around AGI may be unfounded and driven by profit rather than genuine technological breakthroughs.

Takeaways

  • 🚀 Elon Musk raised $6 billion for his linear algebra company, xAI, and predicted AGI will surpass humans by next year.
  • 🔥 Meta's chief AI scientist, Yann LeCun, challenged Elon's predictions, leading to a debate on the definition of science and the future of AGI.
  • 💬 Elon and Yann disagree on the timeline for AGI, with Elon expecting it soon and Yann considering that it may never come to fruition.
  • 🤖 Some critics argue that AI is a marketing scam perpetuated by the linear algebra industry.
  • 📉 Stability AI, a maker of open image models, recently failed to raise funds at a $4 billion valuation and faces financial challenges.
  • 🛑 The CEO of Stability AI plans to step down, acknowledging the difficulty of competing with centralized AI.
  • 👨‍💻 Codium is a free alternative to GitHub Copilot, offering AI-assisted coding with customizable context awareness.
  • 📚 Google's AI search recommendations can be misleading and potentially harmful, as demonstrated by the pizza-glue advice.
  • 🕊️ Meta is using Facebook and Instagram user data to train its AI models, with a complex opt-out process that discourages users from opting out.
  • 🔨 The Humane Pin and Rabbit R1 are criticized as overhyped and potentially scammy products in the AI industry.
  • 🚧 OpenAI's announcement that it is training GPT-5 suggests that AGI has not yet been achieved, contrary to earlier speculation.

Q & A

  • How much money did Elon raise for his linear algebra company, xAI?

    -Elon raised $6 billion for his linear algebra company, xAI.

  • What is the disagreement between Elon and Yann regarding artificial general intelligence (AGI)?

    -Elon predicts that AGI will surpass humans by next year, while Yann thinks it may never arrive.

  • What is the controversy surrounding the definition of science as discussed by Elon and Yann?

    -The controversy is that Yann believes published research is required for something to count as science, while Elon dismisses this view as one of the dumbest things anyone has ever said.

  • What is the current financial situation of Stability AI?

    -Stability AI has recently failed to raise more money at a $4 billion valuation and is spending money rapidly. Its founder and CEO also announced plans to step down.

  • What is the criticism of Google Search AI's advice on pizza making?

    -Google Search AI advised adding non-toxic glue to the pizza sauce for more tackiness, which is not a practical or safe suggestion.

  • How can users revert to the old Google search without AI?

    -Users can add the URL parameter 'udm=14' to a Google search to revert to the old, AI-free results (see the example URL at the end of this Q&A section).

  • What is the issue with Meta's policy on using personal data for AI training?

    -Meta has a policy that collects user data for AI training by default and provides a convoluted opt-out process, which many users find intrusive and difficult.

  • What are the criticisms of the Humane pin and Rabbit R1 products?

    -The Humane Pin and Rabbit R1 are criticized for being overhyped and potentially scammy: the former spent 6 years in development and raised $230 million, while the latter offers functionality that can be replicated with a smartphone app.

  • What is the significance of the announcement of GPT-5 by OpenAI?

    -The announcement signifies that OpenAI is working on a new frontier model that it claims will bring us to the next level of capabilities on the path to AGI, although it also suggests that AGI has not yet been achieved internally.

  • What is the controversy surrounding Sam Altman's departure from OpenAI?

    -Sam Altman was fired from OpenAI for allegedly lying to the board; a former board member has since said his departure had nothing to do with AGI taking over humanity.

  • What is the sponsor's tool, Codium, and how does it differ from GitHub Copilot?

    -Codium, the video's sponsor, offers similar features to GitHub Copilot but is completely free for individual developers. It uses AI to assist with coding, refactoring, and explaining existing code, and its context awareness can be customized.
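
As a minimal illustration of the 'udm=14' trick described above (a sketch assuming Python 3's standard library; the search query itself is just a hypothetical example):

    from urllib.parse import urlencode

    # Build a Google Search URL with udm=14, which returns the plain "Web"
    # results view without the AI Overview panel.
    params = {"q": "how to keep cheese on pizza", "udm": "14"}
    url = "https://www.google.com/search?" + urlencode(params)
    print(url)
    # -> https://www.google.com/search?q=how+to+keep+cheese+on+pizza&udm=14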

Outlines

00:00

🤖 Debates on AGI and AI Company Funding

The first paragraph discusses a debate between Elon Musk and Yann LeCun about the future of artificial general intelligence (AGI). Elon's company, xAI, recently raised $6 billion, and he predicts AGI will surpass human intelligence by next year. LeCun, Meta's Chief AI wizard, challenges Elon's prediction, leading to a heated exchange about the definition of science and whether unpublished work counts as science. The paragraph also touches on the financial struggles of Stability AI, a company that creates open image models but is facing challenges in raising further funds and needs to increase revenue to survive. The video then transitions to a sponsored segment about Codium, an AI-powered coding tool that is an alternative to GitHub Copilot and is free for individual developers.

05:02

🚫 Failures and Missteps in AI Development

The second paragraph highlights various instances where AI has gone awry. It starts with a humorous anecdote about Google Search AI providing a questionable solution to prevent cheese from falling off a pizza. The paragraph then criticizes Meta's data collection practices for AI training, which involve using Facebook and Instagram user data by default with a complicated opt-out process. The discussion moves on to the failures of the Humane Pin and the Rabbit R1, two products criticized for being overhyped and underwhelming. The final point of critique is directed at OpenAI and its announcement of training GPT-5, which the speaker finds disappointing as it suggests that AGI has not been achieved internally. The paragraph ends with skepticism about the discourse on AI safety and the potential for regulatory capture and financial gain from perpetuating fear around AI.

Keywords

💡Elon Musk

Elon Musk is an entrepreneur and CEO known for his work with companies like Tesla and SpaceX. In the video, he is mentioned for raising $6 billion for his 'linear algebra company', the video's tongue-in-cheek nickname for his AI venture xAI. The script also describes a debate between him and Meta's Yann LeCun, indicating his influence and controversial stances within the tech community.

💡Linear Algebra

Linear algebra is a branch of mathematics that deals with vector spaces and linear equations. In the context of the video, it is humorously used to describe the underlying mathematical principles that power AI technologies. The script implies that there is skepticism about the real-world applications and the hype surrounding AI advancements.
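
To make the joke concrete, here is a minimal sketch (NumPy assumed, hypothetical shapes) of why the video keeps reducing AI to linear algebra: a single neural-network layer is just a matrix-vector product plus a bias and a non-linearity, and large models stack thousands of these.

    import numpy as np

    # One "layer" of a neural network: a matrix-vector product plus a bias,
    # followed by a simple non-linearity (ReLU). Stacking many such layers is
    # most of what a large language model does at inference time.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 3))   # learned weight matrix (hypothetical shape)
    b = rng.standard_normal(4)        # learned bias vector
    x = rng.standard_normal(3)        # input vector, e.g. a token embedding
    h = np.maximum(0, W @ x + b)      # linear algebra + ReLU = one layer
    print(h)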

💡Artificial General Intelligence (AGI)

AGI refers to the hypothetical ability of an AI to understand, learn, and apply knowledge across a broad range of tasks at a level equal to or beyond that of a human. The video discusses predictions about AGI surpassing human intelligence, highlighting the ongoing debate and differing opinions on the timeline and feasibility of such advancements.

💡Meta

Meta is the parent company of Facebook and other social media platforms. In the script, Meta's Chief AI wizard, Yann LeCun, is depicted as debating Elon Musk, showcasing the company's involvement in AI research and development. The mention of Meta also brings up issues related to data privacy and the use of personal information for AI training.

💡Technical Papers

Technical papers are scholarly articles that present research findings in a specific field. The video script uses the number of technical papers published by Yann LeCun as a measure of scientific credibility and contribution to the field of AI, emphasizing the importance of peer-reviewed research in validating scientific claims.

💡Stability AI

Stability AI is the company behind open image models such as Stable Diffusion. The company is described as struggling financially, with a founder and CEO who has announced plans to step down, suggesting challenges in the sustainability of current AI business models.

💡OpenAI

OpenAI is an AI research company that says it aims to develop safe and beneficial AI. The script contrasts it with "far more open" competitors like Stability AI, and critiques its announcement of GPT-5, Sam Altman's firing and return, and its framing of AI safety.

💡Codium

Codium is an AI-powered coding assistant mentioned as an alternative to GitHub Copilot. The video promotes it as a free tool for individual developers, emphasizing its features like code refactoring and context understanding. This highlights the growing integration of AI in software development.

💡Google Search AI

Google Search AI refers to the AI algorithms used by Google to enhance search results. The video humorously criticizes the AI's suggestions, such as adding glue to pizza sauce, to illustrate potential flaws and the need for caution when relying on AI for practical advice.

💡Data Harvesting

Data harvesting is the process of collecting large amounts of data, often personal, for analysis or other uses. The script mentions Meta's practices in data harvesting for AI training, raising ethical concerns about user privacy and the extent to which companies should use personal data.

💡GPT-5

GPT-5 is the next iteration of OpenAI's language model, which the script suggests is being marketed as a step towards AGI. The mention of GPT-5 is used to critique the hype around AI advancements and to question whether the technology is truly progressing as claimed by its developers.

Highlights

Elon Musk raised $6 billion for his linear algebra company, xAI.

Musk predicts artificial general intelligence will surpass humans by next year.

Meta's Chief AI wizard, Yann LeCun, joined a debate with Elon over his AI predictions.

Elon cast doubt on LeCun's scientific abilities, sparking a heated exchange.

LeCun countered that he has published 80 technical papers in the last 2 years.

Elon dismissed the idea that work must be published to count as science.

LeCun responded with an elitist definition of how science works, concluding a 'dumb' debate.

Disagreement between Elon and LeCun on the future of AGI: Elon says it's imminent, LeCun doubts it will ever arrive.

Some opinions suggest AI is a marketing scam by the linear algebra industry.

The video discusses five examples of linear algebra going wrong.

Stability AI struggles to raise funds at a $4 billion valuation and faces financial challenges.

Sponsor mention: Codium, an AI coding tool alternative to GitHub Copilot, is free for individual developers.

Google Search AI provided incorrect advice on a cooking issue.

A method to revert to the old Google Search without AI by adding 'udm=14' as a URL parameter.

Meta's data collection policy for AI training uses Facebook and Instagram user data by default.

The Humane Pin and Rabbit R1 are criticized as useless, overhyped products; the R1 is described as being built on a scam.

GPT-5 is OpenAI's new frontier model, claimed to bring capabilities closer to AGI.

Earlier speculation that AGI had been achieved internally proves incorrect; a former board member says Sam Altman was fired for lying to the board.

Altman's role in stoking fear around AI for hype and potential regulatory capture is questioned.

AI is a useful tool, but the narrative around large language models being intelligent is challenged.

Transcripts

00:00

Yesterday, Elon raised $6 billion for his linear algebra company xAI, then predicted artificial general intelligence surpassing humans will be here by next year. Moments later, Meta's Chief AI wizard Yann LeCun joined the chat to roast Elon on his own platform, basically implying that his predictions are insane. Elon clapped back, casting doubt on LeCun's science abilities. Then LeCun was like, "Dog, I've published 80 technical papers in the last 2 years, how about you?" Elon was like, "That's nothing," and then LeCun was like, "Well, if it's not published, it's definitely not science," and then Elon's all, "This is one of the dumbest things anyone has ever said," at which point LeCun gave him the elitist definition of how science works, thus concluding one of the dumbest debates of all time between two of the brightest minds in technology. LeCun and Elon may not agree on the definition of science, but they also disagree on the future of AGI: Elon says it's about to come, but LeCun thinks it may never come. Some say that AI is not even real, it's just the greatest marketing scam ever pulled by the linear algebra industry, and in today's video we'll look at five recent examples, both hilarious and terrifying, of when linear algebra goes wrong. It is May 30th, 2024, and you're watching The Code Report.

01:04

First up, we need to talk about Stability AI, which might need to be renamed Unstable AI, and that's unfortunate because it's a far more open AI company than its competitors like OpenAI, and they create the best open image models out there, like Stable Diffusion. It's raised a lot of money, but recently failed to raise more money at a $4 billion valuation, and they're spending money like crazy. In addition, its founder and CEO recently announced plans to step down and said you're not going to beat centralized AI with more centralized AI. This is the same guy that said in 5 years there will be no programmers left. The unfortunate reality is that you need billions of dollars in cash to burn through to train these large foundation models, and Stability AI is not like Microsoft, where they rake in billions of dollars every month selling a subpar operating system, and if it doesn't come up with a way to generate a lot more revenue fast, it could fail. My only hope is that they release Stable Diffusion 3 before that happens.

01:52

Before we go any further though, I want to tell you about an amazing tool that every programmer should know about called Codium, the sponsor of today's video. By now you've probably heard of GitHub Copilot, but what if there is a better alternative with all the same features that was completely free for individual developers? It sounds too good to be true, but if you don't believe me, install it on any editor right now, then hit Ctrl+I. You now have the ability to write code with AI using a model that's trained on permissive data. It can easily refactor and explain existing code, but most importantly, it understands context. It'll automatically take into account the context of open files and the current git repo, but this context can also be customized: you can choose other files, directories, and repos to guide the AI to the proper code style and APIs for your project, and it's just flat-out fun to use. Use Codium for free forever with no strings attached using the link right here.

02:41

But now let's get back to failures, like Google Search AI. This weekend I was making a pizza, but the cheese was falling off, so I went to Google to see what I could do about this. Gemini, in its infinite wisdom, told me to go ahead and add an eighth of a cup of non-toxic glue to the sauce to give it more tackiness, and I had to learn the hard way that this is not a great idea if you're concerned about diarrhea. Also, don't ask Google AI about depression or homicidal feelings, just trust me on that one. But luckily, I have a little secret to let you in on: if you go to Google Search and then add this URL parameter of udm=14, you can go back to the old Google Search before it panic-implemented AI everywhere. It's just pure Google Search, where your only worry is being bombarded by advertisements.

03:18

Speaking of ads, the world's most notorious data harvester of Meta is also a front runner in the AI race. They've given us some great open models like Llama; however, in order to train these models they need a lot of data, and they want to use Facebook and Instagram user data to do that. Now, most people don't want their personal information devoured by AI, so in true Meta form, they create a policy that collects your data for AI by default, then provide a convoluted way to opt out of it. You have to find and fill out this form, which requires a written explanation, then request a one-time password to finally submit it. They want to cover their asses legally, but also really don't want you to opt out, and I 100% guarantee there will be a useless Senate hearing about this in the next couple years.

03:55

But our next major failure is a two-for-one, including the Humane Pin and the Rabbit R1. The Humane Pin, which was famously obliterated by Marques Brownlee, spent 6 years developing this thing in stealth mode while raising $230 million, and now they're trying to find a sucker to buy this company for a billion dollars. The Rabbit R1 is a similar useless product, which I criticized a few months ago because you can do the exact same thing with an app on your phone, but I had no idea how deep this scam actually goes. I don't want to give away the details, so check out Coffeezilla's video on that: it's built on a scam.

04:25

And finally, that brings us to number five: GPT-5. OpenAI recently announced that they're now training GPT-5, a new frontier model that will quote "bring us to the next level of capabilities on our path to AGI," and that statement is extremely disappointing, because if you read between the lines it means that AGI has not been achieved internally, like everyone was speculating a few months ago when Sam Altman was fired. This week, one of the former board members spoke out about why Sam Altman was fired, and it had nothing to do with AGI taking over humanity. Apparently, Sam Altman was outright lying to the board. Why lie? And as of this week, he's now the head of OpenAI's new 9-person safety committee, but it makes you wonder if all this pontificating about AI safety is actually just a lie, if fear keeps the hype train going and opens the door to things like regulatory capture and ultimately more money. Artificial intelligence is no doubt a useful tool, but the greatest trick linear algebra ever pulled was making humans think that large language models are actually intelligent. This has been The Code Report, thanks for watching, and I will see you in the next one.


Related Tags
AGI Debate, Tech Humor, Elon Musk, AI Industry, Linear Algebra, Artificial Intelligence, Tech Critique, Futurology, Code Report, AI Ethics