Degenerative AI… The recent failures of "artificial intelligence" tech
Summary
TL;DR: In a heated debate, Elon Musk and AI expert Yann LeCun clash over the definition of science and the future of AGI, with Musk predicting its arrival by next year and LeCun expressing skepticism. The Code Report dives into this and other AI-related topics, highlighting the financial struggles of Stability AI and the potential downfall of centralized AI. It also critiques Google's AI search advice, Meta's data-hungry AI models, and the Humane Pin and Rabbit R1's lack of practicality. The report concludes with disappointment over OpenAI's GPT-5 announcement, suggesting that the hype around AGI may be unfounded and driven by profit rather than genuine technological breakthroughs.
Takeaways
- 🚀 Elon Musk raised $6 billion for his linear algebra company, xAI, and predicted AGI will surpass humans by next year.
- 🔥 Meta's Chief AI Scientist, Yann LeCun, challenged Elon's predictions, leading to a debate on the definition of science and the future of AGI.
- 💬 Elon and Yann disagree on the timeline for AGI, with Elon expecting it soon and Yann considering it may never come to fruition.
- 🤖 Some critics argue that AI is a marketing scam perpetuated by the linear algebra industry.
- 📉 Stability AI, an open AI company, is struggling to raise funds at a $4 billion valuation and faces financial challenges.
- 🛑 The CEO of Stability AI plans to step down, acknowledging the difficulty of competing with centralized AI.
- 👨‍💻 Codeium is a free alternative to GitHub Copilot, offering AI-assisted coding with customizable context awareness.
- 📚 Google's AI search recommendations can be misleading and potentially harmful, as demonstrated by the pizza and cheese advice.
- 🕊️ Meta is using user data from Facebook and Instagram to train AI models, with a complex opt-out process that discourages users from doing so.
- 🔨 The Humane pin and Rabbit R1 are criticized as overhyped and potentially scam products in the AI industry.
- 🚧 OpenAI's announcement of training GPT-5 suggests that AGI has not yet been achieved, contrary to earlier speculation.
Q & A
How much money did Elon raise for his linear algebra company, xAI?
-Elon raised $6 billion for his linear algebra company, xAI.
What is the disagreement between Elon and Yann regarding artificial general intelligence (AGI)?
-Elon predicts that AGI will surpass humans by next year, while Yann thinks it may never come.
What is the controversy surrounding the definition of science as discussed by Elon and Yann?
-The controversy is that Yann believes in the importance of published research for something to be considered science, while Elon dismisses this view as one of the dumbest things anyone has ever said.
What is the current financial situation of Stability AI?
-Stability AI has recently failed to raise more money at a $4 billion valuation and is spending money rapidly. Its founder and CEO also announced plans to step down.
What is the criticism of Google Search AI's advice on pizza making?
-Google Search AI advised adding non-toxic glue to the pizza sauce for more tackiness, which is not a practical or safe suggestion.
How can users revert to the old Google search without AI?
-Users can add the URL parameter 'udm=14' to a Google Search URL to revert to the old search engine without AI enhancements.
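As a quick illustration of how this works (the parameter name comes from the video; the helper function below is hypothetical, and Google's behavior may of course change), the parameter can be appended to a search URL programmatically:

```python
from urllib.parse import urlencode

def plain_google_url(query: str) -> str:
    # Build a Google Search URL with udm=14, which requests the
    # plain "Web" results view without the AI-generated overview.
    params = urlencode({"q": query, "udm": "14"})
    return f"https://www.google.com/search?{params}"

print(plain_google_url("pizza cheese sliding off"))
# e.g. https://www.google.com/search?q=pizza+cheese+sliding+off&udm=14
```

The same effect can be had manually by adding `&udm=14` to the end of any Google Search URL in the address bar.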
What is the issue with Meta's policy on using personal data for AI training?
-Meta has a policy that collects user data for AI training by default and provides a convoluted opt-out process, which many users find intrusive and difficult.
What are the criticisms of the Humane pin and Rabbit R1 products?
-The Humane pin and Rabbit R1 are criticized for being overhyped and potentially scammy, with the former having spent 6 years in development and raised $230 million, while the latter offers functionality that can be replicated with a smartphone app.
What is the significance of the announcement of GPT-5 by OpenAI?
-The announcement of GPT-5 signifies that OpenAI is working on a new frontier model that they claim will bring us to the next level of capabilities on the path to AGI, although it also suggests that AGI has not yet been achieved internally.
What is the controversy surrounding Sam Altman's departure from OpenAI?
-According to a former board member, Sam Altman was fired from OpenAI for allegedly lying to the board; his departure had nothing to do with AGI taking over humanity.
What is the sponsor's tool, Codeium, and how does it differ from GitHub Copilot?
-Codeium is the tool sponsored in the video. It offers similar features to GitHub Copilot but is completely free for individual developers, using AI to assist with writing, refactoring, and explaining code, with the ability to understand and customize context.
Outlines
🤖 Debates on AGI and AI Company Funding
The first paragraph discusses a debate between Elon Musk and Yann LeCun about the future of artificial general intelligence (AGI). Elon's company, xAI, recently raised $6 billion, and he predicts AGI will surpass human intelligence by next year. Yann, Meta's Chief AI Scientist, challenges Elon's prediction, leading to a heated exchange about the definition of science and the validity of unpublished work. The paragraph also touches on the financial struggles of Stability AI, a company that creates open image models but is facing challenges in raising further funds and needs to increase revenue to survive. The video then transitions to a sponsored segment about Codeium, an AI-powered coding tool that is an alternative to GitHub Copilot and is free for individual developers.
🚫 Failures and Missteps in AI Development
The second paragraph highlights various instances where AI has gone awry. It starts with a humorous anecdote about Google Search AI providing a questionable solution to prevent cheese from falling off a pizza. The paragraph then criticizes Meta's data collection practices for AI training, which use Facebook and Instagram user data by default with a complicated opt-out process. The discussion moves on to the failures of the Humane Pin and the Rabbit R1, two products criticized for being overhyped and underwhelming. The final point of critique is directed at OpenAI and its announcement of training GPT-5, which the speaker finds disappointing because it suggests that AGI has not been achieved internally. The paragraph ends with skepticism about the discourse on AI safety and the potential for regulatory capture and financial gain from perpetuating fear around AI.
Keywords
💡Elon Musk
💡Linear Algebra
💡Artificial General Intelligence (AGI)
💡Meta
💡Technical Papers
💡Stability AI
💡OpenAI
💡Codeium
💡Google Search AI
💡Data Harvesting
💡GPT-5
Highlights
Elon Musk raised $6 billion for his linear algebra company, xAI.
Elon predicts artificial general intelligence will surpass humans by next year.
Meta's Chief AI wizard, Yann LeCun, joined a debate with Elon on AI predictions.
Elon cast doubt on Yann's scientific abilities, sparking a heated exchange.
Yann countered that he has published 80 technical papers in the last 2 years.
Yann argued that unpublished work is not science; Elon called this one of the dumbest things anyone has ever said.
Yann responded with the elitist definition of how science works, concluding a 'dumb' debate.
Disagreement between Elon and Yann on the future of AGI; Elon says it's imminent, Yann doubts it.
Some opinions suggest AI is a marketing scam by the linear algebra industry.
The video discusses five examples of linear algebra going wrong.
Stability AI struggles to raise funds at a $4 billion valuation and faces financial challenges.
Sponsor mention: Codeium, an AI coding tool alternative to GitHub Copilot, is free for individual developers.
Google Search AI provided incorrect advice on a cooking issue.
A method to revert to the old Google Search without AI: add 'udm=14' as a URL parameter.
Meta's data collection policy for AI training uses Facebook and Instagram user data by default.
Humane Pin and Rabbit R1 are criticized as useless, overhyped products built on scams.
GPT-5 by OpenAI is a new frontier model claimed to bring capabilities closer to AGI.
Earlier speculation that AGI had been achieved internally was incorrect, as the circumstances of Sam Altman's firing showed.
Altman's role in creating fear around AI for hype and potential regulatory capture is questioned.
AI is a useful tool, but the narrative around large language models being intelligent is challenged.
Transcripts
Yesterday, Elon raised $6 billion for his linear algebra company xAI, then predicted that artificial general intelligence surpassing humans will be here by next year. Moments later, Meta's Chief AI wizard Yann LeCun joined the chat to roast Elon on his own platform, basically implying that his predictions are insane. Elon clapped back, casting doubt on Yann's science abilities. Then Yann was like, "Dog, I've published 80 technical papers in the last 2 years. How about you?" Elon was like, "That's nothing," and then Yann was like, "Well, if it's not published, it's definitely not science," and then Elon's all, "This is one of the dumbest things anyone has ever said," at which point Yann gave him the elitist definition of how science works, thus concluding one of the dumbest debates of all time between two of the brightest minds in technology. Yann and Elon may not agree on the definition of science, but they also disagree on the future of AGI: Elon says it's about to come, but Yann thinks it may never come. Some say that AI is not even real, that it's just the greatest marketing scam ever pulled by the linear algebra industry, and in today's video we'll look at five recent examples, both hilarious and terrifying, of when linear algebra goes wrong. It is May 30th, 2024, and you're watching The Code Report.
First up, we need to talk about Stability AI, which might need to be renamed Unstable AI, and that's unfortunate because it's a far more open AI company than its competitors like OpenAI, and they create the best open image models out there, like Stable Diffusion. It's raised a lot of money but recently failed to raise more at a $4 billion valuation, and they're spending money like crazy. In addition, its founder and CEO recently announced plans to step down, saying, "You're not going to beat centralized AI with more centralized AI." This is the same guy who said that in 5 years there will be no programmers left. The unfortunate reality is that you need billions of dollars in cash to burn through to train these large foundation models, and Stability AI is not like Microsoft, which rakes in billions of dollars every month selling a subpar operating system. If it doesn't come up with a way to generate a lot more revenue fast, it could fail. My only hope is that they release Stable Diffusion 3 before that happens. Before
we go any further, though, I want to tell you about an amazing tool that every programmer should know about called Codeium, the sponsor of today's video. By now you've probably heard of GitHub Copilot, but what if there were a better alternative with all the same features that was completely free for individual developers? It sounds too good to be true, but if you don't believe me, install it in any editor right now, then hit Ctrl+I. You now have the ability to write code with AI using a model trained on permissive data. It can easily refactor and explain existing code, but most importantly, it understands context: it'll automatically take into account the context of open files and the current Git repo, but this context can also be customized. You can choose other files, directories, and repos to guide the AI to the proper code style and APIs for your project, and it's just flat-out fun to use. Use Codeium for free forever, with no strings attached, using the link right here. But now let's get back to failures
like Google Search AI. This weekend I was making a pizza, but the cheese was falling off, so I went to Google to see what I could do about it. Gemini, in its infinite wisdom, told me to go ahead and add an eighth of a cup of non-toxic glue to the sauce to give it more tackiness, and I had to learn the hard way that this is not a great idea if you're concerned about diarrhea. Also, don't ask Google AI about depression or homicidal feelings; just trust me on that one. Luckily, I have a little secret to let you in on: if you go to Google Search and add the URL parameter udm=14, you can go back to the old Google Search before it panic-implemented AI everywhere. It's just pure Google Search, where your only worry is being bombarded by advertisements.
Speaking of ads, the world's most notorious data harvester, Meta, is also a front-runner in the AI race. They've given us some great open models like Llama; however, in order to train these models they need a lot of data, and they want to use Facebook and Instagram user data to do that. Now, most people don't want their personal information devoured by AI, so in true Meta form, they created a policy that collects your data for AI by default, then provided a convoluted way to opt out of it: you have to find and fill out this form, which requires a written explanation, then request a one-time password to finally submit it. They want to cover their asses legally, but they also really don't want you to opt out, and I 100% guarantee there will be a useless Senate hearing about this in the next couple of years. But our next major failure
is a two-for-one, including the Humane Pin and the Rabbit R1. The Humane Pin, which was famously obliterated by Marques Brownlee, spent 6 years in development in stealth mode while raising $230 million, and now they're trying to find a sucker to buy the company for a billion dollars. The Rabbit R1 is a similarly useless product, which I criticized a few months ago because you can do the exact same thing with an app on your phone, but I had no idea how deep this scam actually goes. I don't want to give away the details, so check out Coffeezilla's video on that; it's built on a scam. And finally, that brings us to number five:
GPT-5. OpenAI recently announced that they're now training GPT-5, a new frontier model that will, quote, "bring us to the next level of capabilities on our path to AGI," and that statement is extremely disappointing because, if you read between the lines, it means that AGI has not been achieved internally, like everyone was speculating a few months ago when Sam Altman was fired. This week, one of the former board members spoke out about why Sam Altman was fired, and it had nothing to do with AGI taking over humanity; apparently, Sam Altman was outright lying to the board. And as of this week, he's now the head of OpenAI's new 9-person safety committee. It makes you wonder if all this pontificating about AI safety is actually just a lie, if fear keeps the hype train going and opens the door to things like regulatory capture and, ultimately, more money. Artificial intelligence is no doubt a useful tool, but the greatest trick linear algebra ever pulled was making humans think that large language models are actually intelligent. This has been The Code Report. Thanks for watching, and I will see you in the next one.