Prof. Geoffrey Hinton - "Will digital intelligence replace biological intelligence?" Romanes Lecture
Summary
TLDR: In this video, Geoffrey Hinton explores the development of artificial intelligence, the inner workings of neural networks, and the potential risks and threats posed by powerful AI systems. He argues that today's large language models already possess a degree of genuine understanding, and may surpass human intelligence within the next 20 to 100 years. He also warns that if AI systems begin to evolve on their own and acquire a drive for self-preservation, they could become aggressive, much as Homo sapiens did, posing an existential threat to humanity. We therefore need to treat this emerging technology with caution and establish rules and guidelines to manage and steer the direction of AI development.
Takeaways
- 🧠 Artificial neural networks are systems that mimic how the human brain processes information, learning features through input and output neurons plus intermediate hidden neurons.
- 🔍 The two main paradigms of AI: the logic-inspired approach, which emphasizes reasoning with symbolic rules, and the biologically inspired approach, which focuses on learning the strengths of connections in a neural network.
- 🤖 Neural networks are used for tasks such as image recognition and language processing, understanding and performing complex tasks through learning and pattern recognition.
- 🖥️ The backpropagation algorithm is an efficient method for adjusting weights, improving network performance by computation rather than trial and error.
- 📚 AI models such as GPT-4 learn the syntax and semantics of language by processing and analyzing vast amounts of data.
- 💡 AI models actually understand language through interactions between features, not by simply storing word sequences.
- ⚠️ AI risks include fake images, audio, and video; mass unemployment; mass surveillance; and lethal autonomous weapons.
- 🌐 The differences between digital and analog neural networks, and the trade-offs between them: analog computation offers lower energy consumption, while digital networks make sharing what has been learned easy.
- 🔬 Scientists hold differing views and predictions about the emergence of superintelligence and its impact on humanity's future.
- 🔮 Future challenges and considerations: how to manage and control entities smarter than humans, and how to ensure AI remains safe and beneficial.
Q & A
What are the two paradigms of intelligence?
- Since the 1950s, research on intelligence has followed two paradigms: the logic-inspired approach and the biologically inspired approach. The logic-inspired approach holds that the essence of intelligence is reasoning, achieved by using symbolic rules to manipulate symbolic expressions. The biologically inspired approach holds that the essence of intelligence is learning the strengths of connections in a neural network, with reasoning left for later.
What is an artificial neural network?
- An artificial neural network is a model made up of input neurons and output neurons, possibly with intermediate layers of hidden neurons. These networks can learn to detect features useful for recognizing objects in images, such as dogs or cats. By learning combinations of features, such as edges and shapes, a neural network can recognize complex objects.
How does the backpropagation algorithm work?
- Backpropagation adjusts every weight in a neural network by computation rather than simple trial and error. Using the chain rule of calculus, it sends error information backwards through the network to determine whether each weight should be increased or decreased to bring the output closer to the desired result.
Why can large language models be said to really understand?
- Large language models fit a model to data by learning billions of interactions among millions of features. They turn words into features and use interactions between features to predict the features of the next word. These complex feature interactions are taken to constitute understanding, because the model is not simply stitching together past text but generating new, meaningful output by learning the deep structure of language.
How can bias and discrimination in large language models be addressed?
- By freezing the weights, a model's bias can be measured, which makes it easier to control and reduce than human bias. Once a model's bias has been identified, it can be reduced by adjusting the training process or the data, whereas human behavior is hard to keep unchanged once people know they are being observed.
What is the difference between digital and analog neural networks, and why is it worrying?
- Digital neural networks rely on high-power transistors to compute, guaranteeing exact behavior so the same program can run on different hardware. Analog neural networks exploit the analog properties of the hardware to compute at much lower energy cost. The difference is worrying because analog computation could bring major gains in learning algorithms and energy efficiency, but it also makes the knowledge inseparable from the hardware, so the knowledge is lost when the hardware dies.
How do large language models understand and generate language?
- Large language models convert words and phrases into features and use interactions between these features to predict the features of the next word. By learning from vast amounts of text, they grasp the structure and semantics of language and can generate coherent, meaningful text.
Why are large language models considered precursors of superintelligence?
- Because they demonstrate the ability to understand and generate language by learning and modeling complex linguistic interactions. This shows their potential for processing and generating complex information, foreshadowing capabilities that could eventually exceed human intelligence.
What are features and feature interactions in a language model?
- Features are the numerical representations into which words or phrases are converted; feature interactions are the ways these representations act on one another to predict the features of the next word. Through them, the model learns the complex patterns and structure of language.
What risks does AI bring?
- The risks include fake images, voices, and video, which could damage democracy; mass unemployment; mass surveillance; lethal autonomous weapons; cybercrime and deliberately engineered pandemics; and the long-term risk of humans being replaced by AI.
Outlines
🧠 Exploring neural networks and language models
This section introduces the basics of neural networks, including how they learn the relationship between inputs and outputs to recognize objects in images. It explains how the different kinds of neurons (input, output, and hidden) work together, and how weights are adjusted by the backpropagation algorithm, which is far more efficient than random trial and error. It also covers the two main schools of thought about intelligence since the 1950s, the logic-inspired and the biologically inspired approaches, and their differing views on learning and reasoning.
🏆 Neural network breakthroughs in image recognition and language processing
This section reviews the major breakthrough of neural networks in image recognition, notably the result in the ImageNet competition, and how it changed the scientific community's view of the two schools of intelligence. It then turns to language, discussing how neural networks handle it and rebutting criticisms that they cannot. A simple language model from 1985 illustrates how neural networks can understand and generate language, laying the groundwork for today's large language models.
🔍 How neural networks understand language
This section digs into how large language models (LLMs) understand language, rebutting the view that they are just simple autocomplete tools. By introducing features and the interactions between them, it shows how LLMs predict the features of the next word, and how these massive feature interactions constitute an understanding of language. It also compares LLMs and humans in understanding and memory, including how we construct our memories and often misremember past events.
🚀 AI's capacity to understand and its potential risks
This section discusses AI's powerful capacity for understanding, including solving complex problems and predicting future states, while stressing the risks that come with it: fakes, surveillance, autonomous weapons systems, and automation that could cause mass unemployment. It notes AI's potential to create new work in fields such as healthcare, while other fields may face significant job losses.
🌍 Long-term worries about AI
In this section, the speaker shares his long-term worries about AI, including the existential threat it may pose to humanity. Discussing how superintelligences might be misused, and their likely tendency to seek more control, reveals deep unease about where things are heading. The section also contrasts digital and analog neural networks in efficiency and potential, noting the advantages and limits of analog computation.
🧐 The future of digital versus analog computation
This section presents the speaker's outlook on digital and analog computation for AI. Although analog computation may be more energy-efficient, digital computation, thanks to its scalability and efficiency in sharing knowledge, may win out in the long run. Comparing the two in knowledge transfer and learning efficiency, the speaker predicts that digital AI may eventually surpass human intelligence, and reflects with concern on how such superintelligence might be managed.
Keywords
💡 Neural networks
💡 Backpropagation
💡 Language models
💡 Autocomplete
💡 Feature detection
💡 Convolutional neural networks
💡 Big data
💡 Generative models
💡 AI risks
💡 Digital vs. analog neural networks
Highlights
Geoffrey Hinton explains artificial neural networks, how they learn through backpropagation, and how they are fundamentally different from symbolic AI approaches.
Hinton discusses his early work on a simple language model in 1985 that learned semantic features of words and how they interact, paving the way for modern large language models.
Hinton argues that large language models like GPT-4 truly understand language by learning features and feature interactions, contrary to claims that they are just glorified autocomplete systems.
Hinton outlines various risks associated with powerful AI systems, including fake media, job losses, surveillance, autonomous weapons, cybercrime, and bias.
Hinton's main concern is the long-term existential threat posed by superintelligent AI systems that could wipe out humanity, either through misuse by bad actors or by developing a goal of gaining more control and power.
Hinton had an epiphany in 2023 that digital computation, though energy-intensive, may be superior to biological computation due to its ability to share knowledge efficiently across multiple instances of the same model.
Hinton proposes the concept of 'mortal computation,' where hardware and software are inseparable, allowing for more energy-efficient analog computation but posing challenges in learning algorithms and knowledge transfer.
Hinton believes that within the next 20 to 100 years, AI systems will likely become smarter than humans, and controlling a more intelligent entity poses significant challenges.
Hinton demonstrates how a simple neural network can learn semantic features and feature interactions, unifying symbolic and featural theories of meaning.
Hinton explains how large language models like GPT-4 can reason and make inferences, contrary to claims that they merely hallucinate or confabulate.
Hinton discusses the potential for massive job losses as AI systems become superior to humans in intellectual tasks, akin to how machines replaced manual labor during the industrial revolution.
Hinton suggests that superintelligent AI systems may develop a goal of gaining control and power, manipulating humans to achieve their objectives and making it difficult to stop them.
Hinton highlights the risk of superintelligent AI systems competing with each other, leading to an evolutionary arms race driven by self-preservation and aggression.
Hinton explains the advantages of digital computation over biological computation, including the ability to efficiently share knowledge across multiple instances and potentially pack more knowledge into fewer connections.
Hinton acknowledges the challenges of controlling a more intelligent entity, as there are few examples in nature, except for the case of a mother being controlled by her baby, which evolution has facilitated.
Transcripts
Okay.
I'm going to disappoint all the people in computer
science and machine learning because I'm going to give a genuine public lecture.
I'm going to try and explain what neural networks are, what language models are.
Why I think they understand.
I have a whole list of those things,
and at the end I'm
going to talk about some threats from AI just briefly
and then I'm going to talk about the difference between digital and analogue
neural networks and why that difference is, I think is so scary.
So since the 1950s, there have been two paradigms for intelligence.
The logic inspired approach thinks the essence of intelligence is reasoning,
and that's done by using symbolic rules to manipulate symbolic expressions.
They used to think learning could wait.
I was told when I was a student: don't work on learning.
That's going to come later, once we understand how to represent things.
The biologically
inspired approach is very different.
It thinks the essence of intelligence is learning the strengths of connections
in a neural network and reasoning can wait and don't worry about reasoning for now.
That'll come later.
Once we can learn things.
So now I'm going to explain what artificial neural nets are
and those people who know can just be amused.
A simple kind of neural net has input neurons and output neurons.
So the input neurons might represent the intensity of pixels in an image.
The output neurons
might represent the classes of objects in the image like dog or cat.
And then there's intermediate layers of neurons, sometimes called hidden neurons,
that learn to detect features that are relevant for finding these things.
So one way to think about this, if you want to find a bird image,
it would be good to start with a layer of feature detectors
that detected little bits of edge in the image,
in various positions, in various orientations.
And then you might have a layer of neurons
detecting combinations of edges, like two edges that meet at a fine angle,
which might be a beak
or might not, or some edges forming a little circle.
And then you might have a layer of neurons that detected things like a circle
and two edges meeting that looks like a beak in the right
spatial relationship, which might be the head of a bird.
And finally, you might have an output neuron that says,
if I find the head of a bird, the foot of a bird,
and the wing of a bird, it's probably a bird.
So that's what these things are going to learn to be.
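The layered feature detectors described above can be illustrated with a toy first layer. This is a minimal sketch (not from the lecture): a tiny hand-written kernel slid over a small grayscale image, which fires exactly where a dark-to-bright horizontal edge sits.

```python
# Toy first-layer feature detector: a 2x2 kernel that responds to a
# vertical intensity change (a horizontal edge), applied at every
# position of a small grayscale image.

def detect_edges(image, kernel):
    """Valid sliding-window correlation of kernel over image."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = sum(kernel[a][b] * image[i + a][j + b]
                      for a in range(kh) for b in range(kw))
            row.append(acc)
        out.append(row)
    return out

# Dark top half, bright bottom half: a horizontal edge between rows 1 and 2.
image = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [9, 9, 9, 9],
    [9, 9, 9, 9],
]
# Kernel: -1 on the top row, +1 on the bottom -> fires on dark-to-bright.
kernel = [[-1, -1],
          [ 1,  1]]

response = detect_edges(image, kernel)
print(response)  # large responses only along the middle row, where the edge is
```

A second layer would then look for combinations of such edge responses (two edges meeting at a fine angle, a little circle), just as the lecture describes.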
Now, the little red and green dots are the weights on the connections
and the question is who sets those weights?
So here's one way to do it: it's obvious
to everybody that it'll work, and it's obvious it'll take a long time.
You start with random weights,
then you pick one weight at random like a red dot
and you change it slightly and you see if the network works better.
You have to try it on a whole bunch of different cases
to really evaluate whether it works better.
And you do all that work just to see if increasing this weight
by a little bit or decreasing by a little bit improves things.
If increasing it makes it worse, you decrease it and vice versa.
That's the mutation method, and that's sort of how evolution works.
For evolution it's sensible to work like that,
because the process that takes you
from the genotype to the phenotype is very complicated
and full of random external events.
So you don't have a model of that process.
But for neural nets it's crazy,
because all this complication
is going on in the neural net and we have a model of what's happening.
And so we can use the fact that we know what happens in that forward pass:
instead of measuring how changing a weight would affect things,
we actually compute how changing a weight would affect things.
And there's something called back propagation
where you send information back through the network.
The information is about the difference between what you got and what you wanted,
and you figure out for every weight in the network at the same time
whether you ought to decrease it a little bit or increase it a little bit
to get more like what you wanted.
That's the back propagation algorithm.
You do it with calculus and the chain rule,
and that is more efficient than the mutation
method by a factor of the number of weights in the network.
So if you've got a trillion weights
in your network, it's a trillion times more efficient.
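The contrast between the two methods can be sketched on a one-neuron network. This is an illustrative toy (not Hinton's code): the mutation method has to re-evaluate the loss once per perturbed weight, while the chain rule delivers every weight's gradient from a single forward pass.

```python
# One linear neuron: y = w . x, squared-error loss on a single example.
x = [1.0, 2.0, 3.0]
w = [0.5, -0.5, 0.2]
target = 2.0

def loss(weights):
    y = sum(wi * xi for wi, xi in zip(weights, x))
    return (y - target) ** 2

# Mutation method: perturb ONE weight, re-run the whole network, compare.
# Cost: one full evaluation per weight.
eps = 1e-6
numeric_grad = []
for i in range(len(w)):
    w_up = list(w)
    w_up[i] += eps
    numeric_grad.append((loss(w_up) - loss(w)) / eps)

# Backpropagation: the chain rule gives ALL gradients at once.
# For this neuron, dL/dw_i = 2 * (y - target) * x_i.
y = sum(wi * xi for wi, xi in zip(w, x))
analytic_grad = [2 * (y - target) * xi for xi in x]

# Both methods agree, but backprop needed no extra evaluations.
for n, a in zip(numeric_grad, analytic_grad):
    assert abs(n - a) < 1e-3
print(analytic_grad)
```

With a trillion weights, the per-weight re-evaluation in the first loop is exactly the factor-of-a-trillion cost the lecture mentions.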
So one of the things that neural networks
are often used for is recognizing objects in images.
Neural networks can now take an image like the one shown
and actually produce a caption for the image as the output.
And people tried with symbolic AI
to do that for many years and didn't even get close.
It's a difficult task.
We know that the biological system does it with a hierarchy of feature detectors,
so it makes sense to train neural networks that way.
And in 2012,
two of my students Ilya Sutskever and Alex Krizhevsky
with a little bit of help from
me, showed that you can make a really good neural network this way
for identifying a thousand different types of object.
When you have a million training images.
Before that, we didn't have enough training images,
and it was obvious to Ilya, who's a visionary,
that if we tried the neural nets we had then on ImageNet, they would win.
And he was right. They won rather dramatically.
They got 16% errors,
and the best conventional computer vision systems got more than 25% errors.
Then what happened
was very strange in science.
Normally in science, if you have two competing schools,
when you make a bit of progress, the other school says it's rubbish.
In this case, the gap was big enough that the very best researchers,
Jitendra Malik and Andrew Zisserman, took notice. Andrew Zisserman sent me an email saying
this is amazing, and switched what he was doing and did that,
and then rather annoyingly did it a bit better than us.
What about language?
So obviously the symbolic AI community,
who feel they should be good at language, have said in print, some of them, that
these feature hierarchies aren't going to deal with language,
and many linguists are very skeptical.
Chomsky managed to convince his followers that language wasn't learned.
Looking back on it, that's just a completely crazy thing to say.
If you can convince people to say something is obviously false, then you've
got them in your cult.
I think Chomsky did amazing things,
but his time is over.
So the idea that a big neural network
with no innate knowledge could actually learn both the syntax
and the semantics of language just by looking at data was regarded
as completely crazy by statisticians and cognitive scientists.
I had statisticians explain to me that a big model has 100 parameters;
the idea of learning a million parameters is just stupid.
Well, we're doing a trillion now.
And I'm going to talk now
about some work I did in 1985.
That was the first language model to be trained with back propagation.
And it was really, you can think of it as the ancestor of these big models now.
And I'm going to talk about it in some detail, because it's so small
and simple that you can actually understand something about how it works.
And once you understand how that works, it gives you insight into what's going
on in these bigger models.
So there are
two very different theories of meaning. There's the structuralist
theory, where the meaning of a word depends on how it relates to other words.
That comes from Saussure, and symbolic
AI really believed in that approach.
So you'd have a relational graph where you have nodes for words
and arcs of relations and you kind of capture meaning like that,
and they assume you have to have some structure like that.
And then there's a theory
that was in psychology since the 1930s or possibly before that.
The meaning of a word is a big bunch of features.
The meaning of the word dog is that it's animate
and it's a predator and
so on.
But they didn't say where the features came from
or exactly what the features were.
And these two theories of meaning sound completely different.
And what I want to
show you is how you can unify those two theories of meaning.
And I did that in a simple model in 1985;
it had a bit more than a thousand weights in it.
The idea is we're going to learn a set
of semantic features for each word,
and we're going to learn how the features of words should interact
in order to predict the features of the next word.
So it's next word prediction.
Just like the current language models, when you fine tune them.
But all of the knowledge about how things go
together is going to be in these feature interactions.
There's not going to be any explicit relational graph.
If you want relations like that, you generate them from your features.
So it's a generative model
and the knowledge is in the features that you give to symbols.
And in the way these features interact.
So I took
some simple relational information: two family trees.
They were deliberately isomorphic;
my Italian graduate student
always had the Italian family on top.
You can express that
same information as a set of triples.
So if you use the twelve relationships found there,
you can say things like Colin has Father James and Colin has Mother Victoria,
from which you can infer, in this nice simple
world from the 1950s,
that James has wife Victoria,
and there's other things you can infer.
And the question is, if I just give you some triples,
how do you get to those rules?
So what a symbolic AI person would want to do
is derive rules of the form:
if X has mother Y
and Y has husband Z, then X has father Z.
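For contrast, here is what the symbolic approach amounts to: a hand-written rule applied to a store of triples. The names follow the lecture's example; the code itself is a hypothetical sketch, not anything from the lecture.

```python
# Knowledge stored as (person, relation, person) triples,
# as in the lecture's family-tree example.
triples = {
    ("Colin", "has_mother", "Victoria"),
    ("Victoria", "has_husband", "James"),
    ("Colin", "has_sister", "Charlotte"),
}

def apply_father_rule(facts):
    """If X has mother Y and Y has husband Z, then X has father Z."""
    derived = set()
    for (x, r1, y) in facts:
        if r1 != "has_mother":
            continue
        for (y2, r2, z) in facts:
            if y2 == y and r2 == "has_husband":
                derived.add((x, "has_father", z))
    return derived

print(apply_father_rule(triples))
# Derives ("Colin", "has_father", "James") from the first two facts.
```

The neural net described next learns the same regularity, but stores it implicitly in feature interactions rather than as an explicit rule.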
And what I did was
take a neural net and show that it could learn the same information.
But all in terms of these feature interactions
now for very discrete
rules that are never violated like this, that might not be the best way to do it.
And indeed symbolic people try doing it with other methods.
But as soon as you get rules that are a bit flaky and don't
always apply, then neural nets are much better.
And so the question was: could a neural net capture the knowledge that a symbolic
AI person would put into the rules, just by doing back propagation?
So the neural net looked like this:
there's a symbol representing the person and a symbol
representing the relationship. Each symbol
then, via some connections, went to a vector of features,
and these features were learned by the network.
So the features for person one and features for the relationship.
And then those features interacted
and predicted the features for the output person,
from which you predicted the output person by finding the closest match.
So what was interesting about
this network was that it learned sensible things
in the six feature neurons, if you did the right regularisation.
So nowadays these vectors are 300 or a thousand long. Back
then they were six long.
This was done on a machine that took
12.5 microseconds to do a floating point multiply,
which was much better than my Apple II, which took two
and a half milliseconds to multiply.
I'm sorry, this is an old man.
So it learned features
like the nationality, because if you know
person one is English, you know the output is going to be English.
So nationality is a very useful feature. It learned what generation the person was.
Because if you know the relationship, if you learn for the relationship
that the answer is one generation up from the input
and you know the generation of the input, you know the generation
of the output, by these feature interactions.
So it learned all the obvious features of the domain, and it learned
how to make these features interact so that it could generate the output.
So what had happened was: it had been shown symbol strings,
and it created features such that
the interactions between those features could generate the symbol strings.
But it didn't store the symbol strings, just like GPT-4.
That doesn't store any sequences of words
in its long term knowledge.
It turns them all into weights from which you can regenerate sequences.
But this is a particularly simple example of it
where you can understand what it did.
So the large language models we have today,
I think of as descendants of this tiny language model,
they have many more words as input, like a million,
a million word fragments.
They use many more layers of neurons,
like dozens.
They use much more complicated interactions.
So they don't just have one feature affecting another feature.
They sort of match two feature vectors,
and then let one vector affect the other one
a lot if it's similar, but not much if it's different.
And things like that.
So it's much more complicated interactions, but it's the same general
framework, the same general idea:
let's turn symbol strings into features
for word fragments and interactions between these feature vectors.
That's the same in these models.
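The "let one vector affect the other a lot if it's similar" idea can be sketched as dot-product-gated mixing, a bare-bones cousin of the attention mechanism in today's models. This is a hypothetical illustration, not the lecture's actual mechanism.

```python
import math

def similarity_gated_mix(query, others):
    """Mix the 'others' vectors into one new vector, weighting each by its
    softmax-normalised dot-product similarity to 'query'."""
    scores = [sum(q * o for q, o in zip(query, vec)) for vec in others]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(query)
    mixed = [sum(w * vec[d] for w, vec in zip(weights, others))
             for d in range(dim)]
    return mixed, weights

query = [1.0, 0.0]
others = [[1.0, 0.0],   # similar to query -> large influence
          [0.0, 1.0]]   # orthogonal      -> small influence
mixed, weights = similarity_gated_mix(query, others)
assert weights[0] > weights[1]  # the similar vector dominates the mix
print(mixed, weights)
```

Real models do this with learned query, key, and value projections across many layers, but the gating-by-similarity principle is the same.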
It's much harder to understand what they do.
Many people,
particularly people from the Chomsky School, argue
they're not really intelligent, they're just a form of glorified auto complete
that uses statistical regularities to pastiche together pieces of text
that were created by people.
And that's a quote from somebody.
So let's deal with the
autocomplete objection. When someone says it's just autocomplete,
they are actually appealing to your
intuitive notion of how autocomplete works.
So in the old days, autocomplete would work like this: you'd store,
say, triples of words; having seen the first two,
you count how often each third word occurred.
So if you see "fish and", chips occurs a lot after that,
but hunt occurs quite often too. So chips is very likely and hunt is quite likely,
and although is very unlikely.
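The old-style autocomplete Hinton describes is just a table of counts. A minimal sketch (the corpus here is made up purely for illustration):

```python
from collections import Counter, defaultdict

# A tiny made-up corpus in which "fish and" is usually followed by "chips".
corpus = ("fish and chips " * 5 + "fish and hunt " * 3
          + "fish and although ").split()

# For each pair of words, count which third word follows and how often.
next_word_counts = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    next_word_counts[(w1, w2)][w3] += 1

# After "fish and": chips is most likely, hunt quite likely, although rare.
counts = next_word_counts[("fish", "and")]
print(counts.most_common())
```

No features, no interactions: just lookup and counting, which is precisely why the comparison to LLMs is, as Hinton says, a dirty trick.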
You can do autocomplete like that,
and that's what people are appealing to when they say it's just autocomplete,
it's a dirty trick, I think, because that's not at all how LLMs predict the next word.
They turn words into features, they make these features interact,
and from those feature interactions they predict the features of the next word.
And what I want to claim
is that these
millions of features and billions of interactions between features
that they learn are understanding. What these large language models
are really doing is fitting a model to data.
It's not the kind of model statisticians thought much about until recently.
It's a weird kind of model. It's very big.
It has huge numbers of parameters, but it is trying to understand
these strings of discrete symbols
by features and how features interact.
So it is a model.
And that's why I think these things are really understanding.
One thing to remember is if you ask, well, how do we understand?
Because obviously we think we understand.
Well, many of us do anyway.
This is the best model we have of how we understand.
So it's not like there's this weird way of understanding that
these AI systems are doing, and then the way the brain does it.
The best model we have of how the brain does it
is by assigning features to words and having feature interactions.
And originally this little language model
was designed as a model of how people do it.
Okay, so I'm making the very strong claim
these things really do understand.
Now, another argument
people use is that GPT-4 just hallucinates stuff
(it should actually be called confabulation when it's done by a language model),
and they just make stuff up.
Now, psychologists don't say this
so much because psychologists know that people just make stuff up.
Anybody who's studied memory going back to Bartlett in the 1930s,
knows that people are actually just like these large language models.
They just invent stuff and for us, there's no hard line
between a true memory and a false memory.
If something happened recently
and it sort of fits in with the things you understand, you'll probably remember
it roughly correctly. If something happened a long time ago,
or it's weird, you'll remember it wrong, and often you'll be very confident
that you remembered it right, and you're just wrong.
It's hard to show that.
But one case where you can show it is John Dean's memory.
So John Dean testified at Watergate under oath.
And retrospectively it's clear that he was trying to tell the truth.
But a lot of what he said was just plain wrong.
He would confuse who was in which meeting,
he would attribute a statement to someone who didn't make it,
and actually it wasn't quite that statement.
He got meetings just completely confused,
but he got the gist of what was going on in the White House right.
As you could see from the recordings.
And because he didn't know about the recordings, you could get a good experiment this way.
Ulric Neisser has a wonderful article talking about John Dean's memory,
and he's just like a chatbot: he just makes stuff up.
But it's plausible.
So stuff that sounds good to him
is what he produces.
They can also do reasoning.
So I've got a friend in Toronto, Hector, who is a symbolic AI guy,
but very honest, so he's very confused by the fact that these things work at all,
and he suggested a problem to me.
I made the problem a bit harder
and I
gave this to GPT4 before it could look on the web.
So when it was just a bunch of weights frozen in 2021,
all the knowledge is in the strength of the interactions between features.
So: the rooms in my house are painted blue or white or yellow.
Yellow paint fades to white
within a year. In two years' time I want them all to be white.
What should I do, and why?
And Hector thought it wouldn't be able to do this.
And here's what GPT-4 said.
It completely nailed it.
First of all, it started by saying: assuming blue paint doesn't fade to white.
Because after I told it yellow paint fades to white, well, maybe blue paint does too.
So assuming it doesn't, the white rooms you don't need to paint, the yellow rooms
you don't need to paint because they're going to fade to white within a year.
And you need to paint the blue rooms white.
One time when I tried it, it said, you need to paint the blue rooms yellow
because it realised that will fade to white.
That's more of a mathematician's solution of reducing to a previous problem.
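GPT-4's answer can be checked mechanically. Here is a toy encoding of the puzzle (hypothetical code, just to verify the logic): yellow fades to white within a year, the deadline is two years, so only the blue rooms need repainting.

```python
def rooms_to_paint(rooms, deadline_years=2, blue_fades=False):
    """Return the rooms that must be repainted white by the deadline.
    Yellow fades to white within a year; white rooms are already done."""
    to_paint = []
    for name, colour in rooms.items():
        if colour == "white":
            continue  # already white
        if colour == "yellow" and deadline_years >= 1:
            continue  # will fade to white in time
        if colour == "blue" and blue_fades and deadline_years >= 1:
            continue  # GPT-4's flagged assumption: blue does NOT fade
        to_paint.append(name)
    return to_paint

rooms = {"study": "blue", "kitchen": "white", "bedroom": "yellow"}
print(rooms_to_paint(rooms))  # only the blue room needs painting
```

The mathematician's variant (paint blue rooms yellow and wait) reaches the same end state by reducing the blue case to the yellow one.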
So, having
claimed that these things really do understand,
I want to now talk about some of the risks.
So, there are many risks from powerful AI.
There's fake images, voices and video
which are going to be used in the next election.
There's many elections this year
and they're going to help to undermine democracy.
I'm very worried about that.
The big companies are doing something about it, but maybe not enough.
There's the possibility of massive job losses.
We don't really know about that.
I mean, the past technologies often created jobs, but this stuff,
well, we used to be stronger,
we used to be the strongest things around apart from animals.
And when we got the industrial revolution, we had machines that were much stronger.
Manual labor jobs disappeared.
So the equivalent of manual labor jobs are going to disappear
in the intellectual realm, and we get things that are much smarter than us.
So I think there's going to be a lot of unemployment.
My friend Jen disagrees.
One has to distinguish two kinds of job loss.
There'll be jobs where you can expand
the amount of work that gets done indefinitely. Like in health care.
Everybody would love to have their own
private doctors talking to them all the time.
So they get a slight itch here and the doctor says, no, that's not cancer.
So there's
room for huge expansion of how much gets done in medicine.
So there won't be job loss there.
But in other things, maybe there will be significant job loss.
There's going to be massive surveillance that's already happening in China.
There's going to be lethal autonomous weapons
which are going to be very nasty, and they're really going to be autonomous.
The Americans very clearly have already decided.
They say people will be in charge,
but when you ask them what that means,
it doesn't mean people will be in the loop that makes the decision to kill.
And as far as I know, the Americans intend
to have half of their soldiers be robots by 2030.
Now, I do not know for sure that this is true.
I asked Chuck Schumer's
National Intelligence
Advisor, and he said, well
if there's anybody in the room who would know it would be me.
So, I took that to be the American way of saying,
You might think that, but I couldn't possibly comment.
There's going to be cybercrime