FSD v12: Tesla's Autonomous Driving Game-Changer w/ James Douma (Ep. 757)

Dave Lee
12 Apr 2024 · 115:33

Summary

TL;DR: In this engaging discussion, Dave and James delve into recent developments at Tesla, focusing on the release of FSD V12 and the robotaxi reveal anticipated in August. They share firsthand experiences with FSD V12, noting its impressive capabilities and smoother performance compared to its predecessor. The conversation also explores the potential of Tesla's AI technology, the challenges of scaling up production of the Optimus robot, and the impact of competition in the AI field. The discussion highlights the rapid advancements in AI and the transformative potential of Tesla's upcoming projects.

Takeaways

  • 🚗 Tesla's FSD V12 release has shown significant improvements over previous versions, surpassing initial expectations.
  • 🌟 The V12 update introduced a drastic rewrite of Tesla's planning architecture, enhancing the overall driving experience.
  • 🧠 The neural network's ability to generalize from mimicking human driving behaviors has led to a more natural and smoother ride.
  • 🔧 Tesla's approach to developing FSD involves an end-to-end process, which has proven to be more sample-efficient and scalable.
  • 🚀 The potential for FSD to reach superhuman driving capabilities is evident as the system continues to learn and improve.
  • 🤖 The development of Tesla's humanoid robot, Optimus, is ongoing, with a focus on perfecting the hardware before scaling production.
  • 📈 The importance of data gathering in refining AI models like FSD and Optimus cannot be overstated, with real-world variability being crucial for training.
  • 🌐 Tesla's strategy for robotaxis involves a phased rollout, starting with select cities and gradually expanding the fleet.
  • 🚕 The economic and operational shift of Tesla from a car manufacturer to an AI company is becoming more apparent as software takes center stage.
  • 💡 The future of Tesla's products, including FSD and Optimus, hinges on continuous advancements in AI and the ability to scale effectively.
  • 🌟 The conversation highlights the rapid evolution of AI in the automotive and robotics industry, showcasing the potential for transformative changes in transportation and manufacturing.

Q & A

  • What significant update did Tesla release recently?

    -Tesla recently released the FSD (Full Self-Driving) V12 update.

  • What is the significance of the V12 release for Tesla's FSD?

    -The V12 release is significant because it represents a drastic rewrite of Tesla's planning architecture approach and a major leap in the capabilities of the FSD system.

  • What were some of the issues with the previous version of FSD?

    -The previous version of FSD had issues related to planning, such as not getting in the right lane, not moving far enough over, not knowing when it was its turn, and stopping in the wrong place.

  • How did the guest on the podcast describe their experience with the V12 update?

    -The guest described their experience with the V12 update as very positive, noting that it exceeded their expectations and that it was much more polished than they anticipated.

  • What is the robotaxi reveal that was mentioned in the transcript?

    -The robotaxi reveal mentioned in the transcript refers to Tesla's planned unveiling of its robotaxi, which is expected to take place in August.

  • What were some of the improvements observed with the V12 update compared to the previous version?

    -With the V12 update, improvements were observed in the planning stack, with old failings being addressed and not replaced by new issues. The system also seemed to drive more naturally and made better decisions in various driving scenarios.

  • What is the expected timeline for Tesla's robotaxi service rollout?

    -While a specific timeline was not provided in the transcript, it was suggested that Tesla might start testing unsupervised robotaxis on the streets in the second half of 2025.

  • What are some of the challenges that Tesla might face with the rollout of the robotaxi service?

    -Some challenges that Tesla might face include ensuring the safety and reliability of the robotaxis, navigating regulatory requirements, and managing the transition from a private vehicle manufacturer to a fleet operator.

  • What was the general sentiment towards the V12 update at the beginning of the podcast?

    -The general sentiment towards the V12 update at the beginning of the podcast was cautious optimism. Dave and James were excited about the potential of the update but also aware of the challenges that might arise during its initial rollout.

  • How does the FSD V12 handle unexpected situations compared to the previous version?

    -The FSD V12 handles unexpected situations more gracefully compared to the previous version. It is designed to mimic human driving behaviors more closely, which allows it to adapt and react better to new or unforeseen scenarios.

Outlines

00:00

🚗 Introducing Tesla's FSD V12 and Optimus

The discussion begins with Dave and James catching up on recent developments, focusing on Tesla's Full Self-Driving (FSD) V12 release and the Optimus robot. James shares his experience driving FSD V12 for about three weeks, including a cross-country trip from Los Angeles to Austin, highlighting its impressive capabilities and the significant improvements over V11. They also touch on the robotaxi reveal planned for August and the anticipation surrounding it.

05:01

🤖 Rethinking Tesla's Planning Stack

Dave and James delve into the technical aspects of Tesla's FSD V12, discussing the shift from heuristics to an end-to-end neural network approach. They explore the challenges of removing guardrails and the surprising lack of major mistakes in V12. The conversation also covers the potential methods Tesla might be using to achieve such polished results, including simulation and data curation.

10:04

🚦 Navigating Intersections and Planning

The talk moves to the intricacies of driving behavior, with Dave sharing his observations of FSD V12's handling of intersections and its ability to mimic human driving patterns. They discuss the importance of understanding the severity of different driving mistakes and the evolving nature of the system's learning process.

15:04

🌐 Global Perspectives on FSD

Dave and James consider the implications of FSD's global rollout, discussing the need for local adaptations and the potential for cultural differences in driving styles to impact the system. They also speculate on the future of Tesla's development process, including the possibility of using human drivers as data sources.

20:05

📈 Data-Driven Improvements in FSD

The conversation focuses on the role of data in refining FSD, with Dave sharing his insights on how Tesla's vast amounts of driving data contribute to the system's improvement. They discuss the potential for generalization and the challenges of addressing rare but critical scenarios.

25:07

🚗🤖 Reflecting on FSD and Optimus Developments

Dave and James recap the significant progress made in FSD and the potential impact of the upcoming robotaxi reveal. They discuss the broader implications of Tesla's advancements in autonomy and robotics, considering the future trajectory of the company and its products.

30:09

📅 Anticipating the Robotaxi Future

The discussion turns to predictions about Tesla's robotaxi service, with speculation on potential timelines and strategies for implementation. Dave and James consider the challenges of scaling up the service and the potential for Tesla to transition from a car manufacturer to a leader in autonomous transportation.

35:11

🤖🏭 Optimus: The Path to Production

Dave and James explore the potential timeline for Tesla's Optimus robot, discussing the challenges of industrializing humanoid robots and the importance of data gathering. They consider various methods for training the robots and the potential for real-world deployment.

40:12

🌟 The Future of AI and Tesla

In the final part of their conversation, Dave and James reflect on the broader implications of Tesla's AI developments, considering the potential for the company to evolve into a major player in the AI industry. They discuss the impact of open-source models and the future of AI in consumer products.

Keywords

💡Tesla's FSD V12 release

The FSD V12 release refers to the latest version of Tesla's Full Self-Driving software. It represents a significant update that has been highly anticipated by users and the tech community. In the context of the video, the release is discussed as a major milestone in the development of autonomous driving technology, with the interviewee sharing their first experiences and impressions of the new system.

💡Optimus robot

The Optimus robot is a humanoid robot developed by Tesla, designed to perform a variety of tasks and improve efficiency in both industrial and consumer settings. In the video, the discussion around Optimus touches on its potential capabilities, the challenges of manufacturing at scale, and the future implications for Tesla as a company transitioning into the AI and robotics sector.

💡Robotaxi reveal

The term 'robotaxi reveal' refers to the anticipated announcement of Tesla's autonomous taxi service, which is expected to disrupt the traditional taxi and ride-sharing industry by offering on-demand transportation services without the need for a human driver. The reveal is a significant event that marks Tesla's progress in autonomous vehicle technology and its potential impact on the future of transportation.

💡AI and machine learning

AI, or artificial intelligence, and machine learning are core technologies behind the development of systems capable of learning from data and improving their performance over time. In the context of the video, AI and machine learning are crucial for the advancement of Tesla's autonomous driving software and the functionality of the Optimus robot, enabling them to adapt to various situations and tasks.

💡Human mimicry

Human mimicry refers to the ability of a machine or AI system to imitate human behaviors, actions, or decision-making processes. In the context of the video, this concept is central to the development of both Tesla's FSD software and the Optimus robot, as they are designed to perform tasks and navigate environments in a way that closely resembles human capabilities.
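As a rough, hypothetical illustration of what mimicking human driving can look like in code, the sketch below shows one behavior-cloning training step: a placeholder network is penalized for how far its predicted controls deviate from what a recorded human driver did in the same situation. The model, feature sizes, and data are invented for illustration and are not Tesla's actual system.

```python
# Minimal behavior-cloning sketch (illustrative only, not Tesla's code).
# The "policy" learns to reproduce the controls a human applied for a given observation.
import torch
import torch.nn as nn

policy = nn.Sequential(          # stand-in for a much larger driving network
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 2),           # outputs: [steering, acceleration]
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

# Fake batch: 32 encoded observations and the controls a human driver
# actually applied in those moments.
observations = torch.randn(32, 128)
human_controls = torch.randn(32, 2)

optimizer.zero_grad()
predicted = policy(observations)
loss = nn.functional.mse_loss(predicted, human_controls)  # "drive like the human did"
loss.backward()
optimizer.step()
```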

💡End-to-end learning

End-to-end learning is a machine learning paradigm where the input data is directly mapped to the output through a single, comprehensive model. This approach is used in the development of complex systems like autonomous vehicles and robots, where the goal is to replicate the full range of tasks or behaviors that a human would perform. In the video, end-to-end learning is discussed as a key method for training Tesla's FSD system and Optimus robot to handle a wide variety of driving situations and tasks.
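As a minimal sketch of the "photons in, controls out" idea described in the episode, the toy model below maps a raw camera frame directly to two control outputs, so gradients from a single loss can flow through the whole mapping. The layer sizes and module choices are arbitrary placeholders, not the real architecture.

```python
# Toy end-to-end mapping from camera frames to controls (illustrative shapes only).
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    """Single trainable mapping from camera frames to control outputs."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # "photons in": raw RGB frames
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, 2)             # "controls out": steering, acceleration

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(frames))

model = EndToEndDriver()
controls = model(torch.randn(1, 3, 96, 96))      # one frame in -> [steer, accel] out
print(controls.shape)                            # torch.Size([1, 2])
```

In a modular stack, by contrast, perception and planning would be separate components wired together through hand-designed intermediate representations rather than trained as one mapping.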

💡Perception stack

The perception stack refers to the set of technologies and algorithms used by autonomous systems to interpret and understand the environment around them, including the detection and identification of objects, signs, and other relevant features. It is a critical component for systems like Tesla's FSD, as it enables the vehicle to make informed decisions based on accurate environmental data.
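To make the idea concrete, here is a hypothetical, heavily simplified data structure of the kind a perception stack might hand to a downstream planner; the class and field names are invented for illustration and do not reflect Tesla's interfaces.

```python
# Hypothetical perception output handed to a planner (names invented for illustration).
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DetectedObject:
    kind: str                       # "car", "pedestrian", "traffic_light", ...
    position_m: Tuple[float, float] # (x, y) in the vehicle's frame, meters
    velocity_mps: Tuple[float, float]
    confidence: float               # detector confidence in [0, 1]

@dataclass
class PerceptionFrame:
    timestamp_s: float
    objects: List[DetectedObject] = field(default_factory=list)
    lane_offsets_m: List[float] = field(default_factory=list)  # lateral offsets of lane lines

frame = PerceptionFrame(
    timestamp_s=0.05,
    objects=[DetectedObject("car", (22.0, -1.5), (-3.0, 0.0), 0.97)],
    lane_offsets_m=[-1.8, 1.7],
)
print(len(frame.objects), "objects detected")
```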

💡Heuristics

Heuristics are problem-solving strategies or rules of thumb that are used to make decisions or solve problems quickly and efficiently. In the context of autonomous systems, heuristics often involve pre-programmed rules or behaviors that guide the system's actions. The video discusses the shift from relying on heuristics to more advanced machine learning techniques in the development of Tesla's FSD system.
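For contrast with the learned planner, the sketch below shows the flavor of a hand-written heuristic rule of the kind the episode describes V12 as replacing: fixed thresholds and if/else logic deciding whether to creep at a stop sign. The thresholds and action names are made up for illustration.

```python
# Toy hand-coded heuristic for creeping at a stop sign (illustrative only).
from typing import Optional

def creep_decision(stopped_duration_s: float,
                   nearest_cross_traffic_m: Optional[float]) -> str:
    """Pick a discrete action from fixed rules rather than a learned policy."""
    if stopped_duration_s < 1.0:
        return "hold"        # always pause briefly at the line
    if nearest_cross_traffic_m is None or nearest_cross_traffic_m > 40.0:
        return "proceed"     # road judged clear by a fixed distance threshold
    if nearest_cross_traffic_m > 15.0:
        return "creep"       # edge forward for a better view
    return "hold"

print(creep_decision(1.5, 25.0))   # -> "creep"
```

Rules like this are legible but brittle: a scenario the author never imagined can fall straight through the logic, which is the failure mode the conversation attributes to heuristic planning stacks.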

💡Autonomous driving experience

The autonomous driving experience refers to the interaction between a driver and an autonomous vehicle, encompassing the vehicle's ability to navigate and respond to its environment with minimal or no input from the driver. The video discusses the improvements in Tesla's FSD V12 and how it enhances the autonomous driving experience by reducing the need for driver intervention and providing a smoother, more natural driving performance.

💡Neural networks

Neural networks are a type of machine learning model inspired by the human brain, composed of interconnected nodes or 'neurons' that process and transmit information. They are capable of learning complex patterns and making decisions based on large amounts of data. In the context of the video, neural networks are fundamental to the development of Tesla's FSD and Optimus, enabling these systems to learn from human behavior and improve their performance over time.
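A minimal sketch of the "interconnected nodes" idea using plain NumPy: each layer computes weighted sums of its inputs and passes them through a nonlinearity. The sizes are arbitrary and the weights are random, purely for illustration.

```python
# Tiny two-layer neural network forward pass (arbitrary sizes, NumPy only).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)            # input "activations"
W1 = rng.normal(size=(8, 4))      # first layer of connection weights
W2 = rng.normal(size=(2, 8))      # second layer of connection weights

hidden = np.maximum(0.0, W1 @ x)  # weighted sum per node, then ReLU nonlinearity
output = W2 @ hidden              # two output "neurons"
print(output)
```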

💡Strategic path

The strategic path refers to the planned course of action or series of decisions that a company like Tesla takes to achieve its long-term goals. In the context of the video, the strategic path involves the development and deployment of advanced technologies like FSD and Optimus, and the considerations for how these technologies will be integrated into the market and impact the company's future.

Highlights

Discussion on Tesla's FSD V12 release and its improvements

James' experiences with FSD V12 during a cross-country trip

Impressions of FSD V12's capability in rural and urban areas

Comparison of FSD V12 to V11 and the changes in planning architecture

Expectations for FSD V12 and its surprisingly polished performance

Discussion on the potential reasons behind FSD V12's success

The role of neural networks in achieving a more natural driving experience

Thoughts on how Tesla might have achieved the polish in FSD V12

The importance of end-to-end training in neural networks

Discussion on the challenges of removing heuristics from the planning stack

The potential for FSD to exceed human driving capabilities

Expectations for future improvements in FSD based on current trends

The significance of the transition from heuristics to neural networks in FSD

The potential impact of FSD V12 on driver intervention and safety

Speculations on the future of Tesla's Autopilot and FSD

Transcripts

00:00

hey it's Dave welcome today I'm joined

00:02

by James Douma and we've got a whole host

00:04

of things to talk about we've got um

00:07

Tesla's FSD V12 release that just

00:09

happened this past month we've got um

00:12

Optimus to talk about um and this robot

00:15

taxi reveal in August so anyway it's

00:19

been a long time it's been like at least

00:21

a half a year was last August or

00:23

something like that so yeah yeah I

00:24

remember the last time we met we talked

00:26

about V12 cuz they did a demo mhm and um

00:30

we were quite excited about the

00:32

potential but also a little bit cautious

00:35

in terms of how it will first roll out

00:38

and how capable but um curious just what

00:41

has been your first experiences and

00:43

first impressions of you talk how long

00:45

have you been driving it for uh I got it

00:47

a few Sundays back I think I I got it

00:50

the first weekend that it really went

00:52

right so I think I've had it three weeks

00:53

or something like that maybe four three

00:56

probably and uh of course drove it out

00:59

here to Austin from Los Angeles drove it

01:02

quite a bit in Los Angeles on the way

01:04

out here so my my wife has this hobby of

01:07

like visiting Superchargers we've never

01:09

been to so every cross country trip

01:11

turns it's ends up being way longer than

01:13

otherwise would be but one of the cool

01:15

things about that on the FSD checkout to

01:16

her is that we end up driving around all

01:19

the cities on the way you know because

01:20

you're driving around to the different

01:21

Chargers and stuff and so you get a

01:23

chance to see what it's like in you know

01:26

this town or that town or um different

01:28

you know highways are different we drive

01:30

a lot of rural areas so I got lots of

01:32

rural we uh we did like the whole back

01:34

Country tour coming out here through

01:36

across Texas and so feel like it was it

01:39

was a good experience for like trying to

01:40

compress a whole lot of FSD yeah and I

01:43

got to say I'm just like really

01:45

impressed like it's I was not expecting

01:47

it to be this good because it's a really

01:50

like this is not a small change to the

01:53

planner was yeah with v11 we had gotten

01:58

to a point where the perception stack

02:00

was good enough that we just weren't

02:02

seeing perception failures I mean they

02:04

just but people almost all the

02:06

complaints people had had to do with

02:07

planning not getting in the right lane

02:09

not being able to move far enough over

02:11

um not knowing when it was its turn uh

02:13

stopping in the wrong place creeping the

02:15

wrong way these are all planning

02:16

elements they're not uh you know so if

02:21

you're going to take a planning stack

02:23

that you've been working on for years

02:25

you've really invested a lot and you

02:26

like literally throwing it away like

02:29

there just not retaining any at least

02:30

that's what they tell us they got rid of

02:32

300K lines they went end to end it's

02:34

harder to actually mix heuristics into

02:37

end to end so it makes sense that they

02:38

actually got rid of almost everything

02:40

anything they have in there that's

02:42

heuristic now would be built new from

02:43

scratch for the end to end stack and yet

02:46

they managed to

02:48

outdo in what seems to me like a really

02:50

short because they weren't just

02:51

developing this they were developing the

02:53

way to develop it you know they were

02:56

having to figure out what would work

02:57

there's all of these layers of stuff

02:59

that they had to do so my you know my

03:04

expectation was that the first version

03:06

that we were going to see was going to

03:08

be like on par it would have some

03:10

improvements it would have a couple of

03:12

meaningful regressions and there would

03:14

they would be facing some challenges

03:15

with you know figuring out how to

03:17

address so because it makes sense that

03:19

they want to get it out soon and the

03:21

sooner they get it out into the fleet

03:23

the faster they learn um but the the

03:26

degree of polish on this was yeah in a

03:29

much higher than I expected and like you

03:33

know Bradford stopped by and I got a

03:36

chance to see 12.2.1 as he was coming

03:39

through we only had about 40 minutes

03:42

together I think I it was just like the

03:44

spur of the moment thing and uh and yet

03:47

even in because he was kind enough to to

03:49

take it places that I knew well that I

03:52

had driven on 11 a lot and I think it

03:55

took me about three blocks to realize

03:58

like right away and after 45 minutes I

04:00

just knew that this is going to be

04:02

completely different and every

04:04

everything that I've experienced since

04:06

getting it and

04:09

I you know what have I got I'm I must be

04:12

at like 50 hours in the seat with it

04:14

right now a few thousand miles highly

04:17

varied stuff yeah it's super solid yeah

04:20

yeah I think um yeah I wanted to dive

04:22

into kind of how big of a jump this FSD 12

04:25

is because when I drove it I was shocked

04:28

um because this is not like a is I think

04:33

V12 is a little bit of a misnomer

04:35

because this is a drastic you know

04:38

rewrite of their whole planning

04:40

architecture approach different

04:43

different um I mean on their perception

04:46

it seems like they probably kept a lot

04:48

of their neural nets um in terms of the

04:51

perception stack added on as well but in

04:54

their planning stack this is where they

04:57

pretty much it seemed like they're

04:59

starting from I would say scratch

05:01

completely but they're taking out all of

05:03

the guard rails all their heuristics and

05:06

they're taking putting on this end-to-end

05:09

neural approach where it's deciding

05:11

where and how to navigate right the the

05:14

perceived environment but I would have

05:16

imagined and this is kind of my

05:18

expectation also is like you you would

05:21

be better in some ways it would be more

05:22

natural Etc but then there would be some

05:25

just like weird mistakes or things that

05:28

it just doesn't get because all of the

05:30

guard rails are off the heuristic ones and so

05:33

you're just like it's more dangerous

05:34

than some other ways right and that on

05:37

par though Tesla would wait until it

05:39

would be a little more safer before

05:41

releasing V12 but what we ended up

05:43

getting was we got this V12 that just

05:45

seems like really polished you know

05:48

we're not it's not easy to catch those

05:50

big mistakes in V12 and I'm kind of like

05:53

where did all these big mistakes go like

05:55

you know that was my expectation at

05:56

least and so I'm wondering like like

05:59

what was your did that catch you off

06:00

guard like just seeing the the the small

06:03

number you know of of big mistakes or

06:06

seeing how polished this V12 is um and

06:09

then I also wanted to go into like how

06:11

did Tesla do that in terms of um because

06:14

once you take off the heuristics and

06:16

guardrails you really have to

06:20

like like be confident you need I don't

06:23

know like yeah I'm curious to hear

06:25

what's your take on how you think they

06:27

achieve this with V12 you know the the

06:29

the the polish they have well first yeah

06:32

it

06:32

was well there's two components of like

06:36

starting out experience there's like my

06:39

sort of abstract understanding of the

06:42

system and what I sort of rationally

06:44

expected and then there's you know

06:46

there's my gut you know because I've got

06:48

I've got like 200,000 miles on various

06:51

layers of autopilot including you know

06:53

maybe I don't know 50,000 miles on FSD

06:56

so I have this muscle memory and this

06:58

you know sort of sense of the thing and

07:02

I expected that to sort of be dislocated

07:05

I mean you know going from 10 to 11 and

07:10

was also I mean they added a lot this is

07:13

not the first time that they've made

07:14

pretty substantive changes it's the

07:15

biggest change for sure right but I was

07:18

expecting it to feel a little bit weird

07:20

and uncomfortable but but sort of

07:23

intellectually I was expecting all the

07:25

old problems to go away and a new set of

07:27

problems to come in because it's a

07:29

different product

07:31

like because the perception was pretty

07:33

polished and and the things that people

07:35

were most aware of is failings of the

07:38

system were essentially baked into this

07:40

heuristic code well of course you take

07:41

the heuristic code away all those failings go

07:43

away too but what do you get with the

07:45

new thing right so and you know so that

07:49

did happen like all the old failings

07:51

went away like rationally right but it

07:53

was weird to sit in the SE in the seat

07:55

and you know there you know there's this

07:57

street you've driven over and over and

07:59

over again where there was this

08:01

characteristic behavior that it had

08:02

which is you know maybe not terrible but

08:04

not comfortable maybe or less ideal than

08:07

you would like or slower annoying whatever

08:08

the deal and those are just gone like

08:10

all of them not just like one or two

08:11

they're just like gone all of them so

08:13

that was sort of like it was such a big

08:17

disconnect that it was kind of

08:18

disquieting the first you know week or

08:20

two I mean delightful but also

08:22

disquieting because now you're like

08:24

Uncharted Territory you know what demons

08:27

are lurking here that I'm not prepared

08:29

to

08:30

you know after you drive the heuristic thing

08:31

for a while you kind of got a sense of the

08:33

character of the failures I mean even if

08:35

you haven't seen it before you know the

08:36

kind of thing that's not going to work

08:38

and now but I didn't I didn't really

08:40

find those like I haven't really found I

08:43

haven't seen something and I was

08:46

expecting to see a couple of things that

08:49

were kind of worrisome and where I

08:51

wasn't clear to me how they were going

08:52

to get go about addressing them and I

08:54

just I really haven't right and so like

08:56

in that sense I'm really I'm more

08:58

optimistic about it than I expected to

09:00

be at this point um how do they do it

09:04

yeah okay so let me give context to that

09:06

question a bit more because I know it

09:07

could be open-ended so I would imagine

09:10

that if you go end to end with planning

09:12

that um driving is is very high stakes

09:16

you have one mistake let's say you go

09:18

into the center divider aisle or there's

09:21

a there's a concrete wall or you there's

09:23

a signpost you drive into or a tree or

09:26

something it just seems like you have

09:28

one second of mistake or even Split

09:30

Second and your car is you know it's

09:33

just catastrophic it could be and with

09:36

V1 up until v11 you had these guard

09:38

rails of like oh stay in the lane and do

09:40

this and all that stuff but with those

09:42

guard rails off like V12 could when it's

09:47

confused just make a bad move you know

09:50

and just go into some you know another

09:54

car another Lane another you know object

09:55

or something but what about it is

09:58

preventing it you know without the

10:01

guardrails is it just the data of

10:03

mimicking humans or is there something

10:06

else attached on top of that where

10:08

they're actually doing some simulation

10:10

or stuff where it's showing like what

10:12

happens when you go out of the lane into

10:14

the next Lane you know into oncoming

10:16

traffic or if you do something like is

10:17

it is are they you know pumping the the

10:22

the the neural nets with lots of

10:24

examples of bad things also that could

10:26

happen if you know if it doesn't you

10:28

know follow a certain path like what's

10:30

your take on

10:31

that um so that question prompts a

10:34

couple of thoughts um so one

10:37

thought are okay first of all preface at

10:40

all like I don't know what the nuts and

10:43

bolts of how they are tuning the system

10:46

they've told us it's end to end right so

10:49

that basically constrains the things

10:51

that they could be doing but when you

10:53

train in a system you can you don't have

10:56

to train it end to end I mean some

10:57

training will be done end to end but you

10:59

can break it into blocks and you can

11:00

pre-train blocks in certain ways and we

11:02

know that they can use simulation we

11:04

know that they can curate the data set

11:06

um so there're you know what's the mix

11:09

of stuff that they're doing is really

11:10

hard to predict they're going to be a

11:12

bunch of you know uh learned methods for

11:16

things that work well that are going to

11:18

be really hard to predict externally

11:20

just from first principles um this whole

11:22

field it's super empirical one thing

11:25

that we keep learning about neural

11:27

networks even like the language models

11:28

we can talk about those some if you want

11:30

to cuz that's also super exciting but

11:32

the they keep surprising us right like

11:35

so you take somebody who knows the field

11:37

pretty well and you at one point and

11:39

they make predictions about what's going

11:41

to be the best way to do this and

11:42

whatnot and aside from some really basic

11:44

things I mean there's some things are

11:45

just kind of P prohibited by basic

11:47

information Theory right but when you

11:50

start getting into the Nuance of oh will

11:52

this way of tweaking the system work

11:54

better than that way or if I scale it if

11:57

I make this part bigger and that part

11:58

smaller will that be a win or a lot you

12:00

know there's so many small decisions and

12:03

the training is like that too like how

12:05

do you curate the data set like what in

12:07

particular matters what makes data good

12:10

like that's a surprisingly subtle thing

12:12

we know that good data like some

12:14

training sets get you to a good result

12:16

much faster than other training sets do

12:18

and we have theories about what makes

12:20

one good and what makes one bad and

12:22

people on some kinds of things like text

12:24

databases a lot of work has been done

12:26

trying to figure this out and we have

12:27

some ideas but at the end the day this

12:29

is super empirical and we don't really

12:31

have good theory behind it so for me to

12:34

kind of sit here not having seen what

12:35

they have going on in the back room and

12:36

guess I'm just guessing so just like

12:38

frankly like I have ideas about what

12:41

they could be

12:41

doing um but you know I would expect

12:45

them to have many clever things that

12:47

never would have occurred to me yeah

12:49

that they've discovered are important

12:51

and they may be doubling down and we we

12:52

actually don't know the fundamental

12:54

mechanism of like how they're going

12:55

about doing the mimicry like what degree

12:58

of we you know we know that the you know

13:00

they have told us that the final thing

13:02

is photons in controls out as end to end

13:04

would be right

13:07

but uh so the the final architecture but

13:09

like how you get to the result of the

13:12

behavior that you want you're going to

13:13

break the system down

13:15

like I don't know it's it's just like

13:17

there are many possibilities that are

13:20

credible picking them and they vary a

13:22

lot and picking the one that's going to

13:24

be the best like that's a hard thing to

13:25

do sitting in a chair not knowing um

13:29

they are doing it really clearly and

13:31

they're getting it to work like the

13:32

reason why I I it fascinates me on the

13:36

on what type of like um uh kind of

13:41

catastrophic scenarios or dangerous

13:44

things that there may be putting in like

13:46

it it the reason why it fascinates me is

13:48

because with driving part of the driving

13:50

intelligence is knowing that if your car

13:54

is like one foot into this Lane and it's

13:56

oncoming traffic that that's really

13:59

really bad like you know be a huge

14:01

accident versus if there's um no cars or

14:05

something then it's okay or if there's

14:08

or just it the driving intelligence just

14:10

requires an awareness of how serious

14:13

mistakes are in different situations in

14:16

some situations they're really really

14:18

bad in some situations the same driving

14:20

maneuver is not that dangerous and so it

14:23

just seems to me like there have to be

14:25

some way to train that right to teach

14:27

the the neural nets that so there's an

14:29

interesting thing about the driving

14:30

system that we have and

14:32

people okay first so the failure you're

14:36

describing is much more likely with

14:37

heuristics like heuristics you build

14:39

this logical framework a set of rules

14:42

right where um you know when heuristic

14:45

Frameworks break they break big like

14:48

they because you can get something

14:50

logically wrong and there's this gaping

14:52

hole this scenario that you didn't

14:53

imagine where the system does exactly

14:55

the opposite of what you intended

14:57

because you have some logical flaw in

14:58

the reasoning that got you to there

15:00

right so you know bugs that crash the

15:03

computer that take it out like we you

15:06

know computers generally don't fail

15:07

gracefully heuristic computers right

15:10

neural networks do tend to fail

15:12

gracefully so that's one thing right

15:13

they they they're less likely to crash

15:16

and they're more likely to give you a

15:18

slightly wrong answer or a you know to

15:21

get almost everything right and have one