Complete Pytorch Tensor Tutorial (Initializing Tensors, Math, Indexing, Reshaping)

Aladdin Persson
28 Jun 2020 · 55:33

Summary

TLDR This video takes a deep dive into basic tensor operations in PyTorch, including tensor initialization, math operations, indexing, and reshaping. Through examples it demonstrates how to create tensors and perform element-wise operations, matrix multiplication, broadcasting, indexing, and advanced indexing tricks. It also covers the different ways to reshape tensors, as well as tensor concatenation and transposition. These fundamentals lay a solid foundation for going deeper into deep learning.

Takeaways

  • 📘 Learning basic tensor operations is the foundation for understanding PyTorch and deep learning in depth.
  • 🔢 Tensors can be initialized in many ways, for example from nested lists or filled with specific values.
  • 📈 Tensors can be given a data type (such as float32) and a device (such as CPU or CUDA).
  • 🔧 Tensor attributes such as shape, data type, and device can be queried and set through dedicated arguments and methods.
  • 🔄 Tensors support many math and comparison operations, such as addition, subtraction, division, and exponentiation.
  • 🔍 Tensor indexing lets you access and manipulate specific elements or sub-tensors.
  • 🔧 Tensors can be reshaped with the reshape and view methods.
  • 🔄 Broadcasting automatically expands dimensions so element-wise operations can run on tensors of different shapes.
  • 🔍 Tensors offer advanced indexing features, such as selecting elements by condition or fancy indexing.
  • 📊 Tensor operations include matrix multiplication, matrix exponentiation, and element-wise multiplication.
  • 🔧 Tensors can be manipulated with dedicated functions such as torch.where, torch.unique, and torch.numel.
  • 🔄 Tensors can be concatenated and have their axes rearranged with methods such as torch.cat (concatenate) and permute.

Q & A

  • How do you initialize a PyTorch tensor?

    -Tensors can be initialized in many ways, for example from a list, with torch.empty for uninitialized data, with torch.zeros for an all-zeros tensor, or with torch.rand for uniformly distributed random values.

  • How are a PyTorch tensor's data type and device set?

    -The data type is specified through the dtype argument, such as torch.float32. The device is specified through the device argument, for example CUDA or CPU.

  • What are a tensor's basic attributes?

    -A tensor's basic attributes include its device, data type (dtype), shape, and whether it requires gradients (requires_grad).

  • How do you do tensor math in PyTorch?

    -PyTorch provides a rich set of tensor operations, such as addition (torch.add), subtraction, division (torch.true_divide), element-wise multiplication (torch.mul), and matrix multiplication (torch.matmul).

  • What is tensor indexing?

    -Tensor indexing lets us access and manipulate specific elements or sub-tensors. It can be done with plain indices, slices, boolean masks, and so on.

  • How does broadcasting work in tensor operations?

    -Broadcasting allows operations between tensors whose shapes do not match exactly. PyTorch automatically expands the smaller tensor to match the larger tensor's shape and then applies the element-wise operation.

  • What does reshaping a tensor mean?

    -Reshaping changes a tensor's shape without changing its data. It is done with the view or reshape methods, where view requires the tensor to be stored contiguously in memory.

  • How is torch.cat used to concatenate tensors?

    -torch.cat concatenates multiple tensors along a specified dimension. The tensors are passed as a tuple together with the dimension to concatenate along.

  • How is tensor transposition done?

    -Transposition can be done with the permute method by specifying the new dimension order. For 2-D tensors, the T attribute or torch.transpose can be used directly.

  • How is torch.where used for conditional indexing?

    -With a single condition argument, torch.where returns the indices of elements that satisfy the condition; with torch.where(condition, x, y) it picks elements from x or y depending on the condition, which is how condition-based tensors are built in the video.

  • How do you get a tensor's unique values?

    -torch.unique returns all the distinct values in a tensor. The method returns a sorted tensor of the unique values.

Outlines

00:00

📚 Deep Learning Fundamentals: Getting Started with Tensor Operations

Introduces the importance of tensor operations in deep learning and stresses that learning them is the foundation for understanding deep learning in depth. The video is split into four parts: tensor initialization, tensor math, tensor indexing, and tensor reshaping. Viewers are encouraged to watch the whole video to get a grasp of these basic operations; even if you can't memorize them all, knowing they exist will save a lot of time later.

05:00

🔢 Tensor Initialization and Attributes

Explains in detail how to initialize tensors, including creating them from lists, specifying the data type, setting the device (CPU or CUDA), and setting whether gradients are required. Also shows how to inspect attributes such as the tensor's device, data type, shape, and requires_grad flag.
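For reference, a minimal sketch of the kind of initialization and attribute inspection described here (the variable name `my_tensor` is illustrative):

```python
import torch

# Create a 2x3 tensor from nested lists, with an explicit dtype and device
device = "cuda" if torch.cuda.is_available() else "cpu"
my_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]],
                         dtype=torch.float32, device=device, requires_grad=True)

# Inspect the attributes discussed in this part of the video
print(my_tensor.dtype)          # torch.float32
print(my_tensor.device)         # cuda:0 or cpu
print(my_tensor.shape)          # torch.Size([2, 3])
print(my_tensor.requires_grad)  # True
```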

10:00

📈 Tensor Math and Comparison

Covers basic tensor math such as addition, subtraction, division, and element-wise exponentiation. Also explains matrix multiplication, matrix exponentiation, element-wise comparison, and how broadcasting works.
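A short sketch of the math and comparison operations mentioned above, using two small example tensors `x` and `y`:

```python
import torch

x = torch.tensor([1, 2, 3])
y = torch.tensor([9, 8, 7])

z = x + y                    # element-wise addition, same as torch.add(x, y)
z = x - y                    # element-wise subtraction
z = torch.true_divide(x, y)  # element-wise division (here 1/9, 2/8, 3/7)
z = x.pow(2)                 # element-wise exponentiation, same as x ** 2
z = x > 2                    # element-wise comparison -> tensor([False, False, True])

x1 = torch.rand((2, 5))
x2 = torch.rand((5, 3))
x3 = torch.mm(x1, x2)        # matrix multiplication, result shape (2, 3)
```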

15:01

🔄 Tensor Indexing and Operations

Explains how to access and modify specific elements of a tensor through indexing, including basic indexing, advanced (fancy) indexing, and conditional indexing. Also covers conditional assignment with `torch.where` and retrieving the unique values of a tensor with `torch.unique`.
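A small sketch of the conditional operations mentioned here, with an illustrative tensor `x`:

```python
import torch

x = torch.arange(10)  # tensor([0, 1, ..., 9])

# torch.where: keep x where the condition holds, otherwise use x * 2
y = torch.where(x > 5, x, x * 2)

# Unique values of a tensor (returned sorted)
u = torch.unique(torch.tensor([1, 0, 1, 2]))  # tensor([0, 1, 2])

print(x.numel())       # number of elements: 10
print(x.ndimension())  # number of dimensions: 1
```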

20:05

📊 Tensor Reshaping

Shows how to change a tensor's shape with the `view` and `reshape` methods, including transposing, flattening, and reordering dimensions. Emphasizes that `view` requires the tensor to be contiguous in memory, while `reshape` does not. Also covers concatenating tensors with `torch.cat`.
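A brief sketch of the reshaping calls described above (shapes chosen for illustration):

```python
import torch

x = torch.arange(9)

x_3x3 = x.view(3, 3)     # requires contiguous memory
x_3x3 = x.reshape(3, 3)  # works regardless of contiguity

y = x_3x3.t()            # transpose of a 2-D tensor
# y.view(9) would fail because the transpose is not contiguous:
print(y.contiguous().view(9))

# Concatenation along a chosen dimension
x1 = torch.rand((2, 5))
x2 = torch.rand((2, 5))
print(torch.cat((x1, x2), dim=0).shape)  # torch.Size([4, 5])
print(torch.cat((x1, x2), dim=1).shape)  # torch.Size([2, 10])

# permute reorders dimensions (a generalized transpose)
z = torch.rand((64, 2, 5))
print(z.permute(0, 2, 1).shape)          # torch.Size([64, 5, 2])
```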

25:05

🎉 Summary and Closing

Wraps up the video, stresses the importance of learning tensor operations, and invites viewers to ask questions in the comments. Mastering these fundamentals will make later deep learning work much easier.

Keywords

💡Tensor

A tensor is the basic data structure for representing data in deep learning. In the video, creating, manipulating, and transforming tensors is the core content: for example, initializing tensors to store and process data, doing math on tensors, and accessing specific data through tensor indexing.

💡PyTorch

PyTorch is an open-source machine learning library widely used in areas such as computer vision and natural language processing. The video shows how to use PyTorch for tensor operations such as tensor math, comparison operations, indexing, and reshaping.

💡Initialize

In programming, initialization means assigning an initial value to a variable. The video covers several ways to initialize tensors, such as from a list or by specifying the data type and device. For example, `torch.tensor` creates a tensor and initializes its values.
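For reference, a few of the creation functions the video walks through (sizes are arbitrary examples):

```python
import torch

x = torch.empty(size=(3, 3))                         # uninitialized values
x = torch.zeros((3, 3))                              # all zeros
x = torch.rand((3, 3))                               # uniform on [0, 1)
x = torch.ones((3, 3))                               # all ones
x = torch.eye(5, 5)                                  # identity matrix
x = torch.arange(start=0, end=5, step=1)             # 0, 1, 2, 3, 4
x = torch.linspace(start=0.1, end=1, steps=10)       # 0.1, 0.2, ..., 1.0
x = torch.empty(size=(1, 5)).normal_(mean=0, std=1)  # standard normal values
x = torch.diag(torch.ones(3))                        # 3x3 identity via a diagonal
```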

💡Math Operations

Math operations are the foundation of tensor manipulation and include addition, subtraction, multiplication, and more. The video shows how to perform them in PyTorch, for example `torch.add` for addition and `torch.sub` for subtraction.

💡Indexing

Indexing is how you access specific elements of a tensor. The video explains how to read or modify elements through indexing, for example `X[0]` to get the first row and `X[:, 0]` to get the first column across all rows.
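A small sketch of basic and fancy indexing on an illustrative tensor `X`:

```python
import torch

X = torch.rand((3, 5))

print(X[0])       # first row (equivalent to X[0, :])
print(X[:, 0])    # first column of every row
print(X[0, 0:2])  # first two elements of the first row

x = torch.arange(10)
print(x[[2, 5, 8]])          # fancy indexing: elements 2, 5, 8
print(x[(x < 2) | (x > 8)])  # boolean indexing: tensor([0, 1, 9])
```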

💡Reshaping

Reshaping changes a tensor's shape without changing its data. In the video, the `view` and `reshape` methods are used to change shape, for example turning a 2-D tensor into a 1-D tensor, or changing its number of rows and columns.

💡Broadcasting

Broadcasting is a mechanism for operating on tensors of different shapes. When the operands do not match in some dimension, PyTorch automatically expands the smaller tensor to match the larger one. The video demonstrates broadcasting in subtraction and exponentiation.
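A minimal broadcasting sketch matching the subtraction and exponentiation example described here:

```python
import torch

x1 = torch.rand((5, 5))
x2 = torch.rand((1, 5))

z = x1 - x2     # x2's single row is broadcast across the 5 rows of x1
z = x1 ** x2    # element-wise exponentiation, also broadcast
print(z.shape)  # torch.Size([5, 5])
```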

💡Device

In deep learning, the device is the compute resource, such as a CPU or GPU. The video shows how to place a tensor on a specific device, for example using `.cuda()` to move a tensor to the GPU for faster computation.
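A short sketch of the usual device-selection pattern shown in the video:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.tensor([1.0, 2.0, 3.0], device=device)  # created directly on the device
y = torch.rand((3, 3)).to(device)                 # or moved there afterwards
```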

💡Gradient

The gradient is a key concept for optimizing models in machine learning; it is the rate of change of the loss with respect to the model's parameters. The video mentions the `requires_grad` attribute, which indicates whether gradients should be computed for a tensor.
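A tiny sketch of what `requires_grad` enables (autograd itself is outside the scope of the video):

```python
import torch

x = torch.tensor([2.0, 3.0], requires_grad=True)
loss = (x ** 2).sum()  # a toy scalar "loss"
loss.backward()        # backpropagate through the computational graph
print(x.grad)          # d(loss)/dx = 2*x -> tensor([4., 6.])
```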

💡Matrix Multiplication

Matrix multiplication is a fundamental operation in linear algebra and a common operation in deep learning. The video shows how to multiply two tensors with `torch.mm` or the `@` operator.
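A minimal example of the equivalent ways to multiply matrices mentioned here:

```python
import torch

a = torch.rand((2, 5))
b = torch.rand((5, 3))

c = torch.mm(a, b)  # result shape (2, 3)
c = a @ b           # same result with the @ operator
c = a.mm(b)         # method form
```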

💡Batch Matrix Multiplication

Batch matrix multiplication performs many matrix multiplications at once, one per element of a batch. The video uses `torch.bmm` to show matrix multiplication on batched data.
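A short sketch of `torch.bmm` on batched data (batch and dimension sizes are arbitrary):

```python
import torch

batch, n, m, p = 32, 10, 20, 30
t1 = torch.rand((batch, n, m))
t2 = torch.rand((batch, m, p))

out = torch.bmm(t1, t2)  # one matrix multiply per batch element
print(out.shape)         # torch.Size([32, 10, 30])
```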

Highlights

Introduces basic tensor operations and stresses how important they are to learn for deep learning.

Shows how to initialize tensors, including from lists and with a specified data type.

Explains how to place tensors on CUDA or the CPU.

Covers tensor attributes such as device, data type, and whether gradients are required.

Presents several ways to create tensors, such as torch.empty, torch.zeros, and torch.rand.

Discusses moving tensors between devices and setting a tensor's device.

Demonstrates basic tensor math, including addition, subtraction, and division.

Introduces broadcasting and how operations work across different dimensions.

Explains tensor indexing, including basic and advanced indexing tricks.

Discusses reshaping tensors with the view and reshape methods.

Shows matrix multiplication and matrix exponentiation.

Covers element-wise operations such as element-wise multiplication and the dot product.

Explains conditional indexing and assignment with torch.where.

Shows how to get a tensor's unique values with torch.unique.

Discusses concatenating tensors with torch.cat.

Introduces dimension operations with torch.permute and torch.squeeze.

Emphasizes that learning tensor operations saves time later on.

The video aims to give viewers a solid grounding in tensor operations.

Transcripts

00:00

Learning the basic tensor operations is an essential part of PyTorch, it's worth spending some time on, and it's probably the first thing you should do before anything related to deep learning. What is going on guys, hope you're doing awesome. In this video we're going to go through four parts. We're going to start with how to initialize a tensor, and there are many ways of doing that, we're going to go through a lot of them. Then we're going to do some tensor math and comparison operations, we're going to go through tensor indexing, and lastly we're going to go through tensor reshaping. I just want to say that I really encourage you to watch this video to the end so you get a grasp of these tensor operations. Even if you don't memorize them after this video, and there's probably no way you can memorize all of them, you will at least know that they exist, and that will save you a lot of time in the future. In this video I will cover the basics, but I will also go a bit beyond that and show you a lot of useful operations for different scenarios. There's no way I'm able to cover all of the tensor operations, and there are a bunch of more advanced ones that I don't even know about yet, but these are definitely enough to give you a solid foundation.

01:23

So I guess we'll just get started, and the first thing I'm going to show you is how to create a tensor. What we're going to do is my_tensor = torch.tensor(...), and the first thing we pass is a list, with a list inside that list: one two three, and then another list: four five six. What this means right here is that we have two rows, so this is the first row and this is the second row, and we have three columns. We can then do print(my_tensor) and run that, and we get it in a nice format: one and four for the first column, two and five, and three and six. What we can do as well is set the type of this tensor, so we can do dtype=torch.float32, and then if we print it again we see that those are float values. Another thing we can do is set the device that this tensor should be on, either CUDA or the CPU. If you have a CUDA-enabled GPU you should almost always have the tensor on CUDA, otherwise you're going to have to use the CPU, but we can specify this using device=, and we can set this to "cuda" if you have that available. If we print my_tensor again we can see that the device says cuda right here. If you do not have a CUDA-enabled GPU then you're going to have to write "cpu" right here. I think that CPU is also the default, so you don't have to write it, but it can help to just be specific. Now if we run this, the device doesn't show, which means that it's on the CPU. We can also set other arguments like requires_grad, which is important for autograd, which I'm not going to cover in this video, but it's essentially for computing the gradients which are used when we do the backward propagation through the computational graph to update our parameters through gradient descent. Anyways, I'm not going to go in depth on that. One thing we can do as well, and you're going to see this a lot in my videos and if you read PyTorch code, is that people often write device = "cuda" if torch.cuda.is_available() else "cpu". In this case, if you have CUDA enabled the device is going to be set to cuda, and otherwise it's going to be set to the CPU, kind of a priority: if you have it enabled you should use it, otherwise you're going to be stuck with the CPU, but that's all you have. So instead of writing the string here we can just write device=device, and the great thing about this is that two people can run it, and if one has CUDA it's going to run on the GPU, and if they don't have it, it's going to run on the CPU, but the code works no matter if you have it or not.

04:52

Now let's look at some attributes of tensors. As I said, we can print my_tensor and we just get some information about the tensor, like what device it's on and if it requires gradient. What we can also do is my_tensor.dtype, which in this case will print torch.float32. We can also do print(my_tensor.device), which is going to show us what device the tensor is on, so cuda, and then you're going to get an index right here, which, if you have multiple GPUs, says which GPU it's on. In this case I only have one GPU so it's going to say 0, and 0 I think is the default one if you don't specify. Then we can also do print(my_tensor.shape), which is pretty straightforward, it's just going to print the shape, which is a two by three. We can also do print(my_tensor.requires_grad), which is going to tell us if that tensor requires gradient or not, which in this case we've set to true. Alright, so let's move on to some other common initialization methods.

06:21

What we can do, if we don't have the exact values that we want to write, like in this case where we had one two three and four five six, is x = torch.empty(size=(3, 3)). What this is going to do is create a three by three tensor, or matrix I guess, and it's going to be empty in the sense that it's uninitialized data. The values that end up in it are just whatever is in memory at that moment, so the values can really be anything, so don't think that this should be zeros or anything like that, it's just uninitialized data. Now if you want zeros you can do torch.zeros, and you don't have to specify size= since it's the first argument, we can just write (3, 3) like that. We can actually print x right here and see what values it gets, and in this case the empty tensor actually got zeros, but that's not what's going to happen in general, and if you print x after the zeros call it's of course going to be zeros. What we can also do is x = torch.rand((3, 3)), and what this is going to do is initialize a three by three matrix with values from a uniform distribution on the interval 0 to 1. Another thing we could do is x = torch.ones((3, 3)), and this is just going to be a three by three matrix with all values of 1. Another thing we can do is torch.eye, where we send in five by five or something like that, and this is going to create an identity matrix, so we're going to have ones on the diagonal and the rest will be zeros. If you're curious why it's called eye, it's because I is how you write the identity matrix in mathematics, and "eye" kind of sounds like I, so that makes sense. Anyways, one more thing we can do is something like torch.arange, where we can give a start, an end, and a step; basically the arange function is exactly like the range function in Python, so this should be nothing weird. One thing I forgot to do is print them so we can see what they look like: if we print the eye one, as I said, we're going to have ones on the diagonal and the rest will be 0. If we print x after the arange, it's going to start at zero, it's going to have a step of one, and the end is a non-inclusive value of five, so we're going to have 0 1 2 3 4; if we print x we see exactly that, 0 and then up to 4 inclusive. Another thing we can do is x = torch.linspace, where we can specify where it should start, so start=0.1, and we could do something like end=1, and steps=10. What this is going to do is start at 0.1, end at 1, and have 10 values in between, so it's going to take the first value 0.1, then the next one is going to be 0.2, 0.3, 0.4, etc. up to 1. Just to make sure, we can print x and see that that's exactly what happens, and if we count the number of points we have 1 2 3 4 5 6 7 8 9 10, so we get a number of points equal to steps. Then we can also do x = torch.empty as we did for the first one, set the size to, I don't know, 1 by 5 or something like that, and then we can do .normal_(mean=0, std=1). Essentially this is going to create uninitialized data of size 1 by 5 and then make those values normally distributed with a mean of 0 and a standard deviation of 1. We could also do something like the uniform distribution, so .uniform_(0, 1), which would be similar to what we did up here with torch.rand, but of course here you can specify exactly what you want for the lower and the upper bound of the uniform distribution. Another thing we can do is torch.diag of torch.ones of some size, let's say three; it's going to create a diagonal matrix with ones on the diagonal of shape three, so essentially this is going to create a 3x3 diagonal matrix, which in this case is an identity matrix that is three by three, so we could just as well have used eye. But this diagonal function can be used on any matrix, so that we preserve the values along the diagonal, and in this case it's just simplest to use torch.ones. Now one thing I want to show as well is how to initialize tensors to different types and how to convert them between types.

12:42

So let's say we have some tensor, and we're just going to do tensor = torch.arange(4), so we have 0 1 2 3. Up here I set the start, the end, and the step; similarly to Python, the step will be 1 by default and the start will be 0 by default, so if you do arange(4) that's just the end value. Now I think this is initialized as int64 by default, but let's say we want to convert this into booleans, so true or false. What we can do is tensor.bool(), and that will just create false true true true, so the first one is 0, that's going to be false, and the rest will be true. Now what's great about these conversions I'm showing you now, .bool() and a couple more, is that they work no matter if you're on CUDA or the CPU, so whichever one you are on these are great to remember because they will always work. The next thing is we can print tensor.short(), and what this is going to do is convert it to int16; I think both of these two are not used that often, but they're good to know about. Then we can also do tensor.long(), and what this is going to do is convert it to int64, and this one is very important because it is almost always used. We can print tensor.half(), and this is going to make it float16. This one is not used that often either, but if you have newer GPUs, in the 2000 series, you can actually train your networks in float16, and that's when this is used quite often, but if you don't have such a GPU, and I don't have that new of a GPU, then it's not possible to train networks using float16. What's more common is to use tensor.float(), so this will just be 32-bit float, and this one is also super important, it is used super often, so it's good to remember. Then we also have tensor.double(), and this is going to be float64.
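For reference, the type conversions described in this part of the transcript, collected into one short snippet:

```python
import torch

tensor = torch.arange(4)  # int64 by default: tensor([0, 1, 2, 3])

print(tensor.bool())    # torch.bool    -> [False, True, True, True]
print(tensor.short())   # torch.int16
print(tensor.long())    # torch.int64   (used very often)
print(tensor.half())    # torch.float16
print(tensor.float())   # torch.float32 (used very often)
print(tensor.double())  # torch.float64
```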

15:11

Now the next thing I'm going to show you is how to convert between a tensor and, let's say, a NumPy array. So we'll say that we import numpy as np, and let's say that we have some NumPy array, np.zeros, say a 5 by 5 matrix, and we want to convert this to a tensor. Well, this is quite easy, we can do tensor = torch.from_numpy(np_array), sending in that NumPy array, and that's how we get it to a tensor. Now if you want to convert it back, so you get the NumPy array back, we can just do, let's say, np_array_back = tensor.numpy(), and this is going to bring back the NumPy array. Perhaps there might be some numerical round-off errors, but they will be exactly identical otherwise. So that was how to initialize a tensor and some other useful things like converting between types, such as float and double, and also how to convert between NumPy arrays and tensors.
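For reference, the NumPy round trip described above as one snippet:

```python
import numpy as np
import torch

np_array = np.zeros((5, 5))
tensor = torch.from_numpy(np_array)  # NumPy array -> tensor
np_array_back = tensor.numpy()       # tensor -> NumPy array
```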

16:28

Now we're going to jump to tensor math and comparison operations. We're going to first initialize two tensors, which we know exactly how to do at this point: x = torch.tensor([1, 2, 3]) and y = torch.tensor([9, 8, 7]). We're going to start real easy, so we're just going to start with addition. There are multiple ways of doing addition, and I'm going to show you a couple of them. We can do something like z1 = torch.empty(3), then torch.add(x, y, out=z1). Now if we print z1 we're going to get 10 10 and 10, because we've added these together, and as we can see 1 plus 9 is 10, 2 plus 8, and 3 plus 7. So this is one way. Another way is to just do z = torch.add(x, y), and we're going to get exactly the same result. Another way, and this is my preferred way, is just to do z = x + y, so real simple and real clean, and these are all identical so they will do exactly the same operations, so in my opinion there's really no reason not to use just the normal addition. For subtraction there are again other ways to do it as well, but I recommend doing it like this: z = x - y. Now for division, this is a little bit more clunky in my opinion, but I think they are making some changes in future versions of PyTorch. We can do z = torch.true_divide(x, y), and what's going to happen here is element-wise division if they are of equal shape, so in this case it's going to do 1/9 as the first element, 2 divided by 8, and 3 divided by 7. Let's say that y is just an integer, so y is, I don't know, 2; then what's going to happen is it's going to divide every element in x by 2, so it would be 1/2, 2/2 and 3/2 if y were an integer.

19:07

Now another thing I'm going to cover is in-place operations. Let's say that we have t = torch.zeros(3), three elements, and let's say we want to add x, but we want to do it in place, and what that means is it will mutate the tensor in place, so it doesn't create a copy. We can do that by t.add_(x), and whenever you see an operation followed by an underscore, that's when you know that the operation is done in place. So doing these
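For reference, a tiny sketch of the in-place addition described at the end of this excerpt:

```python
import torch

x = torch.tensor([1, 2, 3])
t = torch.zeros(3)

t.add_(x)  # trailing underscore: mutates t in place, no copy is created
t += x     # also in place (whereas t = t + x would create a new tensor)
```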