Shai Rosenfeld - Such Blocking, Very Concurrency, Wow

ScaleConf
20 Jun 2014 · 42:19

Summary

TL;DR: The speaker discusses various models of concurrency and parallelism, highlighting their similarities and differences. The talk covers threads, mutexes, message passing, and shared memory, as well as the two scheduling approaches, preemptive and cooperative, and emphasizes that understanding these models is essential for handling the complexity of modern computing systems, where doing more things at once is crucial for scaling and performance.

Takeaways

  • 📈 The talk tours the major models of concurrency and parallelism, explaining their similarities and differences.
  • 🔄 Concurrency is about dealing with multiple things at once, while parallelism is about doing multiple things at once.
  • 💡 Rob Pike differentiates the two: concurrency is structuring a program to do more things, parallelism is executing them simultaneously.
  • 🌟 Joe Armstrong's analogy: concurrency is two queues and one coffee machine; parallelism is two queues and two coffee machines.
  • 🔧 Operating systems use threads and processes as the primary mechanisms for executing work.
  • 🔄 Scheduling is either preemptive, where the system can interrupt tasks at any time, or cooperative, where tasks explicitly yield control to others.
  • 🤝 Concurrency models need to coordinate execution, often dealing with atomicity and ensuring data consistency.
  • 🔒 Mutexes and locks are commonly used to manage access to shared resources and prevent data corruption.
  • 📨 Message passing and shared memory are the two primary methods of communication between executing entities in concurrent systems.
  • 🚀 Each concurrency model has advantages and disadvantages; the right choice depends on the specific requirements and constraints of the problem at hand.
  • 🎓 Concurrency is a vast, complex subject, and each model deserves deep exploration in its own right.

Q & A

  • What is the main topic of the discussion?

    -The main topic of the discussion is concurrency and scaling, with a focus on different models and their applications in handling multiple tasks simultaneously.

  • What is the difference between concurrency and parallelism as explained in the transcript?

    -Concurrency is about dealing with a lot of things at once, structuring a program to do more things, whereas parallelism is about doing a lot of things at once. Parallelism is a condition that arises when two things are happening literally at the same second, such as running two processes or two threads on a multi-core computer.

  • What are the two primary ways of scheduling tasks as mentioned in the transcript?

    -The two primary ways are preemptive scheduling, where the system reserves the right to interrupt any task at any time, and cooperative scheduling, where tasks voluntarily yield control to other tasks.

  • What is the significance of atomicity in the context of concurrency?

    -Atomicity is significant in concurrency because it is the fundamental reason why concurrency is challenging. Non-atomic operations can lead to issues like race conditions and data corruption when multiple threads or processes try to modify shared data simultaneously.
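
    A minimal Python sketch of the non-atomic increment described above (illustrative only; the talk itself shows no code):

        import threading

        counter = 0  # shared state, unprotected

        def increment(n):
            global counter
            for _ in range(n):
                counter += 1  # read, add, write back: three steps, not one

        threads = [threading.Thread(target=increment, args=(100_000,))
                   for _ in range(2)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        # Depending on interpreter and timing, this can print less than
        # 200000: an update is lost whenever one thread's write lands on
        # top of a value the other thread read before it finished.
        print(counter)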

  • How does the use of mutexes address the problem of atomicity?

    -Mutexes, or mutual exclusion locks, are used to ensure that only one thread can access a shared resource at a time, thus preventing other threads from interfering and ensuring atomicity of operations.
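
    The same counter made safe with a mutex, as a minimal sketch:

        import threading

        counter = 0
        lock = threading.Lock()  # the mutex

        def increment(n):
            global counter
            for _ in range(n):
                with lock:        # only one thread holds the lock at a time,
                    counter += 1  # so the read-modify-write runs atomically

        threads = [threading.Thread(target=increment, args=(100_000,))
                   for _ in range(2)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(counter)  # reliably 200000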

  • What are the advantages and disadvantages of using threads with mutexes?

    -Advantages include widespread availability and familiarity among programmers. Disadvantages include the potential for deadlocks and livelocks, and the overhead of managing locks.

  • How does message passing differ from shared memory communication?

    -Message passing involves sending and receiving messages between processes or threads, which can avoid the complexity of shared memory management. Shared memory involves writing and reading from shared memory locations, which can lead to atomicity issues if not properly managed.

  • What is the actor model as explained in the transcript?

    -The actor model is a concurrency model that uses message-passing to manage state and communication between entities called actors. Each actor has its own state and mailbox for receiving messages, and they communicate by sending messages to each other's mailboxes.
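
    A toy Python sketch of the idea (the CounterActor class and its messages are hypothetical; real actor systems such as Erlang's add supervision, distribution, and much more):

        import threading, queue

        class CounterActor:
            """Private state plus a mailbox, drained by a single thread."""
            def __init__(self):
                self.mailbox = queue.Queue()
                self.count = 0  # private state; no other thread touches it
                threading.Thread(target=self._run).start()

            def send(self, message):  # the only way in: message passing
                self.mailbox.put(message)

            def _run(self):
                while True:
                    message = self.mailbox.get()  # one message at a time
                    if message == "stop":
                        break
                    if message == "increment":
                        self.count += 1
                    elif message == "print":
                        print(self.count)

        actor = CounterActor()
        for _ in range(1000):
            actor.send("increment")
        actor.send("print")  # prints 1000: no locks, no shared state
        actor.send("stop")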

  • What are the benefits of using the actor model?

    -Benefits include the ability to express concurrency in a concise and object-oriented manner, the avoidance of shared state which reduces the risk of corruption, and the natural fit for distributed systems due to the message-passing nature of actors.

  • How does event-driven programming differ from other concurrency models?

    -Event-driven programming is single-threaded and relies on callbacks and events to handle asynchronous input, without the need for multi-threading or multi-processing. It's typically used in scenarios where resource management is crucial, such as handling a large number of concurrent connections.
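
    A bare-bones sketch of this style in Python: one selector, one loop, and a callback per socket (the echo server and port are assumptions, not an example from the talk):

        import selectors, socket

        sel = selectors.DefaultSelector()

        def accept(server):  # callback: a new client is ready
            conn, _ = server.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ, echo)

        def echo(conn):      # callback: a client sent data
            data = conn.recv(1024)
            if data:
                conn.sendall(data)
            else:
                sel.unregister(conn)
                conn.close()

        server = socket.socket()
        server.bind(("localhost", 9000))
        server.listen()
        server.setblocking(False)
        sel.register(server, selectors.EVENT_READ, accept)

        while True:                      # the event loop: a single thread
            for key, _ in sel.select():  # serving many connections
                key.data(key.fileobj)    # invoke the registered callback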

  • What are the challenges of implementing event-driven programming?

    -Challenges include the difficulty in debugging due to the loss of context and call stack, the potential for callback hell, and the reliance on state management to navigate the program flow.

Outlines

00:00

🔗 Introduction to Concurrency and Scaling

The speaker introduces the dense topic of concurrency in computing, encouraging the audience to follow along with detailed slides available online. The discussion is set to explore scaling and concurrency, crucial for performing multiple operations simultaneously. The speaker works at Engine Yard but will focus on general scaling rather than specific services. Cats are humorously included in the presentation as a nod to typical tech talks. The overall aim is to map out various concurrency models and their practical applications.

05:01

🔄 Understanding Concurrency and Parallelism

This section dives deeper into the definitions and distinctions between concurrency and parallelism. The speaker clarifies these concepts through various authoritative views, including Rob Pike's differentiation: concurrency as handling many things at once and parallelism as doing many things at the same time. The segment includes analogies like queues at coffee machines to make these concepts more relatable and easier to grasp for the audience.

10:01

📊 Models of Concurrency: Threads and Mutexes

The discussion moves to practical models of concurrency, starting with the basic 'threads and mutexes'. This model involves threads (units of execution) and mutexes (locks that prevent simultaneous execution of critical sections of code) to manage atomicity: ensuring operations complete in whole steps without interruption. The speaker explains how mutexes help prevent data corruption by controlling the sequence of thread operations, highlighting both the advantages and limitations of this approach.

15:03

🔄 Advanced Concurrency: Threads and Transactions

Exploring more sophisticated concurrency mechanisms, the speaker introduces 'threads and transactions', which uses a transactional memory approach to manage shared data. This model provides an 'optimistic' strategy, where operations assume no data conflicts and only roll back if conflicts occur, similar to database transactions. This section discusses the benefits of increased concurrency and better composability, along with the drawbacks of potential rollback complexities.
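
Python has no built-in transactional memory, so the sketch below only schematizes the optimistic compute-validate-retry loop; Ref and atomically are toy names, not a real library:

    import threading

    class Ref:
        """Toy versioned cell for sketching an optimistic update."""
        def __init__(self, value):
            self.value, self.version = value, 0
            self._lock = threading.Lock()  # guards only the commit step

        def read(self):
            return self.value, self.version

        def try_commit(self, new_value, read_version):
            with self._lock:
                if self.version != read_version:
                    return False  # someone else committed first: conflict
                self.value = new_value
                self.version += 1
                return True

    def atomically(ref, fn):
        while True:  # assume no conflict, compute, then validate
            value, version = ref.read()
            if ref.try_commit(fn(value), version):
                return  # committed; on conflict, discard work and retry

    balance = Ref(100)
    atomically(balance, lambda v: v + 10)  # retried if a writer races us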

20:04

🔮 Futures and Promises: Asynchronous Programming

The talk progresses to 'futures and promises', a model that allows for asynchronous programming. This approach involves using 'futures' to represent pending results and 'promises' as guarantees of future values. The model facilitates non-blocking operations, where tasks can proceed without waiting for other tasks to complete, thus enhancing efficiency. The speaker covers the advantages of abstraction from direct locking mechanisms and the challenges of potential delays in execution.
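
In Python terms, concurrent.futures captures the flavor (a sketch; slow_square stands in for any slow computation or network call):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def slow_square(n):
        time.sleep(1)  # stand-in for a slow computation or network call
        return n * n

    with ThreadPoolExecutor() as pool:
        future = pool.submit(slow_square, 7)  # returns a future immediately
        print("doing other work while the result is pending...")
        print(future.result())  # blocks only when the value is finally needed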

25:05

๐Ÿ” Processes and Inter-process Communication (IPC)

Focusing on 'processes and inter-process communication (IPC)', this segment discusses the use of separate processes to perform concurrent tasks, communicating via messages rather than shared memory. This model is especially useful in systems where processes operate independently, reducing the risks of data corruption and deadlock. The speaker explains the use of IPC for scalability and the potential high costs associated with process management.
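
A small sketch of the pattern using Python's multiprocessing, where queues carry the messages and the worker shares no memory with its parent:

    from multiprocessing import Process, Queue

    def worker(inbox, outbox):
        while True:
            job = inbox.get()
            if job is None:   # sentinel: no more work
                break
            outbox.put(job * job)

    if __name__ == "__main__":
        inbox, outbox = Queue(), Queue()
        p = Process(target=worker, args=(inbox, outbox))
        p.start()
        for n in range(5):
            inbox.put(n)      # communicate by sending messages
        inbox.put(None)
        p.join()
        while not outbox.empty():  # and receive results the same way
            print(outbox.get())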

30:06

🔄 Communicating Sequential Processes (CSP) and Actors

The presentation examines CSP and the actor model, both of which manage concurrency through message passing between independent agents or 'actors'. Each actor processes messages sequentially, ensuring that operations are isolated and reducing the chances of data conflict. This model is highlighted for its scalability and robustness in systems like telecoms where reliability is critical. The speaker emphasizes the flexibility and potential complexity of implementing these models.
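
Go's channels are the canonical CSP construct; as a rough Python stand-in, a bounded queue between two threads can play the channel's role (illustrative only):

    import threading, queue

    channel = queue.Queue(maxsize=1)  # a channel-like bounded pipe

    def producer():
        for n in range(3):
            channel.put(n)    # blocks until the consumer catches up
        channel.put(None)     # signal: no more values

    def consumer():
        while True:
            n = channel.get()
            if n is None:
                break
            print("got", n)

    threading.Thread(target=producer).start()
    consumer()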

35:07

🔄 Coroutines and Event-driven Programming

The speaker discusses 'coroutines' and 'event-driven programming', emphasizing their utility in scenarios where non-blocking operations are critical, such as user interfaces and network servers. Coroutines allow for cooperative task management, while event-driven architectures handle tasks as they become actionable, leading to efficient resource use. The segment covers the strengths and weaknesses of these models, particularly in contexts requiring high concurrency with minimal overhead.
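
The cooperative handoff is easy to see with Python's asyncio coroutines; in this sketch each await is the explicit "your turn" the speaker describes (the task names echo the talk's Fred-and-Joe analogy):

    import asyncio

    async def task(name, delay):
        for i in range(3):
            print(name, i)
            await asyncio.sleep(delay)  # yield control to the event loop

    async def main():
        # the two tasks interleave, but only at the await points
        await asyncio.gather(task("fred", 0.1), task("joe", 0.1))

    asyncio.run(main())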

40:08

🔗 Recap and Future Directions in Concurrency

Concluding the presentation, the speaker reiterates that no single concurrency model is universally best, urging the audience to consider the specific needs and constraints of their projects. The talk aims to expand the listeners' understanding of concurrency, providing a foundation for further exploration and innovation in this complex field. The session ends with a Q&A, emphasizing interactive learning and the ongoing development of concurrency technologies.

Keywords

💡Concurrency

Concurrency refers to the concept of dealing with multiple tasks or processes simultaneously. In the context of the video, it is a fundamental aspect of computer programming where the speaker discusses various models and techniques to handle concurrent operations. The speaker illustrates this with examples such as threads and mutexes, and the challenges that arise from ensuring atomicity in operations.

💡Parallelism

Parallelism is the practice of executing multiple processes or tasks at the same time, typically across different processing units like multi-core CPUs. While similar to concurrency, parallelism focuses on the actual simultaneous execution of tasks. The speaker differentiates between concurrency and parallelism using the analogy of a single coffee machine versus two coffee machines, where the latter represents parallel processing.

💡Mutex

A mutex, short for mutual exclusion, is a locking mechanism used in concurrent programming to ensure that only one process can access a shared resource at a time. Mutexes are essential in preventing race conditions and ensuring data integrity during concurrent operations. The speaker explains that using a mutex can help in structuring a program to handle multiple tasks without corrupting shared data.

💡Race Condition

A race condition occurs in concurrent programming when the outcome of the program depends on the relative timing or interleaving of events, such as the execution of threads. It often leads to unexpected behavior because the operations are not atomic, and the sequence of operations can vary with each execution. The speaker discusses the importance of atomicity in preventing race conditions.

💡Event Loop

An event loop is a programming construct that waits for events and dispatches them to event handlers or callbacks. It is central to event-driven and asynchronous programming models, allowing non-blocking operations. The speaker mentions event loops in the context of evented programming, where it processes events and associated callbacks without blocking the main thread of execution.

💡Callback

A callback is a function that is passed as an argument to another function and is intended to be executed at a later time. In the context of evented programming, callbacks are used to handle events once they occur. The speaker discusses the use of callbacks in event handlers, where they are invoked when specific events are detected by the event loop.

💡Actor Model

The actor model is a mathematical model for concurrent computation in which parallelism is expressed in terms of message passing between entities called actors. Each actor is an independent unit with its own state and behavior, and communicates with other actors by sending and receiving messages. The speaker explains the actor model as a concurrency model that uses message passing through mailboxes, similar to CSP but with actors having identities.

💡Message Passing

Message passing is a method of communication between processes or threads where a message is sent from one entity to another, and the recipient executes actions based on the content of the message. It is a fundamental concept in concurrent programming that helps avoid shared state and the associated synchronization issues. The speaker emphasizes message passing as a means of communication in models like CSP and the actor model.

💡Shared Memory

Shared memory is a method of inter-process communication where multiple processes have access to the same region of memory. It allows for efficient data sharing but requires careful synchronization to avoid concurrency issues. The speaker discusses shared memory as a means of communication between threads and processes, and the challenges it presents in ensuring atomic operations.

💡Preemptive Scheduling

Preemptive scheduling is a method used by an operating system to manage computer processes, where the OS can interrupt a currently running process to give CPU time to another process. This allows for multitasking and efficient use of system resources. The speaker contrasts preemptive scheduling with cooperative scheduling, highlighting how it can lead to concurrency and the need for proper synchronization mechanisms.

💡Cooperative Scheduling

Cooperative scheduling is a method of task scheduling where tasks voluntarily yield control to other tasks, rather than being interrupted by the scheduler. This approach requires each task to release control, often using a form of a 'yield' or 'sleep' function. The speaker discusses cooperative scheduling in the context of evented programming and coroutines, where tasks pass control to each other to avoid blocking the main execution thread.

Highlights

Concurrency and parallelism are distinct concepts; concurrency is about dealing with many things at once, while parallelism is about doing many things at once.

Rob Pike, creator of Go, differentiates concurrency as structuring programs to do more things, and parallelism as the occurrence of many things happening at once.

Joe Armstrong, creator of Erlang, explains concurrency and parallelism with the analogy of one coffee machine with two queues versus two coffee machines with two queues.

The fundamental challenge in concurrency is atomicity; ensuring operations are executed in a single, uninterruptible run is crucial for data integrity.

Operating systems execute work through threads and processes, with threads being either native OS threads or green threads within a virtual machine.

Scheduling mechanisms for executing tasks include preemptive scheduling, where the OS can interrupt tasks, and cooperative scheduling, where tasks voluntarily yield control.

Concurrency models can be broadly categorized by their communication methods: shared memory, message passing, and channels.

The use of mutexes (locks) is a common method to handle atomicity in concurrent programming, ensuring that shared data is accessed by only one thread at a time.

Software transactional memory (STM) provides an optimistic approach to concurrency, allowing for rollbacks in case of conflicts, similar to database transactions.

Message passing through channels or futures and promises is a concurrency model that abstracts away the need for locks, handling communication between executing entities.

Inter-process communication (IPC) is a method for separate processes to communicate, often avoiding shared memory to prevent corruption and simplify atomicity management.

Communicating Sequential Processes (CSP) is a model that uses channels for message passing, influencing languages like Go, which heavily relies on channels for concurrency.

The Actor model is similar to CSP but uses actors with individual mailboxes for message passing, making it akin to object-oriented threads.

Coroutines are single-threaded, cooperative concurrency models that allow for pausing and resuming execution, useful for managing local state without locks.

Event-driven programming is a concurrency model that uses a single execution thread, relying on callbacks, event handlers, and an event loop to manage non-blocking operations.

Event-driven programming can scale well in terms of resource usage, as seen in web servers like Nginx, which handle many connections with fewer threads than traditional models.

The choice of concurrency model should be influenced by the specific needs of the application, considering factors like simplicity, scalability, and the potential for parallelism.

Transcripts

00:00

all right uh so such blocking very

00:02

concurrency

00:03

wow that's uh what i'm going to talk

00:05

about um so the

00:07

the slides are going to be pretty dense

00:08

today i'm going to talk about a lot of

00:10

material so if you guys

00:11

um yesterday when i was sitting here i

00:13

had internet so if you guys want to pull

00:14

this up on your

00:15

laptop or ipad or whatever you can pull

00:18

this up it's

00:18

this link it's basically my name and

00:21

then

00:22

sb vc such blocking very concurrency so

00:24

you can

00:25

pull that up and follow along so like

00:28

jonathan says i work at engine yard it's

00:30

a company that basically

00:32

provisions stuff like windows azure and

00:35

amazon and provides you with a

00:38

ready stack to use your application with

00:40

so you just

00:41

it's basically similar to heroku and

00:43

other platforms as a service

00:45

that you just push your application

00:47

there similar to google app engine as

00:48

well

00:49

you just push your application and then

00:50

it kind of works but i'm not going to

00:52

talk about

00:53

engine yard today i'm going to talk about

00:55

cats

00:57

and because every tech talk needs a cat

00:59

and actually the other day when we were

01:00

at that beer

01:02

tap place jonathan said that we should

01:05

put

01:05

a cat in every tech talk so i

01:07

put some images here for you to

01:09

enjoy there's a cat there's another cat

01:23

all right well i really want to talk

01:24

about scaling because this is scale conf

01:26

so it's actually not going to be related

01:27

to engine yard

01:28

what i want to talk about is uh scaling

01:31

and concurrency

01:32

so really what what is scaling right we

01:35

have a certain set amount of time to do

01:36

something

01:37

we have these things that we need to do

01:39

and then we need to do more of them

01:41

and that's what scaling is right so

01:42

scaling is really doing more things at

01:44

once

01:46

so when i was invited to come here i

01:48

decided okay i'm going to talk about all

01:49

the ways you can do more things

01:51

okay i'm going to talk about all the

01:53

talk about all the things

01:54

that you can do and to accomplish this

01:57

goal to do more things

01:59

so i'm going to map them all out right

02:00

and i decided to talk about concurrency

02:02

and parallelism because apparently

02:03

they're different

02:04

and i kept on kind of looking into it

02:06

and i realized well there's like a ton

02:07

of

02:08

models it wasn't really going to put me

02:09

off i was going to talk about all the

02:11

things

02:12

and there's just a ton of calculus and

02:14

all these you know mathematical models

02:15

and it just keeps on going and going and

02:17

going so i decided you know maybe i'm

02:18

not going to talk about everything in

02:20

depth

02:20

but i do i do want to to talk about

02:23

concurrency models

02:24

so what i ended up deciding on is i'm

02:26

going to kind of do a tour

02:28

of the different models that exist i'm

02:30

going to talk about

02:31

some general concepts that all these

02:33

models kind of share

02:35

and then i'm going to go through some of

02:36

the models and talk about them and kind

02:37

of

02:38

iterate some advantages and

02:39

disadvantages that each one of these

02:41

things have

02:42

and then i'm going to let you sit with

02:43

it in your own head and figure it out

02:45

because really

02:46

this is a huge subject it's i'm

02:48

surprised myself that i put all this

02:50

information in one in one presentation

02:52

because really even one specific model

02:54

you can spend a whole lifetime on

02:56

right this is it's a really big subject

02:58

and so i'm

02:59

going to leave it for you guys to to dig

03:01

in further

03:03

so talk about uh start with the the

03:06

general concepts which are the common

03:07

ingredients that all these models share

03:09

and what i've kind of mapped it out to

03:10

is the following things that i'm going

03:12

to start

03:13

just diving into so concurrency versus

03:15

parallelism are apparently different

03:17

right so what what is the difference

03:19

i found this on the internet says on the

03:22

haskell blog and it says not all

03:23

programmers agree on the meaning of the

03:24

terms parallelism and concurrency

03:26

they may define them in different ways

03:27

or do not distinguish them

03:29

at all and so i decided okay well let's

03:31

ask the authority figures

03:33

what what is going on and so rob pike

03:35

created go was a pretty good authority

03:37

figure for concurrency

03:38

says that concurrency is about dealing

03:40

with a lot of things at once and

03:41

parallelism is about doing a lot of

03:42

things at once

03:44

and they're they're similar but they're

03:46

a little different and he basically

03:47

talks

03:47

in this um this talk that he gave in san

03:50

francisco it's a really good talk and

03:52

you're going to have resources and links

03:53

that you guys can check out on the

03:54

slides but he basically says that

03:55

concurrency

03:56

is is structuring your program your

03:58

program to do more things

04:00

whereas parallelism is is the fact that

04:02

it happens a lot at once

04:03

and so the other authority figure i

04:05

decided to ask was monty python

04:07

figures and actually it was joe

04:09

armstrong

04:11

so erlang is another highly concurrent

04:13

language and he

04:14

says when concurrency is two queues one

04:16

coffee machine

04:17

and parallelism is two queues two coffee

04:19

machines that's how he decides to

04:20

explain it to a five-year-old

04:22

it's a similar it's a similar concept

04:24

you can see parallelism is is happening

04:26

at the same time whereas concurrency is

04:27

just the way you structure it to have

04:29

two lines

04:29

happen at the same time but you still

04:31

only have this one

04:33

one coffee machine and so the way that

04:35

that it made sense to me

04:36

was that parallelism is really a

04:38

condition that arises when two things

04:39

are happening literally at the same

04:41

second so for example if you run

04:43

two processes or two threads on a

04:44

multi-core computer and they're

04:46

executing at the same second that's

04:47

parallel whereas concurrency is kind of

04:49

it's almost like a super set

04:51

of parallelism it doesn't have to be

04:52

parallel but it can be

04:54

and that's it's not entirely true

04:56

because some i've i've also read that

04:58

some people say that there's you know

04:59

there are certain things that are

05:00

parallel and not concurrent but i think

05:02

it's more of like a

05:03

philosophical programming kind of debate

05:06

thing so

05:08

so operating system mechanisms we

05:10

have things that we need to execute to

05:11

do work right so how do we execute these

05:13

things

05:14

so really it's just threads and

05:15

processes right there's not there's not

05:16

much else to it

05:18

you have a regular operating system

05:19

process a linux task

05:21

and we have threads native operating

05:23

system threads

05:24

again same thing but they they just

05:26

share memory instead of

05:27

being contained self-contained and then

05:30

there are green threads

05:31

and virtual machine processes these are

05:33

basically kind of user level threads

05:36

that are done within the virtual machine

05:37

that you're running

05:38

or the interpreter and they're they're

05:40

similar to regular threads except

05:42

they're not on the operating system

05:43

level and for the purpose of the talk

05:45

i'm just going to really uh split it

05:47

into

05:47

processes and threads and the reason i

05:49

put leonardo dicaprio there is because

05:52

it's a virtual machine process it's a

05:53

process within a process so it's kind of

05:55

inception

05:58

so scheduling so we have these these

06:00

executing things right process and

06:02

threads is what i'm going to refer to

06:03

now as an executing thing

06:05

and these executing things need to be

06:06

scheduled so how how do they run like

06:09

when do they run

06:10

so there's really only two ways you can

06:11

that this is usually done right there's

06:12

the preemptive way

06:13

and then there's the cooperative way and

06:15

the preemptive way

06:17

is basically saying we reserve the right

06:18

to attack anyone at any given time so

06:20

basically anything can happen

06:21

right you have one thing that's running

06:23

and then who knows what happens who

06:25

knows how long has passed something else

06:26

gets invoked at the same

06:27

you know immediately and so that's you

06:30

know interrupts are

06:31

preemptive and scheduling threads in an

06:33

operating system is preemptive

06:34

and that's preemption whereas the

06:36

cooperative uh

06:38

way of scheduling is basically it's kind

06:40

of handing off the baton to something

06:41

it's like

06:42

fred is running fred is doing all this

06:44

stuff and then he stops he pauses and

06:45

says okay

06:47

i'm done it's now your turn to do

06:48

something and then joe picks up the

06:50

baton and starts

06:51

you know making hot potatoes or

06:53

something stops when he's done he passes

06:55

on to harry and then the circle kind of

06:56

goes around but they're all

06:58

they're all working together and this is

06:59

the cooperative way of scheduling

07:02

so really what what this is is that it

07:04

relies on the program or the thing

07:05

that's running the executing thing

07:07

that's running to say i'm done

07:08

it's now your turn

07:12

so this is probably the most important

07:14

um common

07:15

common ingredient that all these things

07:17

share because really we have all these

07:19

things that are executing they're

07:20

running either preemptively or

07:21

cooperative

07:21

cooperatively or whatever but they they

07:24

need to coordinate between them what's

07:25

happening right because they're all

07:26

doing work with the same stuff so how do

07:28

they how do we make sure

07:29

that these things don't trip over each

07:30

other this kind of leads me into the

07:32

concept of

07:33

atomicity and atomicity is basically

07:36

the the fundamental reason of why

07:38

concurrency is hard

07:40

right this is the reason that that you

07:43

know people do phds on this stuff

07:45

is because things are not atomic and the

07:47

reason that it can create havoc is

07:48

because if you you look at that example

07:50

right there

07:51

you'll see you know the first thread is

07:52

trying to increment a variable right in

07:54

this case we want two threads to

07:56

increment a variable the first thread is

07:57

going to read the variable from ram

07:59

it's going to push it into the cpu into

08:01

some cpu register make the calculation

08:03

to

08:03

you know up it by one and save its local

08:05

stack and then

08:06

the other thread gets preemptively

08:08

scheduled it reads the variable from ram

08:10

and if you can see the first thread

08:12

hasn't written it back to memory yet so

08:13

it reads it and it's zero

08:15

and then uh the first thread writes it

08:17

back to ram the second thread uh picks

08:19

up from where it's left it increments it

08:21

it's one

08:22

and it pushes it down to ram and it's

08:23

it's one whereas in reality this should

08:25

have been two

08:26

right that's a serious problem because

08:28

you can't trust your programs doing what

08:29

you're doing it's it's bad

08:30

right and um yeah it's really bad

08:34

people can die so the way that

08:38

these things communicate with

08:39

each other

08:40

there's really two kind of main ways to

08:42

do it one is is shared memory

08:44

and shared memory is basically writing

08:45

some state or some value to

08:47

uh you know variable in in ram basically

08:50

and then the other executing thing looks

08:51

at that and decides what in it what it

08:53

needs

08:54

to do according to that value right now

08:55

shared memory and it's a pretty

08:56

efficient means

08:57

of passing data between between these

08:59

executing things

09:01

the other way is through message passing

09:03

and this is a

09:05

we'll get to it later but message

09:06

passing is essentially instead of

09:08

invoking a certain function

09:10

that needs to be done you you pass the

09:12

message of what needs to be done to some

09:14

kind of proxy and then that thing

09:16

decides to you know it could either be

09:18

invoked automatically you know at the

09:20

same time or it could be done later

09:22

or whatever but the the idea of message

09:24

passing is that you don't directly

09:25

invoke something you just

09:26

tell something to happen

09:29

and um and channels is kind of a subset

09:32

to me it's a subset of message passing

09:34

except that your interface is through a

09:36

stream so instead of uh you having to

09:38

like

09:38

send a certain message and and have to

09:40

deal with that your api is basically

09:42

like a file

09:42

and this is pretty pretty cool because

09:44

if you think about like the unix

09:45

paradigm of everything as a file

09:47

you know having a file stream api for

09:50

message passing is

09:50

is nice so

09:54

i'm going to talk about the model so

09:55

those were kind of the the general

09:57

concepts that we're going to use to

09:58

try and categorize all these different

10:01

things in a way that

10:02

that hopefully will be easier to kind of

10:04

grasp and then go deeper

10:06

you know on your own time so this is the

10:09

link again

10:10

um because the slides are going to be

10:12

you know pretty dense

10:13

if you haven't pulled it up you're

10:14

welcome to pull it up now

10:18

so uh this is the this table that took

10:21

me quite a while to put together

10:22

and it's trying to take all these uh

10:24

these things that we've talked about and

10:26

trying to classify all of the the models

10:28

that we have

10:29

and i'm going to uh iterate through

10:31

every one of these and you'll see

10:32

you know the um you'll see this the

10:35

table kind of row on the top so you can

10:37

refer to that

10:38

as to when i continue going along

10:41

so threads and mutexes is the pretty

10:45

you know uh pretty fundamental kind of

10:47

concurrency

10:48

thing that you do i'm pretty sure most

10:50

of you guys know it yesterday

10:52

i noticed that most of you are backend

10:53

developers so i'm pretty sure you guys

10:55

are familiar with

10:56

most of these things um but threads and

10:58

mutexes i'm sure is probably

11:00

pretty pretty something you're pretty

11:01

familiar with so

11:03

so again how do we deal with like the

11:05

atomicity problem right how do we make

11:06

sure that data doesn't corrupt it well

11:07

use a mutex right use a lock

11:09

and um i'm just going to the table on

11:11

the top right there you can see so

11:14

this threads and mutexes is kind of

11:16

the name of this this pattern

11:17

um it uses threads right and it's it's

11:20

preemptively scheduled because operating

11:22

system threads are preemptively

11:23

scheduled and it uses shared memory as a

11:26

means to communicate

11:27

and it does it through locks but with

11:29

locks but shared memory is kind of

11:31

the method of communication and it's