Shai Rosenfeld - Such Blocking, Very Concurrency, Wow
Summary
TL;DR: The speaker surveys models of concurrency and parallelism, highlighting their similarities and differences. He covers concepts such as threads, mutexes, message passing, and shared memory, as well as the two scheduling methods, preemptive and cooperative. The talk emphasizes that understanding these models is key to handling the complexity of modern computing systems, where doing more things at once is crucial for scaling and performance.
Takeaways
- 📈 The talk focuses on various models of concurrency and parallelism, explaining their similarities and differences.
- 🔄 Concurrency is about dealing with multiple things at once, while parallelism is about doing multiple things at once.
- 💡 Rob Pike differentiates concurrency and parallelism, with concurrency being about structuring programs to do more things, and parallelism about executing them simultaneously.
- 🌟 Joe Armstrong's analogy explains concurrency as two queues sharing one coffee machine, and parallelism as two queues with two coffee machines.
- 🔧 Operating systems use threads and processes as the primary mechanisms for executing work.
- 🔄 Scheduling can be either preemptive, where the system can interrupt tasks at any time, or cooperative, where tasks yield control to others explicitly.
- 🤝 Concurrency models need to coordinate execution, often dealing with atomicity and ensuring data consistency.
- 🔒 Mutexes and locks are commonly used in concurrency models to manage access to shared resources and prevent data corruption.
- 📨 Message passing and shared memory are two primary methods of communication between executing entities in concurrent systems.
- 🚀 Different concurrency models have their advantages and disadvantages, and the choice of model depends on the specific requirements and constraints of the problem at hand.
- 🎓 The subject of concurrency is vast and complex, with each model deserving deep exploration and understanding.
Q & A
What is the main topic of the discussion?
-The main topic of the discussion is concurrency and scaling, with a focus on different models and their applications in handling multiple tasks simultaneously.
What is the difference between concurrency and parallelism as explained in the transcript?
-Concurrency is about dealing with a lot of things at once (structuring a program to do more things), whereas parallelism is about doing a lot of things at once. Parallelism is a condition that arises when two things happen at literally the same instant, such as two processes or two threads executing simultaneously on a multi-core computer.
What are the two primary ways of executing tasks as mentioned in the transcript?
-The two primary ways of executing tasks are preemptive scheduling, where the system reserves the right to interrupt any task at any time, and cooperative scheduling, where tasks voluntarily yield control to other tasks.
What is the significance of atomicity in the context of concurrency?
-Atomicity is significant in concurrency because it is the fundamental reason why concurrency is challenging. Non-atomic operations can lead to issues like race conditions and data corruption when multiple threads or processes try to modify shared data simultaneously.
How does the use of mutexes address the problem of atomicity?
-Mutexes, or mutual exclusion locks, are used to ensure that only one thread can access a shared resource at a time, thus preventing other threads from interfering and ensuring atomicity of operations.
What are the advantages and disadvantages of using threads with mutexes?
-Advantages include widespread availability and programmer familiarity. Disadvantages include the potential for deadlock and livelock, plus the overhead of managing locks.
How does message passing differ from shared memory communication?
-Message passing involves sending and receiving messages between processes or threads, which can avoid the complexity of shared memory management. Shared memory involves writing and reading from shared memory locations, which can lead to atomicity issues if not properly managed.
What is the actor model as explained in the transcript?
-The actor model is a concurrency model that uses message-passing to manage state and communication between entities called actors. Each actor has its own state and mailbox for receiving messages, and they communicate by sending messages to each other's mailboxes.
What are the benefits of using the actor model?
-Benefits include the ability to express concurrency in a concise and object-oriented manner, the avoidance of shared state which reduces the risk of corruption, and the natural fit for distributed systems due to the message-passing nature of actors.
How does event-driven programming differ from other concurrency models?
-Event-driven programming is single-threaded and relies on callbacks and events to handle asynchronous input, without the need for multi-threading or multi-processing. It's typically used in scenarios where resource management is crucial, such as handling a large number of concurrent connections.
What are the challenges of implementing event-driven programming?
-Challenges include the difficulty in debugging due to the loss of context and call stack, the potential for callback hell, and the reliance on state management to navigate the program flow.
Outlines
🔗 Introduction to Concurrency and Scaling
The speaker introduces the dense topic of concurrency in computing, encouraging the audience to follow along with detailed slides available online. The discussion is set to explore scaling and concurrency, crucial for performing multiple operations simultaneously. The speaker works at Engine Yard but will focus on general scaling rather than specific services. Cats are humorously included in the presentation as a nod to typical tech talks. The overall aim is to map out various concurrency models and their practical applications.
🔄 Understanding Concurrency and Parallelism
This section dives deeper into the definitions and distinctions between concurrency and parallelism. The speaker clarifies these concepts through various authoritative views, including Rob Pike's differentiation: concurrency as handling many things at once and parallelism as doing many things at the same time. The segment includes analogies like queues at coffee machines to make these concepts more relatable and easier to grasp for the audience.
📊 Models of Concurrency: Threads and Mutexes
The discussion moves to practical models of concurrency, starting with the basic 'threads and mutexes'. This model involves threads (units of execution) and mutexes (locks that prevent simultaneous execution of critical sections of code) to manage atomicity—ensuring operations complete in whole steps without interruption. The speaker explains how mutexes help prevent data corruption by controlling the sequence of thread operations, highlighting both the advantages and limitations of this approach.
🔄 Advanced Concurrency: Threads and Transactions
Exploring more sophisticated concurrency mechanisms, the speaker introduces 'threads and transactions', which uses software transactional memory to manage shared data. This model takes an 'optimistic' strategy: operations assume no data conflicts and roll back only if a conflict occurs, much like database transactions. This section discusses the benefits of increased concurrency and better composability, along with the drawback that rolled-back code cannot be guaranteed to complete.
🔮 Futures and Promises: Asynchronous Programming
The talk progresses to 'futures and promises', a model that allows for asynchronous programming. This approach involves using 'futures' to represent pending results and 'promises' as guarantees of future values. The model facilitates non-blocking operations, where tasks can proceed without waiting for other tasks to complete, thus enhancing efficiency. The speaker covers the advantages of abstraction from direct locking mechanisms and the challenges of potential delays in execution.
🔁 Processes and Inter-process Communication (IPC)
Focusing on 'processes and inter-process communication (IPC)', this segment discusses the use of separate processes to perform concurrent tasks, communicating via messages rather than shared memory. This model is especially useful in systems where processes operate independently, reducing the risks of data corruption and deadlock. The speaker explains the use of IPC for scalability and the potential high costs associated with process management.
🔄 Communicating Sequential Processes (CSP) and Actors
The presentation examines CSP and the actor model, both of which manage concurrency through message passing between independent agents or 'actors'. Each actor processes messages sequentially, ensuring operations are isolated and reducing the chance of data conflicts. These models are highlighted for their scalability and robustness in systems like telecoms, where reliability is critical. The speaker emphasizes their flexibility as well as the potential complexity of implementing them.
🔄 Coroutines and Event-driven Programming
The speaker discusses 'coroutines' and 'event-driven programming', emphasizing their utility in scenarios where non-blocking operations are critical, such as user interfaces and network servers. Coroutines allow for cooperative task management, while event-driven architectures handle tasks as they become actionable, leading to efficient resource use. The segment covers the strengths and weaknesses of these models, particularly in contexts requiring high concurrency with minimal overhead.
🔗 Recap and Future Directions in Concurrency
Concluding the presentation, the speaker reiterates that no single concurrency model is universally best, urging the audience to consider the specific needs and constraints of their projects. The talk aims to expand the listeners' understanding of concurrency, providing a foundation for further exploration and innovation in this complex field. The session ends with a Q&A, emphasizing interactive learning and the ongoing development of concurrency technologies.
Keywords
💡Concurrency
💡Parallelism
💡Mutex
💡Race Condition
💡Event Loop
💡Callback
💡Actor Model
💡Message Passing
💡Shared Memory
💡Preemptive Scheduling
💡Cooperative Scheduling
Highlights
Concurrency and parallelism are distinct concepts; concurrency is about dealing with many things at once, while parallelism is about doing many things at once.
Rob Pike, creator of Go, differentiates concurrency as structuring programs to do more things, and parallelism as the occurrence of many things happening at once.
Joe Armstrong, creator of Erlang, explains concurrency and parallelism with the analogy of one coffee machine with two queues versus two coffee machines with two queues.
The fundamental challenge in concurrency is atomicity; ensuring operations are executed in a single, uninterruptible run is crucial for data integrity.
Operating systems execute work through threads and processes, with threads being either native OS threads or green threads within a virtual machine.
Scheduling mechanisms for executing tasks include preemptive scheduling, where the OS can interrupt tasks, and cooperative scheduling, where tasks voluntarily yield control.
Concurrency models can be broadly categorized by their communication methods: shared memory, message passing, and channels.
The use of mutexes (locks) is a common method to handle atomicity in concurrent programming, ensuring that shared data is accessed by only one thread at a time.
Software transactional memory (STM) provides an optimistic approach to concurrency, allowing for rollbacks in case of conflicts, similar to database transactions.
Message passing through channels or futures and promises is a concurrency model that abstracts away the need for locks, handling communication between executing entities.
Inter-process communication (IPC) is a method for separate processes to communicate, often avoiding shared memory to prevent corruption and simplify atomicity management.
Communicating Sequential Processes (CSP) is a model that uses channels for message passing, influencing languages like Go, which heavily relies on channels for concurrency.
The Actor model is similar to CSP but uses actors with individual mailboxes for message passing, making it akin to object-oriented threads.
Coroutines are single-threaded, cooperative concurrency models that allow for pausing and resuming execution, useful for managing local state without locks.
Event-driven programming is a concurrency model that uses a single execution thread, relying on callbacks, event handlers, and an event loop to manage non-blocking operations.
Event-driven programming can scale well in terms of resource usage, as seen in web servers like Nginx, which handle many connections with fewer threads than traditional models.
The choice of concurrency model should be influenced by the specific needs of the application, considering factors like simplicity, scalability, and the potential for parallelism.
Transcripts
All right, so "Such Blocking, Very Concurrency, Wow": that's what I'm going to talk about. The slides are going to be pretty dense today because I'll cover a lot of material. Yesterday when I was sitting here I had internet, so if you want to pull the slides up on your laptop or iPad, you can: the link is basically my name and then "sbvc", for "such blocking very concurrency". Pull that up and follow along. Like Jonathan said, I work at Engine Yard, a company that provisions infrastructure on things like Windows Azure and Amazon and gives you a ready-made stack to run your application on. It's similar to Heroku and other platform-as-a-service offerings, and to Google App Engine as well: you just push your application and it works. But I'm not going to talk about Engine Yard today. I'm going to talk about cats, because every tech talk needs a cat. Actually, the other day when we were at that beer tap place, Jonathan said we should put a cat in every tech talk, so I put some images here for you to enjoy. There's a cat. There's another cat.
All right, what I really want to talk about is scaling, because this is ScaleConf, so this actually won't be related to Engine Yard at all. I want to talk about scaling and concurrency. So what is scaling, really? We have a set amount of time, we have things we need to do, and then we need to do more of them. That's what scaling is: doing more things at once. When I was invited to come here I decided, okay, I'm going to talk about all the ways you can do more things. I'm going to map them all out. And I decided to talk about concurrency and parallelism, because apparently they're different. As I kept looking into it I realized there are a ton of models, a ton of calculi and mathematical formalisms, and it just keeps going and going. That wasn't going to put me off, but I decided maybe I'm not going to cover everything in depth. I do want to talk about concurrency models, though. What I ended up deciding on is a tour of the different models that exist. I'll cover some general concepts that all these models share, then go through the models and point out some advantages and disadvantages of each, and then I'll let you sit with it and figure it out in your own head, because this is a huge subject. I'm surprised myself that I fit all this information into one presentation, because you could spend a whole lifetime on even one specific model. It's a really big subject, so I'm going to leave it to you to dig in further. Let's start with the general concepts, the common ingredients all these models share, which I've mapped out as the following things.
So, concurrency versus parallelism: apparently they're different. What is the difference? I found this on the Haskell wiki. It says that not all programmers agree on the meaning of the terms parallelism and concurrency; they may define them in different ways or not distinguish them at all. So I decided to ask the authority figures what's going on. Rob Pike, who created Go and is a pretty good authority on concurrency, says that concurrency is about dealing with a lot of things at once, while parallelism is about doing a lot of things at once. They're similar, but a little different. In a talk he gave in San Francisco (a really good talk; the resources and links are on the slides) he says that concurrency is structuring your program to do more things, whereas parallelism is the fact that a lot of it happens at once. The other authority figure I decided to ask was the Monty Python guys, and actually it was Joe Armstrong, who created Erlang, another highly concurrent language. He says concurrency is two queues and one coffee machine, and parallelism is two queues and two coffee machines. That's how he explains it to a five-year-old. It's the same idea: with parallelism things are happening at the same time, whereas with concurrency you've structured things so that two lines can make progress, but you still have only the one coffee machine. The way it made sense to me is that parallelism is a condition that arises when two things are happening at literally the same instant, for example two processes or two threads executing at the same moment on a multi-core computer. Concurrency is almost a superset of parallelism: it doesn't have to be parallel, but it can be. That's not entirely true, because I've also read that some people say there are certain things that are parallel and not concurrent, but I think that's more of a philosophical programming debate.
So, operating system mechanisms. We have things we need to execute to do work, so how do we execute them? Really it's just threads and processes; there's not much else to it. You have a regular operating system process, a Linux task, and you have native operating system threads, which are the same thing except they share memory instead of being self-contained. Then there are green threads and virtual machine processes: these are basically user-level threads managed inside the virtual machine or interpreter you're running, similar to regular threads except they don't exist at the operating system level. For the purpose of this talk I'm just going to split everything into processes and threads. The reason I put Leonardo DiCaprio on the slide is that a virtual machine process is a process within a process, so it's kind of Inception.
So, scheduling. We have these executing things (processes and threads, which I'll just refer to as "executing things" from now on), and they need to be scheduled. How do they run, and when do they run? There are really only two ways this is usually done: the preemptive way and the cooperative way. The preemptive way basically says: we reserve the right to interrupt anyone at any given time. Anything can happen. One thing is running, and then, who knows how much time later, something else gets invoked immediately. Interrupts are preemptive, and thread scheduling in the operating system is preemptive. That's preemption. The cooperative way of scheduling is more like handing off a baton. Fred is running, Fred does all his stuff, then he stops, pauses, and says: okay, I'm done, it's now your turn. Joe picks up the baton, starts making hot potatoes or something, stops when he's done, and passes it on to Harry, and the circle goes around, everyone working together. That's cooperative scheduling. What it really means is that it relies on the program, the executing thing that's running, to say: I'm done, it's now your turn.
This is probably the most important common ingredient all these models share. We have all these executing things, running preemptively or cooperatively or whatever, but they need to coordinate what's happening between them, because they're all doing work on the same stuff. How do we make sure they don't trip over each other? This leads me to the concept of atomicity. Atomicity is basically the fundamental reason concurrency is hard; it's the reason people do PhDs on this stuff. Things are not atomic, and that can create havoc. Look at the example on the slide: we want two threads to increment a variable. The first thread reads the variable from RAM, pushes it into a CPU register, does the calculation to bump it by one, and saves it on its local stack. Then the other thread gets preemptively scheduled and reads the variable from RAM, and since the first thread hasn't written its result back to memory yet, it reads zero. Then the first thread writes its value back to RAM, the second thread picks up where it left off, increments its copy to one, and pushes that down to RAM. The result is one, when in reality it should have been two. That's a serious problem, because you can't trust that your program does what you wrote. It's bad. It's really bad. People can die.
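To make the lost update concrete, here is a minimal Python sketch of my own (the talk shows a diagram, not code); the time.sleep(0) deliberately widens the window between the read and the write so the race shows up reliably:

```python
import threading
import time

counter = 0

def increment(n):
    global counter
    for _ in range(n):
        tmp = counter      # read the shared value from memory
        time.sleep(0)      # invite a thread switch mid-update (widens the race)
        counter = tmp + 1  # write back; increments made in between are lost

threads = [threading.Thread(target=increment, args=(1000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # expected 2000, but typically far less: updates were lost
```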
So how do these executing things communicate with each other? There are really two main ways. One is shared memory: you write some state or value to a variable in RAM, and the other executing thing looks at it and decides what it needs to do based on that value. Shared memory is a pretty efficient means of passing data between executing things. The other way is message passing. We'll get to it in detail later, but message passing essentially means that instead of invoking the function that needs to run, you pass a message describing what needs to be done to some kind of proxy, and that thing decides: it could be invoked immediately, or done later, or whatever. The idea of message passing is that you don't directly invoke anything; you just tell something to happen. And channels, to me, are a subset of message passing where the interface is a stream: instead of having to construct and send individual messages yourself, your API is basically a file. That's pretty cool, because if you think about the Unix paradigm of everything being a file, having a file-stream API for message passing is nice.
Now, on to the models. Those were the general concepts we'll use to try to categorize all these different things in a way that's hopefully easier to grasp, so you can go deeper on your own time. Here's the link again, because the slides are pretty dense; if you haven't pulled it up, you're welcome to do so now. This table took me quite a while to put together. It tries to take everything we've talked about and classify all of the models we have. I'm going to iterate through every one of them, and you'll see the relevant table row at the top of each slide, so you can refer to it as I go along.
Threads and mutexes is the pretty fundamental concurrency pattern, and I'm sure most of you know it. I noticed yesterday that most of you are backend developers, so you're probably familiar with most of these things, and threads and mutexes especially. So again: how do we deal with the atomicity problem? How do we make sure data doesn't get corrupted? Use a mutex. Use a lock. Looking at the table row at the top: "threads and mutexes" is the name of this pattern; it uses threads; it's preemptively scheduled, because operating system threads are preemptively scheduled; and it uses shared memory, guarded with locks, as the means of communication. It's concurrent and parallel, which is what the "CP" marking means: you can run it at literally the same time on a multi-core computer. All of these models are concurrent, so the real question each time is whether the model is parallel or not. The mutex is the example mechanism for this pattern. So how do we deal with atomicity? We just lock around the shared data. Take the incrementing example again, where we want the counter to equal two at the end. Thread one increments the counter; when thread two tries to increment it, it can't access the shared data, because there's a lock around it. So when it finally reads the variable from RAM, the value is one, and when it's done, the value is two. That's what we want.
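Continuing my sketch from the atomicity section (again my own minimal example, not the speaker's slide code), a threading.Lock makes the read-modify-write effectively atomic:

```python
import threading
import time

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:             # only one thread may enter this block at a time
            tmp = counter
            time.sleep(0)      # a thread switch here is now harmless:
            counter = tmp + 1  # others block on the lock until we release it

threads = [threading.Thread(target=increment, args=(1000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 2000: the lock serializes access to the shared data
```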
Some pros and cons. The biggest advantage is that it's everywhere: on pretty much any platform you have threads and mutexes. It's a pretty common pattern that you get familiar with early on as a programmer, and that's an advantage because there are a lot of resources around it. The disadvantage is that because you have to deal with locking, you run into all the locking issues, like livelock and deadlock. If you have one thread waiting on another thread, everything stalls, your computer hangs, and you see the blue screen. That's basically deadlock.
The next model is threads and transactions. It's similar to threads and mutexes, but it works a little differently; it's actually a lot like a database transaction. This uses threads, it's preemptive, and it uses shared memory, but instead of locks it uses commit/abort semantics. STM, software transactional memory, is the paradigm here. Like I said, it's basically a database transaction. Dealing with atomicity is done by explicitly saying "this is an atomic block". When the block needs to commit, that is, when it needs to write to memory, it checks whether the data has changed underneath it. If it hasn't, it commits the data; if it has, it doesn't commit, it rolls back, and all the computation it did is reversed; nothing gets written. It can abort at any time and roll back, very much like a database transaction. So in the example, where we're incrementing a variable, we read it and surround the code that needs to be atomic with an atomic block, and then either it gets written or it doesn't. What's nice about this: with locking, if you lock around a big data structure, the other thread might have to wait, whereas in reality, if each thread was only changing a different member of that structure, you didn't need to lock at all. You've wasted resources on locking, when with STM the two threads could have proceeded at the same time and both would have committed. It's an optimistic approach, because it takes into account that you might otherwise be waiting for no reason. So one advantage is somewhat increased concurrency compared to threads and mutexes. And, though I don't have much experience with this myself, STM is said to compose better: if you have two abstractions that use threads and mutexes and you try to join them into a bigger abstraction, you may still need extra locking so they don't step on each other, whereas STM is supposed to compose more cleanly. You can look at the link on the slide if you want to get into that more. The main disadvantage of STM is that because you have rollbacks, you can't guarantee that a chunk of code will complete, because it could be rolled back. So if you have an operation that must complete no matter what, you have to account for the possibility that it won't.
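Python has no built-in STM, so purely as an illustration of the commit/abort idea, here is my own sketch with a hand-rolled version check standing in for a real STM's read/write-set tracking:

```python
import threading

class TVar:
    """A toy 'transactional' variable: optimistic reads, validated commits."""
    def __init__(self, value):
        self._value = value
        self._version = 0
        self._lock = threading.Lock()   # held only for the brief commit step

    def read(self):
        with self._lock:                # snapshot value and version together
            return self._value, self._version

    def try_commit(self, seen_version, new_value):
        with self._lock:
            if self._version != seen_version:
                return False            # data changed under us: abort, roll back
            self._value = new_value     # no conflict: commit
            self._version += 1
            return True

def atomically(tvar, fn):
    """Retry the computation until it commits without a conflict."""
    while True:
        value, version = tvar.read()    # optimistic: no lock held while computing
        if tvar.try_commit(version, fn(value)):
            return

counter = TVar(0)

def work():
    for _ in range(10_000):
        atomically(counter, lambda v: v + 1)

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter.read()[0])  # 40000: conflicting commits were retried, none lost
```

Note the optimistic flavor: the computation runs without holding any lock, and only the brief validate-and-write step is serialized; a conflict just means "recompute and try again", which is the rollback the speaker describes.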
Okay, futures and promises. These also use threads, and this is somewhat cooperative scheduling. It's a slightly funky pattern in terms of the categorizations I've tried to use; it doesn't fit in perfectly. But it's a cooperative model, it uses message passing (what you're calling the future and the promise is the message), and it's parallelizable because it uses threads. Some examples are dataflow variables in the Oz programming language. In this example you can see a pinger and a ponger: we want these two executing things to communicate back and forth. So we call pinger.future, and that future is a reference to something executing in the background. We can then do whatever we want, and when we call .value, we block until we get the value, or, if it has already completed, we just continue straight through. The promise is the thing that will be evaluated, and the future is the reference to that eventual value. It's cooperative because the caller basically pauses until the other thing is done, and it uses message passing because the reference to that value is the message. What's nice about this is that it abstracts the notion of locks away. As a programmer you don't really have to deal with locks at all; you just use futures and promises, and the question "am I going to step over myself?" is hidden inside the framework. The disadvantage is similar to locking, which I didn't actually put on the slide: if the data isn't ready when you ask for it, you block, so you're basically wasting time blocking on the future's value if it hasn't completed yet.
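The talk's pinger/ponger example is pseudocode; as a stand-in, Python's standard concurrent.futures shows the same flow of kicking work off and blocking only when the value is demanded (my translation, not the speaker's code):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def ping():
    time.sleep(0.1)              # stand-in for slow background work
    return "pong"

with ThreadPoolExecutor() as pool:
    future = pool.submit(ping)   # returns immediately: a reference to a pending value
    print("doing other work while ping runs...")
    print(future.result())       # blocks here until the value is available
```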
All right, processes and inter-process communication. When I was putting these slides together I actually thought maybe I should put this before threads and mutexes, because this is really the canonical "I want to do more things at once, so I'll just run another process of this thing". IPC uses processes, and it's preemptive, because just like threads, the operating system can run any process at any given time. I put shared memory in the table because you could use shared memory with processes, but really this is about message passing. So how do we handle the atomicity problem with processes? We just don't share memory. If we don't share anything, we don't have to think about how to keep some shared thing safe; we're all good. But then we run into the issue of how to communicate, and the answer is: pass messages. I'll take the opportunity to say that message passing with IPC is heavily used with channels; sockets and pipes and all that stuff are basically message passing through channels. I found a quote that I liked a lot; let's read it: "passing data through a channel to a receiver implies transfer of ownership of the data". It's important to grasp that, because by passing a message through a channel to someone (or with message passing in general) you're saying: this shared thing that we both need to mutate, it's now your turn. I'm done with it. That's how we handle any atomicity problem: we communicate by saying "it's your turn now, and I'm done; you can increment that variable, and I've already written it to memory", for example. Anyway, regarding processes and IPC: every time you open a shell and use a pipe, you're using inter-process communication and channels. And with some pseudocode, you could do something like the example on the slide. The internet itself is processes and IPC, one huge concurrent system, which is just kind of cool.
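The slide's pseudocode isn't reproduced in the transcript, so as an illustrative stand-in, here is a minimal Python multiprocessing sketch in the same spirit: no shared memory, just messages over a pipe, with the quote's "transfer of ownership" visible in the send/recv hand-off:

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # The child shares nothing with the parent; every value arrives as a
    # message, so there is no shared state to corrupt and nothing to lock.
    while True:
        msg = conn.recv()
        if msg is None:          # sentinel: we're done
            break
        conn.send(msg + 1)       # do the work, hand ownership of the result back
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send(41)
    print(parent_end.recv())     # 42
    parent_end.send(None)
    p.join()
```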
Some pros and cons. Again, you don't share any memory, so you can't corrupt it; that's pretty easy. And because of that you don't have to deal with locking, so you don't have any of the problems that come with locking. The biggest advantage is that you can scale horizontally: you just add boxes and boxes and boxes, more and more computers, spawn off more processes, and you're good to go. The flip side of that coin is that processes can get really expensive, whereas you could save a lot of RAM, and money, by just using threads. Also, if your program actually needs to spawn processes on the fly rather than draw from a pre-spawned pool, that can take more time, because spawning a process is more expensive than spawning a thread.
Next up is CSP: Communicating Sequential Processes. It's a paper that was written in the 1970s. Someone once told me that everything in computer science was created in the 1970s, and since then it just gets repeated. So: everything was created in the 1970s. CSP can use threads or processes. It's more of a theoretical paper, but what's heavily used nowadays and influenced by CSP is Go. If you watch the Rob Pike talk I linked earlier, and some other talks he's given, he talks a lot about how CSP influenced the creation of Go. Go uses channels really heavily, and that's what CSP talks about: using channels as the way to communicate. You can see in this example (it's similar to IPC, kind of like using a pipe) the arrow construct, which says "I send this message". The receiver listens on that channel, blocks until the message arrives, then takes the message, continues, and does whatever it needs to do. Similar to select and kqueue and epoll and all that stuff.
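Go channels are the canonical modern example; since this page has been using Python sketches, here is the same send-and-block-on-receive shape approximated with queue.Queue between threads (a rough analogy of my own, not real CSP semantics):

```python
import threading
import queue

requests = queue.Queue()   # a Queue stands in for a channel between threads
replies = queue.Queue()

def doubler():
    while True:
        n = requests.get()     # receive: blocks until a message arrives
        if n is None:          # sentinel standing in for closing the channel
            break
        replies.put(n * 2)     # send the result down the reply channel

threading.Thread(target=doubler).start()

for n in (1, 2, 3):
    requests.put(n)            # send
    print(replies.get())       # 2, 4, 6
requests.put(None)
```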
So again, the advantages of CSP: you use message passing and channels very heavily, and as we discussed with the ownership concept, there are a lot of advantages to that. Some disadvantages (and I sort of made these up): shared data can be simpler to implement sometimes, depending on the framework you're using. If you're using something that doesn't have channels as a first-class thing, and you don't need a complicated concurrency model, sometimes some shared memory and a lock is simpler, and channels would be over-engineering. There's also the issue that if you use message passing very heavily, you have to deal with message volume: either you have really, really big messages and your queue overflows, or you're passing tons and tons of messages continuously and filling it up that way. Back pressure helps solve these problems, but it's something you need to think about when you use channels.
Actors are similar to CSP. Again, this uses threads and/or processes, it's preemptive, and it uses message passing like CSP, but it's built around the concept of a mailbox. Erlang, the Monty Python guy's language, is basically an actor language. In this example we have a ping actor and a pong actor that communicate with each other. At the very bottom we start the ping-pong: calling async.ping sends a message to the actor's mailbox. The diagram shows it: this actor sends a message to the pong actor, it goes into that mailbox, and on its own timeline the pong actor processes its mailbox ("I need to ping now, I need to pong now"), and when it's done it can reply with a message or not. All the communication, all the interaction, happens through these mailboxes. That's essentially how the actor model works: don't communicate by sharing state; share state by communicating, by sending messages to mailboxes. So, back to the example: calling async.ping sends the ping message to the first actor's mailbox, and the actor does something with it on its own timeline. The way I find easiest to understand actors is that they're basically object-oriented threads. That's how I see it, and it's a nice, concise way to express concurrency. Like I said, it's similar to CSP, but the main difference, or the way to compare actors and CSP, is that CSP communicates over channels, whereas with actors the identity of the actor is the concept you use to communicate. The real difference is what the first-class means of communication is: with CSP it's channels, and with actors it's the object-oriented thread you're addressing, the actor itself.
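Here is my own minimal mailbox-style actor in Python (Erlang and actor libraries do far more, such as supervision and distribution, but the shape is the same: private state plus a mailbox drained sequentially by one thread):

```python
import threading
import queue

class Actor:
    """Private state plus a mailbox, drained sequentially by one thread."""
    def __init__(self):
        self.mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        self.mailbox.put(msg)     # the only way in: messages, never direct calls

    def _run(self):
        while True:
            self.handle(self.mailbox.get())

class Counter(Actor):
    def __init__(self):
        self.count = 0            # touched only by this actor's own thread
        super().__init__()

    def handle(self, msg):
        if msg == "inc":
            self.count += 1
        else:                     # treat anything else as a reply-to mailbox
            msg.put(self.count)

counter = Counter()
for _ in range(1000):
    counter.send("inc")

reply = queue.Queue()
counter.send(reply)               # ask for the count via a message
print(reply.get())                # 1000: all updates serialized, no locks
```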
Let's see, some of the advantages. Again, this is similar to CSP: you use message passing, through mailboxes, pretty heavily, which is an alternative to locks because of the whole ownership concept, and that's pretty good. There can also be no shared state at all: if you have VM processes that don't share memory, you get the same advantage as processes with IPC, where you just don't share state and can't really trip over each other (although you could). And again, the disadvantages are similar to CSP: a lot of messages, or really big messages, can be a problem.
so we're going to step into now we're
going to step into kind of a little bit
of a different
uh kind of track basically this is a
single threaded model
right this is co routines and um
coroutines
are you know if you use windows they're
fibers if you use ruby they're they're
fibers
um python has i forget the name but a
similar
similar concept and this
what's cool about this kind of sidetrack
is that we're going to is that it's the
cooperative state right up until now
we've done
all these preemptive kind of stuff and
the cooperative state is cool because
again this is like handing the baton
right this is fred joe and harry and all
those and
they're passing around the execution
rights and so you have this executing
thing but really it's
it's just a local stack this is just
like its own context
and you have you know the context of
what you're doing
and then you have this other concept and
it's kind of similar to an actor right
like you have the pinger and the ponger
and that's
those are the things that are in charge
of those uh things that they need to do
but they're but they're actually not an
executing theory they're not a thread
they're just
they're just a context that's saved it's
just saved local state
with the ability to stop and pause at
any given time and transfer the baton to
someone else and say now it's your turn
to do something
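In Python the construct I'd point to is the generator (my guess at the concept the speaker means; greenlets are another candidate): a paused frame that keeps its local state and resumes exactly where it stopped when the caller hands the baton back. A minimal sketch using the talk's ping/pong framing:

```python
def pinger(n):
    for i in range(n):
        received = yield f"ping {i}"   # pause here, handing control back
        print("pinger resumed with:", received)

coro = pinger(2)
print(coro.send(None))     # start it: runs until the first yield -> "ping 0"
print(coro.send("pong"))   # resume with a value; locals (i, n) are intact
try:
    coro.send("pong")      # resumes once more, then the generator finishes
except StopIteration:
    pass
```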
What's cool about this is that you might think, "okay, that's interesting, but not that amazing". But it is kind of amazing, because there's a guy who created an operating system, basically built an operating system scheduler, using nothing but coroutines. There's no evented style there or anything; it's literally just coroutines working with each other, and you get an operating system with a scheduler and all of that. I think that's pretty amazing. You can check out the links: it's a course where he shows you how to implement the whole thing. I think he's in Chicago.
Some pros and cons. Because a coroutine is really just a local context, it's really expressive in terms of state: you pause where you are, and when you continue executing, you pick up exactly where you left off. You don't need to pass variables and functions around; wherever you are, you just continue, with your local context and your variables intact. And because this is single-threaded, there's no need for locks. It's cooperative scheduling, so things won't step over each other, because only one thing is ever happening at any given time. This scales vertically: if you have a faster CPU, your single executing thread runs faster. But that's also a disadvantage, because you can't scale it horizontally; you can only have one instance of this thing running. That's why, at the top of the slide, it's marked as concurrent but not parallel. It's definitely concurrent (you can build an operating system with it), but it's not parallel; you can't run it across a multi-core computer. The biggest disadvantage, though, is that you're constrained to have all of these things work together. Because it's cooperative, one bad apple spoils the bunch: if one thing gets stuck, everything gets stuck.
All right, the next one is evented programming, which is similar to coroutines in that it's cooperative, but different. Instead of message passing, it uses shared memory as the way to communicate, though it doesn't really "communicate", because, as with coroutines, there's just one single execution running. Again, this is cooperative, and like coroutines it's not parallel: there's only one instance of this thing running. I'm sure you all know about these things, but evented programming has gotten a lot of popularity in recent years. The whole C10K problem (ten thousand concurrent connections, how do we handle that?) is solved with evented I/O and the evented programming model. There are tons of frameworks: Twisted in Python is evented; Node.js, the hippest thing right now, is evented; heck, even Ajax, going back years, is evented.
The way this is done: let me step through the common building blocks that evented programming frameworks share, and then we'll talk about the pros and cons. First, event handlers and callbacks, which are among the common ingredients most frameworks have. An event handler is basically a function or closure that gets invoked when something happens: when that something happens, we know we have to run this procedure. That's what a callback is; that's what an event handler is. Then there's usually the concept of a dispatcher, which is essentially a registry that holds all those callbacks. When something needs to get executed, the framework looks it up in that registry: oh, Justin Bieber is playing on the radio, we should change the station. That's the callback we registered, and the data associated with it is "baby, baby, baby". That's the dispatcher. Most of these frameworks also have timers, because a major rule of evented programming is that you don't block the event loop. We have only one executing thing, and with the whole "one bad apple ruins them all" concept, if one callback gets stuck, everything gets stuck. So you can't sleep. Instead of sleeping you have timeouts and timers, and when the timeout fires, the callback is executed. JavaScript, one of the first places web async programming showed up, uses timers; Twisted uses timers; all this stuff does. And then there's the event loop, which is the core of all the evented paradigms. It's basically just a loop that processes things. You can see the Redis C code right there on the slide: while not stopped, keep going and process all these things. Twisted has a main loop; every evented framework has a main loop or an event loop.
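Condensing those building blocks into runnable form (my own sketch, not code from the talk): a callback registry keyed by deadline, which is the dispatcher, and a loop that drains it, with timers standing in for sleep:

```python
import heapq
import itertools
import time

timers = []                 # the 'dispatcher': (deadline, seq, callback) entries
_seq = itertools.count()    # tie-breaker so the heap never compares callbacks

def call_later(delay, callback):
    heapq.heappush(timers, (time.monotonic() + delay, next(_seq), callback))

def run_loop():
    while timers:           # the event loop: process events until none remain
        deadline, _, callback = heapq.heappop(timers)
        time.sleep(max(0.0, deadline - time.monotonic()))
        callback()          # if this callback blocks, the whole loop is stuck

def tick():
    print("tick")
    call_later(0.5, lambda: print("tock"))   # re-arm a timer instead of sleeping

call_later(0.0, tick)
run_loop()
```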
There are two ways to implement this loop: the reactor pattern and the proactor pattern. They're similar, in that both want to avoid blocking the loop, so that it just keeps working seamlessly. The reactor pattern usually uses mechanisms like select, epoll, and kqueue to get the readiness effect: making sure that when you access the data, it won't block, because the kernel has already read all the data from the device and copied it into user space, so you can just read it off a buffer and it works. That's the reactor pattern. The proactor pattern doesn't actually check that things are ready. Instead, you have an extra callback on top of the one you've already created: the completion callback. When something happens, it's immediately executed in the background, and when that's done, the result is passed back to the main thread via the completion callback. Windows completion ports are an example; there are other examples out in the wild. But that's how you implement the main loop.
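A reactor in miniature, using Python's selectors module over select/epoll/kqueue (my own sketch; error handling and partial writes are glossed over): the kernel tells us which sockets are ready, and the loop dispatches to the callback registered for each:

```python
import selectors
import socket

sel = selectors.DefaultSelector()   # wraps select/epoll/kqueue as available

def accept(server):
    conn, _ = server.accept()       # ready, so this will not block
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)   # fd -> callback registry

def echo(conn):
    data = conn.recv(4096)          # ready, so this will not block either
    if data:
        conn.send(data)             # simplified: assumes the buffer takes it all
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("localhost", 0))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)
print("echo server on port", server.getsockname()[1])

while True:                         # the event loop
    for key, _ in sel.select():     # blocks only until something is ready
        key.data(key.fileobj)       # invoke the registered callback
```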
How much time do I have? Okay, we're close to the end. So, libuv is a pretty common framework that's used to build other frameworks. It's written in C, by some Russian guy; all the Russians are pretty hard-ass. The slide shows what libuv actually does, but to simplify, what it's doing is processing events. When it gets to certain events, it looks them up in the dispatcher, the registry of all the watchers or callbacks (different frameworks use different terminology), and then processes them in either the proactor way or the reactor way. That's really what evented programming is. The big advantage is that because we're single-threaded and using all these readiness tricks, we avoid polling. Polling is kind of evil: instead of asking "are we there yet? are we there yet? are we there yet?", you just get a notification when you're there. Look, you're here now; you want that cookie? Have the cookie. So basically you avoid polling, and you're much closer to CPU-bound than I/O-bound, which is a huge win because I/O is slow. You're not doing work you don't need to do; you do it exactly at the moment it needs to be executed.
Also, if you come from a sequential programming background, I think this is a real consideration: async programming is hard to fit in your brain when you're not used to it. It's a very different way to think about how you do things. The last point there is that it scales well versus threads, but there's a ton of research saying it actually isn't much faster than threads; it's pretty similar. The one place it does shine by comparison is when you have a lot of threads, because then you have to manage the overhead of having a lot of threads: if you have ten thousand threads, the scheduler has to do whatever it takes to run between all of them, whereas if you have just one single thing running, you don't have that overhead. That's why nginx beats Apache hands down: while Apache forks off a process or spawns a thread for every connection, nginx doesn't do that. It multiplexes; it uses select and event loops and all these things. That's why it scales so well. But again, it's not necessarily faster: if you have a low number of threads, it may be a lot simpler to use threads and not take on the complexity of your async programming logic. And that kind of leads me to the disadvantages.
Really, using callbacks is kind of hell, because you lose the context of where you came from. In sequential programming you have some function that calls another function that calls another function, all building on top of a stack, and in any function you can stop and look: where did I come from? Where was I called from? What's my caller's stack? With callbacks, you're lost. This function is invoked right now; I know this is the state of the program, and I can check all the variables and see what's what, but who called me? I just know that I'm being called right now, and that's it. So it's really hard to debug, because you lose the stack. And you can get into callback hell, which is a somewhat separate but related disadvantage of using callbacks: a callback calling a callback calling a callback, and it all gets messy. I also listed "relies heavily on state" as a con; that's what you have to do in evented programming to find your way around. And languages that have closures or first-class functions make this easier: if you want to do evented programming in a language that doesn't have that support, it's probably going to be harder to implement, because those callbacks are basically functions, basically closures. A framework that enables that is going to make it easier.
And the evented approach: like I said, nginx versus Apache, that's the C10K problem. There's a guy I found online who put up a manifesto saying it's time for the web to handle ten million concurrent connections: why are we still talking about ten thousand? It sounds kind of crazy, but there's a link you can follow, and he basically argues that instead of using the kernel as the way to check when something is ready, you should push your application into the kernel and make your application a driver. It's an interesting concept, and you can check out those links.
So: not everything is black and white. As a programmer it's easy to get into a mindset where this is X and that is Y and this is the thing that needs to happen right now, to try to categorize things. Actually, that's what this whole talk is: I'm trying to fit things into neat little boxes, to say this is black and this is white. But the reality is that the world isn't like that. It's not black and white, and as programmers we can fall into thinking "this is the way it needs to be", whereas in reality, different tools, different models, and different constraints (business constraints, for example, and things like that) can lead you to use different things at different times. Different models shine in different places. Threads and locks are good for simple use cases; they're also good for implementing other models, and a lot of other models use threads and mutexes behind the scenes. Actors and CSP: they build telecoms with Erlang, and Go is doing a ton of similar stuff. The evented model is good for things like UI work, where you're waiting for some keyboard input, for example, and you don't want to waste resources waiting on it; when the thing happens, you just handle it. GUI programming is very evented; that's why browsers and JavaScript work the way they do. So really, it's about using the best tool for the job. Given the constraints of your life and your business and whatever you're doing, you have to find what's right for you. There is no "best" or "worst" model, and you don't want to be stuck thinking there is.
And I think the most important thing of all is that these different models can really expand your brain. The point of this talk was to get you to start thinking. This isn't a comprehensive list of all the things out there, but it's enough to get your brain juices flowing, to make you say "that sounds interesting, I want to look into that more". I'm by no means the world expert on the matter, so I'd be interested in you teaching me what you know, because I'm definitely still learning this stuff myself. That's it. Thank you.
So, questions?

Audience: Something I didn't see on your list was dataflow programming.

Speaker: Oh, you did see it; it was in the futures and promises section. Dataflow programming is a way to structure your program so that it flows according to the data: you link data together and execution flows along those links, and it uses futures and promises very heavily. It's a little larger than just concurrency, because it's a broader context, but yes, it falls into that category.

Host: By the way, we did tweet a link to the slides. You weren't supposed to follow everything along as he was going; that's a lot of cross-references for you to check out later.

Speaker: Yeah, there are a lot of references and a lot of links. Maybe I can go back to the slide with the link so you can see it while they switch. You can pull that up. Any other questions?

Audience: What's the largest evented concurrent production system that you've run, and in what language?

Speaker: We don't actually run a lot of evented stuff at Engine Yard, but we do use some frameworks. For example, we have a bot similar to the kind GitHub has that does various things. How big? Not very big; it's internal systems stuff, basically.

Host: Anyone else? Okay, we're good. Thanks.

Speaker: Thank you.