Additionally, I'll show how little effort is required to turn this into a distributed application, since asynchronous, Actor-based concurrency is much more resilient to network-induced latency.
I'll use my Kontraktor Actor library (version 2.0 beta) to also illustrate how Kontraktor's "message == asynchronous method call" approach works out in "practice".
Resources:
Kontraktor lib: https://github.com/RuedigerMoeller/kontraktor.
The full source of the sample is here.
Description of the philosophers dining problem:
http://en.wikipedia.org/wiki/Dining_philosophers_problem
(I am using a non-fair solution to keep things simple.)
The Table
One actor represents the table. For each fork, a list of philosophers waiting for a "fork is free" notification is kept.
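In code, roughly like this (a sketch against Kontraktor 2.x's Actor/Promise API as I understand it; the constructor and the fixed count of five forks are my filler):

```java
import org.nustaq.kontraktor.*;
import java.util.ArrayList;

public class Table extends Actor<Table> {

    // one waiting list per fork; a non-empty list means the fork is taken,
    // and its head is the promise of the philosopher currently holding it
    ArrayList<Promise> forks[] = new ArrayList[5];

    public Table() {
        for (int i = 0; i < forks.length; i++)
            forks[i] = new ArrayList<>();
    }

    public Future $getFork(int num) {
        num %= forks.length;
        // free fork => hand out an already-fulfilled promise,
        // otherwise enqueue an unfulfilled one
        Promise result = forks[num].isEmpty() ? new Promise("void") : new Promise();
        forks[num].add(result);
        return result;
    }

    public void $returnFork(int num) {
        num %= forks.length;
        forks[num].remove(0);
        if (!forks[num].isEmpty())
            forks[num].get(0).signal(); // notify the next waiting philosopher
    }
}
```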
Short explanation: In Kontraktor, all public methods are translated behind the scenes into asynchronous messages put onto an actor's queue ("mailbox"). The actor's worker thread takes messages off the mailbox and calls the "real" implementation, so it is guaranteed that an actor executes single-threaded. To identify asynchronous calls easily, I always put a $ in front of public actor methods.
Explanation: when a fork is requested and its queue of waiting philosophers is empty, a fulfilled promise is returned (and queued, marking the fork as taken); otherwise an unfulfilled promise is appended to the queue. The caller code then looks like this:
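(A sketch; `pos` stands for the philosopher's seat position.)

```java
table.$getFork(pos).then((result, error) -> {
    // runs once the fork is free; fires immediately if the promise was already fulfilled
});
```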
Once a fork is returned, the next Promise in the ArrayList (if any) is notified. signal() is equivalent to receive("dummy", null).
Note that the "result" object is unused; I just need a signal from the returned future. That's why the fulfilled Promise gets "void" as a dummy result object (which then triggers immediate execution of the then-closure on the caller side).
Note how I can use plain single-threaded collections and data structures, since the actor is guaranteed to execute single-threaded.
The Philosopher
A philosopher has a wonderful life which can be divided into 3 states:
10 think randomTime
20 be_hungry_and_try_to_grab_2_forks
30 eat randomTime
40 goto 10
[rumors are of lost lines 15 and 35 related to indecorous social activities introduced after Epicurus happened to join the table]
The meat is in the $live method:
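Here is the whole Philosopher as a sketch; the helper names ($start, $getState, randomTime) and the timing values are mine, while the $live logic follows the description below:

```java
import org.nustaq.kontraktor.*;

public class Philosopher extends Actor<Philosopher> {

    Table table;
    int pos;              // seat position: forks pos and pos+1 belong to this philosopher
    String state = "init";

    public void $start(Table table, int pos) {
        this.table = table;
        this.pos = pos;
        self().$live();
    }

    public void $live() {
        state = "think";
        delayed(randomTime(), () -> {
            state = "hungry";
            // grab the left fork; the closure fires once it is free
            table.$getFork(pos).then((f1, e1) ->
                // then grab the right fork
                table.$getFork(pos + 1).then((f2, e2) -> {
                    state = "eat";
                    delayed(randomTime(), () -> {
                        table.$returnFork(pos);
                        table.$returnFork(pos + 1);
                        self().$live(); // enqueue the next think/eat cycle
                    });
                }));
        });
    }

    public Future<String> $getState() {
        return new Promise<>(state);
    }

    private long randomTime() {
        return (long)(100 + Math.random() * 1000); // milliseconds
    }
}
```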
After thinking, the philosopher tries to grab the first fork; the closure given to the "then" method gets executed once the fork becomes available. Once both forks have been grabbed, another "delayed" call keeps the philosopher in the "eat" state for a while; then both forks are returned and a $live message is put onto the actor's mailbox (= message queue) to start thinking again.
Remarks:
- "delayed" just schedules the given closure onto the actors mailbox with a delay.
- "self().$live()" puts a $live message onto the actors mailbox. If "this.$live()" would be called, the $live() message would be executed directly [synchronous] like a normal method. In this example this would not hurt, however in many cases this makes a difference.
- Wait-Free. The executing thread is never blocked. One could easily simulate thousands of philosophers onto a single thread
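To make the self() vs. this distinction concrete:

```java
// inside a Philosopher method:
self().$live(); // enqueues a $live message; the body runs later on the actor's thread
this.$live();   // plain synchronous Java call, executed right now on the current stack
```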
Setup and run
The main method to start the life of 5 philosophers looks like this:
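Roughly like this (a sketch; I'm assuming Hoarde's constructor takes a count plus the actor class and that each(..) hands every actor its index; startReportingThread follows below):

```java
import org.nustaq.kontraktor.*;
import org.nustaq.kontraktor.util.Hoarde;

public class DiningPhilosophers {
    public static void main(String[] args) {
        Table table = Actors.AsActor(Table.class);
        Hoarde<Philosopher> philosophers =
            new Hoarde<>(5, Philosopher.class)
                .each((phil, index) -> phil.$start(table, index));
        startReportingThread(philosophers);
    }
}
```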
Note that Hoarde is a utility to create and manage groups of actors. "startReportingThread" just starts a thread dumping the state of all philosophers:
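(Continuing in the same class; a sketch assuming Hoarde.map(..) collects one future per actor and Actors.yield(..) completes once all of them have.)

```java
static void startReportingThread(Hoarde<Philosopher> philosophers) {
    new Thread(() -> {
        while (true) {
            try { Thread.sleep(1000); } catch (InterruptedException e) { return; }
            // ask every philosopher for its state; print once all answers have arrived
            Actors.yield(philosophers.map((phil, index) -> phil.$getState()))
                  .then((futures, error) -> {
                      StringBuilder line = new StringBuilder();
                      for (Future f : futures)
                          line.append(f.getResult()).append("  ");
                      System.out.println(line);
                  });
        }
    }).start();
}
```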
"Is this still Java?" .. probably weird looking stuff for the untrained eye ..
(note Philosopher.$getState is async and yield() is used to wait for all futures to complete before executing the then() closure printing the state)
BTW: spacing is somewhat inconsistent as I started to reformat code partially during edit .. but won't do screenshots again ..
Yay, done!
Let's make it a distributed application
The killer feature of Actor-based, asynchronous, lock/wait-free programming is how easily it distributes. A solution built on blocking mutexes would need a rewrite or would have to rely on synchronous remote calls, where the bigger the network latency, the lower the throughput. Actors do not suffer from that.
We can just change the startup to make the table run remotely:
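A sketch of both sides; I'm assuming the 2.0 beta's TCP remoting helpers are named roughly TCPActorServer.Publish / TCPActorClient.Connect (exact class names, packages, and signatures may differ, and the port number is arbitrary):

```java
import org.nustaq.kontraktor.*;
import org.nustaq.kontraktor.util.Hoarde;
// assumed remoting classes; package/name may differ in the 2.0 beta:
// import org.nustaq.kontraktor.remoting.tcp.*;

public class TableServer {
    public static void main(String[] args) {
        // publish the table actor on a TCP port
        Table table = Actors.AsActor(Table.class);
        TCPActorServer.Publish(table, 6789);
    }
}

class PhilosopherClient {
    public static void main(String[] args) {
        // connect to the remote table; Connect yields a remote proxy of the Table actor
        TCPActorClient.Connect(Table.class, "localhost", 6789)
            .then((table, error) ->
                new Hoarde<>(5, Philosopher.class)
                    .each((phil, index) -> phil.$start(table, index)));
    }
}
```

The Philosopher code itself is untouched: $getFork already returns a future, so added network latency merely delays the callbacks; no thread ever blocks waiting for a remote reply.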
Is this cool or what? I can now feed hoardes of philosophers from a single table server.