Erlang: client/server
How do they deal with concurrent requests?
Actually, they don't. Like OS processes, each Erlang process has a single thread of execution: it receives one message at a time, works on it, and then restarts the original handling routine.
The catch is that each process strictly contains its own data structures, so two different processes running concurrently can never create a race condition on a shared list, for example. Even when serving multiple clients, a single process acts as a Facade that receives all requests.
Enough, show us the code!
Let's build a very simple guestbook: people can leave messages on it, which must follow the insertion order, and can read the last message at any time (for simplicity's sake in this example). In computer science terms, we're actually building an insertion-only stack.
If we write a test, we can use a separate process for the server and simulate multiple requests from a single client (we will see later how to write tests with multiple clients, which require synchronization).
client_server_test() ->
    Server = new_book(),
    ok = new_post(Server, "Hello, world"),
    ok = new_post(Server, "Greetings"),
    Post = last_post(Server),
    ?assertEqual("Greetings", Post).
Server contains the pid of the new process, while new_post/2 and last_post/1 are primitives, called on the client side, that send a request to Server and wait for a response. The response is ok for void operations, while it is a string for last_post/1. We can distinguish variables from atoms by their initial letter: variables like Server and Post start with an uppercase letter, while atoms like ok start with a lowercase one.
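Assuming the module is compiled and loaded, a quick shell session makes the convention concrete (the pid shown is illustrative):

1> Server = new_book().               % Server is a variable: uppercase initial
<0.84.0>
2> new_post(Server, "Hello, world").  % ok is an atom: lowercase
ok
3> last_post(Server).
"Hello, world"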
Here's how these primitives work:
new_post(Server, Text) ->
    call(Server, Text).

last_post(Server) ->
    call(Server, last).

call(Server, Request) ->
    Server ! {self(), request, Request},
    receive
        {reply, Reply} -> Reply
    end.
I follow the suggestion of the Erlang Programming book and extract requests into a call/2 function. The same could be done for server replies (shown later in this article).
The interaction pattern is synchronous: after sending a message, we wait for a new one to come back. We have to attach the current pid to the request so that the server knows whom to send the reply back to. As always, atoms like request and reply are used for matching the tuples contained in the messages.
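One caveat with this blocking receive: if the server process dies between the send and the reply, call/2 hangs forever. A common refinement, not part of the code in this article but worth sketching, is an after clause that bounds the wait (the 5000 ms limit is an arbitrary choice):

%% A sketch of call/2 with a timeout.
call(Server, Request) ->
    Server ! {self(), request, Request},
    receive
        {reply, Reply} -> Reply
    after 5000 ->
        {error, timeout}
    end.

Clients would then have to pattern match on {error, timeout} as well as on the normal replies.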
As I anticipated, the server side deals with one request at a time:
new_book() ->
    spawn(fun() -> book() end).

book() ->
    book([]).

book(Posts) ->
    receive
        %% The clause matching the atom 'last' must come before the
        %% catch-all NewPost pattern, which would otherwise match it too.
        {Sender, request, last} ->
            Sender ! {reply, head(Posts)},
            NewPosts = Posts;
        {Sender, request, NewPost} ->
            Sender ! {reply, ok},
            NewPosts = lists:append([NewPost], Posts)
    end,
    book(NewPosts).

head([]) -> 'No messages yet.';
head([Head|_Tail]) -> Head.
book/1 receives a single message, acts on it, and restarts the loop. I want you to notice some peculiar differences from imperative programming.
The while(true) cycle is implemented via tail-call optimization: book/1 calls book(NewPosts) when it has handled a message, and because the recursive call is the last expression evaluated, these calls can continue indefinitely without exhausting the stack.
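To see the pattern in isolation, here is a minimal counter loop, a sketch unrelated to the guestbook: the recursive call is the last expression of each clause, so the virtual machine reuses the current stack frame instead of growing the stack.

%% A tail-recursive server loop: no stack growth, however many messages arrive.
loop(N) ->
    receive
        increment ->
            loop(N + 1);
        {Sender, current} ->
            Sender ! {value, N},
            loop(N)
    end.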
Data structures are immutable: we have to create a NewPosts variable if we want to add something to the list representing the server's state.
In an imperative "new post" handler, we would issue a synchronized operation on Posts and only then answer ok. Here we have the freedom to work on Posts as much as we want, because no new message will be delivered to the process until we ask for one with receive. In a way, the default is to be already synchronized on the object representing the process: it's very hard to introduce a race condition in server code within these constructs.
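We can convince ourselves of this with a quick experiment (a hypothetical test, not in the repository): two clients post concurrently, and the server's mailbox serializes their requests. Which post ends up last depends on scheduling, but the list itself is never corrupted.

concurrent_clients_test() ->
    Server = new_book(),
    Self = self(),
    %% Two concurrent clients, each posting one message.
    [spawn(fun() ->
               ok = new_post(Server, Text),
               Self ! done
           end) || Text <- ["From Alice", "From Bob"]],
    %% Wait for both clients before reading the last post.
    receive done -> ok end,
    receive done -> ok end,
    Post = last_post(Server),
    ?assert(lists:member(Post, ["From Alice", "From Bob"])).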
Note also the pairing of server identifiers and primitives that take the identifier as their first argument. It's like having a Server object: this duality between objects and functions can be seen very often in Erlang (e.g. in the lists module), if you have a habit of thinking of the world in object terms.
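For instance, reading lists functions with an object-oriented eye (just an analogy; the module is of course purely functional):

Posts = ["Greetings", "Hello, world"],
Reversed = lists:reverse(Posts),              % think: Posts.reverse()
Extended = lists:append(Posts, ["Goodbye"]).  % think: Posts.append(["Goodbye"])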
Conclusions
I leave the command for stopping the server as an exercise for the reader; you can start from the code in the Github repository for this series.
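If you want a hint, one possible shape is a sketch like the following, assuming we extend the protocol with a stop atom: the matching clause replies and simply does not recurse, so the process terminates normally. Note that the recursive call moves inside each clause, and that the stop and last clauses must come before the catch-all NewPost pattern.

%% Hypothetical client primitive: stop_book(Server) -> call(Server, stop).
book(Posts) ->
    receive
        {Sender, request, stop} ->
            Sender ! {reply, ok};              % no recursive call: the process ends
        {Sender, request, last} ->
            Sender ! {reply, head(Posts)},
            book(Posts);
        {Sender, request, NewPost} ->
            Sender ! {reply, ok},
            book(lists:append([NewPost], Posts))
    end.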
But we are still dealing with a single request at a time. It's a low-performance solution, so next time we will try to do something better: the equivalent of a multithreaded server.