Rust-Based Load Balancing Proxy Server With Async I/O


In my previous Rust post, I built a simple echo server that spun up a whole new thread for each connection. In this one, I want to do this in an async manner. Rust doesn’t have the notion of async/await, or anything similar to Go's green threads (it seems that it used to, and they were removed as a costly abstraction for a low-level systems language).

I’m going to use Tokio.rs to do that, but sadly enough, the example on its front page is an async echo server. That kinda killed the mood for me, since I wanted to actually implement it from scratch. Because of that, I decided to do something different and build an async, Rust-based, TCP-level proxy server.

Expected usage: cargo run live-test.ravendb.net:80 localhost:8080.

This should print the port that this proxy runs on and then route the connection to one of those endpoints.

This led to something pretty strange. Check out the following code:
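The snippet in question, as a self-contained sketch (my reconstruction: the stand-in bind function below mimics the shape of tokio_core's TcpListener::bind, which takes exactly a &SocketAddr rather than a generic address, and the address string is made up for the example):

```rust
use std::net::SocketAddr;

// Stand-in with the same shape as tokio_core's TcpListener::bind,
// which accepts exactly a &SocketAddr.
fn bind(addr: &SocketAddr) -> u16 {
    addr.port()
}

fn main() {
    // `addr` carries no type annotation; the compiler infers
    // std::net::SocketAddr purely from the bind() call below.
    let addr = "127.0.0.1:8080".parse().unwrap();
    let port = bind(&addr);
    println!("bound to port {}", port); // prints "bound to port 8080"
}
```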


Can you figure out what the type of addr is? It is inferred, but from what? The addr definition line does not have enough detail to figure it out. Therefore, the compiler actually looks further down and sees that we are passing it to the bind() method, which takes a std::net::SocketAddr value. So, it figures out that the value must be a std::net::SocketAddr.

This seems to be utterly backward and fragile to me. For example, I added this:


The compiler was very upset with me:


I’m not used to a variable's type being impacted by its usage. It seems very odd and awkward, and it makes it pretty hard to figure out the type of a variable just by looking at the code. There also isn’t an easy way to get it, short of causing an intentional compiler error that reveals those details.
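For reference, the trick I'm alluding to (a common one, not code from the original post) is to pin the variable to a deliberately wrong type and read the real type out of the error:

```rust
let addr = "127.0.0.1:8080".parse().unwrap();
let sock = TcpListener::bind(&addr, &handle).unwrap();
// Deliberately wrong: this line does not compile, but the resulting
// mismatched-types error (E0308) names the inferred type, something
// like: expected `()`, found `std::net::SocketAddr`
let _: () = addr;
```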

The final code looks like this:

extern crate futures;
extern crate tokio_core;
extern crate rand;

use rand::Rng;
use std::env;
use futures::{Future, Stream};
use tokio_core::io::{copy, Io};
use tokio_core::net::{TcpListener, TcpStream};
use tokio_core::reactor::Core;

fn main() {
    let mut core = Core::new().unwrap();
    let handle = core.handle();
    // the listen address was elided in the original listing;
    // 127.0.0.1:0 (any free port) is an assumption
    let addr = "127.0.0.1:0".parse().unwrap();
    let sock = TcpListener::bind(&addr, &handle).unwrap();
    println!("Listening on {}", sock.local_addr().unwrap());

    // parse the user's input: each argument is a host:port pair
    let mut urls: Vec<std::net::SocketAddr> = Vec::new();
    for url in env::args().skip(1) {
        let parts: Vec<&str> = url.split(":").collect();
        print!("{} was resolved to: ", parts[0]);
        for mut host in std::net::lookup_host(parts[0]).unwrap() {
            print!(" {},", host);
            host.set_port(parts[1].parse().unwrap());
            urls.push(host);
        }
        println!();
    }

    let server = sock.incoming().for_each(|(client_stream, remote_addr)| {
        // pick a random backend to load-balance to
        let index = rand::thread_rng().gen_range(0, urls.len());
        println!("{} connected and will be forwarded to {}",
                 &remote_addr, &urls[index]);
        let (client_read, client_write) = client_stream.split();
        let send_data = TcpStream::connect(&urls[index], &handle)
            .and_then(|server_stream| {
                // cross-wire the two connections
                let (server_read, server_write) = server_stream.split();
                let client_to_server = copy(client_read, server_write);
                let server_to_client = copy(server_read, client_write);
                client_to_server.join(server_to_client)
            })
            // erase the types
            .map(|(_client_to_server, _server_to_client)| {})
            .map_err(|_err| {});
        handle.spawn(send_data);
        Ok(())
    });
    core.run(server).unwrap();
}

There is a lot going on here, even though the end result is fairly simple.

The externs and use declarations at the top are really not interesting, and a good chunk of the code is just about parsing the user’s input; the fun stuff is the event loop and the server.

I use fun cautiously; it wasn’t very fun to work with, to be honest. First, I set up the event loop and grab its handle, and then bind a TCP listener to it.

The bulk of the code builds the server (more on that later), and the final line actually runs the event loop.

The crazy stuff is all in the server handling. The incoming().for_each() call will invoke the closure for each connected client, passing the stream and the remote IP. I then split the TCP stream into a read half and a write half, and select a node to load-balance to.

Following that, I’m doing an async connect to that node, and if it is successful, I’m splitting the server stream as well and cross-wiring the halves using the copy calls. Basically, I'm attaching the input of each side to the output of the other. Finally, I’m joining the two together, so we’ll have a future that will only be done when both sending and receiving are done, and then I’m handing it back to the event loop.

Note that when I’m accepting a new TCP connection, I’m not actually pausing to connect to the remote server. Instead, I’m going to set up the call and then pass the next stage to the event loop (via the spawn method).

This was crazy hard to do and generated a lot of compilation errors along the way. Why? See the spot in the listing where we erase the types?

The type of send_data without those map calls is something like Future<Result<(u64,u64), Error>>. However, the map and map_err turn it into a future that yields nothing. If you don’t do that? Well, the compiler errors are generally very good, but it seems that inference can take you into la-la land. See this compiler error. That reminds me of trying to make sense of C++ template errors in 1999.
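The futures 0.1 combinators mirror the ones on Result, so the erasure can be illustrated with plain std types (my analogy, not code from the post): map throws away the (u64, u64) byte counts and map_err throws away the error, leaving only "done or not."

```rust
use std::io::{Error, ErrorKind};

fn main() {
    // a stand-in for what the joined copy futures resolve to:
    // (bytes client->server, bytes server->client), or an I/O error
    let r: Result<(u64, u64), Error> = Ok((10, 20));

    // the same erasure the proxy performs with map/map_err:
    // discard both payloads so only success/failure remains
    let erased: Result<(), ()> = r.map(|_counts| ()).map_err(|_err| ());
    assert_eq!(erased, Ok(()));

    let failed: Result<(u64, u64), Error> =
        Err(Error::new(ErrorKind::Other, "boom"));
    assert_eq!(failed.map(|_| ()).map_err(|_| ()), Err(()));
}
```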


Now, here is the definition of the spawn method:
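From memory (so double-check against the tokio_core docs for your version), Handle::spawn is declared roughly like this:

```rust
// tokio_core::reactor::Handle
pub fn spawn<F>(&self, f: F)
    where F: Future<Item = (), Error = ()> + 'static
```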


I didn’t understand this syntax at all. Future is a trait, and it has associated types, but I was used to thinking about generics as only the stuff inside the <>, so that was pretty confusing.

Basically, the problem was that I was passing a future that returned values, while the spawn method expects one that returns none.

I also tried to change the and_then to just then, but at that point I got:


At which point I just quit.

However, just looking at the code on its own, it is quite nicely done, and it expresses exactly what I want it to. My problem is that every single change that I made had repercussions down the line that were hard for me to predict.


Published at DZone with permission of
