
Events: Boost.Asio

Introduction

While Boost.Coroutine has grown into a general-purpose coroutine implementation, its original design goal was to help write asynchronous applications based on Boost.Asio.

For a long time, threads have been considered a bad choice for building high concurrency servers capable of handling a high number of clients at the same time. Thread switching overhead, lock contention, system limits on the number of threads, and the inherent difficulty of writing scalable, highly threaded applications have all been cited as reasons to prefer an event-driven model based on a dispatch loop. This was the main motivation for writing Boost.Asio. See [Ousterhout95] and [Kegel99] for reference.

Many researchers today believe (see [Adya02] and [VonBehren03] for the best known examples) that the best way to write high concurrency servers is to use a cooperative task model with an underlying scheduler that uses asynchronous dispatching. This gives the performance of event-driven designs without the need to divide the processing of a job into a myriad of related callbacks.

Boost.Coroutine fits the role of the cooperative task perfectly, while Boost.Asio can be used seamlessly as a coroutine scheduler.

Usage

A coroutine cannot currently be used directly as an asio::io_service callback, because Asio requires all callback objects to be copyable. In the future Asio might relax this requirement and require only movability. In the meantime, shared_coroutine can be used as a workaround.
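The workaround can be sketched as follows. This is only an illustration, not code from the library's examples: it assumes that shared_coroutine, like coroutine, is callable with its declared signature, and that its copyability is what makes it acceptable to io_service::post.

    // Sketch: wrap the coroutine body in a copyable shared_coroutine
    // so that it can be posted to an io_service.
    void body(coro::shared_coroutine<void()>::self& self) {
      // ... coroutine code that may wait on futures ...
    }

    boost::asio::io_service demux;
    coro::shared_coroutine<void()> task(body);
    demux.post(task);   // ok: shared_coroutine is copyable
    demux.run();

A plain coroutine in place of task would fail to compile here, because post would try to copy it.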

Asynchronous operations can be waited for using a future object. For example:

void foo(coro::coroutine<void()>::self& self) {
  typedef boost::asio::ip::tcp::socket socket_type;
  typedef boost::asio::error error_type;

  char token[1024];
  socket_type source;
  coro::future<error_type, std::size_t> read_result(self);
  ...
  boost::asio::async_read(source,
                          boost::asio::buffer(token, 1024),
                          coro::make_callback(read_result));
  ...
  coro::wait(read_result);
  if(read_result->get<0>()) {
    std::cout << "Error!\n";
  } else {
    std::cout << "Read " << read_result->get<1>() << " bytes";
  }
}

wait will cause the coroutine to be rescheduled in the asio::io_service when the read completes.

There is no function to simply yield the CPU and be executed at a later time, but the following code is equivalent. Let demux be an instance of asio::io_service:

coro::future<> dummy(self);
demux.post(coro::make_callback(dummy));
coro::wait(dummy); // the current coroutine is rescheduled
...

This will cause the current coroutine to be rescheduled by the io_service. Notice that simply invoking self.yield will not work, as the io_service will not automatically reschedule the coroutine. Also, it is not possible to yield while there are pending operations.
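The idiom above can be packaged in a small helper. The function name yield_to_io_service is hypothetical (it is not part of the library); the body is exactly the documented post-and-wait pattern:

    // Hypothetical helper: yields the CPU by posting an empty future's
    // callback to the io_service and waiting for it, so the coroutine is
    // rescheduled on the next pass of the dispatch loop.
    void yield_to_io_service(coro::coroutine<void()>::self& self,
                             boost::asio::io_service& demux) {
      coro::future<> dummy(self);
      demux.post(coro::make_callback(dummy));
      coro::wait(dummy); // the current coroutine is rescheduled
    }

Remember the restriction stated above: this helper must not be called while other operations are pending on futures of the same coroutine.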

For a more complex example see token_passing.cpp.

Conclusions

Boost.Coroutine, used in conjunction with Boost.Asio, can greatly simplify the design of event-driven network applications. If you plan to use multiple threads, be sure to read about the thread safety guarantees of Boost.Coroutine.

Copyright 2006 Giovanni P. Deretta
