Multitasking

Coroutines can be used to implement multitasking in a very simple and efficient way. Each coroutine represents a job; a scheduler is responsible for executing each job serially in FIFO order. Every job is responsible for yielding control to the scheduler once in a while. We use a coroutine<void()> to represent a job:

typedef coro::coroutine<void()> job_type;

The scheduler is just a simple wrapper around a std::queue:

#include <queue>

class scheduler {
public:
  void add(job_type job) {
    m_queue.push(job);
  }

  job_type& current() {
    return m_queue.front();
  }

  void run() {
    while(!m_queue.empty()) {
      current()(std::nothrow); // resume the front job; do not throw when it exits
      if(current())            // still alive? reschedule it at the back
        add(current());
      m_queue.pop();
    }
  }
private:
  std::queue<job_type> m_queue;
};

When a job yields, it is rescheduled unless it has exited. Notice the use of std::nothrow to correctly handle exiting tasks. For simplicity, we declare a global scheduler object:

scheduler global_scheduler;

Here is a generic job body:

void printer(job_type::self& self, std::string name, int iterations) {
  while(iterations--) {
    std::cout << name << " is running, " << iterations << " iterations left\n";
    self.yield();
  }
  self.exit();
}

Notice that exit() is superfluous in this case: when a void function returns, it is as if it had exited. (This relies on std::nothrow being used; otherwise operator() would throw an exception when the coroutine exits.) Let's give the scheduler some jobs:

...
global_scheduler.add(boost::bind(printer, _1, "first", 10));
global_scheduler.add(boost::bind(printer, _1, "second", 5));
global_scheduler.add(boost::bind(printer, _1, "third", 3));
...

Calling global_scheduler.run(); will print:

first is running, 9 iterations left
second is running, 4 iterations left
third is running, 2 iterations left
first is running, 8 iterations left
second is running, 3 iterations left
third is running, 1 iterations left
first is running, 7 iterations left
second is running, 2 iterations left
third is running, 0 iterations left
first is running, 6 iterations left
second is running, 1 iterations left
first is running, 5 iterations left
second is running, 0 iterations left
first is running, 4 iterations left
first is running, 3 iterations left
first is running, 2 iterations left
first is running, 1 iterations left
first is running, 0 iterations left

Multitasking versus multithreading

What we have seen so far is a cooperative implementation of multitasking, that is, each task must explicitly yield control to the central scheduler to allow the next task to run. This means that a misbehaving task that never yields control can starve all other tasks.

Multithreading, on the other hand, at least in most implementations, implies preemptive multitasking: each task is allowed to run for a certain amount of time, called a time-slice. When the time-slice is over, the task is forcibly interrupted and the scheduler selects the next task. If the interrupted task was manipulating some shared resource, that resource can be left in an undefined state. A task cannot control when it is preempted, so it must be pessimistic and lock all shared resources that it uses. As any programmer who has had to work with heavily threaded applications knows, dealing with complex locking is not a trivial task. In addition, both locking and thread switching impose some overhead.

Cooperative multitasking does not suffer from these problems, as long as a task never yields while manipulating shared state.

This does not mean that multithreading does not have its place; there are scenarios where true concurrency and preemption are genuinely required.

Unfortunately, threads are often abused for general multitasking, where preemption is a burden instead of a benefit.

Cooperative multitasking implemented with coroutines is often a better choice.

Conclusions

Coroutines can act as an extremely lightweight multitasking abstraction. Not only do they scale much better than threads, they are also much simpler to use because they require no locking.

The simple solution presented above has a fundamental problem: if a task blocks waiting for I/O, all tasks are blocked. This can be easily solved with asynchronous functions, as will be explained in an advanced section. The next section shows a simplified solution.

Copyright 2006 Giovanni P. Deretta
