The basic idea is to rigorously define, in terms of sets and functions, the abstract data types (ADTs) that are being manipulated. Then it becomes much easier to describe the classes that we build by simply specifying the ADTs that they model.
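As a sketch of what a sets-and-functions definition might look like, a time-point ADT could be specified as a set of values together with the signatures of its operations (the symbols T for time-points and D for durations are illustrative, not from any particular specification):

$$\begin{aligned}
\mathrm{add} &: T \times D \to T \\
\mathrm{diff} &: T \times T \to D \\
\mathrm{before} &: T \times T \to \{\text{true},\text{false}\}
\end{aligned}$$

A class then "models" this ADT by providing each operation with a matching signature.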
It is crucial to get these definitions clear, because it is very easy to get different ADTs mixed up, which leads to a muddled idea of what a class is supposed to do. For example, TAI (International Atomic Time) and UTC look very much alike, but they are in fact two different ADTs. The time returned by most system clocks (e.g. Windows) purports to be UTC, but in practice it is usually yet a third type.
That's all well and good, except for one little detail: Most folks just want to know "what time it is" and don't care about TAI vs. UTC vs. time_t vs. SYSTEMTIME. So what is a user to do? Well, remember that a "clock object" (i.e. time source) is quite different from a type. The simplest thing to do is wrap the "get current time and date" system call in a clock class that returns a time-point of the same type as the system clock. However, there is nothing to stop us from building smarter clock classes, so that - for example - the clock translates time_t into a true UTC or TAI value. If a high-frequency timer is available on the machine (which is actually true on many PCs), the clock could even provide much higher resolution than the standard system clock.
My intuition at this point is that the main thing we'll have to make clear to the average user is the difference between TAI, UTC, and local time. Each is appropriate for different purposes, and each has its own quirks, so it's important that we help the users make the correct choice for their applications.
This page is going to explain all this in detail.