The initial version of the arbitrary precision arithmetic library has three components:
basic_natural
basic_integer
basic_rational
All of these are class templates with two template parameters: a policy class which encapsulates the number system implementation and an allocator.
The library should provide a clean framework to easily add more components, such as floating point, complex numbers, vectors, matrices, polynomials...
An is-a hierarchy exists for natural numbers, integers and rationals. Put slightly differently (closer to the preferred implementation): a natural number may also be viewed as an integer or a rational; an integer may also be viewed as a rational. So
rational a = b // where b is of type integer
just works. One could say that there is an implicit widening conversion (which does not mean it automatically requires an operation -- viewing the integer as a rational at assignment is sufficient here).
Narrowing conversion should always be explicitly requested by the user:
integer a = convert(b) // where b is of type rational
// natural a = b       // ERROR!
Using types with different policies requires an explicit conversion as well, while types with different allocators are compatible.
The framework should allow new components to implement explicit conversion even if otherwise incompatible.
The library models a truly algebraic syntax:
rational c = a / b // where a and b are integers, a = 1, b = 2
integer  d = a / b // c is one half, d is zero
The unintuitive, machine-centric conversion behaviour of most type-safe programming languages can be enforced like this:
rational e = convert(a / b) // e is zero
// analogy:
// float x = 1/2; // x == 0.0f
This paper can serve as a basis for further parts of the interface:
Basic algorithms (like the ones D. Knuth describes, as used in the bigint in the sandbox CVS) will do, at least for an initial version. (Algorithms with flatter asymptotic complexity tend to have higher constant factors; maybe further parametrization is a good idea?)
Expression templates (which we need to implement the conversion rules above) can also be used to reduce temporaries, exploit their destructibility within algorithms, and reduce allocation costs. (I'll have to add more details here -- there is a draft design, but it will take me some time to communicate it properly.)
Memory management is done on a worst-case basis:
// unsigned arithmetic
r = a + b | max_digits(r) = max(digits(a), digits(b)) + 1
r = a - b | max_digits(r) = digits(a)
r = a * b | max_digits(r) = digits(a) + digits(b)
r = a / b | max_digits(r) = digits(a) - digits(b) + 1
It could be a good idea to always allocate one extra digit for normalization. Components (such as numerator/denominator for rationals or exponent/mantissa for floating-point real numbers) should be allocated in one shot.