I would like to discuss a complex issue with you that could take several rounds of discussion, possibly also in other places with other people.
Here's an idea for writing and reading arbitrarily long numbers, generally integers, in binary form:
Write the number in such a way that, when reading it, you can do without knowing its maximum length in advance.
Read the number limited only by available computational resources, not by a predefined maximum.
Why so, what is the gain? Just one crude example: the use of numbers in network addresses.
Node addresses can then be sized to fit local needs, without constraining the unlimited expansion of the network.
So here is an algorithm for this writing/reading:
Binary Indicators-Termination Sequence: a pattern for writing down and recognizing binary integers that consist of an arbitrary number of bits.
A binary indicators-termination sequence (BIT sequence) for a given unsigned integer is the following binary sequence:
– the bits, in reverse order, of a binary sequence formed for the given unsigned integer by the following rules:
1) Each component number is in binary form and contains only its significant bits, ordered by significance ascending (least significant bit first).
2) The given unsigned integer is a component of the sequence.
3) If the given unsigned integer is not zero, a single bit set to zero precedes that number in the sequence. (termination)
4) Immediately after each component number whose bit count is greater than one, the number one less than that count follows as a component. (indicators)
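The four rules above can be sketched in Python as follows. This is only an illustration under my reading of the rules: the function names (bit_encode, bit_decode) and the string-of-'0'/'1' representation are my own choices, not part of any existing library. Encoding builds the component sequence (terminator, the number, then the chain of indicators, each LSB-first) and reverses it; decoding reads indicators of growing length until it meets the terminating zero bit.

```python
def bit_encode(n: int) -> str:
    """Encode an unsigned integer as a BIT sequence (string of '0'/'1')."""
    if n < 0:
        raise ValueError("only unsigned integers are supported")
    if n == 0:
        return "0"                      # zero: a single bit, no terminator
    seq = "0"                           # rule 3: terminator before the number
    seq += bin(n)[2:][::-1]             # rule 2: the number, LSB first (rule 1)
    comp = n
    while comp.bit_length() > 1:        # rule 4: append indicator = bit count - 1
        comp = comp.bit_length() - 1
        seq += bin(comp)[2:][::-1]      # indicator, LSB first (rule 1)
    return seq[::-1]                    # reverse the whole sequence

def bit_decode(bits: str, pos: int = 0) -> tuple[int, int]:
    """Decode one BIT sequence starting at pos; return (value, next_pos)."""
    length = 1                          # the innermost indicator is 1 bit long
    while True:
        value = int(bits[pos:pos + length], 2)
        pos += length
        if length == 1 and value == 0:  # a lone 0 bit encodes zero
            return 0, pos
        if bits[pos] == "0":            # terminator reached: value is the number
            return value, pos + 1
        length = value + 1              # value was an indicator: next length
```

Worked by hand for 5 (binary 101): the pre-reversal sequence is terminator 0, then 101 reversed (101), then indicator 2 as 01, then indicator 1 as 1, giving 0101011; reversed, the BIT sequence is 1101010. Because each code carries its own termination, concatenated codes can be read back from a single stream without any external length field, e.g. bit_encode(5) + bit_encode(0) + bit_encode(2) decodes to 5, 0, 2 in sequence, which is the property the network-addressing example relies on.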
I know there are numerous software libraries for arbitrary-precision arithmetic; I don't know whether any of them implement a universal encoding like this.
Would you encourage the use of arbitrary-length numbers in any area of implementation? And would you advocate software development for that purpose?
Processor architecture: CPU ALUs
Network architecture: Addressing