
Cantor, Countability and the Limits of Significant Properties Schemes

In 1874, Georg Cantor described countable and uncountable sets.

Why does this matter?

Nature appears to operate on uncountable sets. Take a single blue photon: its exact wavelength is a real number, and writing that number out in full would, in general, require an infinite string of digits. Even a computer the size of the universe could not record it exactly.
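
To make this concrete, here is a minimal Python sketch (the 450.2 nm wavelength is purely illustrative): a 64-bit float can only ever hold one of countably many nearby rational values, never an arbitrary real one.

    from decimal import Decimal
    from fractions import Fraction

    # A 'blue' wavelength in nanometres (an illustrative value, not a measurement).
    wavelength_nm = 450.2

    # A 64-bit float cannot hold 450.2 exactly; it stores the nearest
    # representable binary fraction. These calls recover what was actually stored.
    print(Fraction(wavelength_nm))   # the stored value, as an exact ratio of integers
    print(Decimal(wavelength_nm))    # its exact decimal expansion -- not quite 450.2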

How does this apply here?

Because all forms of communication, including digital media and formats, use countable sets to describe uncountable ones.
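
Put in Cantor's terms (this is just the standard statement of his result, nothing specific to digital media):

    \text{Files} \;=\; \bigcup_{n \ge 0} \{0,1\}^{n} \quad \text{is countable, while} \quad |\mathbb{R}| \;>\; |\mathbb{N}| \;=\; |\text{Files}|,

    \text{so no encoding } e : \mathbb{R} \to \text{Files} \text{ can give every real value its own distinct file.}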

The most basic example of this is the familiar dilemma over the choice of resolution when digitising. Every choice reflects a set of value judgements and envisaged use cases. The choices that suit reading the text are different from those that suit studying the ink, the paper fibres, or the isotopes…
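
Here is a toy Python sketch of that dilemma (the reflectance value and bit depths are hypothetical): the same 'analogue' measurement survives digitisation differently depending on the resolution chosen, and no choice recovers it exactly.

    # Quantising the same 'analogue' value at different resolutions.
    # The reflectance value below is hypothetical, purely for illustration.

    def quantise(value, bits):
        """Map a value in [0.0, 1.0) onto an integer scale with 2**bits levels."""
        levels = 2 ** bits
        return min(int(value * levels), levels - 1)

    reflectance = 0.6372194  # 'analogue' reflectance of one spot on a page

    for bits in (1, 8, 16):
        q = quantise(reflectance, bits)
        recovered = q / 2 ** bits    # the value the digitised copy actually preserves
        print(f"{bits:2d}-bit: stored {q:6d}, recovers {recovered:.7f}")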

But the same applies, more subtly, elsewhere. Any scientific dataset intended to describe some aspect of the natural world faces the same dilemma, whether it was produced by digitising something or generated without any kind of digitisation at all.

This already points towards one factor: the scales and resolutions that underlie data formats are in themselves value judgements about which properties of the original are the most significant.

This implies something more.

When moving data between representations, it is common to adopt a significant properties approach, where the entire content is broken down into a new model so that the information content of the two representations can be compared.

The eXtensible Characterisation Languages (XCL), developed within the Planets project, are an example of this approach: files are described in a format-independent model so that their information content before and after migration can be compared.
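
XCL itself is an XML-based characterisation language, so the following is only a much-simplified Python sketch of the general idea, with invented property names and values rather than XCL's actual schema:

    # Characterise the source and the migrated copy in one common model,
    # then compare property by property. Names and values are invented.

    source = {
        "image.width": 2480,
        "image.height": 3508,
        "image.bits_per_sample": 8,
        "image.colour_space": "sRGB",
    }

    migrated = {
        "image.width": 2480,
        "image.height": 3508,
        "image.bits_per_sample": 8,
        "image.colour_space": "YCbCr (JPEG)",
    }

    for key in sorted(source.keys() | migrated.keys()):
        a, b = source.get(key), migrated.get(key)
        status = "same" if a == b else "DIFFERS"
        print(f"{key:24s} {status:8s} {a!r} -> {b!r}")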

This seems to work well enough for the image data and so on, but colour spaces provide the critical example…

Colour spaces are each different models of colour. Some model how it is captured by hardware, and others how it is perceived by us.

These models are generally incongruent, that is, they are incompatible representations. A pixel whose value in one colour space is (0, 0, 255) might have, in some other colour space, an exact representation of (0, 127.9999999…, 128.111111111…). Those trailing digits cannot be stored in the pixel values, so we get (0, 128, 128).
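
The specific numbers above are illustrative, but the effect is easy to reproduce. Here is a Python sketch using the common BT.601 RGB-to-YCbCr equations (which may well not be the colour spaces in question), rounding to 8-bit integers at each step:

    # Round-trip a pixel through another colour model, rounding to 8-bit
    # integers at each stage (BT.601 full-range equations; illustrative only).

    def clamp(x):
        return max(0, min(255, int(round(x))))

    def rgb_to_ycbcr(r, g, b):
        y  =       0.299    * r + 0.587    * g + 0.114    * b
        cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
        cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
        return y, cb, cr

    def ycbcr_to_rgb(y, cb, cr):
        r = y + 1.402    * (cr - 128)
        g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
        b = y + 1.772    * (cb - 128)
        return r, g, b

    pixel = (0, 0, 255)                        # pure blue
    exact = rgb_to_ycbcr(*pixel)               # the real-valued coordinates the new model wants
    stored = tuple(clamp(v) for v in exact)    # what an 8-bit file can actually hold
    back = tuple(clamp(v) for v in ycbcr_to_rgb(*stored))

    print("exact :", exact)
    print("stored:", stored)
    print("back  :", back)

With these particular equations, pure blue comes back as (0, 0, 254) rather than (0, 0, 255): a tiny difference, but a loss introduced purely by the change of model.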

Perhaps that is good enough, perhaps not, but the point is that an entirely lossless transformation is being evaluated using a framework that forces loss to occur.

The remedy is that each incompatible model has to live alongside the others.

The risk remains that this loss is obscured by the apparent completeness and complexity of the model. Also, given that the meta-model is now just a superset of all the formats, it’s not clear what we’ve gained. As we’ve seen, colour spaces are usually dictated by a combination of hardware design and perceptual values, neither of which is likely to remain the same over time. Some future development in either area will therefore mean the introduction of new colour spaces, which will have to be added to a model that exists only for us.
