Over the weekend I have been doing some experiments with color spaces and converting to more restrictive color schemes like 4:4:4 (12-bit) or 3:4:3 (10-bit). The use of fp 0-1 for each component seems to make sense in a language like Lua, and it has been widely adopted, but it does have some complications.
One problem is that the inclusion of 1.0 as the maximum color value is inconsistent with the actual physical hardware we use, which most definitely uses a binary integer to represent the r, g, b components. 8-bit numbers run 0-255, and 256 is a 9-bit number. That means mapping a 0-1 float to an 8-bit number needs a particular algorithm where 1.0 -> 255. Where does 254 start? So, suppose I have an image, I read all the pixels, and I want to check for the color (27, 35, 203). When I read the pixels I get back fp numbers in the range 0-1. Comparing floats directly is probably going to fail. Converting back to an integer is better, but unless the fp->int conversion matches the int->fp conversion that was done when the image was loaded, the comparison may not be exact. That means fp range tests must be used.
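For example, a comparison routine in LÖVE might look like the sketch below. It assumes 11.x's 0-1 floats from ImageData:getPixel, and the floor-based rounding is an assumption about how the loader mapped int to fp, not a documented guarantee.

Code:

-- Convert a 0-1 float component back to a byte with round-to-nearest.
local function toByte(f)
    return math.floor(f * 255 + 0.5)
end

-- Does the pixel at (x, y) hold the 8-bit colour (27, 35, 203)?
local function isTarget(imageData, x, y)
    local r, g, b = imageData:getPixel(x, y)
    return toByte(r) == 27 and toByte(g) == 35 and toByte(b) == 203
end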
In the non-GPU world of digital-to-analog converters, the usual way to deal with variable bit resolution is to use a large integer, left justified. A 32-bit integer can represent 8, 12, 16 or 24 bits by just ignoring the LSBs that can't be used. The 32-bit int is actually just being treated as a binary fraction. A scheme like this would map back to the old integer representation of color components. A 32-bit binary fraction looks like
0.34a7f839 (hex). The range is 0 -> 0.ffffffff, and you can just remove the trailing digits as required. Sub-ranges can be explicitly selected with a boolean mask. Notice that the 32-bit fraction does not include 1.0, so it is entirely consistent with hardware implementations.
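A sketch of how this might look in Lua, assuming LuaJIT's bit library (bundled with LÖVE); truncate and toByte are hypothetical helpers:

Code:

local bit = require("bit")

-- Keep only the top n bits, i.e. truncate the fraction to n-bit precision.
local function truncate(frac32, n)
    local mask = bit.lshift(-1, 32 - n)  -- top n bits set
    return bit.band(frac32, mask)
end

-- Recover the classic 8-bit component: just take the top byte.
local function toByte(frac32)
    return bit.band(bit.rshift(frac32, 24), 0xFF)
end

print(bit.tohex(truncate(0x34A7F839, 12)))  --> 34a00000
print(toByte(0x34A7F839))                   --> 52 (0x34)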
The idea of color components over the fp range 0-1 is a kind of mathematical fiction that is a hangover from the 19th century. If you are an engineer or a programmer, you know that an fp64 number is a grid of 2^64 values - it is not a continuum. fp64 is so big that we tend to think of it as a continuum, but when you try to force-fit it to real-world applications you get problems. A 32-bit fraction is based on rational integers and can precisely represent discrete values between 0 and 1, excluding 1. In a graphics system, color components may be represented by fp16, fp32 or fp64, and whether these are equivalent may be a matter of implementation.
I think it is too late to do anything about this, but the use of fp color components is not as simple as it looks at first glance.
Floating Point color
- zorg
Re: Floating Point color
You could reverse the issue by swapping the <= and < signs, so that 0.0 would not have a unique representation instead and would be counted in the 1st bin (if we started counting bins from 1 to 2^n).
In my view, integers show the "insides" of such bins, while floats show the boundaries.
Re: Floating Point color
In 11.3 we will have love.math.colorToBytes and love.math.colorFromBytes, which will help make things easier.
Re: Floating Point color
You don't need binary fractions. Normalized 8-bit colour components round-trip when represented with 3 decimal digits and rounded to nearest. There are no ties so it doesn't matter whether 0.5 is rounded up or down.
The conversion algorithm is pretty straightforward. I think this one is guaranteed to work for floats with at least 16-bit precision, maybe even for as low as 8 bits, but I haven't verified that; GPU floats used for this purpose usually have 24 bits anyway, and Lua floats have 53.
Code:
local floor = math.floor

-- Works even if f is rounded to a decimal fraction with 3 digits.
function colorToBytes(f)
    return floor(f * 255 + 0.5)
end

function colorFromBytes(i)
    return i / 255
end
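A quick sanity check of the round trip, using the two functions above:

Code:

-- Every byte value should survive conversion to float and back exactly.
for i = 0, 255 do
    assert(colorToBytes(colorFromBytes(i)) == i)
end
print("all 256 values round-trip exactly")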
MachineCode wrote: ↑Mon Jan 28, 2019 2:54 pm
The idea of color components over the fp range 0-1 is a kind of mathematical fiction that is a hangover from the 19th century.
[citation required]
I think it's the opposite. Thanks to progress in the speed of floating-point math units, we are now able to use normalized values that we weren't able to use in the past, when FP calculations had to be carried out by CPU subprograms. In other words, progress has made it possible.
... you should know that using values normalized to 0..1 helps avoid divisions, making calculations easier and giving a consistent interface without worrying about the underlying format, which is especially useful on GPUs since they are optimized for floating-point calculations.
Your username suggests you're used to integer math (have you ever used e.g. SSE2 FP registers or even x87 instructions?), but Lua 5.1 only uses double-precision floating-point math (except for some FFI types in LuaJIT). I suggest you learn more about FP numbers to be more familiar with their format, their limitations and their quirks. There's a lot of math to explore if you want to get well acquainted with them and know what to expect, when and why. Although the mandatory reference is https://docs.oracle.com/cd/E19957-01/80 ... dberg.html, I think Knuth's is a somewhat gentler introduction: Knuth, D.E., The Art of Computer Programming, Volume 2 "Seminumerical Algorithms", section 4.2.2, "Accuracy of Floating Point Arithmetic". Here are the opening sentences of the third edition (the one I have), from p. 229:
Floating point computation is by nature inexact, and programmers can easily misuse it so that the computed answers consist almost entirely of "noise". One of the principal problems of numerical analysis is to determine how accurate the results of certain numerical methods will be. There's a credibility gap: We don't know how much of the computer's answers to believe. Novice computer users solve this problem by implicitly trusting in the computer as an infallible authority; they tend to believe that all digits of a printed answer are significant. Disillusioned computer users have just the opposite approach; they are constantly afraid that their answers are almost meaningless.
One warning note though: the book was written at a time when binary was not overwhelmingly dominant, therefore it deals not only with base 2 but also with others (especially 10). Still, you can easily skip the parts that only apply to decimal FP or to bases that don't include binary.
Re: Floating Point color
Code:
00: (0.00, 0.25) and 0
01: (0.25, 0.50)
10: (0.50, 0.75)
11: (0.75, 1.00) and 1
or
0.25, 0.50, 0.75 can be 01, 10, 11
The problem is choice.
- MachineCode
Re: Floating Point color
pgimeno wrote:
I suggest you learn more about FP numbers to be more familiar with their format, their limitations and their quirks.
My point was that the inclusion of 1 in the fractional range makes sense from a pure mathematical point of view, but introduces problems when mapping to real-world D/A converters, which select between 2^n states.
Think about a 2-bit DAC made from a resistor divider chain:
0V----/\/\/\----o-----/\/\/\-----o-----/\/\/\------o 1Volt
or
0V------/\/\/\----o-----/\/\/\-----o-----/\/\/\----o-----/\/\/\----o 1Volt
In the first case, you need 3 resistors and the output will be 0V, 0.333V, 0.667V, 1V - that is the 0-1 scheme.
In the second case, you have 4 resistors and the output will be 0V, 0.25V, 0.5V, 0.75V - that is the fractional integer model.
Why is this important? Because if you extend the DAC to 4 bits (16 discrete states), you will have either 15 resistors or 16 resistors. In the first case (0-1) the steps will be 1/15; in the second case they will be 1/16.
Only the second case (4 resistors extending to 16) will align at power-of-2 boundaries. Try it.
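To make the step-size difference concrete, here is a small plain-Lua sketch (the levels helper is hypothetical, purely for illustration):

Code:

-- 2^n levels spread over 2^n - 1 steps (0-1 scheme, top level included)
-- versus 2^n steps (fractional integer scheme, top level excluded).
local function levels(n, includeTop)
    local states = 2 ^ n
    local divisor = includeTop and (states - 1) or states
    local t = {}
    for i = 0, states - 1 do
        t[#t + 1] = string.format("%.4f", i / divisor)
    end
    return table.concat(t, ", ")
end

print(levels(2, true))   --> 0.0000, 0.3333, 0.6667, 1.0000
print(levels(2, false))  --> 0.0000, 0.2500, 0.5000, 0.7500
print(levels(4, true))   -- steps of 1/15
print(levels(4, false))  -- steps of 1/16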
My post actually points out that FP numbers (wonderful though they are) are quite tricky. Suggesting I study them more will not change the operation of DACs. In fact, as you may know, FP numbers become even murkier at FP16. I believe there are now 3 different standards - the IEEE standard, the ARM FP16 standard, and a new one from Google tailored for efficient use in neural nets and optimised for dot products.
For an introduction to some of the debate about the dubious notion of real numbers applied to real-world applications like computing -
https://njwildberger.com/2012/12/02/dif ... l-numbers/ - prof of maths at UNSW
- slime
Re: Floating Point color
Is there an actual problem demonstrable with real Lua code here? Fixed-point colors have never had 50% of the color component's strength as a representable value; they've always mapped between [0, 1] and [0, maxinteger] such that fixed-point 0 == floating-point 0 and fixed-point maxinteger == floating-point 1, using a simple algorithm similar to the one quoted by pgimeno.
For an example of how GPUs do this, conversion rules for them are near the bottom of this pdf: https://developer.apple.com/metal/Metal ... cation.pdf
Or here: https://www.khronos.org/registry/vulkan ... -fixedconv
I don't see how the choices made by DACs affect colors.
Re: Floating Point color
MachineCode wrote: ↑Tue Jan 29, 2019 12:25 am
My point was that the inclusion of 1 in the fractional range makes sense from a pure mathematical point of view, but introduces problems when mapping to real-world D/A converters which select between 2^n states.
No it doesn't, because internally it's mapped to an integer, and an algorithm similar to the one I posted performs the conversion. But even if it did, that'd be something to tell GPU manufacturers and the OpenGL specification committee. I don't think there's anything an OpenGL application like LÖVE can do to fix it, because it doesn't talk to the hardware directly.
MachineCode wrote: ↑Tue Jan 29, 2019 12:25 am
Think about a 2-bit DAC made from a resistor divider chain
I get your point, but it's the GPU itself that uses the 0..1 range and contains the DACs (not sure about that; in digital connections like HDMI it's probably the monitor) and the cores that write to them, therefore there's nothing that can be done on Löve's side.
MachineCode wrote: ↑Tue Jan 29, 2019 12:25 am
In the first case, you need 3 resistors and the output will be 0V, 0.333V, 0.667V, 1V - that is the 0-1 scheme.
In the second case, you have 4 resistors and the output will be 0V, 0.25V, 0.5V, 0.75V - that is the fractional integer model.
If you're talking about volts, yes. If you're talking about the DACs that put the values on screen, they receive values between 0 and 255, and nothing forbids them from using 0.99609375 V instead of 1 V (that's 255/256) as the maximum output of the DAC, or from powering the DAC with 1.003921568627451 V instead of 1 V (that's 256/255) so that 255 is exactly 1 V. Keep in mind that you're never talking to the internal circuitry directly in OpenGL; that's entirely up to the video card or monitor's PCB manufacturers.
MachineCode wrote: ↑Tue Jan 29, 2019 12:25 am
My post actually points out that FP numbers (wonderful though they are) are quite tricky. Suggesting I study them more will not change the operation of DACs.
Nor will it enable you to write to DACs directly through OpenGL. My advice was related to the trickiness of FP numbers that you mention, as that trickiness is controllable.
MachineCode wrote: ↑Tue Jan 29, 2019 12:25 am
In fact, as you may know, FP numbers become even murkier at FP16. I believe there are now 3 different standards - the IEEE standard, the ARM FP16 standard, and a new one from Google tailored for efficient use in neural nets and optimised for dot products.
For an introduction to some of the debate about the dubious notion of real numbers applied to real-world applications like computing -
https://njwildberger.com/2012/12/02/dif ... l-numbers/ - prof of maths at UNSW
These points aren't really relevant to LÖVE programming. The FP16 used by OpenGL is described in the OpenGL specification; if you need to work with it, take a look at it to know its limits and work within them. As for the link, games don't need true real numbers; they don't care that there are infinitely many values between two floats, because an approximate value is good enough.
- MachineCode
Re: Floating Point color
I agree that the exact values of the 00 and ff colors don't really matter because you can't see them anyway. What is important is when you wish to modify or mask a subset of the color range. With fp, you need to specify a fp range. With fractional integers, you can reliably specify color regions with a boolean mask, and that can be a non linear mask as well. Arbitrary bit lengths are implicitly handled.
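As an illustration of the masking idea (a sketch assuming LuaJIT's bit library; inTopQuarter is a hypothetical helper): testing whether a left-justified component lies in the top quarter of its range is a single mask comparison, whatever the underlying bit depth.

Code:

local bit = require("bit")

-- Top two bits == 11 <=> component >= 0.75, no matter how many bits of
-- precision the component carries below them.
local mask = bit.tobit(0xC0000000)

local function inTopQuarter(frac32)
    return bit.band(frac32, mask) == mask
end

print(inTopQuarter(bit.tobit(0xC1000000)))  --> true  (~0.754)
print(inTopQuarter(0x7FFFFFFF))             --> false (just under 0.5)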
The issue with mapping the color space from 0 -> 1 is that it maps poorly onto the reality of hardware that divides the color space evenly into 2^n divisions. It works quite well for fixed maps of [0 .. c1, c2, c3, c4 ... 1] but fails for mapping a continuous field 0 -> 1 to the discrete color partitions.
Take a 2 bit color field
11 - 3/3 = 1
10 - 2/3 = 0.666
01 - 1/3 = 0.333
00 - 0/3 = 0
Notice that we have actually divided this color space into 3 regions (2^n -1). Note this with regard to zorg's diagram up above of 4 color spaces.
Here is the problem. The function rnd() will produce a set of fp numbers evenly distributed over the range 0 .. 1. When applied to the 2 bit color field, how do you ensure that each bin will be evenly distributed? The partition 11 will need to include values below it. The partition 00 will need to include values above it.
The only way to evenly distribute the rnd() field to all "bins" is to divide the color space up into 4 regions and assign the fp number accordingly. If you do that, then you have effectively used fp numbers to simulate the fractional integer scheme, with the drawback that simple boolean arithmetic is not available.
I am trying to work out a way to test this, but my guess is that if I take an image and set pixels with a rnd() function, the codes 00 and ff will be statistically less frequent than the other values, because of the mismatch between fp(0-1) and the binary subdivision of color space. If this is not true, that would indicate that inside the graphics system fp numbers are being converted to a fractional integer system anyway. If pixel(rnd()) skews 00 and ff slightly, who would notice? It may never have been tested.
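A minimal sketch of such a test, assuming love.math.random and using floor(f * 255 + 0.5) as a stand-in for whatever the internal conversion actually does:

Code:

local floor = math.floor
local counts = {}
for i = 0, 255 do counts[i] = 0 end

for _ = 1, 1000000 do
    -- Naive float -> byte conversion of a uniform random component.
    local byte = floor(love.math.random() * 255 + 0.5)
    counts[byte] = counts[byte] + 1
end

-- With round-to-nearest, bins 0 and 255 are half-width, so they should
-- collect roughly half the hits of any interior bin.
print(counts[0], counts[128], counts[255])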
This is a very minor issue that actually will not affect anyone here, it is just interesting from a technical pov - so don't panic.
- zorg
Re: Floating Point color
Actually an interesting take; godspeed, and report back your findings.
Re: Floating Point color
MachineCode wrote: ↑Tue Jan 29, 2019 2:11 pm
Here is the problem. The function rnd() will produce a set of fp numbers evenly distributed over the range 0 .. 1. When applied to the 2 bit color field, how do you ensure that each bin will be evenly distributed?
Easy: take the floor of the random number multiplied by 4 and you'll get an evenly distributed result. Both Lua's math.random() and LÖVE's love.math.random() float random number generators always generate values 0 <= n < 1, and the multiplication result will always be < 4 (actually I've proved that t * n < n holds for every pair of finite floating-point numbers t, n where t < 1.0 and n is positive and greater than the lowest positive normal number).
MachineCode wrote: ↑Tue Jan 29, 2019 2:11 pm
The partition 11 will need to include values below it. The partition 00 will need to include values above it.
With the method I've pointed out:
- The partition 00 will include values 0.0 <= r < 0.25, which times 4 is 0.0 <= r*4 < 1.0.
- The partition 01 will include values 0.25 <= r < 0.5, which times 4 is 1.0 <= r*4 < 2.0.
- The partition 10 will include values 0.5 <= r < 0.75, which times 4 is 2.0 <= r*4 < 3.0.
- The partition 11 will include values 0.75 <= r < 1.0, which times 4 is 3.0 <= r*4 < 4.0.
Flooring r*4 will get you 0, 1, 2 and 3, respectively for each range.
Each input range has the same size, and each output range has the same size, therefore the distribution is as uniform as that of the input random numbers.
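A quick empirical check of that argument (a sketch; plain math.random works just as well):

Code:

-- Tally floor(r*4) over many draws; the four bins should come out uniform.
local bins = {0, 0, 0, 0}
for _ = 1, 1000000 do
    local b = math.floor(love.math.random() * 4)  -- always 0..3, never 4
    bins[b + 1] = bins[b + 1] + 1
end
print(bins[1], bins[2], bins[3], bins[4])  -- each roughly 250000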
MachineCode wrote: ↑Tue Jan 29, 2019 2:11 pm
The only way to evenly distribute the rnd() field to all "bins" is to divide the color space up into 4 regions and assign the fp number accordingly. If you do that, then you have effectively used fp numbers to simulate the fractional integer scheme, with the drawback that simple boolean arithmetic is not available.
You're overcomplicating things here, I think.
MachineCode wrote: ↑Tue Jan 29, 2019 2:11 pm
I am trying to work out a way to test this, but my guess is that if I take an image and set pixels with a rnd() function, the codes 00 and ff will be statistically less frequent than the other values, because of the mismatch between fp(0-1) and the binary subdivision of color space.
Edit: Sorry, that's wrong, I get you now. You're right, given the rounding in the internal conversion, you need to compensate with this formula:
math.floor(love.math.random()*256)/255
(Edit 2: Or even the equivalent love.math.random(0, 255)/255. Thanks grump, I sometimes miss the most obvious things... I'll blame aging.)
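To see why the compensation works (a sketch; floor(f * 255 + 0.5) again stands in for the internal conversion): floor(r*256) is uniform over 0..255, and dividing by 255 lands each value exactly on a representable byte, so the rounding step recovers every byte with equal probability.

Code:

local floor = math.floor
local bins = {}
for i = 0, 255 do bins[i] = 0 end

for _ = 1, 1000000 do
    local f = floor(love.math.random() * 256) / 255
    local byte = floor(f * 255 + 0.5)  -- stand-in for the internal conversion
    bins[byte] = bins[byte] + 1
end

print(bins[0], bins[128], bins[255])  -- now all roughly equal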
Last edited by pgimeno on Tue Jan 29, 2019 4:01 pm, edited 1 time in total.