Need help solving a shader issue
Posted: Tue Jan 24, 2023 7:17 pm
I'm working on a 1-bit game (it only uses two colors) that leverages a custom shader. I recently started adding support to the shader for defining a pixel pattern for draw calls. The pattern is specified by 8 hex values that define the state of the pixels in each row, and this 8x8 pattern is then tiled. However, I've run into an issue with the shader that I can't seem to figure out. I'm fairly certain it's a floating-point rounding error, but I'm not sure how to fix it.
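To make the intent concrete, here's roughly how I picture the lookup working (a simplified sketch only; the extern name, the bit order, and the float-based bit extraction are illustrative, not the exact code from my project):

Code:
// Sketch of the intended 8x8 pattern lookup (illustrative, not my actual code).
// Assumption: `pattern` holds the 8 row values, and bit N of row R says
// whether pixel column N of tile row R should be lit.
extern float pattern[8];

bool pattern_pixel_on(vec2 screen_coords)
{
    float px = mod(floor(screen_coords.x), 8.0); // column inside the 8x8 tile
    float py = mod(floor(screen_coords.y), 8.0); // row inside the 8x8 tile
    float row = pattern[int(py)];                // the hex value for this row
    // shift right by px and test the lowest bit, using float math
    return mod(floor(row / pow(2.0, px)), 2.0) >= 1.0;
}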
This is what I want to achieve: [attached image]
This is what actually happens: [attached image]
I believe the problem lies here, in the shader code:
Code:
vec4 effect(vec4 color, Image tex, vec2 tex_coords, vec2 screen_coords)
{
    // rounding error somewhere on the next three lines?
    float x = mod(screen_coords.x, 8);
    float y = mod(screen_coords.y, 8);
    if (pattern[int(floor(x + y))] == 1) {
        return WHITE;
    } else {
        return BLACK;
    }
}
I've tried several variations of the above to round the float, but it still shows the same artifacting. I'm hoping this is an easy fix that I just can't see with my limited shader experience.

I'm working on a rather large project, so I've attached a minimal (but working!) project that reproduces the issue in isolation.