Bobble68 wrote: ↑Sun Jan 21, 2024 11:53 am
Actually, I could do with some help for it - currently the lighting is very smooth, and I'd rather it were pixelated and aligned with the texture, which could help with optimising if it's done at its native resolution. However, short of rendering each texture to a canvas every frame, I can't think of a good way of achieving it.
[Attachment: temp.png]
There are two ways of doing that.
The first way is:
Keeping things as they are: you're drawing the sprite enlarged (AKA "magnified"), so a single sprite pixel covers many screen pixels. The shader runs once per screen pixel, not per enlarged sprite pixel, so when you use screen_coords to find the light direction, the same large sprite pixel ends up with many different lighting levels, one per screen pixel, which gives that smooth look.
The solution to that is to "stepify" the screen coordinates used to calculate the light direction so that they have the same step size and alignment as the large sprite pixels.
The changes needed are:
* Store the sprite dimensions and the transform used to draw it on screen, and send those to the shader:
-- After loading 'image' (the sprite), store its dimensions and make a Transform object to draw it with.
local imageSize = {image:getDimensions()}
local playerTransform = love.math.newTransform(-100.0, 150.0, 0, 8, 8)
(...)
-- In love.update:
shader:send('textureSize', imageSize)
shader:send('playerTransform', playerTransform) -- Gets (magically) sent as a mat4 uniform.
-- In love.draw:
love.graphics.setShader(shader)
love.graphics.draw(image, playerTransform)
* Add the stepification operations in the shader, so the screen_coords snap to the center of the enlarged sprite pixels they're sampling:
// 2 new uniforms.
uniform vec2 textureSize;
uniform mat4 playerTransform;
(...)
// When calculating 'distance'.
// Change from UV coordinates (range [0, 1]) to image coordinates (range
// [0, texture_width] horizontally, and [0, texture_height] vertically).
vec2 image_coords = texture_coords * textureSize;
// Floor the image coordinate (truncate it, removing the decimal part) to snap it
// to the top-left corner of the image pixel being sampled.
// Also add 0.5 to X and Y, to move to the center of the image pixel being sampled.
image_coords = floor(image_coords) + 0.5;
// Transform the image coordinate by the image transform, to make it into a screen
// coordinate.
image_coords = (playerTransform * vec4(image_coords, 0.0, 1.0)).xy;
// Uncomment the line below to get the original behavior.
//image_coords = screen_coords;
vec3 distance = light - vec3(image_coords, 0.0);
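For reference, here's a rough sketch of how those pieces could fit into one complete pixel shader. The 'light' position and the separate 'normalMap' uniforms are assumptions standing in for whatever your actual lighting setup uses:
-- Minimal sketch only: 'light' (screen-space light position) and 'normalMap' are
-- placeholder uniforms, not taken from your code.
local shader = love.graphics.newShader([[
    uniform vec2 textureSize;
    uniform mat4 playerTransform;
    uniform vec3 light;      // Assumed: light position in screen space.
    uniform Image normalMap; // Assumed: the sprite's normal map.

    vec4 effect(vec4 color, Image tex, vec2 texture_coords, vec2 screen_coords)
    {
        // Snap to the center of the sprite pixel being sampled, in image space...
        vec2 image_coords = floor(texture_coords * textureSize) + 0.5;
        // ...then bring it to screen space with the same transform used for drawing.
        image_coords = (playerTransform * vec4(image_coords, 0.0, 1.0)).xy;

        vec3 direction = normalize(light - vec3(image_coords, 0.0));
        vec3 normal = normalize(Texel(normalMap, texture_coords).xyz * 2.0 - 1.0);
        float brightness = max(dot(normal, direction), 0.0);
        return Texel(tex, texture_coords) * color * vec4(vec3(brightness), 1.0);
    }
]])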
The second way is:
Since you're unlikely to draw sprites at different scales and will draw everything at the same scale, it's better to use a small "pixel art" canvas and draw your sprites, with shaders, onto that small canvas. After that, draw that canvas upscaled with nearest-neighbor filtering to fill the game window. That way the shaders run on the pixel-art canvas, a space where screen pixels and sprite pixels have the same size and alignment.
This way is what you'd use in your shipped game.
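As a minimal sketch of that setup (the 320x180 resolution, and the GAME_W / GAME_H / SCALE names, are just made-up examples):
-- Sketch only: a small fixed-size canvas that everything is drawn into, then upscaled.
local GAME_W, GAME_H, SCALE = 320, 180, 4
local gameCanvas

function love.load()
    love.window.setMode(GAME_W * SCALE, GAME_H * SCALE)
    gameCanvas = love.graphics.newCanvas(GAME_W, GAME_H)
    -- Nearest-neighbor filtering keeps the upscale crisp instead of blurry.
    gameCanvas:setFilter('nearest', 'nearest')
end

function love.draw()
    -- Draw the scene (sprites, lighting shaders etc.) at native pixel-art size.
    love.graphics.setCanvas(gameCanvas)
    love.graphics.clear()
    -- ... love.graphics.setShader(shader); love.graphics.draw(image, x, y) ...
    love.graphics.setCanvas()

    -- Upscale the finished frame to fill the window.
    love.graphics.draw(gameCanvas, 0, 0, 0, SCALE, SCALE)
end
Everything drawn between setCanvas(gameCanvas) and setCanvas() happens at 1:1 scale, so the shader's screen_coords and the sprite pixels line up without any extra snapping.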
PS: speaking of canvases, since your (very cool) port of the normal map generation code is simple enough -- you're sampling the texture 4 times, or potentially 9 times if you included the diagonals -- it could be made into a shader that calculates the pixel normal in real time, so no normal map textures are needed. This would be no more expensive than a real-time Gaussian blur / depth-of-field effect used in games.
This would be especially interesting if you had lots of animated sprites and wanted to avoid generating and handling all of those extra normal map assets.
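If you wanted to try that, a rough sketch could look like the one below. It assumes the height is taken from each texel's luminance, which may differ from the code you ported, and 'strength' is just a made-up tuning uniform:
-- Sketch only: derives a normal from neighboring texels in real time (4 taps).
local normalShader = love.graphics.newShader([[
    uniform vec2 textureSize; // Sprite size in pixels, to step by exactly one texel.
    uniform float strength;   // How pronounced the relief looks.

    float heightAt(Image tex, vec2 uv)
    {
        vec4 texel = Texel(tex, uv);
        // Luminance as a stand-in for height.
        return dot(texel.rgb, vec3(0.299, 0.587, 0.114)) * texel.a;
    }

    vec4 effect(vec4 color, Image tex, vec2 texture_coords, vec2 screen_coords)
    {
        vec2 texelStep = 1.0 / textureSize;
        // Horizontal and vertical height differences give the slope.
        float dx = heightAt(tex, texture_coords + vec2(texelStep.x, 0.0))
                 - heightAt(tex, texture_coords - vec2(texelStep.x, 0.0));
        float dy = heightAt(tex, texture_coords + vec2(0.0, texelStep.y))
                 - heightAt(tex, texture_coords - vec2(0.0, texelStep.y));
        vec3 normal = normalize(vec3(-dx * strength, -dy * strength, 1.0));
        // Packed to [0, 1], like a regular normal map texture.
        return vec4(normal * 0.5 + 0.5, 1.0);
    }
]])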
vilonis wrote: ↑Mon Jan 22, 2024 3:40 pm
Two options I can think of off the top of my head:
* Render the world to a virtual canvas that is 1:1 with your pixel graphics, then scale the whole thing up afterwards. That way your pixel shader is running on each graphical pixel.
* In your pixel shader code, have each screen pixel determine its (approximate) graphical pixel by dividing and rounding its screen pixel coordinate by the scale factor, then use that for the lighting calculations.
Both of those have complications that would need to be worked out, but they’re a start.
RNavega wrote: ↑Mon Jan 22, 2024 10:43 pm
There are two ways of doing that. (...)
Ah, thank you both for the advice! However, the problem with using a canvas in that way is that the sprites need to be able to rotate and to have positions with less than a pixel of precision - rounding is probably the way to go here, though I'm uncertain of how to include rotation in that calculation. I hadn't considered doing this in real time - the only disadvantage with that (other than the slight cost increase) is that it'll be harder to implement manually drawn normals, which I can see myself using in some cases.