What's everyone working on? (tigsource inspired)
Re: What's everyone working on? (tigsource inspired)
Thanks. I was planning on improving the interface a lot, adding some randomization features, and then using it as a display at our computer science department's outreach event (meant to get kids interested in computer science). It also turned out to be a useful debugging tool for the non-shader version of my channel-hopping program (which this is based on), so that was a nice side effect :p
Re: What's everyone working on? (tigsource inspired)
Just made a simplistic snake game for the COCK demos. I had never implemented a snake game before, and it turned out to be surprisingly easy, though I used a couple of dirty tricks to get it done faster.
I'm gonna use this simple game to demonstrate the library's functionality by patching it up for the different demos. This first and very basic demo only covers setting up fixed controls. Keyboard: arrows = direction, space = boost, enter = pause. Joystick: stick & d-pad = direction, button 1 (whichever that is) = boost, button 2 = pause.
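For reference, a rough sketch of what those fixed keyboard bindings could look like in plain LÖVE is below; this is NOT the library's actual API, and `snake` and `paused` are just placeholder game state.
Code: Select all
-- Plain-LÖVE illustration of the fixed keyboard mapping described above.
-- NOT the library's API; `snake` and `paused` are hypothetical game state.
local controls = {
    up = "up", down = "down", left = "left", right = "right",
    space = "boost", ["return"] = "pause",
}

function love.keypressed(key)
    local action = controls[key]
    if action == "boost" then
        snake.boosting = true
    elseif action == "pause" then
        paused = not paused
    elseif action then
        snake.direction = action   -- "up", "down", "left" or "right"
    end
end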
- Attachments
- datafile.love (15.88 KiB)
- NightKawata
Re: What's everyone working on? (tigsource inspired)
"I view Python for game usage about the same as going fishing with a stick of dynamite. It will do the job but it's big, noisy, you'll probably get soaking wet and you've still got to get the damn fish out of the water." -taylor
Re: What's everyone working on? (tigsource inspired)
I'll be working on a way to easily add objects to the game I'm currently working on (I've just started it; I'll probably release a prototype in the next few days, and I apologize in advance to colorblind people). This is what the 'player' item would (will) probably look like:
This would (again, will) be one of the falling objects, a droplet:
...This actually seems quite a hard project now that I think about it more seriously... Anyway, the code will be on GitHub when (IF!) it reaches an acceptable state. I don't know what this license is called, but you'll be able to do whatever you want with the code EXCEPT claim it's yours. If you edit it you must either provide the original source or link to my GitHub page somewhere; a "CREDITS" file (even inside the .love) will do.
Warning: I'm likely to give up on this or leave it half-finished for months. Anyway, I swear I'll get at least the game I'm talking about to a playable state. This object system won't be in the first versions; it won't even be written yet. Who wants to tell me I'm just reinventing the wheel?
Code: Select all
player: --local player = {}
hue: 0->0%359 --player.hue = 0: automatically makes player.hue wrap in each update
m: 16 --player.m = 16
x: 400-l/2->0|800-l --player.x = 400-l/2; automatically clamps X to 0 and 800-player.l, which will be declared later
xV: 200 --player.xV = 200; it's pixel per second
y: 600-2l --player.y = 600-player.l*2
l: 32 --player.l = 32
kLeft: left --the key to move it left is left
kRight: right --guess what?
--EoF, so "return player"
Code: Select all
drop: --local drop = {}
hue: 0?359 --drop.hue = math.random(0, 359)
m: 8 --drop.m = 8
x: 0?800-w --drop.x = math.random(0, 800-drop.w)
y: -16++G --drop.y = -16; adds gravity (G is caps, so it's an external constant; it could be a drop field or a number easily)
w: 4 --drop.w = 4
h: 8 --drop.h = 8
onColl: (self,p) --collision callback
p.hue: p.hue*p.m/(p.m+self.m) + self.hue*self.m/(p.m+self.m) --p.hue becomes that
p:partBurst() --makes p release a particle burst
self:destroy(); --destroys self, ';' tells the code the function's ended
--EoF, the object's description has ended
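To make the notation above concrete, here is my guess (not the author's actual generated code) at the plain Lua the 'drop' description might expand to; partBurst and destroy are the hypothetical methods named in the callback.
Code: Select all
-- Hedged sketch of the plain Lua the 'drop' description could expand to.
local drop = {}
drop.hue = math.random(0, 359)       -- hue: 0?359
drop.m   = 8
drop.w   = 4
drop.h   = 8
drop.x   = math.random(0, 800 - drop.w)
drop.y   = -16                       -- G (gravity) would be added every update

function drop:onColl(p)
    -- mass-weighted average of the two hues
    p.hue = p.hue * p.m / (p.m + self.m) + self.hue * self.m / (p.m + self.m)
    p:partBurst()                    -- hypothetical particle-burst method
    self:destroy()                   -- hypothetical removal method
end

return drop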
lf = love.filesystem
ls = love.sound
la = love.audio
lp = love.physics
lt = love.thread
li = love.image
lg = love.graphics
- Sheepolution
Re: What's everyone working on? (tigsource inspired)
The Post-Ludum Dare version of my game! I'm remaking the whole game from scratch. Here is a piece of the replay mechanic I just finished (watch my mouse to see that it's a replay). Instead of recording the x, y, state, direction and all the other stuff, I record the key inputs and redo them. I was afraid it wouldn't work 100% (that the character would move slightly differently because of the delta time, which I'm also recording this time), but so far it works perfectly.
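As a rough illustration of the idea (all names here are mine, not the game's actual code): record every key press with the running time, record every frame's dt, and feed both back in during playback so the simulation steps exactly the same way.
Code: Select all
-- Minimal input-recording replay sketch; handleInput/updateGame are hypothetical.
local recording = { events = {}, frames = {} }
local playback  = nil      -- set this to the finished recording to start a replay
local clock     = 0

function love.keypressed(key)
    if not playback then
        table.insert(recording.events, { time = clock, key = key })
        handleInput(key)
    end
end

function love.update(dt)
    if playback then
        -- reuse the recorded dt so the simulation steps match the original run
        dt = table.remove(playback.frames, 1) or dt
        clock = clock + dt
        while playback.events[1] and playback.events[1].time <= clock do
            handleInput(table.remove(playback.events, 1).key)
        end
    else
        table.insert(recording.frames, dt)
        clock = clock + dt
    end
    updateGame(dt)
end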
And one more gif:
Don't worry, the art will be done by someone else this time. Here is the first concept art:
When the Alpha version is ready I'll post it in Projects and Demos!
Re: What's everyone working on? (tigsource inspired)
The concept art is amazing!
Can you tell me why you would choose to redo input instead of redoing events? Just because it's easier to code?
It sounds like it can get rather imprecise. Maybe store the x and y position along with the input? That way you can easily correct any offsets ("At 0.5 seconds, jump was pressed and the character was at [10,20]" is more precise than "At 0.5 seconds, jump was pressed").
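A small sketch of what I mean, with made-up names: store the position in the recorded event, then snap back to it before re-applying the input during playback.
Code: Select all
-- Hypothetical helpers illustrating the position-correction idea above.
local function recordInput(events, clock, key, player)
    table.insert(events, { time = clock, key = key, x = player.x, y = player.y })
end

local function replayInput(event, player)
    player.x, player.y = event.x, event.y   -- correct any accumulated offset
    handleInput(event.key)                   -- hypothetical game input handler
end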
trAInsported - Write AI to control your trains
Bandana (Dev blog) - Platformer featuring an awesome little ninja by Micha and me
GridCars - Our jam entry for LD31
Germanunkol.de
- Sheepolution
Re: What's everyone working on? (tigsource inspired)
I agree with you that it sounds like it could make for an imprecise replay. To be fair, it IS out of sync when I print and compare the positions; I just haven't noticed any problems with it yet.
Can you tell me why you would choose to redo input instead of redoing events?
I guess both could work, but I do it this way for multiple reasons. To me it looks a lot nicer: the player isn't forced into any position, which I like because the player feels more like it's acting on its own (if that makes sense). Everything still runs through the same functions and applies the same variables, which I might otherwise have to force as well.
Another big reason is that this game will be online. If Player 1 and Player 2 want to play and Player 1 thinks it's funny to change the values (more speed, more jump height, etc.), then Player 2 won't notice any of that. It just takes the inputs Player 1 made and replays them.
Thanks for the kind words about the concept art! Here is a link to the artist's Deviantart.
Re: What's everyone working on? (tigsource inspired)
Yeah, pseudovisuals aren't exactly appealing.
Re: What's everyone working on? (tigsource inspired)
Implementing proper stereo projection in OpenGL is a cunningly obscure task, but with software projection it was only an effort worth two extra additions in the formula. Side-by-side and interlaced stereo are only two lines of code away. Now I've got to come up with a shader that allows configurable color matrices for anaglyph glasses, because just maxing out red and cyan yields terrible ghosting with most glasses, not to mention that there are optimization techniques for it and that there are glasses in other colors, like green-magenta.
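Something along these lines is what I have in mind for the configurable color matrices; a minimal LÖVE sketch assuming 0.9+ shader syntax, two per-eye canvases, and matrices hard-coded here that would become externs in the real thing.
Code: Select all
-- Sketch of an anaglyph combiner with per-eye color matrices. Assumes the left
-- and right views were already rendered to two canvases (LÖVE 0.9+ syntax).
local anaglyph = love.graphics.newShader([[
    extern Image rightEye;  // right-eye view; the left eye is the texture being drawn

    // Plain red/cyan matrices as a starting point; in practice these would be
    // externs so each pair of glasses can get its own calibration.
    const mat3 leftMatrix  = mat3(1.0, 0.0, 0.0,  0.0, 0.0, 0.0,  0.0, 0.0, 0.0);
    const mat3 rightMatrix = mat3(0.0, 0.0, 0.0,  0.0, 1.0, 0.0,  0.0, 0.0, 1.0);

    vec4 effect(vec4 color, Image leftEye, vec2 tc, vec2 sc)
    {
        vec3 l = Texel(leftEye, tc).rgb;
        vec3 r = Texel(rightEye, tc).rgb;
        return vec4(leftMatrix * l + rightMatrix * r, 1.0);
    }
]])

-- usage: anaglyph:send("rightEye", rightCanvas)
--        love.graphics.setShader(anaglyph)
--        love.graphics.draw(leftCanvas, 0, 0)
--        love.graphics.setShader()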