My old implementation was already getting frames from the camera in a separate thread. Let's see if things are better with the video module.
I don't need to modify LÖVE, just extend VideoStream and return a correctly-filled frame from getFrontBuffer(). This may require some massaging of the framebuffer provided by the camera library...
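Most camera libraries hand back packed RGB frames, while LÖVE's video path works in Y'CbCr, so the "massaging" would likely start with a colour-space conversion. Here is a minimal per-pixel sketch in C++, assuming full-range BT.601 coefficients; the exact matrix and range your camera or pipeline expects may differ:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

struct YCbCr { uint8_t y, cb, cr; };

// Round to nearest and clamp into the 0-255 range of an 8-bit channel.
static uint8_t clamp8(double v) {
    return static_cast<uint8_t>(std::min(255.0, std::max(0.0, v + 0.5)));
}

// Full-range BT.601 RGB -> Y'CbCr for a single pixel (an assumption:
// some pipelines use limited-range or BT.709 coefficients instead).
YCbCr rgbToYCbCr(uint8_t r, uint8_t g, uint8_t b) {
    double y  =           0.299    * r + 0.587    * g + 0.114    * b;
    double cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5      * b;
    double cr = 128.0 + 0.5      * r - 0.418688 * g - 0.081312 * b;
    return { clamp8(y), clamp8(cb), clamp8(cr) };
}
```

In a real conversion you would run this over the whole frame and write the results into separate Y, Cb, and Cr planes (typically with chroma subsampling), since that is the planar layout video pipelines usually consume.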
Using cameras
Forum rules
Before you make a thread asking for help, read this.
- bartbes
- Sex machine
- Posts: 4946
- Joined: Fri Aug 29, 2008 10:35 am
- Location: The Netherlands
- Contact:
Re: Using cameras
Positive07 wrote: PS: This module is not encapsulated at all, and right now is only designed to play Theora Ogg videos... We can ask the developers to make this better but don't expect much though. I would love to see a C API to implement something on the C side that can behave as VideoStreams or something like that.
I'm not sure what you define as "encapsulated", but it's not tied to Theora at all. To implement an alternate decoder, you simply need to implement a VideoStream (and its base class Stream). The trickiest part is probably seeking. During development I had it stream from Video4Linux, and afterwards I wrote an ffmpeg-based decoder, LVEP, that can be built and required like any Lua module.
love.graphics doesn't need any modifications at all, provided your video output is in Y'CbCr format (as VideoStream specifies).
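To make the shape of that concrete, here is a rough C++ sketch of what a camera-backed stream could look like. `VideoStreamBase`, `Frame`, and the method names are stand-ins modeled on this description, not LÖVE's actual internal headers: a capture thread fills the back buffer, publishing swaps it to the front, and `seek` is a no-op since live input has no timeline to seek in.

```cpp
#include <cassert>
#include <cstdint>
#include <mutex>
#include <utility>
#include <vector>

// Hypothetical planar Y'CbCr frame, as the stream is expected to output.
struct Frame {
    int width = 0, height = 0;
    std::vector<uint8_t> yplane, cbplane, crplane;
};

// Hypothetical stand-in for the VideoStream/Stream interface; the real
// love::video::VideoStream has a different, richer signature.
class VideoStreamBase {
public:
    virtual ~VideoStreamBase() = default;
    virtual const Frame *getFrontBuffer() const = 0;
    virtual void seek(double offset) = 0;
};

class CameraVideoStream : public VideoStreamBase {
public:
    // Called from the render side; returns the last published frame.
    const Frame *getFrontBuffer() const override {
        std::lock_guard<std::mutex> lock(mutex);
        return &buffers[front];
    }

    // Called from the capture thread with a freshly converted frame.
    void pushFrame(Frame frame) {
        std::lock_guard<std::mutex> lock(mutex);
        buffers[1 - front] = std::move(frame);
        front = 1 - front; // swap front and back buffers
    }

    // Dummy: seeking live camera input is meaningless.
    void seek(double) override {}

private:
    mutable std::mutex mutex;
    Frame buffers[2];
    int front = 0;
};
```

The double buffer mirrors the front/back-buffer split the thread discusses: the capture thread only ever writes the back buffer, so the renderer never sees a half-written frame.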
- Positive07
- Party member
- Posts: 1014
- Joined: Sun Aug 12, 2012 4:34 pm
- Location: Argentina
Re: Using cameras
Oh! Sorry, I thought there was no way to define the same structure and change the functions; as I said, I have no experience with C++ and its classes. I'm glad this is possible.
The video output format may be a problem. The seeking part could be ignored entirely with a dummy function, since seeking camera input is not something you would expect unless you are recording. I wouldn't go that far for a first implementation, though, and if I did, I would record in parallel, directly to an Ogg file with a separate stream, so that you can save that file if needed...
What I have no idea how to implement is the sound part and syncing; if you can provide more insight on this, I would love to learn about these topics.
Thanks for your answer, bartbes. I'm glad you came to correct me, and I learned new stuff.
for i, person in ipairs(everybody) do
    if not person.obey then person:setObey(true) end
end
love.system.openURL("github.com/pablomayobre")
- bartbes
- Sex machine
- Posts: 4946
- Joined: Fri Aug 29, 2008 10:35 am
- Location: The Netherlands
- Contact:
Re: Using cameras
Sound is basically the same, but then using Decoder. Same deal really, except sound is not double buffered (it's an older interface and it hasn't been updated to match). As for syncing them up, a VideoStream has an associated FrameSync object, which it is supposed to sync to. This FrameSync can either just be time-based (DeltaSync), or associated with a Source (SourceSync), in which case it uses that Source's playback position as the current position.
Unfortunately, for a camera, I'm not sure how you would sync it up to anything, and if there's even information available about when a frame was recorded, and when the relevant audio was. Again, since FrameSync is (designed to be) a base class, theoretically you could define another synchronisation method.
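As a sketch of that last idea, here is what a custom synchronisation method could look like in C++. `FrameSync`, `DeltaSync`, and the camera-oriented `CaptureSync` below are illustrative stand-ins mirroring the interface described above, not LÖVE's real classes:

```cpp
#include <cassert>

// Hypothetical stand-in for the FrameSync base class described above:
// something that reports a current playback position for the stream to
// sync its frames against.
class FrameSync {
public:
    virtual ~FrameSync() = default;
    virtual double getPosition() const = 0; // current position in seconds
    virtual void update(double) {}          // advance, if time-based
};

// DeltaSync-style: the position simply accumulates elapsed frame time.
class DeltaSync : public FrameSync {
public:
    double getPosition() const override { return position; }
    void update(double dt) override { position += dt; }
private:
    double position = 0.0;
};

// A camera-oriented variant could instead report the capture timestamp
// of the newest frame (if the camera API exposes one), so video and any
// recorded audio could be matched up by capture time.
class CaptureSync : public FrameSync {
public:
    void setLatestTimestamp(double t) { timestamp = t; }
    double getPosition() const override { return timestamp; }
private:
    double timestamp = 0.0;
};
```

Whether this works in practice depends entirely on the point bartbes raises: the camera API has to actually provide per-frame timestamps for `CaptureSync` to report.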
-
- Citizen
- Posts: 67
- Joined: Sat May 08, 2021 9:45 pm
Re: Using cameras
Is there a way to get this working with a UDP stream from a WiFi camera?
target = boardIndex.getWhosPeekingThisLine()
target:setObey(true)