MachineCode wrote: ↑Tue Jan 23, 2018 10:31 pm
@pgimeno thanks for that. A PID with long time constant is a good solution provided that snd:setPitch is reasonably low overhead. I really don't know what the low level audio machinery looks like in a PC, but I suppose there is fine grain control over the sample rate so it's just changing a register or two.

I don't expect any noticeable overhead. Note that setPitch is applied per sound, not per output; you can have the same sound loaded twice and play it at two different pitches at the same time. Therefore I assume that it sets an internal read-index multiplier and does some kind of interpolation/antialiasing to avoid sounding like crap.
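A minimal sketch of that PID idea, reduced to just the proportional term; getVideoTime is a hypothetical stand-in for however the video clock is obtained, and the gain and clamp values are untuned guesses:

```lua
-- getVideoTime() is a hypothetical stand-in for however the video clock is
-- derived (e.g. framesRendered / 60); gain and clamp values are guesses.
local source = love.audio.newSource("music.ogg", "stream")
source:play()

local KP = 0.05  -- proportional gain; keep it small for a long time constant

function love.update(dt)
    local err = getVideoTime() - source:tell("seconds") -- positive: audio behind
    -- Nudge the pitch around 1.0, clamped so the correction stays inaudible.
    source:setPitch(math.max(0.99, math.min(1.01, 1 + KP * err)))
end
```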
Synchronising video and audio
- zorg
- Party member
- Posts: 3465
- Joined: Thu Dec 13, 2012 2:55 pm
- Location: Absurdistan, Hungary
- Contact:
Re: Synchronising video and audio
Yes, you are right, CPUs do suck at timing on those scales, which is why you usually can't work with audio buffers below a certain size (256 samples on my system without ASIO, 2 ms with it); otherwise underruns will occur.
The physical device may well have its own crystal, but to be honest that's so low level I don't really care about it; it's like caring about how high a sampling rate the 1-bit DAC/ADC has. I set the sampling rate in my OS (which probably sets it on the soundcard), and probably in my DAW as well, although the DAW may either do realtime resampling or use ASIO to talk directly to the soundcard, sidestepping any OS setting.
But yeah, the overhead is imho high enough not to care too much. If you're using QSources, you can push individual samplepoints onto a very small buffer and push those onto the Source, in effect feeding the soundcard; increasing or decreasing the index by 1 samplepoint (or more, depending on how big a divergence you have and how fast a convergence you want) will fix any drift that may arise.
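A rough sketch of that, assuming a 44100 Hz mono stream; the drift argument would come from whatever clock comparison you use, and the sine is only a placeholder for real audio:

```lua
local RATE, BUF = 44100, 256
local qsource = love.audio.newQueueableSource(RATE, 16, 1, 8)
local phase = 0

-- drift is in samplepoints; positive means the audio clock is ahead.
function pushBuffer(drift)
    local n = BUF
    if drift > 0 then n = n - 1         -- emit one samplepoint fewer
    elseif drift < 0 then n = n + 1 end -- or one extra, to converge slowly
    local sd = love.sound.newSoundData(n, RATE, 16, 1)
    for i = 0, n - 1 do
        sd:setSample(i, 0.3 * math.sin(phase))  -- placeholder tone
        phase = phase + 2 * math.pi * 440 / RATE
    end
    qsource:queue(sd)
    qsource:play()  -- QSources stop when starved; keep kicking them
end
```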
Again, my biggest issue is/was that I wasn't sure how Video objects worked in LÖVE; now that I've looked at them, a measly Video:play and Video:tell isn't the most accurate thing you could have. That said, you should test which works better for you: seeking the video based on the very precise audio position you can achieve with QSources and a SoundData buffer, or messing with the audio decoding, skipping or duplicating samplepoints based on info you get from Video:tell.
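For the first option, a sketch of deriving that precise audio position, assuming you keep a running queuedSamples counter and that every queued SoundData holds BUFSIZE samplepoints; THRESHOLD is an arbitrary tolerance:

```lua
local THRESHOLD = 0.05  -- seconds of drift tolerated (an assumed value)
local BUFSIZE   = 256   -- samplepoints per queued SoundData (assumed)
local NUMBUFS   = 8     -- buffer count the QSource was created with

function syncVideo(video, qsource, queuedSamples, rate)
    -- Anything still sitting in OpenAL's internal buffers hasn't been
    -- heard yet, so subtract it from the running total of queued samples.
    local pendingBuffers = NUMBUFS - qsource:getFreeBufferCount()
    local audioTime = (queuedSamples - pendingBuffers * BUFSIZE) / rate
    if math.abs(video:tell() - audioTime) > THRESHOLD then
        video:seek(audioTime)
    end
end
```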
Again, hopefully no offense given, certainly none taken here; just that the discussion may have diverged a bit.
(Also, this way you wouldn't need to queue up exact audio frames either: you could either load the whole audio into a SoundData, which would take up large amounts of RAM, or just use a Decoder and decode into the small SoundData used as the QSource buffer instead; Decoders have that functionality from 0.11.)
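A sketch of that Decoder route, assuming 0.11; the file name and chunk size are placeholders:

```lua
-- love.sound.newDecoder's second argument is the chunk size in bytes.
local decoder = love.sound.newDecoder("music.ogg", 1024)
local qsource = love.audio.newQueueableSource(
    decoder:getSampleRate(), decoder:getBitDepth(),
    decoder:getChannelCount(), 8)

function feedAudio()
    -- Top up only the free buffers, decoding one small chunk per slot.
    while qsource:getFreeBufferCount() > 0 do
        local chunk = decoder:decode()
        if not chunk then break end  -- end of stream
        qsource:queue(chunk)
    end
    qsource:play()
end
```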
Finally, the main reason I'm not sold on dropping QSources in favour of normal Sources with :setPitch is that timing their starts is hard; you will probably get noticeable audio gaps, which I'm guessing isn't acceptable.
- MachineCode
- Citizen
- Posts: 70
- Joined: Fri Jun 20, 2014 1:33 pm
Re: Synchronising video and audio
Thanks guys. It is a big help to talk these issues through to clear up how it all fits together in a framework like love2d. I know a bit about digital audio because I designed one of the first Digital Audio Dubber systems when the studios moved away from mag tape. That was more of an embedded design where you have very low level control over the dsps that drive the DACs. The sound libraries on PC are quite a different beast.
The issue of sync was always a huge problem because you need to emulate a shuttle wheel and lock multiple machines to an external SMPTE timecode. It is interesting to note that Tarantino's Jackie Brown was one of the first movies digitally dubbed and the first release had some bad audio sync problems due to a timecode bug. They fixed it later on, but sync has been a confusing problem for a long time.
- bartbes
- Sex machine
- Posts: 4946
- Joined: Fri Aug 29, 2008 10:35 am
- Location: The Netherlands
- Contact:
Re: Synchronising video and audio
I see you've all gone deeply technical, but is there a reason you can't just use Video:setSource?
Re: Synchronising video and audio
I didn't assume, when the OP said they were "generating video frames", that those frames came from a file, nor did I assume that about the audio. I imagined something like a dance game or a demoscene-like production.
- MachineCode
- Citizen
- Posts: 70
- Joined: Fri Jun 20, 2014 1:33 pm
Re: Synchronising video and audio
I am working on a type of media player that generates video and audio from command strings. The images can be still or animated, and each 1/60 s frame is a set of 16-bit opcodes that drive a primitive graphics draw engine, so a picture is made from a set of block fills, runs and pixel draws. Sound is synthesized by parametric generators or an array of points with interpolation. So it is outside the normal video stream tools; a bit like a software emulation of a shortwave radio with pictures. Lots of noise and image corruption.
The idea is that lots of effects with simple images become small fragments of code: drawing a rect block is 3 words (6 bytes). Using a completely proprietary codec means the stuff you see has a unique look and feel and can't be polluted by real-world JPEGs etc. I actually want a codec with limited colours and audio that doesn't sound perfect. Anyway, what I will end up with is frames and audio chunks generated locked to the 60 Hz video, and the audio will need to be fed to the sound library. Mostly the audio will be short, so sync will not be a problem, but it might be possible to encode a video clip, in which case sync slip would need to be fixed.
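A sketch of how those 60 Hz-locked chunks could be fed to LÖVE's queueable sources; synth is a hypothetical stand-in for the parametric generators described, and 44100 Hz is assumed because it divides evenly by 60:

```lua
local RATE = 44100
local SAMPLES_PER_FRAME = RATE / 60  -- 44100/60 = 735 samplepoints exactly
local qsource = love.audio.newQueueableSource(RATE, 16, 1, 8)

-- Called once per 60 Hz video frame, right after the draw opcodes run.
function renderFrameAudio()
    local sd = love.sound.newSoundData(SAMPLES_PER_FRAME, RATE, 16, 1)
    for i = 0, SAMPLES_PER_FRAME - 1 do
        sd:setSample(i, synth(i))    -- hypothetical parametric generator
    end
    qsource:queue(sd)
    qsource:play()
end
```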
Re: Synchronising video and audio
It underruns because 8 buffers of 5 ms length is 40 ms, which is about the minimum power-of-two (Po2) length that's reliably longer than OpenAL's ~20 ms internal update timing. If you clock OpenAL higher, or extend the buffer count limit, then you can use buffers as short as 1 sample. The CPU itself is perfectly adequate for real-time applications.
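Making that arithmetic explicit (44100 Hz assumed):

```lua
local RATE        = 44100
local NUM_BUFFERS = 8
local BUF_SECONDS = 0.005                      -- 5 ms per buffer
local queueDepth  = NUM_BUFFERS * BUF_SECONDS  -- = 0.040 s (40 ms queued)
local perBuffer   = RATE * BUF_SECONDS         -- = 220.5, so round to 221
print(("queued audio: %d ms, ~%.1f samplepoints per buffer")
      :format(queueDepth * 1000, perBuffer))
```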