(Sorry in advance for the gigantic wall of text.)
HDPLocust wrote: ↑Fri Feb 14, 2020 5:01 pm
Thanks for the explanation. Currently, I have some problems with functional analysis, and I would love to read additional materials.
There are quite a few different fast discrete Fourier (or related) transform implementations, but they should be more or less equivalent in terms of producing near-identical output for the same input. (Granted, they still make assumptions about the input, which is why windowing -is- a must for accuracy.)
HDPLocust wrote: ↑Fri Feb 14, 2020 5:01 pm
In addition, this post is still promoting sound analysis stuff, so if someone is interested and wants to start doing better things, my work is partially done : )
Yeah, i have been working on such things for a few years now; i have been using this library, but it is not the best, so i have been trying to optimize it:
https://github.com/h4rm/luafft (Currently i can do a frequency resolution of 16k bins for a mono channel without slowdown)
HDPLocust wrote: ↑Fri Feb 14, 2020 5:01 pm
I can set buffer size...
...but it should be preprocessed (by extracting specific samples from it).
Whoops, my mistake on that one. :v That point of mine can be disregarded, then.
Still, i do not really agree with the second part, since calling :decode() already "extracts the specific samples" from the file itself...
HDPLocust wrote: ↑Fri Feb 14, 2020 5:01 pm
Windowing and every-byte-transformation it's for sound equalizers, not for visual stuff or realtime beat detection etc :<
It is for visual stuff too; if you don't window/envelope the input, a sonograph built from the FFT data will show vertical discontinuities, because each analyzed chunk wasn't tapered off to zero at both ends.
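To make the windowing point concrete, here is a minimal sketch (needs the LÖVE runtime; assumes a mono SoundData) of applying a Hann window to a copy of a buffer before handing it to the FFT:

```lua
-- Sketch: Hann-window a mono SoundData before FFT analysis.
-- Without this taper, each analysis frame starts/ends at nonzero
-- amplitude, which shows up as leakage (vertical streaks in a sonograph).
local function hannWindowed(sd)
  local n = sd:getSampleCount()
  local out = love.sound.newSoundData(n, sd:getSampleRate(), sd:getBitDepth(), 1)
  for i = 0, n - 1 do
    -- Hann coefficient: 0 at both ends of the frame, 1 in the middle.
    local w = 0.5 * (1 - math.cos(2 * math.pi * i / (n - 1)))
    out:setSample(i, sd:getSample(i) * w)
  end
  return out
end
```

Windowing a copy (rather than in place) matters here, because the same samples are also being queued for playback and shouldn't be modified.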
HDPLocust wrote: ↑Fri Feb 14, 2020 5:01 pm
Due to the fact that we work with frame rendering, non-hard-realtime Lua VM and probably lags (non-stable fps etc), we need to determine stuff exactly the time, not the byte offsets.
Yes, all consumer operating systems i know of (windows, osx, linux) are non-realtime. As for the rest: the exact reason for buffering based on samplepoint offsets (not necessarily byte offsets) is that we then know exactly how much data has been processed at any one time... granted, the code needs something like "if there are empty buffers in this QSource, then process and queue up the small SoundData buffer(s) we're using". I use this method in the tracker music replayer i wrote; it absolutely works.
HDPLocust wrote: ↑Fri Feb 14, 2020 5:01 pm
Also QueueableSource has big problems finding "current byte offset" position, so we should use independent byte/time counters.
If you were the one who made that bitbucket issue, i already answered there;
but to reiterate: you should not get the offset from the QSource itself, you should store it in a variable after you have processed a SoundData buffer.
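A minimal sketch of that "count it yourself" approach (needs the LÖVE runtime; the file name is a placeholder), where a running samplepoint counter is advanced each time another decoded buffer is queued:

```lua
-- Sketch: track the playback offset with your own counter instead of
-- asking the QueueableSource for a position.
local rate     = 44100
local qsource  = love.audio.newQueueableSource(rate, 16, 1, 8)
local decoder  = love.sound.newDecoder("music.ogg") -- placeholder file name
local samplesQueued = 0

local function updateAudio()
  -- Refill only the buffers the source has already consumed.
  while qsource:getFreeBufferCount() > 0 do
    local buffer = decoder:decode()
    if not buffer then break end -- end of stream
    qsource:queue(buffer)
    samplesQueued = samplesQueued + buffer:getSampleCount()
  end
  qsource:play()
  -- samplesQueued / rate == seconds of audio handed to the source so far.
end
```

Since you only ever advance the counter when you actually queue a buffer, the count stays exact regardless of frame timing.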
HDPLocust wrote: ↑Fri Feb 14, 2020 5:01 pm
Or we find ourselves too much dependent on the frame rate, and also have to put the only next frame data to the source with only one SoundData slot, so microlag will pause our composition.
The solution to that, of course, is to queue up at least as much data as is needed for one "frame" to happen. If you don't want to sacrifice vsync on the main thread, you can use a separate thread that doesn't have that limitation (in fact, my advanced source library works like that). There's no micro-lag pausing the processing in my tracker replayer either, so i don't see that issue in practice...
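A rough sketch of the threaded variant (needs the LÖVE runtime; file names are placeholders), where the decode/queue loop runs independently of the vsynced main loop:

```lua
-- main.lua: spawn an audio thread so queueing is decoupled from vsync.
love.thread.newThread("audiothread.lua"):start()

-- audiothread.lua (placeholder name): LÖVE threads must require the
-- modules they use before calling into them.
require("love.audio")
require("love.sound")
require("love.timer")

local qsource = love.audio.newQueueableSource(44100, 16, 1, 8)
local decoder = love.sound.newDecoder("music.ogg") -- placeholder file name
while true do
  while qsource:getFreeBufferCount() > 0 do
    local buffer = decoder:decode()
    if not buffer then return end -- end of stream, thread exits
    qsource:queue(buffer)
  end
  qsource:play()
  love.timer.sleep(0.002) -- yield instead of busy-waiting
end
```

Because the thread sleeps and refills on its own schedule, a dropped frame on the main thread doesn't starve the source of buffers.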
HDPLocust wrote: ↑Fri Feb 14, 2020 5:01 pm
(window moving will stop everything too)
That is a "windows is shit" thing though (dragging the window blocks the application's main loop), and as far as i know we can not do anything about it from our side...
HDPLocust wrote: ↑Fri Feb 14, 2020 5:01 pm
But you should see that parallel decoding with time depency in my demo is ok, the waveform represents the audible.
Yes; note that i did not say your method is wrong, only that there are other approaches that might work better depending on what one wants.
HDPLocust wrote: ↑Fri Feb 14, 2020 5:01 pm
I tried using this, and the best I could think of was something like that. And if you will take a window and held it until the current SoundData in queue is played, our system breaks:
<snip>
And I will be glad to know better solution.
I'm at work at the moment, so sadly i can not give a better solution right now, but i will try a few things when i get back home.
Also, i am working on a visualization library myself, so i will probably share progress on that soon.