Hey Everyone,
I had an idea for a project: an audio visualiser (in the style of MonsterCat, https://www.youtube.com/user/MonstercatMedia) that you put an mp4 into, and it draws the waveform on screen. Is this at all possible? Is there a way to do it with the standard Love2D API, or if not, is there a library I can install to do it?
Thanks!
Split audio wavelength
Re: Split audio wavelength
Looks like you want to build a spectrum analyzer. In software, this is typically done using a short-time Fourier transform or wavelet analysis. There is quite a bit of complicated math involved.
LÖVE currently does not include a library to do that, and I don't think you'd want to implement that in Lua yourself.
You can try your luck with FFTW via LuaJIT's FFI or this binding, but that library provides only the bare minimum (just FFT, no STFT).
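For what it's worth, here's a rough, untested sketch of what calling FFTW directly through LuaJIT's FFI could look like; the library name passed to ffi.load and the setup will vary by platform, and it only does a single real-to-complex transform:

Code:
local ffi = require("ffi")

-- Minimal declarations for FFTW's 1D real-to-complex transform.
ffi.cdef[[
typedef double fftw_complex[2];
typedef struct fftw_plan_s *fftw_plan;
fftw_plan fftw_plan_dft_r2c_1d(int n, double *in, fftw_complex *out, unsigned flags);
void fftw_execute(const fftw_plan p);
void fftw_destroy_plan(fftw_plan p);
]]

local fftw = ffi.load("fftw3")   -- e.g. libfftw3.so / libfftw3-3.dll, adjust for your system
local FFTW_ESTIMATE = 64         -- value of the FFTW_ESTIMATE planner flag

local N = 8
local input  = ffi.new("double[?]", N, { 1, 1, 1, 1, 0, 0, 0, 0 })
local output = ffi.new("fftw_complex[?]", N/2 + 1)   -- an r2c transform yields N/2+1 bins

local plan = fftw.fftw_plan_dft_r2c_1d(N, input, output, FFTW_ESTIMATE)
fftw.fftw_execute(plan)
for k = 0, N/2 do
	print(("bin %d: % .5f % .5fi"):format(k, output[k][0], output[k][1]))
end
fftw.fftw_destroy_plan(plan)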
Re: Split audio wavelength
I'm not sure this will help, but I couldn't resist translating the Rosetta Code versions of the Cooley-Tukey FFT.
Gist on GitHub if you want to fork it.
Code:
-- Complex numbers are stored as { real, imaginary } with a metatable for printing.
local complex = {
	__tostring = function(self) return ("(% .5f % .5fi)"):format(self[1], self[2]) end
}
local A = -2 * math.pi

local function C(t) return setmetatable(t, complex) end

-- Complex exponential: e^(a+bi) = e^a * (cos b + i sin b)
local function cexp(x)
	local er = math.exp(x[1])
	return C{ er*math.cos(x[2]), er*math.sin(x[2]) }
end

local function cmul(x, y) return C{ x[1]*y[1]-x[2]*y[2], x[1]*y[2]+x[2]*y[1] } end
local function cadd(x, y) return C{ x[1]+y[1], x[2]+y[2] } end
local function csub(x, y) return C{ x[1]-y[1], x[2]-y[2] } end

-- Split a list into its odd- and even-indexed elements (1-based, so the names look swapped).
local function slice(list)
	local even, odd = {}, {}
	for i = 1, #list, 2 do even[#even+1] = list[i] end
	for i = 2, #list, 2 do odd[#odd+1] = list[i] end
	return even, odd
end

-- Recursive radix-2 Cooley-Tukey FFT; #x must be a power of two.
local function FFT(x)
	local N, H = #x, math.floor(#x/2)
	local y = {}
	for i = 1, N do
		-- Promote plain numbers to complex values.
		y[i] = type(x[i]) ~= "table" and C{x[i], 0} or x[i]
	end
	if N <= 1 then return y end
	local evens, odds = slice(y)
	evens = FFT(evens)
	odds = FFT(odds)
	local results = {}
	for k = 1, H do
		local T = cexp{0, A*((k-1)/N)}  -- twiddle factor
		results[k] = cadd(evens[k], cmul(T, odds[k]))
		results[H+k] = csub(evens[k], cmul(T, odds[k]))
	end
	return results
end

-- Quick demo on a small fixed input.
local data = { 1, 1, 1, 1, 0, 0, 0, 0 }
local outp = FFT(data)
for i = 1, #data do
	print(tostring(outp[i]))
end
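One note in case it helps with the visualizer idea: the values the FFT returns are complex numbers, so for drawing bars you'd normally take the magnitude of each bin, math.sqrt(re*re + im*im), and since audio samples are real-valued, only the first half of the bins (1 to #data/2) carry distinct frequency information.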
Re: Split audio wavelength
That could be useful...
Where is the bit of code to import the file for the FFT, or is it just given some fixed numbers to work with?
Thanks!
Re: Split audio wavelength
I'm not sure about the mp4 part though; I do know LÖVE supports mp3s.
Also, I kinda did a waveform analyzer thing myself in LÖVE (though it's a bit more than that...), using lpghatguy's queuablesource lib from his microphone input project on GitHub.
Maybe it would work to just use [wiki]Source:tell[/wiki] to get the playback position in samples, keep a variable to work out the difference between two calls, and then use that interval to take a slice of the SoundData and run the FFT on it each update cycle. I'm pretty sure that would be less accurate than my solution, though, since what I push to the source is exactly what gets played, so the visualization won't have "gaps" in it.
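To make the idea concrete, here's a rough, untested sketch of that Source:tell approach (not the queueable-source thing I used). It assumes the FFT() function from the Cooley-Tukey post earlier in the thread is in scope, and "music.mp3" is just a placeholder file name:

Code:
local WINDOW = 1024         -- samples per FFT window; must be a power of two

local snd, src, spectrum
local channels, total       -- channel count and total interleaved sample count

function love.load()
	snd = love.sound.newSoundData("music.mp3")
	src = love.audio.newSource(snd)
	channels = snd:getChannels()
	total = snd:getSize() / (snd:getBitDepth() / 8)
	src:play()
end

function love.update(dt)
	local pos = src:tell("samples")             -- playback position in sample frames
	local window = {}
	for i = 0, WINDOW - 1 do
		local idx = (pos + i) * channels        -- index of the left-channel sample
		window[i + 1] = idx < total and snd:getSample(idx) or 0
	end
	spectrum = FFT(window)                      -- FFT() from the post above
end

function love.draw()
	if not spectrum then return end
	-- One bar per frequency bin; only the first WINDOW/2 bins are distinct for real input.
	local w = love.window.getWidth() / (WINDOW / 2)
	for k = 1, WINDOW / 2 do
		local re, im = spectrum[k][1], spectrum[k][2]
		local mag = math.sqrt(re*re + im*im) / (WINDOW / 2)   -- rough 0..1 normalisation
		local barh = mag * love.window.getHeight()
		love.graphics.rectangle("fill", (k-1)*w, love.window.getHeight() - barh, w, barh)
	end
end

A pure-Lua FFT over 1024 samples every frame isn't fast, so in practice you'd want a smaller window, LuaJIT-friendlier code, or the FFTW route mentioned earlier.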
Re: Split audio wavelength
Sorry, I put the wrong thing - I meant mp3!
Re: Split audio wavelength
You need to feed it the sound data. Reading the thread, I'm not sure you know how to obtain the sound data in love2d. Just in case: you need to load the sound with love.sound.newSoundData, not with love.audio.newSource. Then you can use the resulting SoundData both to get the sample data and pass it to the FFT, and to pass it to love.audio.newSource in order to be able to play it.
Here's a simple example that uses the data to roughly visualize the waveform (not the spectrum):
Code:
local snd, src
local lpts, rpts
local zoom_out = 200          -- samples per horizontal pixel
local samplerate
local playing = false         -- current playback position in samples, or false before start
local last_playing            -- sample position where the displayed slice ends
local total                   -- total interleaved sample count

-- SoundData:getSample errors on out-of-range indices, so clamp reads past the end to silence.
local function sample(i)
	if i < total then return snd:getSample(i) end
	return 0
end

function update_display(start)
	lpts, rpts = {}, {}
	local h1 = love.window.getHeight()
	local h2 = h1 * 3 / 4
	h1 = h1 / 4
	-- Left channel (even interleaved indices), drawn around the upper quarter of the window.
	for i = 1, love.window.getWidth() do
		lpts[i*2-1] = i-1
		lpts[i*2] = h1 - sample(math.floor(i*zoom_out+start)*2) * h1
	end
	-- Right channel (odd interleaved indices), drawn around the lower quarter.
	for i = 1, love.window.getWidth() do
		rpts[i*2-1] = i-1
		rpts[i*2] = h2 - sample(math.floor(i*zoom_out+start)*2+1) * h1
	end
end

function love.load(cmdlineargs)
	argv = cmdlineargs
	if #argv < 2 then
		error("Audio file name required in command line")
	end
	snd = love.sound.newSoundData(argv[2])
	src = love.audio.newSource(snd)
	-- Read snd:getChannels(). Here we skip that step and assume stereo.
	samplerate = snd:getSampleRate()
	total = snd:getSize() / (snd:getBitDepth() / 8)
end

function love.update(dt)
	if not playing then
		-- First update: start playback and display the first screenful of samples.
		playing = 0
		last_playing = love.window.getWidth()*zoom_out
		src:play()
		update_display(0)
	else
		-- Advance the playback counter; refresh the display when it runs past the shown slice.
		playing = playing + dt*samplerate
		if playing >= last_playing then
			update_display(last_playing)
			last_playing = last_playing + love.window.getWidth()*zoom_out
		end
	end
end

function love.draw()
	love.graphics.line(unpack(lpts))
	love.graphics.line(unpack(rpts))
	love.graphics.print(playing, 0, 0)
	-- Playback cursor sweeping across the currently displayed slice.
	local x = (playing-last_playing)/zoom_out + love.window.getWidth()
	if x <= love.window.getWidth() then
		love.graphics.line(x, 0, x, love.window.getHeight())
	end
end
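In case the command-line part isn't obvious: LÖVE hands the command line to love.load as a table, and if I remember right, launched unfused like "love . yoursong.mp3" the game folder ends up at index 1 and the file name at index 2, which is what the argv[2] above expects. Also note that love.sound.newSoundData goes through love.filesystem, so the audio file has to live inside the game folder (or the save directory) for it to be found.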