pgimeno wrote: ↑Wed Mar 06, 2019 12:51 pm
unpack accepts an offset parameter. Wouldn't that obviate the need of slicing?
Ah, you're right, my bad. No slicing required then.
Where would you need the slicing/formatting/concatenation with unpack which you don't with other methods? Could you give an example?
After thinking about it for a bit, concatenation may not always be required, but...
Consider this simple structure:
Code: Select all
{
    uint16_t len;
    uint16_t data[len];
}
in a file with 1,000,000 records of this type.
With love.data.unpack:
Code: Select all
local data = "\x10\x0000112233445566778899aabbccddeeff"
local result = {}
for i = 1, 1e6 do
    local len = love.data.unpack('<H', data)
    result = { love.data.unpack('<' .. ('H'):rep(len), data, 3) }
    result[#result] = nil -- drop the extra return value (the position after the last read)
end
Runtime: 0.65s
But we can get rid of concatenation and also the table:
Code: Select all
local data = "\x10\x0000112233445566778899aabbccddeeff"
local result = {}
for i = 1, 1e6 do
    local len = love.data.unpack('<H', data)
    for j = 1, len do
        result[j] = love.data.unpack('<H', data, j * 2 + 1)
    end
end
Runtime: 1.39s
The concatenation is gone, but the loop now takes more than twice as long to complete. With more complex data structures and more calls to unpack, this overhead can quickly become significant.
moonblob:
Code: Select all
local data = "\x10\x0000112233445566778899aabbccddeeff"
local r = BlobReader(data, '<')
local result = {}
for i = 1, 1e6 do
    r:rewind():array('u16', r:u16(), result)
end
Runtime: 0.13s
moonblob is 5x-10x faster here, and imho the code is a lot more readable: it's more succinct, there's no fiddling with strings, and no cryptic format identifiers that you have to look up to understand their meaning (except for the eyesore that is the endianness specifier).
You'll probably come up with an ingenious solution using unpack that proves me utterly wrong.
I can't believe there's no way to tell unpack to parse n values at once.
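For what it's worth, the closest workaround I can think of is to cache the repeated format strings by element count, so each distinct record length pays the `rep`/concatenation cost only once. This is just a sketch (`arrayFormat` and `fmtCache` are names I made up, and I haven't benchmarked it against the versions above):

Code: Select all
-- Hypothetical helper: cache '<HH...H' format strings keyed by element
-- count, so the string concatenation happens once per distinct length.
local fmtCache = {}

local function arrayFormat(len)
    local fmt = fmtCache[len]
    if not fmt then
        fmt = '<' .. ('H'):rep(len)
        fmtCache[len] = fmt
    end
    return fmt
end

-- Usage, same shape as the first benchmark:
-- local len = love.data.unpack('<H', data)
-- result = { love.data.unpack(arrayFormat(len), data, 3) }

With many records sharing a handful of lengths this should amortize the concatenation away, though it still can't express "read n values" directly in a single static format string.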