Hello,
I'm working on a port of a game that has a vital add-on community,
so not all of the content being executed can be trusted.
I wonder what Löve does to protect the user's system from potentially malicious Lua code?
Please note the discussion in the game's forum:
https://forums.wesnoth.org/viewtopic.ph ... 40#p658240
'untrusted' code
Re: 'untrusted' code
LÖVE does nothing to protect the user's system from malicious code. You can sandbox code with setfenv, but that may not be very practical for modding, and it's only superficially secure: it's no defense against JIT exploits or other kinds of exploits.
There is always a risk involved when running untrusted code. Clever sandboxing and not allowing bytecode payloads keeps script kiddies away. It's the best you can do with reasonable effort.
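A minimal sketch of the "not allowing bytecode payloads" part, assuming the Lua 5.1/LuaJIT dialect that LÖVE uses; the helper name load_addon_source is made up for this example:
Code:
local function load_addon_source(src, chunkname)
    -- Precompiled Lua 5.1/LuaJIT chunks start with the ESC byte (0x1B, "\27"),
    -- so refuse anything that is not plain source text.
    if src:byte(1) == 27 then
        return nil, "precompiled bytecode is not allowed in add-ons"
    end
    return loadstring(src, chunkname)
end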
- slime
Re: 'untrusted' code
Using setfenv with carefully selected functions to expose (without reducing functionality so much that the possible types of add-ons are significantly limited), plus preventing add-ons from loading bytecode, has worked pretty well for Vendetta Online, an MMOFPS that has had client-side Lua add-on support for about 13 years now.
That being said, I'm sure varying degrees of malicious code are still possible to create for Vendetta Online.
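A rough, self-contained sketch of that approach for Lua 5.1/LuaJIT: reject bytecode, compile the add-on, then confine it with setfenv to a hand-picked environment. The function name run_addon and the tiny whitelist below are illustrative only, not a vetted list of safe functions:
Code:
local function run_addon(src, chunkname)
    if src:byte(1) == 27 then                -- bytecode starts with ESC (0x1B)
        return false, "bytecode is not allowed"
    end
    local chunk, err = loadstring(src, chunkname)
    if not chunk then return false, err end

    local env = {
        print = print, pairs = pairs, ipairs = ipairs, type = type,
        tostring = tostring, tonumber = tonumber,
        math = math, string = string,        -- a real sandbox should hand out copies
    }
    env._G = env
    setfenv(chunk, env)                      -- the add-on sees only what env exposes
    return pcall(chunk)
end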
Re: 'untrusted' code
If you want to protect against DoS (infinite or very long loops, memory exhaustion), it's hell. I'd advise you to look into a different language or framework.
If protecting against DoS is not a requirement, it's still hell, but not as much. Besides blocking bytecode, you need to expose only safe things (better to use a whitelist than a blacklist). This might be safe, not sure:
print, pcall, xpcall, next, pairs, ipairs, tostring, tonumber, table, math, string, coroutine, assert, error, unpack, select, type, getmetatable, setmetatable
Every module (table, math, string, coroutine) should be a copy of the main one, never the original. Note that string functions are an easy way to DoS.
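As a small illustration of how easily plain string functions can be abused (the numbers are arbitrary):
Code:
-- Memory exhaustion: tries to allocate a string of roughly 1 GiB.
local bomb = ("x"):rep(2 ^ 30)
-- CPU exhaustion: patterns with several lazy '-' repetitions can be made to
-- backtrack for a very long time on crafted input.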
One subtlety is that attackers can obtain the metatable of a string, and modify the main string table through it, making your code use any strings that the attacker wants in place of the strings you expect. An example where this is especially dangerous is if your (non-sandboxed) code uses os.execute.
To protect against that, you can pass a modified getmetatable that returns the sandbox's string table instead of the real one. For example:
Code:
-- Shallow-copy a table so the sandbox never shares the real library tables.
local function copy(x)
    local ret = {}
    for k, v in next, x do
        ret[k] = v
    end
    return ret
end

local function new_sandboxed_env()
    -- Copies of the standard modules; never hand out the originals.
    local safe_string = copy(string)
    local safe_table = copy(table)
    local safe_math = copy(math)
    local safe_coroutine = copy(coroutine)

    local env = {
        print = print, pcall = pcall, xpcall = xpcall, next = next,
        pairs = pairs, ipairs = ipairs, tostring = tostring,
        tonumber = tonumber, assert = assert, error = error,
        unpack = unpack, select = select, type = type,
        setmetatable = setmetatable,
        table = safe_table, math = safe_math,
        string = safe_string, coroutine = safe_coroutine,
    }
    env._G = env

    local _G = _G
    -- Wrap getmetatable so that asking for a string's metatable yields the
    -- sandbox's copy of the string library rather than the real one.
    env.getmetatable = function(x)
        local ret = _G.getmetatable(x)
        if ret == _G.string then
            return safe_string
        end
        return ret
    end

    return env
end
Edit: Sorry, that won't work. The client can still do ("").fn = rogue_fn. You need to swap the metatable of strings when calling the sandboxed function, and swap it back when it returns. This is done with debug.setmetatable (the normal setmetatable does not allow changing the metatable of strings).
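As an illustration of that fix, here is a minimal sketch that wraps a sandboxed call with the metatable swap, reusing new_sandboxed_env from the code above; the name run_sandboxed and the surrounding load/pcall plumbing are just for the example and carry all the caveats discussed in this thread:
Code:
local function run_sandboxed(untrusted_src)
    if untrusted_src:byte(1) == 27 then      -- refuse bytecode (ESC header)
        return false, "bytecode is not allowed"
    end
    local chunk, err = loadstring(untrusted_src, "=addon")
    if not chunk then return false, err end

    local env = new_sandboxed_env()
    setfenv(chunk, env)

    -- Swap the shared string metatable so ("x"):method() resolves through the
    -- sandbox's copy of the string library while the untrusted code runs.
    local real_mt = debug.getmetatable("")
    debug.setmetatable("", { __index = env.string })
    local ok, result = pcall(chunk)
    debug.setmetatable("", real_mt)          -- always restore it afterwards

    return ok, result
end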
Last edited by pgimeno on Tue Sep 15, 2020 12:14 pm, edited 1 time in total.
- zorg
Re: 'untrusted' code
I've read the linked discussion on the other forum. Quoting from that thread:
"This sounds like a discussion that needs to be had with Löve's C++ developers. EDIT: furthermore, if you want to implement a server-client model where the actual code is run on the server (unlike current Wesnoth, where UMC code is only executed on the client), you need some operating-system-specific code to limit a game's resource usage. Once you have that, implementing full OS-level sandboxing via mechanisms like seccomp-bpf (on Linux; I didn't look up the equivalents on other OSes) might be a rather small step."
"I will start a discussion on their forums and drop a link to it here."
I agree that a centralized server can indeed be beneficial to thwarting cheating, but I don't see how that has anything to do with guaranteeing that the clients don't softlock the computers they run on.
For one, the basic client without mods should already not do that; dev responsibility covers that part.
The other thing is: if a mod does contain code that can lock up the game, then from the viewpoint of the server and the other clients that only means that this one client is no longer in the game, and its connection should be terminated. It's not your fault, or your job, to fix an add-on that misbehaves.
For those reasons, no system-specific code is needed to limit the game's resource usage. Then again, it does make sense to do that purely because untrusted code is being run. Some things can be prevented by not allowing them through environment shenanigans, as others mentioned previously; some can't. Although, if FFI is needed, as some said in the other forum's thread, a wrapper could probably be written for most needs that limits memory allocations to a set amount, and so on.
Now, pgimeno mentioned DoS attacks. If we're only talking about the worry that a client could DoS the server: for one, the server can detect whether a client is sending it way more data than it should and, above a threshold, kill the connection and even blacklist the IP (though this can be worked around in many cases); second, services like Cloudflare exist now that also reroute suspected DoS connections to nowhere.
Edit: Indeed, I was only talking about scenario #2.
Last edited by zorg on Tue Sep 15, 2020 12:19 pm, edited 3 times in total.
Re: 'untrusted' code
zorg wrote: ↑Tue Sep 15, 2020 9:54 am
I agree that a centralized server can indeed be beneficial to thwarting cheating, but I don't see how that has anything to do with guaranteeing that the clients don't softlock the computers they run on.
[...]
Now, pgimeno mentioned DoS attacks. If we're only talking about the worry that a client could DoS the server: for one, the server can detect whether a client is sending it way more data than it should and, above a threshold, kill the connection and even blacklist the IP (though this can be worked around in many cases); second, services like Cloudflare exist now that also reroute suspected DoS connections to nowhere.

This is not about cheating, nor about DDoS (distributed denial of service) against the network. It's about security against unauthorized access, and about DoS (denial of service) through crashing the program or making it unresponsive. The real concerns here are these scenarios:
1. Server-side rogue mods that can attack the server's computer (destroy the HD, take it over for spam, etc.).
2. Client-installed mods that run in the client and can attack the client.
3. Client-sent mods that run in the server or in other clients, in which case they can attack the server or the other clients.
4. Server-sent mods that run in the client, potentially sent from a rogue server, which can attack the clients.
Scenario 2 is similar to scenario 1.
Scenario 3 would be a major concern if it is possible. In that case, DoS (crashing or hanging the server with Lua code) becomes a serious threat that must be accounted for, because then a client can force the server to crash at will, denying the service to all other clients and potentially generating a bad reputation (unfairly) for the server.
Scenario 4 is where sandboxing makes the most sense. DoS (hanging or crashing the client) is not really a concern; a rogue server can crash a client, no biggie: it could just as well have said a harsh word and forcibly closed the connection, and the result would be similar. It's not something to protect against. But attacks on the client must be guarded against.
So I assume the desired protection is about either 1 or 4. Protecting against 1 is difficult; protecting against 4 implies restricting the possible actions to a limited subset. I would not export any Löve functions to start with, or maybe only on a case-by-case basis and with extra care; for example, any objects that expose an FFI pointer (like everything derived from Data, which covers several Löve object types) have the potential to be exploited.
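To make the "only on a case-by-case basis and with extra care" idea concrete, here is a purely hypothetical sketch of handing a tiny slice of LÖVE to the sandbox environment; the selection is an illustration of the principle, not a list that has been audited for safety:
Code:
local function add_love_subset(env)
    -- Deliberately omit anything that reaches the filesystem, raw memory
    -- (Data objects and their FFI pointers), os, io or the ffi library.
    env.love = {
        graphics = {
            print     = love.graphics.print,
            rectangle = love.graphics.rectangle,
            setColor  = love.graphics.setColor,
        },
        timer = {
            getTime = love.timer.getTime,
        },
    }
    return env
end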
PS. I've edited my other post because I made a fatal mistake.
Re: 'untrusted' code
Many thanks to all participants.
Yes, it seems now, sooner or later, the server must move from Löve to another framework.
Most likely the best solution is to come up with another C++ host that links to the love2d library to provide the filesystem features (savedir, mounting zips, etc.).
Fortunately, the client does not need to be modded,
so it can stay on Löve (with which I am already in love).
Regards, Fabi
Re: 'untrusted' code
If that's really all you need to use from Löve, you can just use the PHYSFS library directly from C++, which is the library that Löve abstracts as love.filesystem.
Re: 'untrusted' code
zorg wrote: ↑Tue Sep 15, 2020 9:54 am
I've read the linked discussion on the other forum. Quoting from that thread:
"This sounds like a discussion that needs to be had with Löve's C++ developers. EDIT: furthermore, if you want to implement a server-client model where the actual code is run on the server [...] you need some operating-system-specific code to limit a game's resource usage. [...]"
[...]
For those reasons, no system-specific code is needed to limit the game's resource usage. [...]
Edit: Indeed, I was only talking about scenario #2.
So I'm the one who wrote the first message quoted there.
The reason why running UMC (user-made content) code on the server would require limiting its CPU usage is simply that the server shouldn't become unresponsive (probably for many players, if the server hosts multiple games) when one UMC author writes bad code. If the UMC code runs on the client (as non-Löve Wesnoth does), this is less of a problem: first, it only makes that player's client unresponsive; second, there is a player in front of the PC who can, in the worst case, simply kill the application.
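The discussion above is about OS-level limits; as a much weaker, Lua-level mitigation that is not proposed anywhere in this thread, an instruction-count hook can at least abort a chunk that runs for too long. A hedged sketch; note that under LuaJIT, hooks force interpretation and JIT-compiled code may not trigger them reliably:
Code:
local function run_with_budget(chunk, max_instructions)
    -- Call the hook after every max_instructions VM instructions and abort.
    debug.sethook(function()
        error("instruction budget exceeded", 2)
    end, "", max_instructions)
    local ok, err = pcall(chunk)
    debug.sethook()                          -- remove the hook again
    return ok, err
end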
- zorg
Re: 'untrusted' code
Again, my assumption was about clients running (client-side) code/add-ons locally only; since I don't know exactly how the game works, I didn't envision the possibility of user content being executed on the server.