I'd like to know if you can detect that another application is reading the memory of your own program using ReadProcessMemory.
My question is related to the fact that Blizzard's games are protected by an anticheat called Warden, which is able to detect cheats and bots that inject code into the game's memory.
I get how they can check whether code has been injected, but can they also detect when the memory is merely being read by another program?
In the context of detecting external hacks, which may use ReadProcessMemory(), an anticheat can scan all other processes using signatures and heuristics to judge whether something could be a cheat.
Internal hacks don't use ReadProcessMemory: they are injected into the game process and read or modify its memory directly. An anticheat can detect the injection technique itself, or look for suspicious memory allocations, to catch an internal cheat, as sketched below.
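For illustration, here is a minimal sketch in C of one such heuristic, run from inside the protected process: walk the address space and flag committed private memory that is executable, which is what most manually mapped or shellcode-style injections look like. Legitimate JIT compilers trip this too, so a hit is an indicator, not proof.

    /* Sketch: scan our own address space for executable private memory.
       Regions like this are typical of injected code, but JITs produce
       them legitimately, so treat matches as indicators only. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        MEMORY_BASIC_INFORMATION mbi;
        unsigned char *addr = NULL;

        while (VirtualQuery(addr, &mbi, sizeof mbi) == sizeof mbi) {
            DWORD exec = mbi.Protect & (PAGE_EXECUTE | PAGE_EXECUTE_READ |
                                        PAGE_EXECUTE_READWRITE | PAGE_EXECUTE_WRITECOPY);
            /* MEM_PRIVATE excludes memory backed by a module image or file. */
            if (mbi.State == MEM_COMMIT && mbi.Type == MEM_PRIVATE && exec)
                printf("suspicious region at %p, %zu bytes\n",
                       mbi.BaseAddress, mbi.RegionSize);
            addr = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
        }
        return 0;
    }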
An anticheat running as administrator can also get a list of all open process handles in the system and determine which processes have permission to interact with the game process. Code which can show you how that works is available here.
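Since the linked code isn't reproduced here, below is a rough sketch of the idea. Note that SystemHandleInformation is an undocumented information class, the structure layout is the commonly published one rather than an official API, and mapping a handle back to the specific game process (handle duplication and comparison) is omitted.

    /* Sketch: enumerate every handle in the system via the undocumented
       SystemHandleInformation class (16) and print handles held with
       PROCESS_VM_READ access. Link against ntdll.lib. */
    #include <windows.h>
    #include <winternl.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define STATUS_INFO_LENGTH_MISMATCH ((NTSTATUS)0xC0000004L)

    /* Widely published layout; OwnerPid is only 16 bits here, and the newer
       SystemExtendedHandleInformation class (64) avoids that truncation. */
    typedef struct {
        USHORT OwnerPid;
        USHORT CreatorBackTraceIndex;
        UCHAR  ObjectTypeIndex;
        UCHAR  HandleAttributes;
        USHORT HandleValue;
        PVOID  Object;
        ULONG  GrantedAccess;
    } SYSTEM_HANDLE_ENTRY_;

    typedef struct {
        ULONG NumberOfHandles;
        SYSTEM_HANDLE_ENTRY_ Handles[1];
    } SYSTEM_HANDLE_INFO_;

    int main(void)
    {
        ULONG size = 0x100000, needed = 0;
        SYSTEM_HANDLE_INFO_ *info = NULL;
        NTSTATUS status;
        DWORD myPid = GetCurrentProcessId();

        /* The required buffer size isn't known up front; grow until it fits. */
        do {
            free(info);
            info = malloc(size *= 2);
            status = NtQuerySystemInformation((SYSTEM_INFORMATION_CLASS)16,
                                              info, size, &needed);
        } while (status == STATUS_INFO_LENGTH_MISMATCH);

        if (status == 0) {
            for (ULONG i = 0; i < info->NumberOfHandles; i++) {
                SYSTEM_HANDLE_ENTRY_ *h = &info->Handles[i];
                /* 0x0010 == PROCESS_VM_READ when the handle refers to a
                   process; confirming the object type/target is left out. */
                if (h->OwnerPid != myPid && (h->GrantedAccess & 0x0010))
                    printf("pid %u holds handle 0x%X (access 0x%lX)\n",
                           h->OwnerPid, h->HandleValue, h->GrantedAccess);
            }
        }
        free(info);
        return 0;
    }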
A combination of these techniques only gives you an indicator of risk; further investigation is required to determine whether a process is legitimate or not.
Is there a possible way to stop/abort/terminate a required/loaded module?
I found something here (https://stackoverflow.com/a/6677355/5781499):
    var name = require.resolve('moduleName');
    delete require.cache[name];
But this does not stop/abort a running timer or similar.
It just keeps doing what the script does.
The reason I need this: I want to implement a plugin system where you can start and stop plugins.
"Starting" is easy: just load the code with require(...).
But what would be the best way to stop everything the plugin is doing?
I have thought about a VM, but in Node there is no way to abort a vm execution either.
Next thing that came to my mind was "Worker Threads". They provide a .terminate() method which does what I need. (But then I have to deal with inter-process communication, which makes keeping everything in sync very complex.)
Would be awesome if someone could give me a hint/tip.
Nodejs does not provide any feature to do what you want, so you will have to do a bunch of things manually. As you've discovered, deleting the module from the module cache only affects what happens if you try to load the code again; it does not affect the already loaded code at all.
If you're going to keep the plug-ins in the same process, then you can implement a required method in your plug-ins called something like "shutdown" where the plug-in shuts itself down manually (stops timers, unregisters event handlers, etc...). Implemented correctly, this should disconnect it entirely from anything in your nodejs program. If you then delete the module from the require cache, you can then load a new module in its place. The one downside to this is that nodejs does not ever unload the original code - it just stays in memory. If you're not accessing that original module handle, that code never gets used again, but it isn't freed or GCed by nodejs.
A somewhat more robust system would be to put each plug-in in its own child process or worker thread and just communicate with it via the built-in interprocess communication between parent and child process, which is essentially just messaging. As long as you don't have to send large amounts of data between parent and child/worker or need super high bandwidth, the messaging is pretty simple to use and works well.
If using a separate child process, you can then kill the child process at any time and the OS will reclaim all resources used by the process (not quite so true for a worker thread). This has its own downsides in that it will likely use a lot more memory, since a whole new nodejs process or worker thread is a much heavier-weight thing than just loading a single module into your existing nodejs process.
Running it in a child process has the advantage that your main process is much more protected from errant code in the plug-in (either accidental or malicious) since they are different processes and the plug-in can't directly mess with the parent process. But, don't fool yourself here, unless you run it in a sandboxed VM, the plug-in can still wreak some havoc on the system since it has access to many resources on the system (disk, network, other peripherals, etc...).
I want to build a small web application in Rust which should be able to read and write files on a user's behalf. The user should authenticate with their UNIX credentials and then be able to read / write only the files they have access to.
My first idea, which would also seem the most secure to me, would be to switch the user-context of an application thread and do all the read/write-stuff there. Is this possible?
If this is possible, what would the performance look like? I would assume spawning an operating system thread every time a request comes in could have a very high overhead. Is there a better way to do this?
I really wouldn't like to run my entire application as root and check the permissions manually.
On GNU/Linux, it is not possible to switch UID and GID just for a single thread of a process. The Linux kernel maintains per-thread credentials, but POSIX requires a single set of credentials per process: POSIX setuid must change the UID of all threads or none. glibc goes to great lengths to emulate the POSIX behavior, although that is quite difficult.
You would have to create a completely new process for each request, not just a new thread. Process creation is quite cheap on Linux, but it could still be a performance problem. You could keep a pool of processes around to avoid the overhead of repeated process creation. On the other hand, many years ago, lots of web sites (including some fairly large ones) used CGI to generate web pages, and you can get relatively far with a simple design.
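A minimal sketch of that worker-process pattern, in C for brevity (from Rust the same calls are reachable through the libc or nix crates); the uid, gid and path below are hypothetical, and the parent needs root or CAP_SETUID/CAP_SETGID for the credential switch to succeed:

    /* Sketch: fork a worker per request, drop to the authenticated
       user's credentials in the child, then access files so the kernel
       enforces that user's permissions. */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <grp.h>
    #include <sys/wait.h>

    static int handle_request(uid_t uid, gid_t gid, const char *path)
    {
        pid_t pid = fork();
        if (pid == 0) {
            /* Order matters: drop supplementary groups and the GID while
               still privileged, and the UID last. */
            if (setgroups(0, NULL) != 0 || setgid(gid) != 0 || setuid(uid) != 0)
                _exit(1);

            int fd = open(path, O_RDONLY);  /* checked against uid's permissions */
            if (fd < 0)
                _exit(2);
            /* ... read the file and stream it back over a pipe ... */
            close(fd);
            _exit(0);
        }
        int status;
        waitpid(pid, &status, 0);
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }

    int main(void)
    {
        /* Hypothetical request: act as uid/gid 1000 on a world-readable file. */
        printf("worker exited with %d\n", handle_request(1000, 1000, "/etc/hostname"));
        return 0;
    }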
I think @Florian got this backwards in his original answer. man 2 setuid says:
    C library/kernel differences
    At the kernel level, user IDs and group IDs are a per-thread attribute. However, POSIX requires that all threads in a process share the same credentials. The NPTL threading implementation handles the POSIX requirements by providing wrapper functions for the various system calls that change process UIDs and GIDs. These wrapper functions (including the one for setuid()) employ a signal-based technique to ensure that when one thread changes credentials, all of the other threads in the process also change their credentials. For details, see nptl(7).
Since libc does the signal dance to apply the change to the whole process, you will have to make direct system calls to bypass that.
Note that this is Linux-specific. Most other Unix variants do seem to follow POSIX at the kernel level instead of emulating it in libc.
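A minimal sketch of that, assuming a 64-bit Linux target and a process started as root; this is deliberately non-POSIX, and running threads of one process under different UIDs behind glibc's back is easy to get wrong, so treat it as a demonstration:

    /* Sketch: change the UID of a single thread by trapping straight
       into the kernel with syscall(2), bypassing glibc's setuid()
       wrapper and its all-threads signal broadcast. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <pthread.h>
    #include <sys/syscall.h>

    static void *worker(void *arg)
    {
        uid_t target = *(uid_t *)arg;

        if (syscall(SYS_setuid, target) != 0) {   /* affects this thread only */
            perror("SYS_setuid");
            return NULL;
        }
        printf("worker thread uid: %d\n", (int)getuid());
        /* ... do file I/O as the target user here ... */
        return NULL;
    }

    int main(void)
    {
        uid_t target = 1000;   /* hypothetical unprivileged user */
        pthread_t t;

        pthread_create(&t, NULL, worker, &target);
        pthread_join(&t, NULL);
        printf("main thread uid:   %d\n", (int)getuid());   /* still root */
        return 0;
    }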
Hi.
I am working on an experiment allowing users to use 1% of my CPU. It's like having your own web server, except here it's a big dynamic remote-execution framework (don't ask about that), and I don't want users to be able to call API functions: no creating files, no sockets, no threads, no console output, nothing.
Update 1: People will be sending me binaries, so a raw int 0x80 system call is possible. Therefore... the kernel?
I need to limit a process so it cannot do anything but use a single pipe. Through that pipe the process will use my own wrapped and controlled API.
Is that even possible? I was thinking of something like a Linux kernel module.
The issues of limiting RAM and CPU are not the primary concern here; there is material on Google for that.
Thanks in advance!
The ptrace facility will allow your program to observe and control the operation of another process. Using the PTRACE_SYSCALL request, you can stop the child process before every syscall and decide whether you want to allow that system call to proceed.
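A minimal sketch of that loop, assuming x86-64 Linux (register names differ on other architectures); a real tracer must also distinguish syscall-entry from syscall-exit stops and follow forks, which is glossed over here:

    /* Sketch: stop a child at every syscall boundary with PTRACE_SYSCALL
       and veto calls by overwriting the syscall number with -1, which
       makes the kernel fail the call with ENOSYS. */
    #include <unistd.h>
    #include <sys/ptrace.h>
    #include <sys/user.h>
    #include <sys/wait.h>
    #include <sys/syscall.h>

    int main(void)
    {
        pid_t child = fork();
        if (child == 0) {
            ptrace(PTRACE_TRACEME, 0, NULL, NULL);
            execl("/bin/ls", "ls", (char *)NULL);   /* untrusted program goes here */
            _exit(1);
        }

        int status;
        waitpid(child, &status, 0);                 /* initial stop after exec */
        while (!WIFEXITED(status)) {
            ptrace(PTRACE_SYSCALL, child, NULL, NULL);
            waitpid(child, &status, 0);             /* fires on entry AND exit */
            if (WIFEXITED(status))
                break;

            struct user_regs_struct regs;
            ptrace(PTRACE_GETREGS, child, NULL, &regs);
            if (regs.orig_rax == SYS_socket) {      /* example policy: no sockets */
                regs.orig_rax = (unsigned long long)-1;
                ptrace(PTRACE_SETREGS, child, NULL, &regs);
            }
        }
        return 0;
    }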
You might want to look at what Google is doing with their Native Client technology and the seccomp sandbox. The Native Client (NaCl) stuff is intended to let x86 binaries supplied by a web site run inside a user's local browser. The problem of malicious binaries is similar to what you face, so most of the technology/research probably applies directly.
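The original seccomp "strict mode" maps almost one-to-one onto the requirement in the question: after the prctl() call below, the process can only use read(2), write(2), _exit(2) and sigreturn(2); any other syscall kills it with SIGKILL. A minimal sketch:

    /* Sketch: confine the current process to read/write on already-open
       file descriptors using seccomp strict mode. */
    #define _GNU_SOURCE
    #include <unistd.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <linux/seccomp.h>

    int main(void)
    {
        /* Create the communication pipe *before* locking down:
           pipe(2) itself is forbidden afterwards. */
        int fds[2];
        if (pipe(fds) != 0)
            return 1;

        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0)
            return 1;

        /* From here on, only read/write/_exit/sigreturn are allowed. */
        const char msg[] = "hello from the sandbox\n";
        write(fds[1], msg, sizeof msg - 1);

        char buf[64];
        ssize_t n = read(fds[0], buf, sizeof buf);
        if (n > 0)
            write(STDOUT_FILENO, buf, (size_t)n);

        /* glibc's _exit() uses exit_group(2), which strict mode forbids,
           so invoke exit(2) directly. */
        syscall(SYS_exit, 0);
        return 0;   /* not reached */
    }

If strict mode is too blunt, seccomp's filter mode (seccomp-bpf) lets you whitelist individual system calls instead.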
My question is what the title says. Can I run a remote thread without being blocked by some antivirus applications?
ReadProcessMemory is slow, so I need to inject my own code into the process and have it read that process's memory from the inside.
Whether or not anti-virus software is running should not affect this. You'll need elevated rights, but ReadProcessMemory requires those anyway.
One way is to ask that process somehow to load your code. If you have access to its source code, you can add an IPC interface for that. If that program has a plugin/addon interface, consider writing a plugin which will contain such an interface.
On Windows, you can try the SetWindowsHookEx API. It is a more common operation than injecting a thread, so maybe AVs will ignore you this time.
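A hedged sketch of that approach; the window title, DLL name and exported hook procedure below are all hypothetical placeholders:

    /* Sketch: SetWindowsHookEx-based loading. Installing a WH_GETMESSAGE
       hook whose procedure lives in a DLL makes Windows map that DLL
       into the process owning the hooked thread. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HWND hwnd = FindWindowA(NULL, "Target Window Title");   /* hypothetical */
        if (!hwnd)
            return 1;

        DWORD tid = GetWindowThreadProcessId(hwnd, NULL);

        HMODULE dll = LoadLibraryA("payload.dll");              /* hypothetical */
        HOOKPROC proc = dll ? (HOOKPROC)GetProcAddress(dll, "HookProc") : NULL;
        if (!proc)
            return 1;

        /* The hook procedure must be exported by the DLL so it can be
           resolved inside the target process as well. */
        HHOOK hook = SetWindowsHookExA(WH_GETMESSAGE, proc, dll, tid);
        if (!hook) {
            printf("SetWindowsHookEx failed: %lu\n", GetLastError());
            return 1;
        }

        /* Nudge the target to process a message so the hook fires and
           the DLL actually gets mapped. */
        PostThreadMessageA(tid, WM_NULL, 0, 0);

        /* ... do work, then clean up ... */
        UnhookWindowsHookEx(hook);
        return 0;
    }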
Or you can ask users to add the program to AV's exclusion list.
Otherwise, there is no way to inject into a foreign process without looking suspicious. You're trying to do what most malware wants to do, yet without being detected; how do you think any good AV could allow that?
My VPS account has been occasionally running out of memory. It's using Apache on Linux. Support says it's a slow memory leak and has set MaxRequestsPerChild to deal with it.
I have a few questions about this. When a child process dies, will it cause my scripts to lose session data? Does anyone have advice on how I can track down this memory leak?
Thanks
No, when a child process dies you will not lose any data unless it was in the middle of a request at the time (which should not happen if it exits due to MaxRequestsPerChild).
You should try to reproduce the memory leak using an identical software stack on your test system. You can use tools such as Valgrind to try to detect it.
You can also try a debug build of your web server and its modules, which will enable you to detect what's going on.
It's difficult to reproduce the behaviour of production systems in non-production ones. If you have auto-test coverage of your web application, you could try running your full auto-test suite, but in practice it is unlikely to cover every code path and may therefore miss the leaky one.
When a child process dies, will it cause my scripts to lose session data?
Without knowing what scripting language and session handler you are using (and the actual code), it's rather hard to say.
In most cases, with scripting languages running in modules or via [Fast]CGI, it's very unlikely that the session data would actually be lost, although if the process dies in the middle of processing a request it may not get the chance to write the updated session back to whatever is storing the session. And in the very unlikely event that it dies during the write-back, it may corrupt the session data. These are quite exceptional circumstances.
OTOH, if your application logic is implemented via a daemon (e.g. a Java container), then it's quite probable that memory leaks could accumulate (although these would be reported against a different process).
Note that if the problem is alleviated by setting MaxRequestsPerChild, then it implies that the leak is occurring in an Apache module.
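As a stopgap while you investigate, capping requests per worker in the prefork MPM looks something like this (the value is illustrative; Apache 2.4 renamed the directive MaxConnectionsPerChild but keeps the old name as an alias):

    # Recycle each worker after 1000 requests so a slow per-request
    # leak cannot accumulate indefinitely.
    <IfModule mpm_prefork_module>
        MaxRequestsPerChild 1000
    </IfModule>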
The production releases of Apache itself are, in my experience, very stable and free of memory leaks. However, I've not used all the modules. I'm not sure whether ExtendedStatus gives a breakdown of memory usage by module; it might be worth checking.
I've previously seen problems with the memory management of libraries loaded by the PHP module not respecting PHP's memory limits; these did clear down at the end of the request, though.
C.