My question is what the title says. Can I run a remote thread without being blocked by some antivirus applications?
ReadProcessMemory is slow, so I need to inject my own code into the process and read its memory from the inside.
Whether or not anti-virus software is running should not affect this. You'll need elevated rights, but ReadProcessMemory requires those anyway.
One way is to ask that process somehow to load your code. If you have access to its source code, you can add an IPC interface for that. If the program has a plugin/add-on interface, consider writing a plugin that contains such an interface.
On Windows, you can try the SetWindowsHookEx API. Installing a hook is a more common operation than injecting a remote thread, so AVs may ignore you this time.
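For illustration, here is a minimal, hedged sketch of the SetWindowsHookEx approach; the window title, DLL name and exported hook procedure ("payload.dll", "GetMsgProc") are placeholders, not real names from your project:

    #include <windows.h>

    int main(void)
    {
        /* Find the target's GUI thread (the window title is hypothetical). */
        HWND target = FindWindowW(NULL, L"Target Window Title");
        DWORD tid = GetWindowThreadProcessId(target, NULL);

        /* The hook procedure must live in a DLL so the system can map that DLL
           into the target process; "payload.dll" and "GetMsgProc" are made up. */
        HMODULE dll = LoadLibraryW(L"payload.dll");
        HOOKPROC proc = dll ? (HOOKPROC)GetProcAddress(dll, "GetMsgProc") : NULL;
        if (!proc)
            return 1;

        HHOOK hook = SetWindowsHookExW(WH_GETMESSAGE, proc, dll, tid);
        if (!hook)
            return 1;

        /* The DLL is mapped the next time that thread processes a message;
           keep the hook installed while the code inside the DLL does its work. */
        Sleep(10000);

        UnhookWindowsHookEx(hook);
        return 0;
    }

The hook procedure itself lives in the DLL, should call CallNextHookEx, and can read the process memory from the inside once the DLL has been loaded.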
Or you can ask users to add your program to their AV's exclusion list.
Otherwise, there is no way to inject into a foreign process without looking suspicious. You are trying to do what most malware wants to do, yet without being detected; how do you think any good AV could allow that?
I want to build a small web application in Rust which should be able to read and write files on a user's behalf. The user should authenticate with their UNIX credentials and then be able to read/write only the files they have access to.
My first idea, which also seems the most secure to me, would be to switch the user context of an application thread and do all the reading and writing there. Is this possible?
If this is possible, what would the performance look like? I would assume spawning an operating system thread every time a request comes in could have a very high overhead. Is there a better way to do this?
I really wouldn't like to run my entire application as root and check the permissions manually.
On GNU/Linux, it is not possible to switch UID and GID just for a single thread of a process. The Linux kernel maintains per-thread credentials, but POSIX requires a single set of credentials per process: POSIX setuid must change the UID of all threads or none. glibc goes to great lengths to emulate the POSIX behavior, although that is quite difficult.
You would have to create a completely new process for each request, not just a new thread. Process creation is quite cheap on Linux, but it could still be a performance problem. You could keep a pool of processes around to avoid the overhead of repeated process creation. On the other hand, many years ago, lots of web sites (including some fairly large ones) used CGI to generate web pages, and you can get relatively far with a simple design.
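As a rough sketch of the process-per-request idea (assuming a privileged parent and that the UID/GID come from your authentication step), the parent forks, the child permanently drops privileges, and only then touches the file:

    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <grp.h>

    /* Run one piece of work as the given user; uid/gid are hypothetical
       values produced by your authentication code. Needs root to succeed. */
    static int read_as_user(uid_t uid, gid_t gid, const char *path)
    {
        pid_t pid = fork();
        if (pid < 0)
            return -1;

        if (pid == 0) {                      /* child: drop privileges, then read */
            if (setgroups(0, NULL) != 0 ||   /* clear supplementary groups        */
                setgid(gid) != 0 ||          /* group first, while still root     */
                setuid(uid) != 0)            /* then user; this is irreversible   */
                _exit(126);

            execlp("cat", "cat", path, (char *)NULL);  /* stand-in for real work */
            _exit(127);
        }

        int status = 0;                      /* parent: wait for the worker */
        waitpid(pid, &status, 0);
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }

    int main(void)
    {
        return read_as_user(1000, 1000, "/home/alice/notes.txt");  /* example IDs */
    }

A process pool would keep workers like this around instead of forking per request, but the privilege-dropping sequence stays the same.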
I think @Florian got this backwards in his original answer. man 2 setuid says:
C library/kernel differences

At the kernel level, user IDs and group IDs are a per-thread attribute. However, POSIX requires that all threads in a process share the same credentials. The NPTL threading implementation handles the POSIX requirements by providing wrapper functions for the various system calls that change process UIDs and GIDs. These wrapper functions (including the one for setuid()) employ a signal-based technique to ensure that when one thread changes credentials, all of the other threads in the process also change their credentials. For details, see nptl(7).
Since libc does the signal dance to apply the change to the whole process, you will have to make direct system calls to bypass that (see the sketch below).
Note that this is Linux-specific. Most other Unix variants seem to follow POSIX at the kernel level instead of emulating it in libc.
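A minimal, Linux-only sketch of the direct-syscall approach, assuming the process starts with CAP_SETUID and a 64-bit kernel; only the calling thread's credentials change, which is exactly the non-POSIX kernel behaviour described above:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    /* Call the raw setgid/setuid syscalls so glibc's signal-based wrappers
       (see nptl(7)) are bypassed and only this thread changes credentials. */
    static int set_thread_ids(uid_t uid, gid_t gid)
    {
        if (syscall(SYS_setgid, gid) != 0)   /* group first, while still privileged */
            return -1;
        if (syscall(SYS_setuid, uid) != 0)   /* this thread only; others keep root  */
            return -1;
        return 0;
    }

    int main(void)
    {
        if (set_thread_ids(1000, 1000) != 0) {   /* example IDs; needs CAP_SETUID */
            perror("set_thread_ids");
            return 1;
        }
        printf("this thread now runs as uid %d\n", (int)getuid());
        return 0;
    }
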
I'd like to know if you can detect that another application is reading the memory of your own program using ReadProcessMemory.
My question is related to the fact that Blizzard's games are protected by a "Warden" that is able to detect cheats and bots that inject code into the game's memory.
I understand how they can check whether something has been injected, but can they also detect when the memory is only being read by another program?
In the context of detecting external hacks, which may use ReadProcessMemory(), an anticheat can scan all running processes using signatures and heuristics to decide whether something could be a cheat.
Internal hacks don't use ReadProcessMemory; they are injected into the game process and can read or modify its memory directly. An anticheat can detect the injection methods themselves, or any unexpected memory allocation, to catch an internal cheat.
An anticheat which is running as administrator can also get a list of all open handles on the system and detect which processes have permission to interact with the game process. Code showing how that works is available here.
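For illustration only, here is a hedged sketch of that handle enumeration using the undocumented NtQuerySystemInformation class 16; the information class and the handle-entry layout are the commonly used, semi-documented ones rather than anything from the official SDK, so treat them as assumptions:

    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define SystemHandleInformation     16
    #define STATUS_INFO_LENGTH_MISMATCH ((LONG)0xC0000004L)

    typedef struct {                      /* classic (old) handle-entry layout */
        USHORT UniqueProcessId;
        USHORT CreatorBackTraceIndex;
        UCHAR  ObjectTypeIndex;
        UCHAR  HandleAttributes;
        USHORT HandleValue;
        PVOID  Object;
        ULONG  GrantedAccess;
    } HANDLE_ENTRY;

    typedef struct {
        ULONG        NumberOfHandles;
        HANDLE_ENTRY Handles[1];
    } HANDLE_SNAPSHOT;

    typedef LONG (NTAPI *NtQuerySystemInformation_t)(ULONG, PVOID, ULONG, PULONG);

    int main(void)
    {
        DWORD gamePid = GetCurrentProcessId();   /* stand-in for the protected game */
        NtQuerySystemInformation_t query = (NtQuerySystemInformation_t)
            GetProcAddress(GetModuleHandleW(L"ntdll.dll"), "NtQuerySystemInformation");
        if (!query)
            return 1;

        ULONG size = 0x100000;
        PVOID buf = malloc(size);
        while (query(SystemHandleInformation, buf, size, NULL) == STATUS_INFO_LENGTH_MISMATCH)
            buf = realloc(buf, size *= 2);       /* grow the buffer until it fits */

        HANDLE_SNAPSHOT *snap = (HANDLE_SNAPSHOT *)buf;
        for (ULONG i = 0; i < snap->NumberOfHandles; i++) {
            HANDLE_ENTRY *h = &snap->Handles[i];
            /* Handles held by other processes with at least memory-read access.
               A real check would also DuplicateHandle() each candidate and
               confirm it is a process handle that actually refers to gamePid. */
            if (h->UniqueProcessId != gamePid && (h->GrantedAccess & PROCESS_VM_READ))
                printf("pid %u holds handle 0x%X with read access\n",
                       (unsigned)h->UniqueProcessId, (unsigned)h->HandleValue);
        }
        free(buf);
        return 0;
    }
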
A combination of these techniques only gives you an indicator of risk; further investigation is required to determine whether the process is legitimate or not.
I'm in the process of designing an application that will run on a headless Windows CE 6.0 device. The idea is to make an application that is started at boot and runs until the device is powered off. (Basically it will look like a service, but a plain application is easier to debug, without the whole stop/deploy/start/attach-to-process hassle.)
My concern is what will happen during development. If I deploy and debug the application, I see no friendly and easy way of closing it. (Feel free to suggest how this can be done in a better, more user-friendly way.) I will just stop the debugger, and the result will be that WSACleanup is never called.
Now, the question: what is the consequence of not calling WSACleanup? Will I be able to start and run the Winsock application again under the debugger? Or will there be a resource leak preventing me from doing so?
Thanks in advance,
Jef
I think that Harry Johnston's comment is correct.
Even if your application has no UI, you can find a way to close it gracefully. Assuming you have one or more threads running in loops, you can add a named manual-reset event that is checked in the loop condition (or used for waits instead of Sleep()), and then build a small application that opens the event by the same name, sets it, and quits. This would also force your service-like app to close.
It may not be needed for debugging only, but it will also be useful if you ever need to update your software, which requires that your main service is not running.
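A minimal sketch of that idea (the event name "MyHeadlessAppStop" is made up); the main loop waits on the event instead of sleeping, so WSACleanup is always reached on shutdown:

    #include <winsock2.h>
    #include <windows.h>

    int main(void)
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        HANDLE stopEvent = CreateEventW(NULL, TRUE, FALSE, L"MyHeadlessAppStop");

        /* Main service loop: do one unit of work, then wait up to 1 s for "stop". */
        while (WaitForSingleObject(stopEvent, 1000) == WAIT_TIMEOUT) {
            /* ... service the sockets here ... */
        }

        CloseHandle(stopEvent);
        WSACleanup();                 /* now always reached on a clean shutdown */
        return 0;
    }

    /* The "stopper", built as a separate tiny executable (on Windows CE,
       CreateEventW with the same name also opens the existing event):

           HANDLE e = CreateEventW(NULL, TRUE, FALSE, L"MyHeadlessAppStop");
           if (e) { SetEvent(e); CloseHandle(e); }
    */
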
Hi.
I am working on an experiment that lets users use 1% of my CPU. It's like running your own web server, but as a big dynamic remote-execution framework (don't ask about that), and I don't want users to be able to use API functions: no creating files, no sockets, no threads, no console output, nothing.
Update 1: People will be sending me binaries, so a raw interrupt 0x80 (a direct system call) is possible. Therefore... something in the kernel?
I need to limit a process so it cannot do anything but use a single pipe. Through that pipe the process will use my own wrapped and controlled API.
Is that even possible? I was thinking of something like a Linux kernel module.
Limiting RAM and CPU is not the primary issue here; there is plenty about that on Google.
Thanks in advance!
The ptrace facility will allow your program to observe and control the operation of another process. Using the PTRACE_SYSCALL request, you can stop the child process at every system call and decide whether you want to allow that system call to proceed.
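Here is a small sketch of that loop on x86-64 Linux (the register name orig_rax is architecture-specific, and each syscall produces two stops, entry and exit; a real sandbox would act on the syscall number instead of just printing it):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ptrace.h>
    #include <sys/user.h>
    #include <sys/wait.h>
    #include <sys/types.h>

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
            return 1;
        }

        pid_t child = fork();
        if (child == 0) {
            ptrace(PTRACE_TRACEME, 0, NULL, NULL);   /* let the parent trace us */
            execvp(argv[1], &argv[1]);
            _exit(127);
        }

        waitpid(child, NULL, 0);                     /* first stop: at the exec */
        for (;;) {
            ptrace(PTRACE_SYSCALL, child, NULL, NULL);   /* run to next syscall stop */
            int status;
            waitpid(child, &status, 0);
            if (WIFEXITED(status))
                break;

            struct user_regs_struct regs;
            ptrace(PTRACE_GETREGS, child, NULL, &regs);
            printf("syscall %llu\n", (unsigned long long)regs.orig_rax);
            /* decide here whether to allow it, e.g. kill the child if not */
        }
        return 0;
    }
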
You might want to look at what Google is doing with their Native Client technology and the seccomp sandbox. The Native Client (NaCl) stuff is intended to let x86 binaries supplied by a web site run inside a user's local browser. The problem of malicious binaries is similar to what you face, so most of the technology/research probably applies directly.
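If the allowed surface really is just a pipe, the old seccomp "strict" mode is worth a look. This is a sketch under the assumption of a Linux kernel built with seccomp support:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <linux/seccomp.h>

    int main(void)
    {
        /* After this call, the only syscalls this process may make are read(),
           write(), _exit() and sigreturn(); anything else kills it with SIGKILL. */
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0) {
            perror("prctl(PR_SET_SECCOMP)");
            return 1;
        }

        const char msg[] = "still allowed to write to the pipe/stdout\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);   /* permitted */

        /* open(), socket(), clone() etc. here would terminate the process.
           Use the raw exit(2) syscall: glibc's _exit() calls exit_group(),
           which strict mode does not whitelist. */
        syscall(SYS_exit, 0);
        return 0;   /* not reached */
    }
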
How do you disable closing an application when it is not responding, and just wait until it recovers?
What you're asking is not just impossible (any user with sufficient privileges can terminate a process, no matter the OS), it's a horrible user experience (UX) decision.
Think about it from the user's point of view. You're sitting there looking at an application. The application doesn't appear to be doing anything and isn't providing any visual feedback that it is doing work. You'd think the application was hung, and you'd restart it.
You could do anything from showing a scrolling progress bar to having the long-running process update some piece of information on the UI thread (think of an installer in mid-install: it's constantly telling you which files it's putting where rather than just making you wait). In any case, you should be providing some visual feedback to the user so they know your application is still running.
Do the long-running work in a separate thread from the GUI so that the GUI is (hopefully) never "not responding".
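A minimal sketch of that pattern in plain Win32 (the WM_APP_PROGRESS message and the window class name are made up): the worker thread posts progress back to the UI thread, which keeps pumping messages and therefore never goes "not responding":

    #include <windows.h>
    #include <wchar.h>

    #define WM_APP_PROGRESS (WM_APP + 1)

    static DWORD WINAPI Worker(LPVOID arg)
    {
        HWND hwnd = (HWND)arg;
        for (int pct = 0; pct <= 100; pct += 10) {
            Sleep(500);                                    /* stand-in for real work */
            PostMessageW(hwnd, WM_APP_PROGRESS, (WPARAM)pct, 0);
        }
        return 0;
    }

    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        switch (msg) {
        case WM_CREATE: {                                  /* start the background work */
            HANDLE h = CreateThread(NULL, 0, Worker, hwnd, 0, NULL);
            if (h) CloseHandle(h);
            return 0;
        }
        case WM_APP_PROGRESS: {                            /* feedback on the UI thread */
            wchar_t title[64];
            swprintf(title, 64, L"Working... %d%%", (int)wp);
            SetWindowTextW(hwnd, title);
            return 0;
        }
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProcW(hwnd, msg, wp, lp);
    }

    int WINAPI wWinMain(HINSTANCE hInst, HINSTANCE prev, PWSTR cmd, int show)
    {
        WNDCLASSW wc = {0};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInst;
        wc.lpszClassName = L"ResponsiveDemo";
        RegisterClassW(&wc);

        CreateWindowW(L"ResponsiveDemo", L"Working... 0%",
                      WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                      CW_USEDEFAULT, CW_USEDEFAULT, 400, 200,
                      NULL, NULL, hInst, NULL);

        MSG m;
        while (GetMessageW(&m, NULL, 0, 0) > 0) {          /* UI thread stays free */
            TranslateMessage(&m);
            DispatchMessageW(&m);
        }
        return 0;
    }
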
If this is a question about programming, your program should never be in that state in the first place; if it is, you've tied up the GUI thread somehow. And you can't (and shouldn't) stop Windows or the user from closing your program. They've detected your code is rubbish and have every right to forcefully toss it out of their valuable address space.
In any case, your program's too busy doing other stuff - if it can't respond to the user, it probably can't waste time protecting itself either.
At some point, developers need to get it through their thick skulls that the computer belongs to the user, not them.
Of course, if you're talking about how to configure Windows to prevent this (such as on your PC), then this question belongs on serverfault.
Don't. No matter how important you think your application is, your users' ability to control their own systems is more important.
You can always terminate applications from Task Manager if you have the privileges. You can disable or hide the system-menu entries for the close icon and the Close menu option in the application window, but as mentioned before, that is not going to prevent the user from terminating it from Task Manager. Instead, I would just show some busy/processing indicator in the application so the user understands what is going on.
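For example, greying out the Close entry on the system menu might look like the sketch below (the demo simply disables the close button of its own console window for a few seconds; Task Manager still works, as noted above):

    #include <windows.h>

    /* Grey out the Close item, which also disables the title-bar X. */
    void DisableCloseButton(HWND hwnd)
    {
        HMENU sysMenu = GetSystemMenu(hwnd, FALSE);
        if (sysMenu)
            EnableMenuItem(sysMenu, SC_CLOSE, MF_BYCOMMAND | MF_GRAYED);
    }

    int main(void)
    {
        HWND hwnd = GetConsoleWindow();   /* demo target: our own console window */
        DisableCloseButton(hwnd);
        Sleep(10000);                     /* the X stays greyed for ten seconds */

        GetSystemMenu(hwnd, TRUE);        /* restore the default system menu */
        return 0;
    }
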
The only thing you can do is disable the close button. Users can still kill it from Task Manager or a similar tool; there is no way around that. You could make killing it harder by launching it as a privileged process, but that comes with many more problems of its own.