Protecting executable from being patched - security

My logic for APT (Anti-Patching Technology) is as follows...
1) Store the MD5 hash of the executable to be protected on the MSSQL server.
2) At application startup, compute the MD5 hash of the executable itself and compare it with the hash found on the server.
3) If the comparison fails, exit the application silently.
And all of the above happens before it is finally patched!
I mean, what is your best way to protect a file from being patched?
Without using ready-made tools (.NET Reactor, a virtualizer, etc.)
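A minimal sketch of the startup check in steps 1-3 might look like this; the endpoint URL is a placeholder, and how the server stores and serves the hash is not specified in the question:

```csharp
// Sketch of the startup self-check described in steps 1-3.
// The URL is a placeholder, not an actual endpoint.
using System;
using System.IO;
using System.Net.Http;
using System.Security.Cryptography;
using System.Threading.Tasks;

class StartupCheck
{
    static async Task Main()
    {
        // Step 2a: fetch the expected hash stored on the server (step 1).
        using var http = new HttpClient();
        string expected = (await http.GetStringAsync("https://example.invalid/app-hash")).Trim();

        // Step 2b: hash the running executable itself (Environment.ProcessPath is .NET 6+).
        string exePath = Environment.ProcessPath ?? throw new InvalidOperationException();
        using var md5 = MD5.Create();
        using var stream = File.OpenRead(exePath);
        string actual = Convert.ToHexString(md5.ComputeHash(stream));

        // Step 3: exit silently if they don't match.
        if (!string.Equals(actual, expected, StringComparison.OrdinalIgnoreCase))
            Environment.Exit(0);

        // ...continue normal startup...
    }
}
```

As the answers below point out, this check lives in the same executable it protects, so it is exactly the code a patcher would remove first.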
Edit: Something else just came to mind.
Is there any way of checking the application's integrity on the server side?
I mean, my app works only online. Could I execute something on the server (my domain) that could check the application's integrity?

The thing is, a cracker would patch the application precisely at step 2, removing the hash-check code.
So I wouldn't call that very effective against serious crackers.
EDIT: I guess your best bet is defense in depth. Given that your app has to be online, I'd:
Require authentication: Authenticate users, hopefully via a cryptographic key, and require a key check to receive/send data (see the sketch after this list).
Obfuscation: It makes things harder for crackers.
Continued checks: Besides checking who is sending data, validate the application each time a request is sent.
These can all still be circumvented, but they make things a lot harder and might dissuade some crackers if your app is not worth that much to them.
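As a rough sketch of the "key check to receive/send data" point above, here is per-request HMAC signing; the header names, endpoint, and the way the per-user key is provisioned are assumptions for illustration, not a specific protocol:

```csharp
// Sketch: sign each request body with a per-user shared key so the server can
// verify who is sending data. Header names and key handling are placeholders.
using System;
using System.Net.Http;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;

class SignedClient
{
    static async Task Main()
    {
        byte[] userKey = Encoding.UTF8.GetBytes("per-user-shared-secret");  // placeholder
        string body = "{\"action\":\"sync\"}";

        using var hmac = new HMACSHA256(userKey);
        string signature = Convert.ToHexString(hmac.ComputeHash(Encoding.UTF8.GetBytes(body)));

        using var http = new HttpClient();
        var request = new HttpRequestMessage(HttpMethod.Post, "https://example.invalid/api/sync")
        {
            Content = new StringContent(body, Encoding.UTF8, "application/json")
        };
        request.Headers.Add("X-User-Id", "12345");       // placeholder header names
        request.Headers.Add("X-Signature", signature);   // server recomputes and compares

        await http.SendAsync(request);
    }
}
```

The server looks up the key for the user, recomputes the signature over the body, and rejects mismatches; as noted above, a patched client that still holds a valid key can still sign requests, so this only raises the bar.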

A patched application means the 'cracker' has complete control over the machine the code is running on (at least enough control to patch the executable). So patch prevention, however smart it might be, is working against the party that actually controls the machine.
Complicating your binary might be enough to discourage patching, so obfuscators are probably your best bet.

You can't. Once someone else has your file they can do what they like with it; the first thing would be to patch out your anti-patching code.

If the application is running on someone else's machine, you cannot prevent them from patching it. You can make it harder, but it's a shell game: you cannot win. Regardless of how complicated you make it, some guy somewhere will see it as an interesting challenge to break your protection, and he will succeed. Then, everyone else just has to download his version. The most extreme form of patch-protection today is Skype (that I know of). It's insanely complicated, and yet it has been broken.
Since your application apparently runs online, you can ask yourself why you want to prevent patches in the first place (maybe it's to prevent the user from entering some bad values? Or to prevent them from seeing some information that's present in the program?), and then architect your program so that whatever you want to keep hidden or checked happens on the server.
For example, if it's a game and you want to prevent players from hacking it to know where the other players are: change the server so it only sends coordinate information for the players they can already see.
Another example: if it's an online store and you want to make sure users don't submit purchase orders with incorrect prices, check the prices at the server.
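As a sketch of the "check the prices at the server" example, here is an ASP.NET Core-style endpoint that ignores any price the client sends; the types and the Catalog lookup are invented for illustration:

```csharp
// Sketch: the server computes the total from its own catalog; any price the
// client submits is simply never read. Catalog is a stand-in for a real lookup.
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

public record OrderRequest(int ProductId, int Quantity);

public static class Catalog
{
    public static decimal GetPrice(int productId) => productId switch
    {
        1 => 9.99m,
        2 => 24.50m,
        _ => throw new KeyNotFoundException("Unknown product")
    };
}

[ApiController]
[Route("orders")]
public class OrdersController : ControllerBase
{
    [HttpPost]
    public IActionResult Submit(OrderRequest order)
    {
        if (order.Quantity <= 0)
            return BadRequest("Invalid quantity.");

        decimal total = Catalog.GetPrice(order.ProductId) * order.Quantity;
        // ...persist the order using the server-computed total...
        return Ok(new { order.ProductId, order.Quantity, total });
    }
}
```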
The only exception there is if you control the hardware that the program's running on. But even there, it's very hard to do it right (see: XBox, PS3, and the many other consoles that tried to do that and failed). It's probably still better to leverage the client/server architecture rather than betting on "trusted computing".

Crackers nowadays don't bother patching your executable file; they simply change your program's variables in-memory to make its behaviour more amenable to their requirements. Defending against this is very difficult and reasonably pointless; most games' crack-protection works only by searching for signatures of known crack programs (like an AV engine does).

Everyone nailed it: you can't stop someone, but you can make it harder for them. You could even go off the deep end and build some in-memory validation like World of Warcraft's Warden system.
If you tell us what language you are writing in we might be able to suggest some simple obfuscation methods.

Related

Does Microsoft have a recommended way to handle secrets in headers in HttpClient?

Very closely related: How to protect strings without SecureString?
Also closely related: When would I need a SecureString in .NET?
Extremely closely related (OP there is trying to achieve something very similar): C# & WPF - Using SecureString for a client-side HTTP API password
The .NET Framework has a class called SecureString. However, even Microsoft no longer recommends its use for new development. According to the first linked Q&A, at least one reason for that is that the string will be in memory in plaintext anyway for at least some amount of time (even if it's a very short amount of time). At least one answer also extended the argument that, if they have access to the server's memory anyway, in practice your security is probably shot anyway, so it won't help you. (The second linked Q&A implies that there was even discussion of dropping this from .NET Core entirely.)
That being said, Microsoft's documentation on SecureString does not recommend a replacement, and the consensus on the linked Q&As seems to be that that kind of measure wouldn't be all that useful anyway.
My application, which is an ASP.NET Core application, makes extensive use of API calls to an external vendor using the HttpClient class. The generally recommended best practice for HttpClient is to use a single instance rather than creating a new instance for each call.
However, our vendor requires that all API calls include our API key as a header with a specific name. I currently store the key securely, retrieve it in Startup.cs, and add it to our HttpClient instance's headers.
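For context, that setup looks roughly like the following; the header name, configuration key, and client name are placeholders rather than the vendor's real ones:

```csharp
// Sketch of the current setup: the key is read once at startup and then lives
// as a plain string in the header collection for the application's lifetime.
// "Vendor:ApiKey" and "X-Api-Key" are placeholder names.
using System;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;
    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        string apiKey = Configuration["Vendor:ApiKey"]; // retrieved from secure configuration

        services.AddHttpClient("vendor", client =>
        {
            client.BaseAddress = new Uri("https://api.vendor.invalid/");
            client.DefaultRequestHeaders.Add("X-Api-Key", apiKey);
        });
    }
}
```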
Unfortunately, this means that my API Key will be kept in plaintext in memory for the entire lifecycle of the application. I find this especially troubling for a web application on a server; even though the server is maintained by corporate IT, I've always been taught to treat even corporate networks as semi-hostile environments and not to rely purely on corporate firewalls for application security in such cases.
Does Microsoft have a recommended best practice for cases like this? Is this a potential exception to their recommendation against using SecureString? (Exactly how that would work is a separate question). Or is the answer on the other Q&A really correct in saying that I shouldn't be worried about plaintext strings living in memory like this?
Note: Depending on responses to this question, I may post a follow-up question about whether it's even possible to use something like SecureString as part of HttpClient headers. Or would I have to do something tricky like populate the header right before using it and then remove it from memory right afterwards? (That would create an absolute nightmare for concurrent calls though). If people think that I should do something like this, I would be glad to create a new question for that.
You are being WAY too paranoid.
Firstly, if a hacker gets root access to your web server, you have WAY bigger problems than your super-secret web app credentials being stolen. Way, way, way bigger problems. Once the hackers are on your side of the airtight hatchway, it is game over.
Secondly, once your infosec team detects the intrusion (if they don't, again, you've got WAY bigger problems) they're going to tell you and the first thing you're going to do is change every key and password you know of.
Thirdly, if a hacker does get root access to your webserver, their first thought isn't going to be "let's take a memory dump for later analysis". A dumpfile is rather large (will take time to transfer over the wire, and the network traffic might well be noticed) and (at least on Windows) hangs the process until it's complete (so you'd notice your web app was unresponsive) - both of which are likely to raise some red flags.
No, hackers are there to grab as much valuable information as possible in the least amount of time, because they know their access could be discovered at any second. So they're going to go for the low-hanging fruit first: usernames and passwords. Then they'll move on to trying to find out what's connected to that server, and since your DB credentials are likely in a config file on that server, they will almost certainly switch their attention to that far more interesting target.
So all things considered, your API key is pretty darn unlikely to be compromised - and even if it is, it won't be because of something you did or didn't do. There are far more productive ways of focusing your time than trying to secure something that already is (or should be) incredibly secure. And, at the end of the day, no matter how many layers of security you put in place... that API or SSL key is going to be raw, in memory, at some stage.

Can a running nodejs application cryptographically prove it is the same as published source code version?

Can a running nodejs program cryptographically prove that it is the same as a published source code version in a way that could not be tampered with?
Said another way, is there a way to ensure that the commands/code executed by a nodejs program are all and only the commands and code specified in a publicly disclosed repository?
The motivation for this question is the following: in an age of highly sophisticated hackers as well as pressure from government agencies for "backdoors" that allow them to snoop on private transactions and exchanges, can we ensure that an application has neither been hacked nor had a backdoor added?
As an example, consider an open source-based nodejs application like lesspass (lesspass/lesspass on github) which is used to manage passwords and available for use here (https://lesspass.com/#/).
Or an alternative program for a similar purpose, encryptr (SpiderOak/Encryptr on github), with its downloadable version (https://spideroak.com/solutions/encryptr).
Is there a way to ensure that the versions available on their sites to download/use/install are running exactly the same code as is presented in the open source code?
Even if we have 100% faith in the integrity of the teams behind applications like these, how can we be sure they have not been coerced by anyone to alter the running/downloadable version of their program, for example to create a backdoor?
Thank you for your help with this important issue.
Sadly, no.
Simple as that.
The long version:
You are dealing with the outputs of a program, and want to ensure that the output is generated by a specific version of one specific program.
Let's check a few things:
Can an attacker predict the outputs of said program?
If we are talking about open-source programs, yes: an attacker can predict what you are expecting to see and can even reproduce all underlying crypto checks against the original source code, or against all internal states of said program.
Imagine running the program inside a virtual machine with full debugging support, like firing events at certain points in the code, directly reading memory to extract cryptographic keys, and so on. The attacker does not even have to modify the program to be able to keep copies of everything you do in plaintext.
So... even if you could cryptographically make sure that the code itself was not tampered with, it would be worth nothing: the environment itself could be designed to do something harmful, and as Maarten Bodewes wrote, in the end you need to trust something.
One could argue that a TPM could solve this, but I'm afraid of the world that leads to: in the end... you still have to trust something, like a manufacturer or, worse, a public office signing keys for TPMs... and as we know those would never... you hear?... never have other intentions than what's good for you... so basically you wouldn't win anything with a centralized TPM-based infrastructure.
You can do this cryptographically by having a runtime that checks signatures before running any code. Of course, you'd have to trust that runtime environment as well. Unless you have such an environment you're out of luck - that is, unless you do a full code review.
Furthermore, you can sign the build by placing a signature within the build system. The build system and developer access can in turn be audited. This is usually how secure development environments are built. But in the end you need to trust something.
If you're just afraid that a particular download is corrupted you can test against an official hash published at one or more trusted locations.
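For that last case, a minimal check against a published SHA-256 value might look like this (the file path and hash are supplied by the user, not hard-coded):

```csharp
// Sketch: compare a downloaded file against a hash published at a trusted location.
// Usage: VerifyDownload <file> <published-sha256-hex>
using System;
using System.IO;
using System.Security.Cryptography;

class VerifyDownload
{
    static void Main(string[] args)
    {
        using var sha256 = SHA256.Create();
        using var stream = File.OpenRead(args[0]);
        string actual = Convert.ToHexString(sha256.ComputeHash(stream));

        Console.WriteLine(string.Equals(actual, args[1], StringComparison.OrdinalIgnoreCase)
            ? "OK: hash matches the published value"
            : "MISMATCH: do not trust this download");
    }
}
```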

Validating client binaries in client/server handshake

I am building a client-side program that connects to a server. This client-side program needs to have its source code available to the users as part of the licensing (that's not negotiable). However, I need to ensure that when a user connects to the server with that client-side program, it's running the original code and hasn't been altered and re-compiled.
Is there any way to check during connection to the server that they're using an unaltered version of the program?
No, there's really no way to do that.
You're basically encountering the "Trusted Client" problem. The client code runs on the user's PC, and the user has full control over that PC. He can change the bytes of the program on disk, or even in memory. If you were to try to perform a hash or checksum against the code, he could simply change the code that did that verification and make it return "unmodified".
You could try to make things a little harder on a malicious user but there's no practical way to achieve what you're hoping.
What you have described is an issue that the video game industry has been fighting for the last decade and a half. In short: how to prevent the user from modifying the client (in their case, generally to prevent cheating, though also for copyright reasons). If that effort has taught us anything, it's that preventing modifications to the client is a constant arms race that you will never decisively win. In light of that, don't even try.
Follow the standard client-server assumption that the client is in the hands of the enemy and cannot be trusted. Build your server side defensively based on that assumption and you'll be alright.
It's very, very difficult and probably not worth it. But if you are interested in pursuing it, you'd have to develop something that is code-signed and monitored by the Windows kernel.
A couple topics that will orient you to the scope of the problem:
Protected media path
Driver signing
Both media devices and device drivers are digitally signed by the manufacturer and continuously monitored by Windows. If anything goes out of whack, it gets shut down (that's the technical term). It seems very daunting. And I don't know if the technology is available for desktop software that isn't a device driver and isn't related to DRM.
Good luck!

saving passwords inside your application code

I have a question concerning how to store a password for use in my application. I need to encrypt/decrypt data on the fly, so the password will need to be somewhere. The options would be to hard-code it in my app or load it from a file.
I want to encrypt a license file for an application, and one of the security steps involves the app being able to decrypt the license (other steps follow after). The password is never known to the user, only to me, as the user really doesn't need it!
What I am concerned about is hackers going through my code, retrieving the password that I have stored there, and using it to crack the license, breaking the first security barrier.
At this point I am not considering code obfuscation (eventually I will), so this is an issue.
I know that any solution that stores passwords is a security hazard but there's no way around it!
I considered assembling the password from multiple pieces before it is really needed, but at some point the password is complete, so a debugger and a well-placed breakpoint are all that is needed.
What approaches do you guys (and gals) use when you need to store passwords hard-coded in your app?
Cheers
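For illustration, the "assemble from pieces" idea mentioned in the question looks something like this (the fragments and XOR mask are placeholders); as noted, a well-placed breakpoint right after the final step still sees the complete value:

```csharp
// Sketch of assembling a hard-coded secret from scattered, lightly obfuscated
// pieces. It only raises the bar: the full value exists in memory at the end.
using System.Text;

static class EmbeddedSecret
{
    // Placeholder fragments (XOR-masked bytes of "Sec" and "ret").
    private static readonly byte[] PartA = { 0x78, 0x4E, 0x48 };
    private static readonly byte[] PartB = { 0x59, 0x4E, 0x5F };
    private const byte Mask = 0x2B;

    public static string Assemble()
    {
        var combined = new byte[PartA.Length + PartB.Length];
        PartA.CopyTo(combined, 0);
        PartB.CopyTo(combined, PartA.Length);
        for (int i = 0; i < combined.Length; i++)
            combined[i] ^= Mask;                    // undo the trivial obfuscation
        return Encoding.ASCII.GetString(combined);  // the complete secret, in the clear
    }
}
```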
My personal opinion is the same as GregS above: it is a waste of time. The application will be pirated, no matter how much you try to prevent it. However...
Your best bet is to cut down on casual-piracy.
Consider that you have two classes of users. The normal user and the pirate. The pirate will go to great lengths to crack your application. The normal user just wants to use your application to get something done. You can't do anything about the pirate.
A normal user isn't going to know anything about cracking code ("uh...what's a hex editor?"). If it is easier for this type of person to buy the application than it is to pirate it, then they are more likely to buy it.
It looks like the solutions you have already considered will be effective against the normal user. And that's about all that you can do.
Decide now how much time/effort you want to spend on preventing piracy. If someone is determined, they're probably going to get your application to work anyway.
I know you don't want to hear it, but it's a waste of time, and if your app needs a hardcoded password then that is a flaw.
I don't know that there is any approach to solving this problem that would deter a hacker in any meaningful way. Keeping the secret a secret is one of cryptography's great problems.
An approach I have used in the past was to generate a unique ID during the install: it would take the HDD and MCU serial numbers and use them in a complex structure, then the user would send this number to our automated system and we would reply with another block derived from it; the app would then decrypt and compare this data on the fly during use.
Yes, it works, but it still has the hard-coded password; we have some layers of protection (i.e. there are some techniques that prevent a mid-level hacker from understanding our security system).
I would just recommend that you build a very complex system and try to hack it on your own; see if disassembly can lead to an easy path. Add some random calls to random subroutines, make it very unpredictable, try to fake the use of registry keys and global variables, and turn the hacker's life into hell so he eventually gives up.

Designing a Linux-based system for transferability of ownership/admin rights without total trust

Inspired by a much more specific question on ServerFault.
We all have to trust a huge number of people for the security and integrity of the systems we use every day. Here I'm thinking of all the authors of all the code running on your server or PC, and everyone involved in designing and building the hardware. This is mitigated by reputation and, where source is available, peer review.
Someone else you might have to trust, who is mentioned far less often, is the person who previously had root on a system. Your predecessor as system administrator at work. Or for home users, that nice Linux-savvy friend who configured your system for you. The previous owner of your phone (can you really trust the Factory Reset button?)
You have to trust them because there are so many ways to retain root despite the incoming admin's best efforts, and those are only the ones I could think of in a few minutes. Anyone who has ever had root on a system could have left all kinds of crazy backdoors, and your only real recourse under any Linux-based system I've seen is to reinstall your OS and all code that could ever run with any kind of privilege. Say, mount /home with noexec and reinstall everything else. Even that's not sufficient if any user whose data remains may ever gain privilege or influence a privileged user in sufficient detail (think shell aliases and other malicious configuration). Persistence of privilege is not a new problem.
How would you design a Linux-based system on which the highest level of privileged access can provably be revoked without a total reinstall? Alternatively, what system like that already exists? Alternatively, why is the creation of such a system logically impossible?
When I say Linux-based, I mean something that can run as much software that runs on Linux today as possible, with as few modifications to that software as possible. Physical access has traditionally meant game over because of things like keyloggers which can transmit, but suppose the hardware is sufficiently inspectable / tamper-evident to make ongoing access by that route sufficiently difficult, just because I (and the users of SO?) find the software aspects of this problem more interesting. :-) You might also assume the existence of a BIOS that can be provably reflashed known-good, or which can't be flashed at all.
I'm aware of the very basics of SELinux, and I don't think it's much help here, but I've never actually used it: feel free to explain how I'm wrong.
First and foremost, you did say design :) My answer will contain references to stuff that you can use right now, but some of it is not yet stable enough for production. My answer will also contain allusions to stuff that would need to be written.
You can not accomplish this unless you (as user9876 pointed out) fully and completely trust the individual or company that did the initial installation. If you can't trust this, your problem is infinitely recursive.
Several years ago I was very active in a new file system called ext3cow, a copy-on-write version of ext3. Snapshots were cheap and 100% immutable; the port from Linux 2.4 to 2.6 broke and abandoned the ability to modify or delete files in the past.
Pound for pound, it was as efficient as ext3. Sure, that's nothing to write home about, but ext3 was (and to a large extent still is) the production-standard FS.
Using that type of file system, assuming a snapshot was made of the pristine installation after all services had been installed and configured, it would be quite easy to diff an entire volume to see what changed and when.
At this point, after going through the diff, you can decide that nothing is interesting and just change the root password, or you can go inspect things that seem a little odd.
Now, for the stuff that has to be written if something interesting is found:
Something that you can pipe the diff through that investigates each file. What you're going to see is a list of revisions per file, which would then have to be recursively compared; i.e., present against former-present, former-present against past1, past1 against past2, etc., until you reach the original file or the point where it no longer exists. Doing this by hand would seriously suck. Also, you need to identify files that were never versioned to begin with.
Something to inspect your currently running kernel. If someone has tainted VFS, none of this is going to work; CoW file systems use temporal inodes to access files in the past. I know a lot of enterprise customers who modify the kernel quite a bit, up to and including modules, the VMM and VFS. This may not be such an easy task - comparing against 'pristine' may not be tenable, since the old admin may have made legitimate modifications to the kernel since it was installed.
Databases are a special headache, since they change typically each second or more, including the user table. That's going to need to be checked manually, unless you come up with something that can check to be sure that nothing is strange, such a tool would be very specific to your setup. Classic UNIX 'root' is not your only concern here.
Now, consider the other computers on the network. How many of them are running an OS that is known to be easily exploited and bot-infested? Even if your server is clean, what if this guy joins #foo on IRC and starts an attack on your servers via your own LAN? Most people will click links that a co-worker sends, especially if it's a juicy blog entry about the company... social engineering is very easy if you're doing it from the inside.
In short, what you suggest is tenable; however, I'm dubious that most companies could enforce the best practices needed for it to work when needed. If the end result is that you find a BOFH in your workforce and need to can him, you had better have contained him throughout his employment.
I'll update this answer as I continue to think about it. It's a very interesting topic. What I've posted so far are my own collected thoughts on the matter.
Edit:
Yes, I know about virtual machines and checkpointing, a solution assuming that brings on a whole new level of recursion. Did the (now departed) admin have direct root access to the privileged domain or storage server? Probably, yes, which is why I'm not considering it for the purposes of this question.
Look at Trusted Computing. The general idea is that the BIOS loads the bootloader, then hashes it and sends that hash to a special chip. The bootloader then hashes the OS kernel, which in turn hashes all the kernel-mode drivers. You can then ask the chip whether all the hashes were as expected.
Assuming you trust the person who originally installed and configured the system, this would enable you to prove that your OS hasn't had a rootkit installed by any of the later sysadmins. You could then manually run a hash over all the files on the system (since there is no rootkit the values will be accurate) and compare these against a list provided by the original installer. Any changed files will have to be checked carefully (e.g. /etc/passwd will have changed due to new users being legitimately added).
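A rough sketch of that manual sweep, comparing every file against a baseline list of hashes from the original installer (the manifest format, with one "hash  path" pair per line, is an assumption):

```csharp
// Sketch: hash every file listed in a baseline manifest ("sha256  relative/path"
// per line, provided by the original installer) and report what changed.
// The root directory and manifest path are placeholders.
using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;

class IntegritySweep
{
    static void Main()
    {
        string root = "/";                        // directory tree to check (placeholder)
        string manifestPath = "baseline.sha256";  // list from the original installer (placeholder)

        var expected = File.ReadLines(manifestPath)
            .Select(line => line.Split(new[] { ' ' }, 2, StringSplitOptions.RemoveEmptyEntries))
            .ToDictionary(parts => parts[1].Trim(), parts => parts[0]);

        using var sha256 = SHA256.Create();
        foreach (var (relPath, expectedHash) in expected)
        {
            string fullPath = Path.Combine(root, relPath);
            if (!File.Exists(fullPath)) { Console.WriteLine($"MISSING  {relPath}"); continue; }

            using var stream = File.OpenRead(fullPath);
            string actual = Convert.ToHexString(sha256.ComputeHash(stream)).ToLowerInvariant();
            if (actual != expectedHash)
                Console.WriteLine($"CHANGED  {relPath}");   // e.g. /etc/passwd will show up here
        }
    }
}
```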
I have no idea how you'd handle patching such a system without breaking the chain of trust.
Also, note that your old sysadmin should be assumed to know any password typed into that system by any user, and to have unencrypted copies of any private key used on that system by any user. So it's time to change all your passwords.

Resources