I'm building a cross-platform app that will communicate with the server. Security is very important.
Is there a scheme that will allow me to "trust" that the executable is genuine and hasn't been tampered with and that the requests are indeed coming from my signed executable and not an impersonator? Seems like the traditional man-in-the-middle attack. How can I prevent it?
I understand that I can sign an executable with a trusted CA under Windows. This ensures that the executable hasn't been tampered with on the user's machine. However, a targeted virus can still replace the executable (as opposed to modifying it) with an impersonator and Windows won't complain.
Can, then, my genuine executable sign the requests it is making to the server and can I validate these requests on the server? The naive solution is to embed a "private certificate" in the signed executable. However, I suspect that it's possible to extract this private certificate even from a signed executable.
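To make that naive approach concrete, here is a rough sketch of what the client side might look like (a shared HMAC secret stands in for the embedded "private certificate" for brevity; the names and secret are made up, and the weakness is exactly the one suspected - the secret can be pulled out of the binary):

```cpp
// Rough sketch of the naive scheme: the client embeds a secret and attaches
// an HMAC of each request body; the server holds the same secret, recomputes
// the MAC and compares. Anyone who extracts the embedded secret from the
// executable can forge "genuine" requests.
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <string>
#include <vector>

// Hypothetical embedded secret - trivially recoverable from the executable.
static const unsigned char kEmbeddedSecret[] = "not-actually-secret";

std::vector<unsigned char> sign_request(const std::string& body) {
    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int len = 0;
    HMAC(EVP_sha256(), kEmbeddedSecret, sizeof(kEmbeddedSecret) - 1,
         reinterpret_cast<const unsigned char*>(body.data()), body.size(),
         mac, &len);
    return std::vector<unsigned char>(mac, mac + len);  // sent alongside the request
}
```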
Finally, are there executable signing mechanisms on OS X and Linux?
For code signing on the Mac there are some references on Stack Overflow here and here. I've seen different things for Linux, like signelf, bsign (used by digsig), and elfsign.
As mentioned by Nickolay O., code signing will not do anything against decompilation. Code signing doesn't prevent man-in-the-middle attacks, and it's not a way to demonstrate to a server that the client hasn't been tampered with.
Signing executables has meaning only for the OS - to let it know that the executable is a good one.
If somebody can decompile your application, then you cannot be sure in any way that a request was sent by the genuine executable.
I am trying to read the TrustZone white paper, but it is really difficult to understand some of the basic stuff. I have some questions about it. They may be simple questions, but I am a beginner in this field:
What makes the secure world really "secure"? I mean, why might the normal world be tampered with but not the secure world?
Who can change the secure OS? I mean, like adding a "service"? Can, for example, an application developer for a mobile payment application add a service to the secure OS to work with his app? If yes, then how can any developer add to the secure OS while it stays secure?
What prevents a malicious application in the normal OS from raising an SMC exception and transferring to the secure OS?
The idea of a secure world is to keep the code executing there as small and as simple as possible - the bare minimum to fulfil its duties (usually controlling access to some resource like encryption keys or hardware or facilitating some secure functions like encryption/decryption).
Because the amount of code in the secure world is small, it can be audited easily and there's reduced surface area for bugs to be introduced. However, it does not mean that the secure world is automatically 'secure'. If there is a vulnerability in the secure world code, it can be exploited just like any other security vulnerability.
Contrast this with code executing in the normal world. For example, the Linux kernel is much more complex and much harder to audit. There are plenty of examples of kernel vulnerabilities and exploits that allow malicious code to take over the kernel.
To illustrate this point, let's suppose you have a system where users can pay money via some challenge-response transaction system. When they want to make a transaction, the device must wait for the user to press a physical button before it signs the transaction with a cryptographic key and authorises the payment.
But what if some malicious code exploited a kernel bug and is able to run arbitrary code in kernel mode? Normally this means total defeat. The malware is able to bypass all control mechanisms and read out the signing keys. Now the malware can make payments to anyone it wants without even needing the user to press a button.
What if there was a way that allows for signing transactions without the Linux kernel knowing the actual key? Enter the secure world system.
We can have a small secure world OS with the sole purpose of signing transactions and holding onto the signing key. However, it will refuse to sign a transaction unless the user presses a special button. It's a very small OS (in the kilobytes) and you've hired people to audit it. For all intents and purposes, there are no bugs or security vulnerabilities in the secure world OS.
When the normal world OS (e.g. Linux) needs to sign a transaction, it makes an SMC call to transfer control to the secure world (note, the normal world is not allowed to modify/read the secure world at all) with the transaction it wants to sign. The secure world OS will wait for a button press from the user, sign the transaction, then transfer control back to the normal world.
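To make that flow concrete, below is a rough sketch of what the normal-world side of such a call might look like on AArch64. The function ID and argument layout are invented for illustration (real systems follow the Arm SMC Calling Convention), and the SMC instruction can only be issued from kernel mode:

```cpp
// Hypothetical sketch: a normal-world kernel driver requesting a signature
// from the secure world via an SMC call.
#include <cstdint>

constexpr uint32_t SMC_SIGN_TRANSACTION = 0x83000001;  // made-up function ID

static uint64_t smc_call(uint64_t func_id, uint64_t arg0, uint64_t arg1) {
    register uint64_t x0 asm("x0") = func_id;
    register uint64_t x1 asm("x1") = arg0;
    register uint64_t x2 asm("x2") = arg1;
    // Trap into the secure monitor; the secure world runs until it returns.
    asm volatile("smc #0" : "+r"(x0) : "r"(x1), "r"(x2) : "memory");
    return x0;  // status/result handed back by the secure world
}

// The normal world passes the physical address and length of the transaction
// buffer; the secure world waits for the button press, signs it, and returns.
uint64_t request_signature(uint64_t txn_phys_addr, uint64_t txn_len) {
    return smc_call(SMC_SIGN_TRANSACTION, txn_phys_addr, txn_len);
}
```

Note that the signing key never appears in any of the normal-world arguments or return values; the normal world only learns the signature.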
Now, imagine the same situation where malware has taken over the Linux kernel. The malware now can't read the signing key because it's in the secure world. The malware can't sign transactions without the user's consent since the secure world OS will refuse to sign a transaction unless the user presses the button.
This kind of use case is what the secure world is designed for. The whole idea is the hardware enforced separation between the secure and normal world. From the normal world, there is no way to directly tamper with the secure world because the hardware guarantees that.
I haven't worked with TrustZone in particular but I imagine once the secure world OS has booted, there is no way to directly modify it. I don't think application developers should be able to 'add' services to the secure world OS since that would defeat the purpose of it. I haven't seen any vendors allowing third parties to add code to their secure world OS.
To answer your last question, I've already answered it in an answer here. SMC exceptions are how you request a service from the secure world OS - they're basically system calls but for the secure world OS. What would malicious code gain by transferring control to the secure world?
You cannot modify/read the secure world from the normal world
When you transfer control to the secure world, you lose control in the normal world
What makes the secure world really "secure"? I mean, why might the normal world be tampered with but not the secure world?
The security system designer makes it secure. TrustZone is a tool. It provides a way to partition PHYSICAL memory. This can prevent a DMA attack. TrustZone generally supports lock-at-boot features, so once a physical mapping is complete (secure/normal world permissions), it cannot be changed. TrustZone gives tools to partition interrupts as well as to boot securely.
It is important to note that the secure world is a technical term. It is just a different state than the normal world. The name secure world doesn't make it secure! The system designer must. It is highly dependent on what the secure assets are. TrustZone only gives tools to partition things so that normal-world access can be prevented.
Conceptually there are two types of TrustZone secure world code.
A library - here there will not usually be interrupts used in the secure world. The secure API is a magic eight ball: you can ask it a question and it will give you an answer. For instance, it is possible that some digital rights management (DRM) system might use this approach. The secret keys will be hidden and inaccessible from the normal world.
A secure OS - here the secure world will have interrupts. This is more complex as interrupts imply some sort of pre-emption. The secure OS may or may not use the MMU. Usually the MMU is needed if the system cache will be used.
Those are two big differences between final TrustZone solutions. They depend on the system design and what the end application is. TrustZone is ONLY part of the tools for trying to achieve this.
Who can change the secure OS? I mean, like adding a "service"? Can, for example, an application developer for a mobile payment application add a service to the secure OS to work with his app? If yes, then how can any developer add to the secure OS while it stays secure?
This is not defined by TrustZone. It is up to the SOC vendor (the people who license from ARM and build the CPU) to provide a secure boot technology. The secure OS might be in ROM and not changeable, for instance. Another method is for the secure code to be digitally signed; in this case, there is probably on-chip secure ROM that verifies the digital signature. The SOC vendor will provide (usually under NDA) information and techniques for the secure boot. It usually depends on their target market. For instance, physical tamper protection and encrypt/decrypt hardware may also be included in the SOC.
The on-chip ROM (only programmed by the SOC vendor) usually has mechanisms to boot from different sources (NAND flash, serial NOR flash, eMMC, ROM, Ethernet, etc.). Typically it will have some one-time fusable memory (EPROM) that a device/solution vendor (the people who make things secure for the application) can program to configure the system.
Other features are secure debug, secure JTAG, etc. Obviously all of these are possible attack vectors.
I know this type of question has been done to death.
My question relates to protecting my code that is installed on a client's PC.
I know the answers are to obfuscate, get a patent, put the code on my server, accept it will be hacked, consider that my code is not THAT important or unique, etc.
BUT, I am supplying the Windows PC to the client(s) with my software pre-installed.
It is a C# .Net app.
Under these circumstances, where I am supplying the hardware, are there any other 'tricks' I can use to prevent decompilation of my code?
Thanks
Use BitLocker (at-rest encryption) on the hard disk and give your client a user account with limited privileges. Don't share the admin user's password with your client.
This is continuation of my previous post (Understanding BCryptSignHash output signature).
Let me clearly state my problem:
I need to sign data at the Windows application level.
I need to verify the same at the Linux application level and in a Windows driver (that I have written).
I tried the following:
Using CryptoAPI, I was able to sign at the Windows application level and verify in the Windows driver. On Linux, I tried to use simpleECDSA (http://jonasfj.dk/blog/2007/12/simpleecdsa-a-simple-implementation-of-ecdsa-in-c/) to verify the signature (generated using CryptoAPI). I was able to convert the binary key blobs from CryptoAPI into simpleECDSA, but could not interpret the signature.
Using the Crypto++ library, I was able to sign at the Windows application level and verify at the Linux application level, but could not use the same to verify in the Windows driver.
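One likely reason the signature could not be interpreted is encoding: CNG's BCryptSignHash emits ECDSA signatures as a raw big-endian r||s concatenation, while most Linux-side libraries expect an ASN.1/DER-encoded pair. A minimal conversion sketch, assuming a P-256 key and OpenSSL on the verifying side (the helper name is made up, error handling omitted), might look like this:

```cpp
// Convert a raw 64-byte r||s ECDSA signature (as produced by BCryptSignHash
// for P-256) into the DER encoding expected by OpenSSL-style verifiers.
#include <openssl/bn.h>
#include <openssl/ecdsa.h>
#include <cstdint>
#include <vector>

std::vector<uint8_t> raw_rs_to_der(const uint8_t raw[64]) {
    ECDSA_SIG* sig = ECDSA_SIG_new();
    BIGNUM* r = BN_bin2bn(raw,      32, nullptr);
    BIGNUM* s = BN_bin2bn(raw + 32, 32, nullptr);
    ECDSA_SIG_set0(sig, r, s);              // sig takes ownership of r and s

    int len = i2d_ECDSA_SIG(sig, nullptr);  // query required DER length
    std::vector<uint8_t> der(len);
    uint8_t* p = der.data();
    i2d_ECDSA_SIG(sig, &p);                 // encode into the buffer

    ECDSA_SIG_free(sig);
    return der;
}
```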
Kindly let me know if there is a library available, or a way that I could use the same public/private key and signature across the Windows application/driver and Linux.
I am new to cryptography, so forgive my naiveness.
Thanks,
F
I have a program and installer that installs a driver on the users system for the program to use. However, it doesn't work on 64 bit systems.
As I understand it, I need to sign the driver to allow it to be installed. I have a code signing certificate.
How do I sign the driver with it?
You need to do cross-signing: http://msdn.microsoft.com/en-us/library/windows/hardware/gg487315.aspx
It's the same as ordinary signing. Don't let people tell you it has to be VeriSign or whatever. It doesn't. But it does have to be a cert from a CA on Microsoft's cross-certificate list (see the link).
What are some platform-specific APIs that web browsers use to securely save passwords with reversible encryption on local systems?
Since they must be able to reproduce the exact characters to pass up to a web site, the data can't be a one-way hash. My initial thought is that there are system methods which use your current authentication data (your system login credentials) to perform encryption/decryption, but do not give applications direct access to read that data. I'm wondering what these are on different platforms (Windows, Linux, OS X) and how well they protect the information if the hard drive is accessed directly, e.g. a stolen laptop hard drive is placed into another computer or analyzed via a live CD.
Here's how Google Chrome does it. Looks like they use CryptProtectData on Windows.
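For reference, a minimal sketch of the DPAPI round trip (error handling omitted; the description string is arbitrary). The ciphertext is bound to keys derived from the user's logon credentials, which is why a drive pulled into another machine or examined from a live CD can't be decrypted without the user's password:

```cpp
// Minimal DPAPI sketch: protect/unprotect a secret under the current user.
#include <windows.h>
#include <wincrypt.h>
#include <string>
#include <vector>
#pragma comment(lib, "crypt32.lib")

std::vector<BYTE> protect(const std::string& secret) {
    DATA_BLOB in{ static_cast<DWORD>(secret.size()),
                  reinterpret_cast<BYTE*>(const_cast<char*>(secret.data())) };
    DATA_BLOB out{};
    // Encrypts under keys derived from the current user's logon credentials.
    CryptProtectData(&in, L"saved password", nullptr, nullptr, nullptr, 0, &out);
    std::vector<BYTE> blob(out.pbData, out.pbData + out.cbData);
    LocalFree(out.pbData);
    return blob;
}

std::string unprotect(std::vector<BYTE>& blob) {
    DATA_BLOB in{ static_cast<DWORD>(blob.size()), blob.data() };
    DATA_BLOB out{};
    // By default this only succeeds in the same user's logon session.
    CryptUnprotectData(&in, nullptr, nullptr, nullptr, nullptr, 0, &out);
    std::string secret(reinterpret_cast<char*>(out.pbData), out.cbData);
    LocalFree(out.pbData);
    return secret;
}
```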