I have been looking for a way to integrate OpenSSL and Node.js for a while now.
My goals are:
I want to be platform independent, hence a solution should work on OS X, Linux and Windows.
I want to avoid unnecessary disk operations. E.g., a private key might not be in a file, but in a database (may be a stupid example, but let's consider this to be a valid requirement).
I want to support creating keys, creating CSRs, signing CSRs, creating CA certificates, and so on: all the certificate handling, end to end.
Now the options I have considered are:
Use the OpenSSL library that is bundled with Node.js. Unfortunately, the crypto module does not expose the certificate-related functionality.
Use the OpenSSL library via an external (native) module. Unfortunately, I don't know how to do this, probably due to my missing C/C++ knowledge.
Use the OpenSSL binary as a child process. Given that OpenSSL is available, this should work on all platforms. It's not nice, but it works.
Question #1: As I have written, I do not have the slightest idea how to access the OpenSSL library that comes bundled with Node.js. How would I approach this?
At the moment, I am sticking with using the binary as a child process, roughly as sketched below. Unfortunately, this requires that everything, such as private keys, is either given as files (which I explicitly want to avoid), or handed over via /dev/stdin (which does not work on Windows).
Question #2: How could I deal with this? Would a solution to #1 solve this issue, too?
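For illustration, my current child-process approach looks roughly like this; it is a simplified sketch that assumes the openssl binary is on the PATH, and it pipes the PEM through stdin/stdout so nothing touches the disk, at least for subcommands that read standard input when -in is omitted:

    // A simplified sketch of the child-process approach: keys are kept in
    // memory and piped through stdin/stdout, so no temporary files and no
    // /dev/stdin path. Assumes the openssl binary is on the PATH.
    const { execFile } = require("child_process");

    // Generate a 2048-bit RSA key; without -out, openssl writes the PEM to
    // stdout, which we capture as a string.
    function generateKey(callback) {
      execFile("openssl", ["genrsa", "2048"], (err, stdout) => callback(err, stdout));
    }

    // Derive the public key: `openssl rsa` reads the private key from its
    // standard input when -in is omitted, so we just write the PEM to stdin.
    function extractPublicKey(privateKeyPem, callback) {
      const child = execFile("openssl", ["rsa", "-pubout"], (err, stdout) =>
        callback(err, stdout)
      );
      child.stdin.end(privateKeyPem);
    }

    generateKey((err, privateKeyPem) => {
      if (err) throw err;
      extractPublicKey(privateKeyPem, (err2, publicKeyPem) => {
        if (err2) throw err2;
        console.log(publicKeyPem); // the key pair never touched the disk
      });
    });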
The answer to question #1 is that you cannot. Without bindings, you can only access the functions that Node.js exposes.
Unfortunately, there doesn't seem to be a workaround for /dev/stdin on Windows. Named pipes would be an option, but Node.js does not support them. You may be able to have Node.js launch openssl.exe in interactive mode, send commands through stdin, and read the output through stdout, but this seems very inefficient.
So the answer to question #2 is that you cannot work around the Windows problem.
Writing your own binding seems to be the only option. It's actually not so difficult - something I'm sure you could get collaborators to help with.
Secure sockets check a certificate's CN, against certs in a trust collection, for the domain that is accepting or connecting. For myself, I created a private/public key set for localhost, and that helps me debug locally. If I wanted to offer an SDK, would it be considered secure to distribute a .key and .cer X509 pair for this localhost debugging use case? Or is it never considered secure to have a .key in any open space at all, because of its potential misuse?
Sorry if this is discussed elsewhere, but I could not find a clear answer on it.
This might be somewhat opinionated and also depends on your project, but I think the main risk is how people will actually use those keys. Some of them will certainly use them in production, because it is easier, or because they don't understand key pairs and just want things to work, and so on.
Any project should be secure by default for everybody involved, including end users and developers, especially if your project is something like a library or component. Secure by default in this case means not providing an actual key pair, because that would potentially be a backdoor in at least some of its uses, even though it was never meant to be used like that.
Another thing to consider is the reputation of your project. If you include a key and users misuse it on the internet, it will be easy to find and potentially exploit vulnerable instances of your project with tools like Shodan. Nobody will care that the developers used it wrong; it will be your project that's found vulnerable.
A better approach would be to provide something like an init script that generates a key and a certificate for that specific instance (see the sketch below). It can still be easy for the user and the developer, and it is secure for everybody. In the case of a Linux package, most packaging solutions even allow the installer script to do this, so it is fully transparent to the user.
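As a rough illustration of the init-script idea (the use of the openssl CLI and the file names are assumptions, not part of any particular project), such a script could generate a per-instance key pair on first run, for example from an npm postinstall hook:

    // Hypothetical init script: generate a fresh self-signed localhost
    // certificate for this particular installation instead of shipping a
    // key pair with the SDK. Assumes the openssl binary is available;
    // file names are examples only.
    const { execFileSync } = require("child_process");
    const fs = require("fs");

    if (!fs.existsSync("localhost.key")) {
      execFileSync("openssl", [
        "req", "-x509",          // self-signed certificate
        "-newkey", "rsa:2048",   // fresh key for this instance
        "-nodes",                // no passphrase on the private key
        "-keyout", "localhost.key",
        "-out", "localhost.crt",
        "-days", "365",
        "-subj", "/CN=localhost",
      ]);
      console.log("Generated a new localhost.key / localhost.crt pair.");
    }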
I'm trying to fix an old binary (sources unavailable, of course...) that fails to connect now, probably because it's using an outdated list of CAs.
However, when running it under strace, I don't see the binary attempting to read my CAs from /etc/ssl/certs.
Is it possible that the list of CAs has been bundled into the binary itself?
Thanks a lot,
Adam
To be clear, since you say the source is unavailable, I assume you mean a custom program that uses the OpenSSL library, since the source for the command-line utility program named openssl is still available for versions dating back to last century (and until 1.1.0 it didn't change much, even when it probably should have).
Yes, definitely. A program using libssl (and libcrypto) can choose whether to use the standard file(s) for its truststore, some other (custom) file(s) it specifies (often from configuration), hardcoded data as you ask, data from some other source like a (secure, we hope!) database, or even no truststore at all if it uses ciphersuites that don't use certificate authentication (anonymous, PSK, or SRP), which is rarely used but is supported by OpenSSL.
You might try strings on the program to see whether they were basic enough to embed certs (and maybe other things) in PEM; if I'm not mistaken, that's how Lenovo's Superfish was found. If they embedded binary DER, that still has enough redundancy that you could find it, just not as easily.
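If you want to script that search, here is a rough sketch that only looks for PEM markers (a DER search would take more work); the file name is arbitrary:

    // Rough sketch: scan a binary for embedded PEM certificates by searching
    // for the textual BEGIN/END markers, much like `strings` plus grep.
    // Usage: node find-pem.js /path/to/binary
    const fs = require("fs");

    const data = fs.readFileSync(process.argv[2]).toString("latin1");
    const blocks = data.match(
      /-----BEGIN CERTIFICATE-----[\s\S]+?-----END CERTIFICATE-----/g
    ) || [];

    console.log(`Found ${blocks.length} PEM certificate block(s).`);
    blocks.forEach((block, i) => {
      // Each block can be piped into `openssl x509 -noout -text` to inspect it.
      console.log(`--- block #${i + 1} (${block.length} bytes) ---`);
      console.log(block.slice(0, 64) + "...");
    });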
Look at the network traffic with Wireshark or similar, or, if you have access to the server, check its logs, to see whether the program sends an alert in the range 41 to 49 in response to the server's first flight, i.e. just after ServerHelloDone.
That would definitively indicate a certificate problem.
Can a running Node.js program cryptographically prove that it is the same as a published source-code version, in a way that could not be tampered with?
Said another way, is there a way to ensure that the commands/code executed by a Node.js program are all and only the commands and code specified in a publicly disclosed repository?
The motivation for this question is the following: in an age of highly sophisticated hackers, as well as pressure from government agencies for "backdoors" that allow them to snoop on private transactions and exchanges, can we ensure that an application has neither been hacked nor had a backdoor added?
As an example, consider an open-source Node.js application like LessPass (lesspass/lesspass on GitHub), which is used to manage passwords and is available for use here (https://lesspass.com/#/).
Or an alternative program for a similar purpose, Encryptr (SpiderOak/Encryptr on GitHub), with its downloadable version (https://spideroak.com/solutions/encryptr).
Is there a way to ensure that the versions available on their sites to download/use/install are running exactly the same code as is presented in the open source code?
Even if we have 100% faith in the integrity of the teams behind applications like these, how can we be sure they have not been coerced by anyone to alter the running/downloadable version of their program, for example to create a backdoor?
Thank you for your help with this important issue.
Sadly, no. Simple as that.
The long version:
You are dealing with the output of a program and want to ensure that this output was generated by one specific version of one specific program.
Let's check a few things:
Can an attacker predict the output of said program?
If we are talking about open-source programs, then yes: an attacker can predict what you are expecting to see, and can even reproduce all the underlying crypto checks against the original source code, or against all internal states of said program.
Imagine running the program inside a virtual machine with full debugging support: firing events at certain points in the code, reading memory directly to extract cryptographic keys, and so on. The attacker does not even have to modify the program to be able to keep copies of everything you do in plaintext.
So even if you could cryptographically make sure that the code itself was not tampered with, it would be worth nothing: the environment itself could be designed to do something harmful, and, as Maarten Bodewes wrote, in the end you need to trust something.
One could argue that a TPM could solve this, but I'm afraid of the world that leads to: in the end you still have to trust something, like a manufacturer or, worse, a public office signing keys for TPMs, and as we know those would never (you hear? never) have intentions other than what's good for you. So basically you wouldn't win anything with a centralized TPM-based infrastructure.
You can do this cryptographically by having a runtime that checks signatures before running any code. Of course, you'd have to trust that runtime environment as well. Unless you have such an environment you're out of luck - that is, unless you do a full code review.
Furthermore, you can sign the build by placing a signature within the build system. The build system and developer access can in turn be audited. This is usually how secure development environments are built. But in the end you need to trust something.
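As a minimal sketch of the signature-before-load idea (all file names here are hypothetical, and it assumes the publisher distributes a detached RSA-SHA256 signature plus a public key alongside the bundle):

    // Minimal sketch: refuse to load a code bundle unless a detached
    // RSA-SHA256 signature over it verifies against the publisher's key.
    // All file names are hypothetical; the signature could be produced
    // with e.g. `openssl dgst -sha256 -sign publisher.key`.
    const crypto = require("crypto");
    const fs = require("fs");

    const bundle = fs.readFileSync("app.bundle.js");        // code to be run
    const signature = fs.readFileSync("app.bundle.js.sig"); // detached signature
    const publicKey = fs.readFileSync("publisher-pub.pem"); // trusted public key

    if (!crypto.verify("sha256", bundle, publicKey, signature)) {
      console.error("Signature check failed; refusing to load the bundle.");
      process.exit(1);
    }
    require("./app.bundle.js"); // only runs if the signature verified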
If you're just afraid that a particular download is corrupted, you can test it against an official hash published at one or more trusted locations.
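For that simpler case, a small sketch; the expected digest and file name below are placeholders you'd take from the trusted location(s):

    // Sketch: recompute the SHA-256 digest of a download and compare it to
    // the value published at a trusted location.
    const crypto = require("crypto");
    const fs = require("fs");

    const expected = "<sha256 hex digest published by the project>";
    const actual = crypto
      .createHash("sha256")
      .update(fs.readFileSync("downloaded-release.tar.gz"))
      .digest("hex");

    console.log(actual === expected ? "Hash matches." : "Hash MISMATCH!");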
My logic for APT (Anti-Patching Technology) is as follows...
1) Store the MD5 hash of the executable on the MSSQL server, for protection.
2) At application startup, compare the hash found on the server with a hash of the executable itself.
3) If the comparison fails, exit the application silently.
And all of the above before it finally gets patched!
I mean, what is your best way to protect a file from being patched?
Without using ready-made tools (.NET Reactor, virtualizers, etc.).
Edit: Something else just came to mind.
Is there any way of checking the application's integrity on the server side?
I mean, my app works only online. Could I execute something on the server (my domain) that checks the application's integrity?
The thing is, a cracker would patch the application precisely at step 2, removing the hash-check code.
So I wouldn't call that very effective against serious crackers.
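For illustration only (sketched in Node.js, although the question doesn't say what language the app is written in; the endpoint is made up), the check in step 2 boils down to something like this, and that comparison is exactly what gets removed or inverted:

    // Rough illustration of the step-2 self-check: hash the running
    // executable and compare it with the hash stored on the server.
    // The URL is made up; process.execPath stands in for "the executable".
    // A cracker simply patches out this block or the exit call below.
    const crypto = require("crypto");
    const fs = require("fs");
    const https = require("https");

    const localHash = crypto
      .createHash("md5") // MD5, as in the question
      .update(fs.readFileSync(process.execPath))
      .digest("hex");

    https.get("https://example.com/api/app-hash", (res) => {
      let body = "";
      res.on("data", (chunk) => (body += chunk));
      res.on("end", () => {
        if (body.trim() !== localHash) {
          process.exit(0); // step 3: exit silently on a mismatch
        }
      });
    });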
EDIT: I guess your best bet is defense in depth. Given that your app has to be online, I'd:
Require authentication: Authenticate users, hopefully via a cryptographic key, and require a key check to receive/send data (see the sketch after this list).
Obfuscation: It makes things harder for crackers.
Continued checks: Besides checking who is sending data, validate the application each time a request is sent.
These can all still be circumvented, but they make things a lot harder and might dissuade some attackers if your app is not worth that much to them.
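As a rough sketch of the first and third points, one could require a keyed signature (HMAC) on every request, so the server only trusts traffic from clients holding the key; the key value and function names here are made up for illustration:

    // Rough sketch: sign every request body with a per-user HMAC key, then
    // recompute and compare on the server before trusting the request.
    const crypto = require("crypto");

    const API_KEY = "key-issued-to-this-user"; // hypothetical per-user secret

    // Client side: attach a signature to the serialized body.
    function signRequest(body) {
      const payload = JSON.stringify(body);
      const signature = crypto
        .createHmac("sha256", API_KEY)
        .update(payload)
        .digest("hex");
      return { payload, signature };
    }

    // Server side: recompute the HMAC and compare in constant time.
    function verifyRequest(payload, signature, key) {
      const expected = crypto
        .createHmac("sha256", key)
        .update(payload)
        .digest("hex");
      return (
        signature.length === expected.length &&
        crypto.timingSafeEqual(Buffer.from(signature), Buffer.from(expected))
      );
    }

    // Example: the server looks up the caller's key and checks the pair.
    const { payload, signature } = signRequest({ action: "saveScore", score: 10 });
    console.log(verifyRequest(payload, signature, API_KEY)); // true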
A patched application means the 'cracker' has complete control over the machine the code is running on (at least enough control to patch the executable). So patch prevention, however smart it might be, is working against the flow of control.
Complicating your binary file might be enough to discourage patching, so obfuscators are probably your best bet.
You can't. Once someone else has your file, they can do what they like with it; the first thing they'd do would be to patch out your anti-patching code.
If the application is running on someone else's machine, you cannot prevent them from patching it. You can make it harder, but it's a shell game: you cannot win. Regardless of how complicated you make it, some guy somewhere will see it as an interesting challenge to break your protection, and he will succeed. Then, everyone else just has to download his version. The most extreme form of patch-protection today is Skype (that I know of). It's insanely complicated, and yet it has been broken.
Since your application apparently runs online, you can ask yourself why you want to prevent patches in the first place (maybe it's to prevent the user from entering some bad values? Or to prevent them from seeing some information that's present in the program?), and then architect your program so that whatever you want to keep hidden or checked happens on the server.
For example, if it's a game and you want to prevent players from hacking it to know where the other players are: change the server so that it only sends coordinate information for the players you can already see.
Another example: if it's an online store and you want to make sure users don't submit purchase orders with incorrect prices, check the prices at the server.
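A tiny sketch of that store example (route and catalog made up): the server ignores any price the client sends and looks it up itself.

    // Sketch: never trust a client-supplied price; the server looks the
    // price up in its own catalog.
    const http = require("http");

    const CATALOG = { "sku-123": 19.99, "sku-456": 4.5 }; // server-side prices

    http
      .createServer((req, res) => {
        if (req.method !== "POST" || req.url !== "/order") {
          res.writeHead(404);
          return res.end();
        }
        let body = "";
        req.on("data", (chunk) => (body += chunk));
        req.on("end", () => {
          try {
            const order = JSON.parse(body); // e.g. { sku: "sku-123", quantity: 2 }
            const price = CATALOG[order.sku]; // any client-sent price is ignored
            if (price === undefined) {
              res.writeHead(400);
              return res.end("unknown item");
            }
            res.writeHead(200, { "Content-Type": "application/json" });
            res.end(JSON.stringify({ sku: order.sku, total: price * order.quantity }));
          } catch (e) {
            res.writeHead(400);
            res.end("bad request");
          }
        });
      })
      .listen(8080);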
The only exception to all of this is if you control the hardware that the program is running on. But even there, it's very hard to do right (see: Xbox, PS3, and the many other consoles that tried to do that and failed). It's probably still better to leverage the client/server architecture rather than betting on "trusted computing".
Crackers nowadays don't bother patching your executable file; they simply change your program's variables in-memory to make its behaviour more amenable to their requirements. Defending against this is very difficult and reasonably pointless; most games' crack-protection works only by searching for signatures of known crack programs (like an AV engine does).
Everyone nailed it: you can't stop someone, but you can make it harder for them. You could even go off the deep end and build some in-memory validation, like World of Warcraft's Warden system.
If you tell us what language you are writing in we might be able to suggest some simple obfuscation methods.
I am running a number of SSL-encrypted websites, and need to generate certificates to run on these. They are all internal applications, so I don't need to purchase a certificate, I can create my own.
I have found it quite tedious to do everything using openssl all the time, and figure this is the kind of thing that has probably been done before and software exists for it.
My preference is for Linux-based systems, and I would prefer a command-line tool rather than a GUI.
Does anyone have some suggestions?
An option that doesn't require your own CA is to get certificates from CAcert (they're free).
I find it convenient to add the two CAcert root certificates to my client machines, then I can manage all the SSL certificates through CAcert.
I know you said you prefer the command line, but for others who are interested in this, TinyCA is a very easy-to-use GUI CA application. I have used it both on Linux and on OS X.
It's likely that self-signing will give you what you need; here is a page (link resurrected by web.archive.org) that provides a decent guide to self-signing if you would like to know the ins and outs of how it's done and how to create your own script.
The original script link from this response is unfortunately dead and I was unable to find an archive of it, but there are many alternatives for pre-rolled shell scripts out there.
If you're looking for something to support fairly full-featured self-signing, then this guide for 802.1x authentication from tldp.org recommends using the helper scripts for self-signing from FreeRADIUS. Or, if you just need quick-and-dirty, then Ron Bieber offers up his "brain-dead script" for self-signing on his blog at bieberlabs.com.
Of course, there are many alternative scripts out there, but this set seems to give a good range of choices, and with a little additional info from the guide you should be able to tailor these to do whatever you need.
It's also worth checking the SSL Certificates HOWTO. It's quite old now (last updated 2002) but its content is still relevant: it explains how to use the CA Perl / Bash script provided with OpenSSL software.
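If you only need the bare bones, the underlying flow is small enough to script yourself. Here is a sketch of the three openssl invocations involved, wrapped in a small Node.js script; file names, lifetimes and subject names are examples only: create a CA once, then create and sign a CSR per site.

    // Sketch of a minimal private CA using plain openssl, driven from a
    // small Node.js script. Run step 1 once, then steps 2-3 for each site.
    const { execFileSync } = require("child_process");

    const openssl = (args) => execFileSync("openssl", args, { stdio: "inherit" });

    // 1) One-off: create the CA key and a self-signed CA certificate.
    openssl(["req", "-x509", "-newkey", "rsa:2048", "-nodes",
             "-keyout", "ca.key", "-out", "ca.crt",
             "-days", "3650", "-subj", "/CN=Internal Test CA"]);

    // 2) Per site: create a key and a certificate signing request.
    openssl(["req", "-newkey", "rsa:2048", "-nodes",
             "-keyout", "site.key", "-out", "site.csr",
             "-subj", "/CN=intranet.example.local"]);

    // 3) Sign the CSR with the CA; clients then only need to trust ca.crt.
    openssl(["x509", "-req", "-in", "site.csr",
             "-CA", "ca.crt", "-CAkey", "ca.key", "-CAcreateserial",
             "-out", "site.crt", "-days", "825"]);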
The XCA software appears reasonably well maintained (copyright 2012, uses Qt4), with a well-documented and simple enough user interface, and it has packages on Debian, Ubuntu, and Fedora.
Don't judge the website at first sight:
http://xca.sourceforge.net/
Rather, check this nice walkthrough to add a new CA:
http://xca.sourceforge.net/xca-14.html#ss14.1
You can see a screenshot of the application there: http://sourceforge.net/projects/xca/
It is GUI-based though, not command-line.
There's a simple webpage solution: https://www.ibm.com/developerworks/mydeveloperworks/blogs/soma/entry/a_pki_in_a_web_page10
I like to use the easy-rsa scripts provided with OpenVPN. This is a collection of command line tools used to create the PKI environment required for OpenVPN.
But with a slight change of the (also provided) openssl.cnf file you can create pretty much anything you want with it.
I use it for self-signing SSL server certificates, as well as with Bacula backup, and for creating private keys/CSRs for "real" certificates.
Just download the OpenVPN community edition source tarball and copy the easy-rsa folder to your Linux machine. You'll find lots of documentation on the OpenVPN community pages.
I used to use CAcert. It's also nice, but you have to create the CSR yourself, so you have to use openssl again, and the certs are only valid for half a year, which is annoying.
I created a wrapper script, written in Bash, for OpenSSL that might be useful to you here. To me, the easiest sources of user error when using OpenSSL were:
Keeping a consistent and logical naming scheme for configuration/certs/keys so that I can see how every artifact fits into the entire PKI by just looking at the file name/extension
Enforcing a folder structure that's consistent across all CA machines that use the script.
Specifying too many configuration options via the CLI and losing track of some of the details.
The strategy is to push all configuration into its own files, leaving only the execution of a particular action to the CLI. The script also strongly enforces a particular naming scheme for folders/files, which is helpful when looking at any single file.
Use/Fork/PR away! Hope it helps.