So just a disclaimer, I am currently learning Rust so this might be a completely stupid question.
I have some plans in the making to deploy a simple Rust app to the cloud - currently decided on an AWS Lambda workflow for this. I am planning to invoke a bunch of API calls which require authentication, so I am wondering what the best practice is for storing secret values in Rust. The reason I ask is that I know Rust is a compiled language, so it takes all the source code and builds a binary out of it.
What I am wondering is just how safe is it to hard code secret values (like access keys, tokens etc.) in Rust code. Since the code will be compiled, is there any way that a potential attacker or a hacker could look into the Rust compiled binary and figure out my hard-coded secret values?
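For context, I know there are tools that dump every printable string out of a binary; here is a rough sketch (Python, purely illustrative, with a hypothetical binary path) of that kind of extraction, which is essentially what the Unix strings utility does - this is the sort of attack I am worried about:

    import re
    import sys

    def dump_strings(path, min_len=8):
        """Print every run of printable ASCII of at least min_len bytes.

        Hard-coded string literals (including access keys and tokens)
        typically end up in the binary as exactly such runs.
        """
        with open(path, "rb") as f:
            data = f.read()
        for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
            print(match.group().decode("ascii"))

    if __name__ == "__main__":
        # Hypothetical path to the compiled Rust binary.
        dump_strings(sys.argv[1] if len(sys.argv) > 1 else "./target/release/my_app")

If a secret is compiled in as a plain string literal, it would typically show up in output like this.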
The alternative here, of course, is to store things in a secret place, like Secrets Manager, and then do a runtime call to retrieve the data. But this will add some additional overhead to the task, especially if it is scheduled to run every few minutes; hence I was curious how safe it actually is to hardcode secret values in Rust code. Thanks, and I welcome any clarification.
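For reference, the alternative I have in mind looks roughly like this (a Python/boto3 sketch rather than Rust, and the secret name is hypothetical), caching the value for the lifetime of the warm Lambda container so the Secrets Manager call only happens on cold starts:

    import boto3

    # Module-level cache: reused across invocations while the Lambda
    # execution environment stays warm.
    _secrets = boto3.client("secretsmanager")
    _cached_api_key = None

    def get_api_key():
        global _cached_api_key
        if _cached_api_key is None:
            # "my-app/api-key" is a hypothetical secret name.
            resp = _secrets.get_secret_value(SecretId="my-app/api-key")
            _cached_api_key = resp["SecretString"]
        return _cached_api_key

    def handler(event, context):
        api_key = get_api_key()
        # ... make the authenticated API calls with api_key ...
        return {"status": "ok"}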
Related
Very closely related: How to protect strings without SecureString?
Also closely related: When would I need a SecureString in .NET?
Extremely closely related (OP there is trying to achieve something very similar): C# & WPF - Using SecureString for a client-side HTTP API password
The .NET Framework has a class called SecureString. However, even Microsoft no longer recommends its use for new development. According to the first linked Q&A, at least one reason for that is that the string will be in memory in plaintext anyway for at least some amount of time (even if it's a very short amount of time). At least one answer also extended the argument that, if an attacker has access to the server's memory anyway, security is in practice probably already shot, so it won't help you. (The second linked Q&A implies that there was even discussion of dropping this from .NET Core entirely.)
That being said, Microsoft's documentation on SecureString does not recommend a replacement, and the consensus on the linked Q&A seems to be that that kind of a measure wouldn't be all that useful anyway.
My application, which is an ASP.NET Core application, makes extensive use of API calls to an external vendor using the HttpClient class. The generally recommended best practice for HttpClient is to use a single instance rather than creating a new instance for each call.
However, our vendor requires that all API calls include our API key as a header with a specific name. I currently store the key securely, retrieve it in Startup.cs, and add it to our HttpClient instance's headers.
Unfortunately, this means that my API Key will be kept in plaintext in memory for the entire lifecycle of the application. I find this especially troubling for a web application on a server; even though the server is maintained by corporate IT, I've always been taught to treat even corporate networks as semi-hostile environments and not to rely purely on corporate firewalls for application security in such cases.
Does Microsoft have a recommended best practice for cases like this? Is this a potential exception to their recommendation against using SecureString? (Exactly how that would work is a separate question). Or is the answer on the other Q&A really correct in saying that I shouldn't be worried about plaintext strings living in memory like this?
Note: Depending on responses to this question, I may post a follow-up question about whether it's even possible to use something like SecureString as part of HttpClient headers. Or would I have to do something tricky like populate the header right before using it and then remove it from memory right afterwards? (That would create an absolute nightmare for concurrent calls though). If people think that I should do something like this, I would be glad to create a new question for that.
You are being WAY too paranoid.
Firstly, if a hacker gets root access to your web server, you have WAY bigger problems than your super-secret web app credentials being stolen. Way, way, way bigger problems. Once the hackers are on your side of the airtight hatchway, it is game over.
Secondly, once your infosec team detects the intrusion (if they don't, again, you've got WAY bigger problems) they're going to tell you and the first thing you're going to do is change every key and password you know of.
Thirdly, if a hacker does get root access to your webserver, their first thought isn't going to be "let's take a memory dump for later analysis". A dumpfile is rather large (will take time to transfer over the wire, and the network traffic might well be noticed) and (at least on Windows) hangs the process until it's complete (so you'd notice your web app was unresponsive) - both of which are likely to raise some red flags.
No, hackers are there to grab as much valuable information as possible in the least amount of time, because they know their access could be discovered at any second. So they're going to go for the low-hanging fruit first - usernames and passwords. Then they'll move on to trying to find out what's connected to that server, and since your DB credentials are likely in a config file on that server, they will almost certainly switch their attention to that far more interesting target.
So all things considered, your API key is pretty darn unlikely to be compromised - and even if it is, it won't be because of something you did or didn't do. There are far more productive ways of focusing your time than trying to secure something that already is (or should be) incredibly secure. And, at the end of the day, no matter how many layers of security you put in place... that API or SSL key is going to be raw, in memory, at some stage.
I have created a Python module which I would like to distribute via PyPI. It relies on a third party API which in turn requires a free API key.
Yesterday I asked this question on how to reference a YAML file (which would contain the API keys) using the true path of the module. However, that got me thinking of other approaches:
Ask the users to save the API key as an environment variable and have the script check for the existence of said variable
Ask the user to pass in the API key as a keyword argument (**kwargs) when creating a new instance of the object, e.g.
thing = CreateThing(user_key = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa', api_key = 'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbb')
I would like to see what the community thinks on this topic.
I have created a Python module which I would like to distribute via PyPI. It relies on a third party API which in turn requires a free API key.
Even though it is a free API key, you should never have it in your code, much less distribute your code to the public with it.
My advice is to never have any secrets in your code, not even default secrets, which many developers like to put in their calls that get values from environment variables, configuration files, databases, or wherever else they retrieve them from.
When dealing with secrets you must always raise an exception when you fail to obtain one... Once more: don't use default values in your code, not even with the excuse that they will only be used during development and/or testing.
I recommend reading this article I wrote about leaking secrets in your code to understand the consequences of doing so, like this one:
Hackers can, for example, use exposed cloud credentials to spin up servers for bitcoin mining, for launching DDOS attacks, etc and you will be the one paying the bill in the end as in the famous "My $2375 Amazon EC2 Mistake"...
While the article is in the context of leaking secrets in the code of a mobile app, most of it applies to any type of code we write and commit into repositories.
About your proposed solutions
Ask the users to save the API key as an environment variable and have the script check for the existence of said variable
Ask the user to pass in the API key as a keyword argument (**kwargs) when creating a new instance of the object, e.g.
thing = CreateThing(user_key = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa', api_key = 'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbb')
Number 1 is a good one, and you should use the dot-env file approach here, maybe using a package like this one. But remember to raise an exception if the values do not exist; please never use defaults from your code.
Regarding solution 2: it is more explicit for the developer using your library, but you should also recommend the .env file to them and help them understand how to manage it properly. For example, secrets used in .env files should be retrieved from vault software.
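A minimal sketch of that recommendation (assuming the python-dotenv package and a hypothetical MY_API_KEY variable name), failing loudly instead of falling back to a default:

    import os

    from dotenv import load_dotenv  # python-dotenv, assumed to be installed

    # Pull variables from a git-ignored .env file into the environment.
    load_dotenv()

    api_key = os.getenv("MY_API_KEY")  # hypothetical variable name
    if api_key is None:
        # Fail loudly: never fall back to a default secret baked into the code.
        raise RuntimeError("MY_API_KEY is not set; refusing to start without it")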
A SECURITY WARNING
Dot-env files must never be committed into source control, and when committing the .env.example file into your git repo, it must not contain any default values.
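For example, a committed .env.example should only document which variables are needed, never their values (the variable names here are hypothetical):

    # .env.example - copy to .env (which is git-ignored) and fill in locally
    MY_API_KEY=
    MY_USER_KEY=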
You may think, "If I commit it accidentally to GitHub I will just clean my commits, rewrite the history, and do a force push." Well, think twice and see why that will not solve the problem you have created:
Well I have bad news for you... it seems that some services cache all github commits, thus hackers can check these services or employ the same techniques to immediately scan any commit sent to github in a matter of seconds.
Source: the blog post I linked above.
And remember what I quoted earlier: "My $2375 Amazon EC2 Mistake" was due to leaked credentials in an accidental GitHub commit.
From this answer: it is recommended to use the OS keyring. In my lib odsclient I ended up implementing a series of alternatives, from the most secure (keyring) to the less secure ones (OS environment variable, git-ignored text file, arg-passed API key, possibly obfuscated with getpass()).
You might wish to have a look at it for inspiration; see this part of the doc in particular.
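A minimal sketch of the keyring approach (the service and account labels below are arbitrary, chosen only for illustration):

    import getpass
    import keyring

    SERVICE = "my-ods-client"  # arbitrary service label
    ACCOUNT = "api"            # arbitrary account label

    # One-time setup: prompt for the key so it never lands in code or shell
    # history, and store it in the OS keyring (Keychain, Windows Credential
    # Locker, Secret Service, ...).
    if keyring.get_password(SERVICE, ACCOUNT) is None:
        keyring.set_password(SERVICE, ACCOUNT, getpass.getpass("API key: "))

    # Normal runtime path: read the key back from the keyring.
    api_key = keyring.get_password(SERVICE, ACCOUNT)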
I am aware that the standard way to store sensitive data is in environment variables, in particular outside of the git repo.
There are many posts discussing this topic, reiterating this as standard practice, but I am still unclear on the pros and cons of storing passwords/keys as environment variables versus simply as JSON somewhere in the user's home directory outside the repo.
Unless I'm mistaken, if the server becomes compromised, both environment variables and an arbitrary JSON file are equally exposed to someone with access to the machine.
The two methods seem remarkably similar when you consider that, for persistent environment variables, the keys and secrets would probably be stored in an appropriate script like .profile anyway.
If you store API keys and something/someone gains permissions like those of the program which must normally use those keys, it is theoretically game over. However, in practice there are games you can play to make life hard for the adversary (and, sadly, for yourself):
Obfuscate the key locally... back up the key (encrypted) in a vault with a passphrase and only ever use the key in a compiled program that uses char arrays instead of strings (depends on the language, of course). Maybe pass this program into an obfuscator which makes reverse engineering difficult. (A rough sketch of the encryption step appears after this list.)
Create an encrypted key store locally that only coughs up the key to programs that it "validates" as legitimate in some way.
use a "honey" key that you actively monitor for any usage of that appears to be the real key but you store the real key in a obfuscated way somewhere cleverly in the codebase.
Store the key remotely (in a "more secure" server) and force the call to route through a remote service there which checks the message validity in some way, the calling server's IP, the mac address, etc. and inserts the key and passes on the request acting as a proxy of sorts.
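As a rough sketch of the first option above (using Python's cryptography package; the passphrase prompt and placeholder plaintext are illustrative), deriving the encryption key from a passphrase so that only an encrypted blob ever needs to sit on disk:

    import base64
    import getpass
    import os

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def fernet_key(passphrase: bytes, salt: bytes) -> bytes:
        # Derive a 32-byte key from the passphrase; Fernet expects it base64-encoded.
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
        return base64.urlsafe_b64encode(kdf.derive(passphrase))

    passphrase = getpass.getpass("Passphrase: ").encode()
    salt = os.urandom(16)

    # Encrypt the API key; in practice you would write `salt` and `token`
    # to disk and keep the plaintext out of the codebase entirely.
    token = Fernet(fernet_key(passphrase, salt)).encrypt(b"placeholder-api-key")

    # Decrypt it again at runtime; a wrong passphrase raises InvalidToken.
    api_key = Fernet(fernet_key(passphrase, salt)).decrypt(token)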
Wish I could give a better answer. At the end of the day security is best effort. Secure your perimeter, your network, and the server and make it difficult to find the key post-intrusion. This difficulty may give you time to detect the intrusion before a data breach occurs.
Can a running nodejs program cryptographically prove that it is the same as a published source code version in a way that could not be tampered with?
Said another way, is there a way to ensure that the commands/code executed by a nodejs program are all and only the commands and code specified in a publicly disclosed repository?
The motivation for this question is the following: in an age of highly sophisticated hackers as well as pressure from government agencies for "backdoors" that allow them to snoop on private transactions and exchanges, can we ensure that an application has neither been hacked nor had a backdoor added?
As an example, consider an open source-based nodejs application like lesspass (lesspass/lesspass on github) which is used to manage passwords and available for use here (https://lesspass.com/#/).
Or an alternative program for a similar purpose encryptr (SpiderOak/Encryptr on github) with its downloadable version (https://spideroak.com/solutions/encryptr).
Is there a way to ensure that the versions available on their sites to download/use/install are running exactly the same code as is presented in the open source code?
Even if we have 100% faith in the integrity of the teams behind applications like these, how can we be sure they have not been coerced by anyone to alter the running/downloadable version of their program to create a backdoor, for example?
Thank you for your help with this important issue.
Sadly, no.
Simple as that.
The long version:
You are dealing with the outputs of a program, and want to ensure that the output is generated by a specific version of one specific program.
Let's check a few things:
Can an attacker predict the outputs of said program?
If we are talking about open source programs, yes: an attacker can predict what you are expecting to see and can even reproduce all underlying crypto checks against the original source code, or against all internal states of said program.
Imagine running the program inside a virtual machine with full debugging support, like firing events at certain points in the code, directly reading memory to extract cryptographic keys, and so on. The attacker does not even have to modify the program to be able to keep copies of everything you do in plaintext.
So... even if you could cryptographically make sure that the code itself was not tampered with, it would be worth nothing: the environment itself could be designed to do something harmful, and as Maarten Bodewes wrote, in the end you need to trust something.
One could argue that a TPM could solve this, but I'm afraid of the world that leads to: in the end you still have to trust something, like a manufacturer or, worse, a public office signing keys for TPMs... and as we know, those would never - you hear? never - have intentions other than what's good for you. So basically you wouldn't win anything with a centralized TPM-based infrastructure.
You can do this cryptographically by having a runtime that checks signatures before running any code. Of course, you'd have to trust that runtime environment as well. Unless you have such an environment you're out of luck - that is, unless you do a full code review.
Furthermore, you can sign the build by placing a signature within the build system. The build system and developer access in turn can be audited. This is usually how secure development environments are built. But in the end you need to trust something.
If you're just afraid that a particular download is corrupted you can test against an official hash published at one or more trusted locations.
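For that last case, a minimal sketch of such a hash check (the file name and expected digest below are placeholders):

    import hashlib

    DOWNLOAD = "encryptr-installer.dmg"  # placeholder: the downloaded artifact
    EXPECTED_SHA256 = "0" * 64           # placeholder: digest published at a trusted location

    sha256 = hashlib.sha256()
    with open(DOWNLOAD, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)

    if sha256.hexdigest() != EXPECTED_SHA256:
        raise SystemExit("Checksum mismatch - do not run this download")
    print("Checksum OK")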
What are some effective and secure methods of securing SQL queries?
In short, I would like to ensure that programmers do not see the passwords used by the application to perform queries. Something like RSA or PGP comes to mind, but I don't know how one can implement a changing password without it being encoded in the application somewhere.
Our environment is a typical Linux/MySQL.
This might be more of a process issue and less of a coding issue.
You need to strictly separate the implementation process and the roll-out process during software development. The configuration files containing the passwords must be filled with the real passwords during roll-out, not before. The programmers can work with the password for the developing environment and the roll-out team changes those passwords once the application is complete. That way the real passwords are never disclosed to the people coding the application.
If you cannot ensure that programmers do not get access to the live system, you need to encrypt the configuration files. The best way to do this depends on the programming language. I am currently working on a Java application that encrypts the .properties files with the appropriate functions from the ESAPI project and I can recommend that. If you are using other languages, you have to find equivalent mechanisms.
Any time you want to change passwords, an administrator generates a new file and encrypts it, before copying the file to the server.
In case you want maximum security and do not want to store the key to decrypt the configuration on your system, an administrator can supply it whenever the system reboots. But this might take things too far, depending on your needs.
If programmers don't have access to the configuration files that contain the login credentials and can't get to them through the debug or JMX interfaces then that should work. Of course that introduces other problems but that would potentially satisfy your requirement. (I am not a Qualified Security Assessor - so check with yours to be sure for PCI compliance.)