How does a Wifi manager encrypt profiles (WinCE) - security

Hi I am trying to develop a small Wifi manager, and I have one question:
I need to encrypt the profile file when saving it in the disk, and decrypt it when loading it.
I will use a pass-phrase to do that, so how can I store my pass-phrase:
- If I store it in a file, it's too easy to dump
- If I hardcode it, it will be visible somewhere in my binary
- I am under Windows CE, and I don't know if there is a secure store to save data?
- I know that under Linux we can have the trust store, and by tuning some permissions only 'root' will have access to it, which is enough for me. Is there any possibility to do something like that under WinCE?
Regards,

Microsoft's Wireless Zero Config stores the WiFi keys in a registry key defined in eapol.h: [HKLM]\Comm\EAPOL\Config
The string is encrypted using CryptProtectData with the CRYPTPROTECT_SYSTEM flag.

Related

Linux (Ubuntu) equivalent to Windows DPAPI

I am trying to find a solution to store secrets (to be used by my application) on Ubuntu Server 20.04. I have used Windows' DPAPI in the past to store secrets using the protection of the user account accessing the API.
Is there an official package on apt or snap providing this? Is there something like that inside the Linux kernel?
I would use the file system ACLs, but this is not enough for me, as I want the files to be unusable if the hard drive is compromised.
This project stores secrets on Windows/macOS/Linux.
https://pypi.org/project/keyring/
Specifically:
https://en.wikipedia.org/wiki/Keychain_%28software%29
https://specifications.freedesktop.org/secret-service/latest/
https://en.wikipedia.org/wiki/KWallet
It's a good reference no matter what language you're using.
To achieve a similar workflow to DPAPI, generate a 16-byte secure random password, save it in the keychain, and use it to encrypt your data.
If you have more than one file, you will want to place a 16-byte secure random salt in each file, and SHA-3 hash (or, better, HKDF) it with your random password. This will give you one key per file without filling up the keychain.
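The salt-per-file scheme above can be sketched in Python. The keychain step is assumed (in a real app the master password would live in the OS keychain, e.g. via the `keyring` package); here it is just generated inline, and SHA3-256 stands in for the suggested hash:

```python
import hashlib
import secrets

# Assumed setup: a 16-byte master password that, in a real app, would be
# generated once and stored in the OS keychain; generated inline here.
master_password = secrets.token_bytes(16)

def derive_file_key(master: bytes, salt: bytes) -> bytes:
    """Per-file key: hash a random salt together with the master password.
    SHA3-256 as the answer suggests; HKDF would be the stronger choice."""
    return hashlib.sha3_256(salt + master).digest()

# One random 16-byte salt per file, stored in the clear alongside that file.
salt_a = secrets.token_bytes(16)
salt_b = secrets.token_bytes(16)

key_a = derive_file_key(master_password, salt_a)
key_b = derive_file_key(master_password, salt_b)
```

Each 32-byte result is usable directly as, say, an AES-256 key, and losing one file's salt only loses that file's key.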

What is the risk of hardcoded credentials in creating database connection?

Hi security aware people,
I have recently scanned my application with a tool for static code analysis and one of the high severity findings is a hardcoded username and password for creating a connection:
dm.getConnection(databaseUrl,"server","revres");
Why does the scanner think this is a risk for the application? I can see some downsides such as not being able to change the password easily if it's compromised. Theoretically someone could reverse-engineer the binaries to learn the credentials. But I don't see the advantage of storing the credentials in a config file, where they are easy to locate and read, unless they are encrypted. And if I encrypt them, I will be solving the same problem with the encryption key...
Are there any more risks that I cannot see? Or should I use a completely different approach?
Thank you very much.
A fixed password embedded in the code will be the same for every installation, and accessible by anyone with access to the source code or binary (including the installation media).
A password read from a file can be different for each installation, and known only to those who can read the password file.
Typically, your installer will generate a unique password per site, and write that securely to the file to be read by your application. (By "securely", I mean using O_CREAT|O_EXCL to prevent symlink attacks, and with a correct selection of file location and permissions before anyone else can open it).
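A minimal Python sketch of that installer step, under the assumptions the answer states (the path and password format are illustrative):

```python
import os
import secrets
import tempfile

def write_password_file(path: str) -> str:
    """Create the per-site password file the way the answer describes:
    O_CREAT|O_EXCL makes the call fail if the path already exists (including
    a planted symlink), and mode 0600 applies from the very first instant."""
    password = secrets.token_urlsafe(24)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    try:
        os.write(fd, (password + "\n").encode())
    finally:
        os.close(fd)
    return password

# Demo in a throwaway directory (a real installer would pick a secure location).
workdir = tempfile.mkdtemp()
passfile = os.path.join(workdir, "site.pass")
password = write_password_file(passfile)
```

A second call against the same path raises `FileExistsError`, which is exactly the symlink-attack protection being described.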
This is an interesting one. I can give you examples for a .NET application (as you haven't specified your running environment / technologies), although my guess is Java. I hope this is still relevant and helps you.
My main advice would be to read this article and go from there: Protecting Connection information - MSDN
Here is a page that describes working with encrypted configuration files.
I've seen this solved both using encrypted configuration files and Windows authentication. I think that running your application as a user that is granted access to only the relevant stored procedures etc. (as little as possible, i.e. the principle of least privilege), and likewise restricted folder access, is a good route.
I would recommend using both techniques because then you can give relevant local folder access to the pool for IIS and split out your user access in SQL etc. This also makes for better auditing!
This depends on your application needs though. The main reason to make this configurable via a config file or environmental user account I would say is so that when you come to publish your application to production, your developers do not need access to the production user account information and instead can just work with Local / System test / UAT credentials instead.
And of course the credentials are then not stored in plain text in your source control check-in either, which matters because if you host on a distributed network like Git, a compromise of the repository would hand a hacker the credentials.
I think it depends on how accessible / secure your source code or compiled code is. Developers usually have copies of the code on their dev boxes, which are usually not nearly as secure as production servers, and so are much more easily hacked. Generally, a test user / pw is configured on the dev box, and in production, the "real" pw is stored in much more secure config files. Yes, if someone hacked into the server they could easily get the credentials, but that is much more difficult than getting into a dev box in most cases. But like I said it depends. If there is only one dev, and they have a super secure machine they work with, and the repo for their code is also super secure, then there is no effective difference.
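The dev-box-versus-production split described above can be sketched in Python. The variable names `DB_USER` / `DB_PASSWORD` are illustrative, not from the original post:

```python
import os

def load_db_credentials() -> tuple:
    """Production reads the real credentials from the environment (or a
    protected config file); dev boxes fall back to throwaway test values,
    so the real password never lives in source control or on dev machines."""
    user = os.environ.get("DB_USER", "dev_user")
    password = os.environ.get("DB_PASSWORD", "dev_only_password")
    return user, password

user, password = load_db_credentials()
# ...then e.g. dm.getConnection(databaseUrl, user, password) instead of literals.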
What I do is ask the end user for the credentials initially, and then encrypt and store them in a file. This way, as a dev, I don't know their connection details and passwords. The key is a hashed binary, and I store it by poking extra bytes in between. Anyone who wants to crack it would have to find out the algorithm used, the key and vector lengths, their location, and the start and end positions of the byte sequence holding the values. A genius who would also reverse-engineer my code to get all this information could break into it (but it might be easier to directly crack the end user's credentials).

How to store private key or secure information / data with Electron

I am developing a standalone cross-platform app using Electron.
I want to store private data, like a private key, used by the app for
operations such as encrypting / decrypting data.
Or
I want to store some secure information, like a user password or proprietary
data, in the app.
Is there any way to store this kind of secure information so that the app user cannot get at it?
There is an NPM module made for the Atom editor (the app Electron was made for) called Keytar. It uses the native OS APIs for secure storage, e.g. the Keychain on OS X.
https://github.com/atom/node-keytar
I don't know the specific technology that you are using, so my answer will point in general to the key storage issue.
First, two big remarks:
Even with heavy specialized hardware (banks and other critical systems use Hardware Security Modules -HSMs- for this), there is always a risk of getting your key stolen. What you choose to do depends on how important your key is and how much you are willing to do to protect it. I will try to avoid mentioning solutions involving hardware, because they are usually overkill for most people.
There are, however, good practices that you can follow: https://www.owasp.org/index.php/Cryptographic_Storage_Cheat_Sheet
Now, some advice. Whatever you do, don't store your key in plaintext (and much less hardcoded). If you are using public-key cryptography, PKCS#12 files (usually with extension .p12 or .pfx) are the standard way to store the data. They are usually password-protected.
Here you face a problem: if you have a key, you need to use it. If you use the key, it will be in "plaintext", at least in RAM. So, you need a way to enable the access that keeps the key as isolated as possible. If the actions are triggered by a user, things are relatively nice, because you could ask for the password before using the key.
If the actions are automated, however, you need to find a way to store the password. Even security software like some PGP implementations have approaches for this that aren't nice:
Ask for the password on the command line: command -password my-password. This, put in a .bat file, works. But the password is stored and, depending on the operating system, even available in the command history.
Store it in a file: at least you don't leave copies around, but the password is still in plaintext.
Encrypt it using system data as encryption key: the password is relatively protected, but you lose portability and an attacker with access to the computer won't be stopped by the control.
Ask for the password once the service is on: a bit more reasonable, but not always possible (if the service is critical but just one person has the password, availability might be compromised).
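The third option (encrypting with system data as the key) could be sketched like this in Python; the machine identifiers and salt are illustrative, and, as the answer warns, this only deters casual copying of the encrypted file to another host:

```python
import hashlib
import platform
import uuid

def system_bound_key() -> bytes:
    """Derive the key-encryption key from machine identifiers, so the stored
    secret only decrypts on this host. node() and getnode() are weak
    identifiers; DPAPI or a TPM is the stronger version of this idea."""
    machine_id = f"{platform.node()}-{uuid.getnode()}".encode()
    # Stretch the identifier; the salt here is a fixed application constant.
    return hashlib.pbkdf2_hmac("sha256", machine_id, b"app-constant-salt", 100_000)

key = system_bound_key()
```

The derivation is deterministic on a given machine, which is what makes the stored ciphertext non-portable.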
Fancy things could be done with threshold decryption, but that's probably too much for that case also.
I do not provide details on each option because what you can do probably depends on what your framework allows and the way in which your system is used, but I hope it helps as a reference of the different options. In any case, do not implement any cryptographic functionality on your own. Bad crypto is worse than no crypto at all.
Avoid storing private or server-side details like a private key in an Electron app. An Electron app's data and files can be accessed from the app.asar file, and Electron does not protect the content at all. There is no mechanism for code protection in Electron. However, NW.js supports source code protection; you can read about it here. So in my view it's not safe to store private credentials like a signing certificate or private key in Electron source code.
As another way, you can store these data using node-keytar in the Keychain on macOS, the Credential Manager on Windows, and GNOME Keyring on Linux, via native APIs. But still, these credentials are accessible to the user, and it does not make sense to store private tokens there (e.g. a token for a GitHub private repository with administrative rights). It depends upon the user: if they are sophisticated enough to understand what you stored in the Keychain, Credential Manager or Keyring, they can misuse it or use it against you. So the final answer is,
Do not store Credentials/Private key or Administrative Tokens in electron source or using node-keytar.
A convenient way of storing data in Electron is this package: https://www.npmjs.com/package/electron-data-holder
This package stores data in a JSON file, but it gives you the ability to encrypt the data.
read more in the documentation

How do you provide encryption keys to a daemon or service?

I am trying to figure out a solution to a 'chicken and egg' issue which I have come across in a project I am working on for a new venture.
The systems in question are handling credit card data, and as such the card numbers etc. need to be stored encrypted in the database. In order to comply with PCI requirements we have the numbers encrypted with unique key pairs for each 'merchant', so if one merchant is compromised it shouldn't be possible to access another merchant's cardholder data.
This is fine when it comes to human interaction with the system, as the human can enter the passphrase to unlock the private key, and then decrypt the data, however when it comes to automated services which need to access the data (i.e. to process transactions at a later date) there is an issue with how best to provide the credentials to the service/daemon process.
A bit of background on the system:
card numbers are encrypted with asymmetric key pairs
the private key is passphrase protected
this passphrase is then encrypted with a 'master' key pair
the passphrase to unlock the master private key is then known by the operators granted permission (well, actually they have a copy of it encrypted with their own key pair, to which only they know the passphrase).
the daemon process will be run as its own user and group on a linux system.
For the daemon to be able to decrypt the data I was considering the following:
Setup a passphrase file similar to how .pgpass works
Store the file in the home directory for the daemon user
Set the permissions to 0600 for the file
Setup a file integrity monitoring system such as Tripwire to notify a security group (or similar) of any changes to the file or permissions.
Disable login for the daemon user, as it is used only for the process.
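A minimal Python sketch of the startup check implied by steps 2-3 above (the file path and error handling are illustrative; a throwaway temp file stands in for the daemon user's home directory):

```python
import os
import stat
import tempfile

def check_passphrase_file(path: str) -> str:
    """Refuse to run unless the passphrase file matches the setup above:
    owned by the daemon user and mode 0600 (no group/world access)."""
    st = os.stat(path)
    if st.st_uid != os.getuid():
        raise PermissionError("passphrase file not owned by this user")
    if st.st_mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError("passphrase file is group/world accessible")
    with open(path) as f:
        return f.read().strip()

# Demo with a throwaway file standing in for ~/.pgpass-style storage.
workdir = tempfile.mkdtemp()
passfile = os.path.join(workdir, "daemon.pass")
with open(passfile, "w") as f:
    f.write("s3cret\n")
os.chmod(passfile, 0o600)
passphrase = check_passphrase_file(passfile)
```

Refusing to start on bad ownership or permissions complements the FIM alerting: the daemon itself notices tampering before it ever reads the secret.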
Given the above, I am wondering if this is sufficient. Obviously the weakness is the system administrators - there are few of these (i.e. 2) trusted on the secure systems - since they can elevate their permissions (i.e. to root) and then change ownership of the files, or the permissions, to be able to read the passphrase. However, once again, this is likely something which can be mitigated with monitoring of file checksum changes, FIM checksums, etc.
So am I going about this the wrong way or are there other suggestions on how to handle this?
Not sure how much help this will be, as given that your aim is compliance with PCI-DSS, the person you need to convince is your QSA.
Most QSA companies are happy to work in a consultative capacity and help you find a suitable solution rather than working purely in an assessment capacity so get them involved early and work with them to get a solution they are happy to sign off as compliant.
It may be worth getting them to document why it is sufficient as well so that if you change QSA in the future you can take the reasoning with you in case the subject comes up again.
One thing they are likely to mention based on the above solution is split knowledge for key management. It sounds like a single administrator has all the knowledge needed to access keys where PCI (3.6.6) requires split knowledge and dual control for manual clear-text key-management procedures.

how to manage an asymmetric key inside a key container for an enterprise software?

hello
I have an educational software package that should be installed on different PCs across the enterprise.
My program uses about 5000 text, XML and HTML files as the source of its content. I don't want my source to be tampered with, copied or used illegally. What I intend to do is to encrypt my source separately and then put the encrypted files in a folder inside my app, so that later the app can read and decrypt each file requested by the user. The app will be installed and used anywhere.
But the problem is that to secure and store the encryption key inside my application I have to use a key container, while as far as I remember (correct me if I'm wrong) they're machine-based and can't be used on different machines, whereas I need my key to be fixed for all installed copies on any PC. I know a lot of software uses such an architecture, but I don't know how they do it.
Any ideas?
If you put the key on every PC (and you have to if you want them to be able to run your software) then everyone will have it and the encryption is pointless.
but the problem is that to secure and store the encryption key inside my application
Yeah, you can encrypt that key with another key and then turtles all the way down... What you are trying to do is impossible. Don't waste your time. You will gain no security whatsoever and the only thing you will do is waste cycles and annoy users making their computers slower.
