Can someone help me understand what mix-and-match attacks are in security? I tried searching on Google but could not get a clear idea. An explanation with an example would be helpful.
Also, how is this related to the snapshot key in Docker Content Trust?
That expression is used in "Manage keys for content trust":
snapshot: This key signs the current collection of image tags, preventing mix and match attacks.
When doing a docker push with Content Trust enabled for the first time, the root, targets, snapshot, and timestamp keys are generated automatically for the image repository:
The timestamp and snapshot keys are safely generated and stored in a signing server that is deployed alongside the Docker registry. These keys are generated in a backend service that isn't directly exposed to the internet and are encrypted at rest.
That is part of the Docker Notary architecture:
Rollback, Freeze, Mix and Match - The attacker can request that the Notary signer sign any arbitrary timestamp (and maybe snapshot) metadata they want. Attackers can launch a freeze attack, and, depending on whether the snapshot key is available, a mix-and-match attack up to the expiration of the targets file.
Clients both with and without pinned trust would be vulnerable to these attacks, so long as the attacker ensures that the version number of their malicious metadata is higher than the version number of the most recent good metadata that any client may have.
Note that the timestamp and snapshot keys cannot be compromised in a server-only compromise, so a key rotation would not be necessary. Once the Server compromise is mitigated, an attacker will not be able to generate valid timestamp or snapshot metadata and serve them on a malicious mirror, for example.
An attacker can add malicious content, remove legitimate content from a collection, and mix up the targets in a collection, but only within the particular delegation roles that the key can sign for.
For a definition of those terms, see "Improving Hackage security" (written for Haskell, but it also applies to a Docker registry):
Rollback attacks where an attacker gets a client to install an older version of a package than a version the client previously knew about.
Consider for example a case where the older package might have known security vulnerabilities.
Freeze attacks where the attacker prevents the entire set of packages from being updated (e.g. by always responding with an old snapshot).
Mix and match attacks where the attacker supplies combinations of packages or package metadata that never existed in the upstream repository.
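To make the snapshot role's job concrete, here is a minimal sketch (not actual Notary or TUF client code; the metadata shape is simplified) of how a client can use signed snapshot metadata to reject a mixed collection:

// Sketch: the snapshot metadata pins the exact hash of every targets file,
// so a client can detect when an attacker serves a mixed set of metadata.
import { createHash } from "crypto";

// Shape loosely based on TUF snapshot metadata; field names simplified.
interface SnapshotMeta {
  [targetsFile: string]: { version: number; sha256: string };
}

function checkAgainstSnapshot(
  snapshot: SnapshotMeta,
  fetched: { [targetsFile: string]: Buffer }
): boolean {
  for (const [name, meta] of Object.entries(snapshot)) {
    const file = fetched[name];
    if (!file) return false; // a listed targets file is missing
    const digest = createHash("sha256").update(file).digest("hex");
    // Any substituted (older or unrelated) targets file fails here --
    // exactly the mix-and-match that an unsigned collection would allow.
    if (digest !== meta.sha256) return false;
  }
  return true;
}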
Related
I'm after some concrete advice on how best to prevent (or at least deter) unauthorised software upgrades on an embedded system. It doesn't need to be bullet-proof and we can assume for now that the system itself is sufficiently locked down so that no-one can get unauthorised access to it.
My plan is to basically have an installer process running on the system which would receive update packages from anywhere but that could ensure those packages came from a trusted source (i.e., me) before attempting to install them.
In simple form, the update package would have the actual installation package, plus a matching digital signature that could only be generated by myself. Moreover, the signatures would be purely self-generated with no external authorities involved.
So these are my thoughts on the possible process:
Generate a private/public key pair and distribute the public key along with the embedded system itself.
When creating a software install package, pipe the contents of the package (or an MD5 of the package) through a signature generator using our private key.
Distribute the software install package along with that signature.
Have the installer check the signature against the software install package (using the public key it already has) and only install if there's a match.
If anyone can find any problems with this scheme, I'd appreciate the details, along with any specific advice on how to avoid them. In addition (though this is not the primary purpose of the question), any advice on tools to generate the keys would be appreciated.
I do not see any apparent problems with your solution, but I can suggest some improvements that you may already have taken into account.
If the embedded software is sufficiently locked down, it is not necessary to take additional measures to protect the integrity of the public key distributed with the software (e.g. signing and obfuscating the installer itself, which could be a headache).
I considered suggesting a TLS connection to download the updates, but it is not really needed, because the packages are going to be protected by a digital signature.
I suggest encapsulating the public key in an X509 certificate. This way you can control the validity period and even revoke the certificate if the private key has been compromised. In this case you will need a hierarchical Certificate Authority, with a root certificate that issues the signing certificates. Include the public part of the root certificate in the installer's truststore. Then switching to a different signing certificate after expiration/revocation will be transparent to the installer.
The root certificate has a long duration and a large key size (and should be conveniently secured), and the signing certificates have a shorter duration and can use a smaller key.
With this CA you could also generate a TLS certificate if you need some additional service, e.g. checking for available updates. In this case, include the certificate in the truststore of the installer to avoid man-in-the-middle attacks (SSL pinning).
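As a sketch of that validation step, Node's built-in crypto.X509Certificate can check a signing certificate against a pinned root before its public key is trusted (the file names are hypothetical, and this assumes the signer is issued directly by the root):

// Sketch: validate a signing certificate against the pinned root certificate
// before trusting its public key. File names are hypothetical.
import { X509Certificate } from "crypto";
import { readFileSync } from "fs";

const root = new X509Certificate(readFileSync("root-ca.pem"));
const signer = new X509Certificate(readFileSync("signing-cert.pem"));

// Check the validity window (validFrom/validTo are date strings).
const now = new Date();
const inValidityWindow =
  now >= new Date(signer.validFrom) && now <= new Date(signer.validTo);

// checkIssued() matches the issuer; verify() checks the certificate's
// signature against the root's public key.
if (inValidityWindow && signer.checkIssued(root) && signer.verify(root.publicKey)) {
  // signer.publicKey can now be used to verify update-package signatures.
  console.log("signing certificate accepted");
} else {
  console.log("signing certificate rejected");
}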
You can sign the full distribution or a hash; it does not affect security (see https://crypto.stackexchange.com/questions/6335/is-signing-a-hash-instead-of-the-full-data-considered-secure). But do not use MD5, because it has extensive vulnerabilities; use a SHA-2 function.
To generate the keys you can use openssl on the command line, or the GUI application KeyStore Explorer.
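Putting the pieces together, here is a minimal sketch of the sign/verify flow using Node's built-in crypto module (the key pair is generated inline for brevity; in practice you would generate it once, keep the private key secured, and ship only the public key with the device):

// Sketch of the sign/verify flow; uses SHA-256 rather than MD5.
import { generateKeyPairSync, createSign, createVerify } from "crypto";
import { readFileSync } from "fs";

const { publicKey, privateKey } = generateKeyPairSync("rsa", {
  modulusLength: 3072,
});

// Vendor side: sign the package bytes with the private key.
function signPackage(packagePath: string): Buffer {
  const signer = createSign("sha256");
  signer.update(readFileSync(packagePath));
  return signer.sign(privateKey);
}

// Device side: verify the package against the shipped public key
// before installing anything.
function verifyPackage(packagePath: string, signature: Buffer): boolean {
  const verifier = createVerify("sha256");
  verifier.update(readFileSync(packagePath));
  return verifier.verify(publicKey, signature);
}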
I am developing a standalone cross-platform app using Electron.
I want to store private data, such as a private key, that the app uses for certain operations, e.g. encrypting and decrypting data.
Or
I want to store secured information like user passwords or proprietary data in the app.
Is there any way to store this kind of secure information so that the app's user cannot get at it?
There is an NPM module made for the Atom editor (the app Electron was originally built for) called Keytar. It uses the native OS APIs for secure storage, e.g. the Keychain on OS X.
https://github.com/atom/node-keytar
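A minimal usage sketch (the service and account names here are made up; setPassword, getPassword and deletePassword are the real keytar functions):

// Sketch: storing a secret in the OS credential store via keytar
// (Keychain on macOS, Credential Manager on Windows, libsecret on Linux).
import * as keytar from "keytar";

async function main() {
  // Store a secret under a service/account pair (names are hypothetical).
  await keytar.setPassword("my-electron-app", "api-user", "s3cret-value");

  // Retrieve it later; resolves to null if nothing is stored.
  const secret = await keytar.getPassword("my-electron-app", "api-user");
  console.log(secret !== null ? "secret found" : "no secret stored");

  // Remove it when no longer needed.
  await keytar.deletePassword("my-electron-app", "api-user");
}

main().catch(console.error);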
I don't know the specific technology that you are using, so my answer will point in general to the key storage issue.
First, two big remarks:
Even with some heavy specialized hardware (banks and other critical systems use Hardware Security Modules, or HSMs, for this), there is always a risk of getting your key stolen. What you choose to do depends on how important your key is and how much you are willing to do to protect it. I will try to avoid mentioning solutions involving hardware, because they are usually overkill for most people.
There are, however, good practices that you can follow: https://www.owasp.org/index.php/Cryptographic_Storage_Cheat_Sheet
Now, some advice. Whatever you do, don't store your key in plaintext (much less hardcoded). If you are using public key cryptography, PKCS12 files (usually with extension .p12 or .pfx) are the standard way to store the data. They are usually password protected.
Here you face a problem: if you have a key, you need to use it. If you use the key, it will be in "plaintext", at least in RAM. So, you need a way to enable the access that keeps the key as isolated as possible. If the actions are triggered by a user, things are relatively nice, because you could ask for the password before using the key.
If the actions are automated, however, you need to find a way to store the password. Even security software like some PGP implementations have approaches for this that aren't nice:
Ask for the password on the command line: command -password my-password. This, put in a .bat file, works, but the password is stored and, depending on the operating system, may even be available through the command history.
Store it in a file: at least you don't leave copies around, but the password is still in plaintext.
Encrypt it using system data as encryption key: the password is relatively protected, but you lose portability and an attacker with access to the computer won't be stopped by the control.
Ask for the password once, when the service starts: a bit more reasonable, but not always possible (if the service is critical but only one person has the password, availability might be compromised).
Fancy things could be done with threshold decryption, but that's probably too much for that case also.
I do not provide details on each option because what you can do probably depends on what your framework allows and the way in which your system is used, but I hope it helps as a reference of the different options. In any case, do not implement any cryptographic functionality on your own. Bad crypto is worse than no crypto at all.
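As a reference point for the "don't store the key in plaintext" advice, here is a minimal sketch of keeping a secret encrypted at rest under a password-derived key, using only Node's crypto module (scrypt for derivation, AES-256-GCM for authenticated encryption; the parameter sizes are illustrative):

// Sketch: password-protected storage of a secret, Node crypto only.
import { scryptSync, randomBytes, createCipheriv, createDecipheriv } from "crypto";

function encryptSecret(secret: Buffer, password: string): Buffer {
  const salt = randomBytes(16);
  const key = scryptSync(password, salt, 32); // derive a 256-bit key
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(secret), cipher.final()]);
  // Store salt, IV, auth tag and ciphertext together in one blob.
  return Buffer.concat([salt, iv, cipher.getAuthTag(), ciphertext]);
}

function decryptSecret(blob: Buffer, password: string): Buffer {
  const salt = blob.subarray(0, 16);
  const iv = blob.subarray(16, 28);
  const tag = blob.subarray(28, 44); // GCM tag is 16 bytes by default
  const ciphertext = blob.subarray(44);
  const key = scryptSync(password, salt, 32);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // decryption fails if the blob was tampered with
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]);
}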
Avoid storing private or server-side details like a private key in an Electron app. An Electron app's data and files can be accessed through the app.asar file, and Electron does not protect the contents at all. There is no code-protection mechanism in Electron. However, NW.js supports source code protection; you can read about it here. So, in my opinion, it is not safe to store private credentials like a signing certificate or private key in Electron source code.
Alternatively, you can store these data using node-keytar, which uses the native APIs for the Keychain on macOS, the Credential Manager on Windows and GNOME Keyring on Linux. But these credentials are still accessible to the user, so it does not make sense to store private tokens there (e.g. a token for a private GitHub repository with administrative rights). It depends on the user: if they are sophisticated enough to understand what you stored in the Keychain, Credential Manager or Keyring, they can misuse it or use it against you. So the final answer is,
Do not store Credentials/Private key or Administrative Tokens in electron source or using node-keytar.
A convenient way of storing data in Electron is the electron-data-holder package: https://www.npmjs.com/package/electron-data-holder
This package stores data in a JSON file but gives you the ability to encrypt the data.
Read more in its documentation.
I am responsible for our corporate application menu page (intranet only). It contains many links to resources (web pages and installed application) and is tailored to the current user.
In the past, I have used an applet to allow installed applications to be started directly from the browser.
The corporate web is going through a revamp and I have been told to find a solution which requires no plugins of any kind.
My first attempt was to register a custom protocol handler. The menu provider contains definitions for all the links and application commands and each user has different rights. I could imagine that, when the menu is created for a user, the commands could be encoded and added as something like custom://base64encodedcommand. The handler would decode the command, perform checks and execute it.
This works well in IE, FF and Chrome. At the moment, we have mainly Windows workstations and it will be used only within the company intranet.
Is this a viable approach? Are there security issues?
Unfortunately, with any solution it is only possible to prove the existence of a vulnerability, never the lack thereof. But there are some necessary, though not sufficient, ways to make your system more resistant to attacks.
Currently you are base64-encoding the execution string. This adds absolutely nothing to security. Even if you chose some different encoding, it would only be security through obscurity, and could easily be reverse engineered by somebody with enough time.
What you can do is to have some sort of public-private key signing set up. So that you can sign each link with your own private key, and that would mean that this link is allowed to be executed, a link without a signature or with an invalid signature should not even be decoded.
So what you would have is custom://+base64link+separator+base64signature (see the sketch after the list below).
Things to keep in mind:
It is very important that only you (or very select group of people) have access to private key. This is the same as with any other pub-priv key system.
Not only should you not run the link if the signature is invalid, but you must not even decode it (thus you sign the base64 string, not the decoded command). Assume that it is an attack right away, and probably even inform the user of the fact.
And I repeat: while this can be considered necessary for security, it is not sufficient. So keep thinking of other possible attacks.
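Here is a minimal sketch of that signed-link scheme using Ed25519 via Node's crypto module ("." is used as the separator; in practice you would want base64url encoding for URL safety, and the key pair is generated inline only for illustration):

// Sketch: sign the base64 payload itself, verify before ever decoding.
import { generateKeyPairSync, sign, verify } from "crypto";

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Menu-provider side: build a signed link.
function makeLink(command: string): string {
  const payload = Buffer.from(command).toString("base64");
  const signature = sign(null, Buffer.from(payload), privateKey).toString("base64");
  return `custom://${payload}.${signature}`;
}

// Handler side: verify first; decode only on success, otherwise treat
// the link as an attack (return null, log, inform the user, ...).
function handleLink(link: string): string | null {
  const [payload, signature] = link.replace("custom://", "").split(".");
  const ok = verify(null, Buffer.from(payload), publicKey, Buffer.from(signature, "base64"));
  return ok ? Buffer.from(payload, "base64").toString() : null;
}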
We have this computer code which requires that anyone who has access to it pay a license fee. We will pay the fee for our developers, but they want our sysadmins to be licensed too, since they can see the code archives. But if the code were stored encrypted in the archives, the sysadmins could see the files but not their contents.
So, does any software version control system support encryption such that only the people checking out the code need the key and can see the files decrypted?
I was thinking it wouldn't be hard to add this to pserver and CVS, but if it has already been done elsewhere, why reinvent the wheel?
Any insight would be helpful.
There is no way to set up a source control system that can perform server-side diffs in a way that would prevent a sysadmin from at least theoretically accessing the contents. (i.e.: The source control system would not be able to store the decryption key in a place that the sysadmin couldn't access.) Unless your sysadmins habitually browse the source control database contents, such a system should have no practical difference from an unencrypted system from the perspective of your vendor.
The only way to make the source control database illegible to a server admin is to encrypt files on the client before submitting them to the server. For this to meet the desired goal, the decryption keys would need to be inaccessible to the admins, which is unlikely to be practical in most organizations since server admins typically have admin access on all client machines as well. Ignoring this picky detail, it would also mean that all your source control system would ever see is encrypted binaries, which means no server-side diff or blame. It also means potentially horrible bloat of your database size, since every file will require complete replacement on each commit. Are you really willing to sacrifice usability of your source control system in order to save licensing fees and/or placate this vendor?
Basically, you want to give all your developers some secret key that they plug into the encryption/decryption routines of git's smudge and clean filters. And you want an encryption scheme that is capable of performing deltas.
First, see Encrypted version control for some examples in git. As written, this can dramatically increase disk usage. However, there are ways to make more "diff-friendly" encryption at the cost of some security. See diph for an example of how you might attack that. Also, any system that uses AES-ECB mode would diff quite well. (You generally shouldn't use AES-ECB mode because of its security flaws... one of those security flaws is that it can diff quite well... hey, that's what you wanted, so this seems a reasonable exception.)
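For flavour, here is a sketch of a diff-friendly clean (encrypt-on-commit) filter in the spirit of git-crypt: AES-256-CTR with an IV derived from an HMAC of the plaintext, so identical content always encrypts to identical bytes, which keeps deltas working at the cost of revealing when content repeats. The matching smudge (decrypt) filter, key distribution, and the .gitattributes wiring are omitted:

// Sketch of a deterministic "clean" filter; key must be 32 bytes.
import { createHmac, createCipheriv } from "crypto";

function cleanFilter(plaintext: Buffer, key: Buffer): Buffer {
  // Deterministic IV: first 16 bytes of HMAC-SHA256(key, plaintext).
  const iv = createHmac("sha256", key).update(plaintext).digest().subarray(0, 16);
  const cipher = createCipheriv("aes-256-ctr", key, iv);
  // Prepend the IV so the smudge filter can decrypt later.
  return Buffer.concat([iv, cipher.update(plaintext), cipher.final()]);
}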
I have to handle some sensitive data in my application, such as passwords, credit card information, etc.
What are possible security risks I could have and how can I avoid them?
Don't store Credit Card Information (in some jurisdictions, you might be breaking the law by doing so, or at least falling foul of a commercial agreement)
You don't say where your sensitive data is stored, but encrypting it is the usual approach. There are two forms: symmetric and asymmetric. Symmetric means you use the same key for encrypting and decrypting. Asymmetric uses a public/private key pair.
Passwords: store only a salted hash (i.e. irreversible) of your passwords, and compare it with a similarly salted hash of an entered password.
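A minimal sketch of that with Node's crypto.scrypt (bcrypt or argon2 libraries would do equally well; the parameters are illustrative):

// Sketch: salted, irreversible password hashing and verification.
import { scryptSync, randomBytes, timingSafeEqual } from "crypto";

function hashPassword(password: string): string {
  const salt = randomBytes(16); // unique salt per password
  const hash = scryptSync(password, salt, 64);
  return `${salt.toString("hex")}:${hash.toString("hex")}`;
}

function verifyPassword(password: string, stored: string): boolean {
  const [saltHex, hashHex] = stored.split(":");
  const candidate = scryptSync(password, Buffer.from(saltHex, "hex"), 64);
  // Constant-time comparison avoids timing side channels.
  return timingSafeEqual(candidate, Buffer.from(hashHex, "hex"));
}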
Be aware that you really shouldn't store credit card info in any shape or form on a web server.
Bit of info on doing this in a web environment, which is my background:
If a website (or any application) needs to store card info, it should comply with the PCI DSS (Payment Card Industry Data Security Standard). Amongst other things, it requires that the data be held encrypted on a separate server that isn't publicly accessible (i.e. isn't hosting the site) and has a separate firewall between it and the webserver. Penalties for not complying are potentially very large in the event of any fraudulent activity following a security breach, and can include the card industry ceasing to work with you; it pretty much reserves the right for them to charge back any losses from the fraud to you (from my interpretation as a non-legal person).
More on it here: https://www.pcisecuritystandards.org/security_standards/pci_dss.shtml
Obviously, this may be expensive compared to shared hosting, as you immediately need two servers and a load of networking gear. Which is why people don't often do this.
I would be inclined to perform some form of reversible encryption on the information as it's being stored, something like:
$card = myEncryptionFunction($input);
A little more information on the nature of your application wouldn't hurt though.
I'd be using reversible encryption on the database data. Make sure this data doesn't seep into log files too; log derived information instead. Consider how you'll handle different environments: normally you do not want to use production data in your test environments. So rather than copying production data back to test systems, you should probably generate fake data for the sensitive parts.
It has already been said that you shouldn't store CC, and especially CVV2, information in your database; avoid it where possible.
If you store CC + CVV2, consider using asymmetric encryption and store your private key on another server. Otherwise an attacker who can access the data can, 99% of the time, also access the key, and the whole encryption would be pointless.
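A sketch of that asymmetric split using RSA-OAEP via Node's crypto module (fine for a short payload like a card number; the file names are hypothetical, and the private key would live only on the separate, locked-down server):

// Sketch: web tier encrypts with the public key only; decryption is
// possible only where the private key lives.
import { publicEncrypt, privateDecrypt } from "crypto";
import { readFileSync } from "fs";

// Only the public key is present on the web server.
const publicKey = readFileSync("card-encryption-public.pem");

function encryptCardNumber(cardNumber: string): Buffer {
  return publicEncrypt(publicKey, Buffer.from(cardNumber));
}

// Runs only on the isolated server that holds the private key.
function decryptCardNumber(blob: Buffer): string {
  const privateKey = readFileSync("card-encryption-private.pem");
  return privateDecrypt(privateKey, blob).toString();
}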
Passwords should be stored as one way hashed.
After all this, you need to ensure that your application is secure against vulnerabilities such as SQL injection, remote code execution, etc.
Don't forget: even when an attacker can't read previous data, they can plant a backdoor for the next data.
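On the SQL injection point, the standard defence is parameterized queries rather than string concatenation; a sketch using the pg client (the table and column names are hypothetical):

// Sketch: parameterized query with node-postgres.
import { Client } from "pg";

async function findUser(client: Client, email: string) {
  // The driver sends the value separately from the SQL text, so the
  // input can never be interpreted as SQL.
  const result = await client.query(
    "SELECT id, name FROM users WHERE email = $1",
    [email]
  );
  return result.rows;
}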