I'm 100% new to digital signatures. As far as I understand, a document is signed with a user's private key, and that signature is verified using the public key. My problem is that I have a web application and a file server... Files are created at an earlier stage; then a user of the application reviews the files and signs them with his key.
Those files are stored on a file server, and some of their content needs to be stripped out before signing (according to the implementation manual for the file format, an HL7 CDA file). So I need some direction on how to do this: should I retrieve the file, alter it, and sign it from the browser, or should I send the private key to the server and do everything there?
Or is there any other option? Thanks.
There are three possible options:
Transfer the file to the client. Have some client-side module that performs signing. The difficulty is that the files can be huge.
Transfer the key to the server. Sign the data on the server. This can be a problem if the private key is non-exportable (stored in hardware or just flagged as non-exportable in Windows CryptoAPI).
Use a distributed solution which calculates the hash on the server, transfers it to the client, calculates the signature on the client, and sends it back to the server. An example of such a solution can be found in this SO answer; a rough sketch of the flow follows below.
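For illustration only, here is a minimal sketch of the third option in TypeScript using Node's crypto module, assuming an Ed25519 key pair stands in for the user's signing key and `report.xml` is a hypothetical, already-stripped CDA file; in a real deployment the private key would live with the user (browser, smart card), not on the server.

```typescript
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";
import { readFileSync } from "node:fs";

// Server side: hash the (already stripped/canonicalized) CDA file.
const fileBytes = readFileSync("report.xml");                      // hypothetical file name
const digest = createHash("sha256").update(fileBytes).digest();
// Only `digest` (a few dozen bytes) is sent to the client, not the whole file.

// Client side: sign the digest, so the private key never leaves the user.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");  // stand-in for the user's key
const signature = sign(null, digest, privateKey);
// `signature` goes back to the server and is stored next to the file.

// Later, anyone holding the public key can verify the stored signature.
console.log(verify(null, digest, publicKey, signature));           // true
```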
Related
I'm working on a project where two clients can send files to each other via web sockets (using Socket.IO). Each chunk is encrypted with AES.
Currently, the clients connect to the server, they each generate an RSA public/private key pair on their devices, they then announce their public keys to the server which sends them to the other client, and this gets stored by said client. Before data is sent, it is encrypted with AES using a random key and a random IV, and the AES key is then encrypted using the other client's public key. The data is sent across, the other client then decrypts the AES key using their RSA private key, and finally decrypts the content using the decrypted AES key and saves it to a file on their disk.
The issue is that the server could easily just replace one client's public key with its own and steal the data. The only solution I can think of is for the clients to contact one another and manually verify their public keys... I'm not sure how I'd go about automating this process. Services that provide E2EE seem to generate a matching code on each device, but I'm having trouble finding any information about how this is actually implemented. How would two devices generate matching codes without talking to a server or to each other in between, and if they do talk, doesn't the server know the code anyway?
I've considered using WebRTC to send the public key from one client to the other without having the data go through the server, but I'd appreciate alternative approaches. Thank you in advance! :)
To prevent MITM, users are supposed to "manually compare public key fingerprints through an outside channel", as explained in this article regarding the Signal Protocol.
Usually, this means checking a hexadecimal string over a trusted channel: face to face, by phone, etc. Depending on your requirements, you might also assume that an attacker cannot access both your tool and email at the same time, and treat email as your trusted channel.
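For example, a safety-number-style check can be as simple as both users computing a hash over the exchanged public keys and reading it aloud. A minimal sketch (TypeScript, Node crypto, assuming PEM-encoded public keys as in the question):

```typescript
import { createHash, createPublicKey } from "node:crypto";

// Each client runs this over the public key it sent and the one it received,
// then both users compare the strings over a trusted channel (in person, by phone, ...).
function fingerprint(publicKeyPem: string): string {
  const der = createPublicKey(publicKeyPem).export({ type: "spki", format: "der" });
  const hex = createHash("sha256").update(der).digest("hex");
  return hex.match(/.{4}/g)!.join(" ");   // group into 4-character blocks for easier reading
}
```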
The problem is: I have a script (written in PHP) that can receive commands and gather and send private data (it is just a backup script). The main server will ask these scripts for data.
Now I have 3 security problems I want to solve:
I want to disallow any other server from asking for the private data (no server except the allowed one can execute any command).
I want to encrypt the request, since it will contain private data (for example, the names of files to transfer, which are considered private).
I want to encrypt the response, since the script will output private data (file contents).
Does anyone have any idea how to achieve these points, or any interesting thoughts on the matter?
My ideas are:
Generate some kind of secure password and encrypt the script before deploying it to the remote server. I store the password and pass it in a request, which decrypts the script (this is easy in PHP). The remote script may then send a request to the main server asking whether the command was authorised. I can also check the IP of the server.
I will also generate another secure password which is used to encrypt/decrypt all sensitive data in the request.
The idea here is to use the encrypt/decrypt approach from point 2 as well, but also not to send the response directly. The script will instead send a request back to the main server containing the answer (over HTTPS). This keeps the data inside the connection and also makes sure HTTPS is used (I don't know the state of the remote server, but I'm sure the main server has strong SSL).
Is this enough? Would you add anything? What are your thoughts?
Regards,
Jakub Król.
You can use client certificate authentication. You will generate a certificate for each of your clients, and then your server (i.e. the script you were talking about) will check whether it knows about the given client. See, for example, here for info on using client certificates in PHP.
Since your client knows whom it is sending data to, you can use asymmetric encryption. The client will use the server's public key to encrypt the request, and the server will decrypt the request with its private key. Search Google/Bing for asymmetric encryption in PHP.
Similar to point 2, your server may encrypt data for the given client using the client's public key (a sketch of this encrypt/decrypt flow follows below).
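As an illustration of points 2 and 3, here is a sketch in TypeScript (Node crypto) rather than PHP; the PHP openssl extension offers equivalent operations. It assumes PEM-encoded RSA keys and a small payload (for larger payloads you would wrap a symmetric key instead):

```typescript
import { constants, privateDecrypt, publicEncrypt } from "node:crypto";

// Client: encrypt the command / file-name request with the server's public key.
function encryptRequest(request: string, serverPublicKeyPem: string): Buffer {
  return publicEncrypt(
    { key: serverPublicKeyPem, padding: constants.RSA_PKCS1_OAEP_PADDING, oaepHash: "sha256" },
    Buffer.from(request, "utf8"),
  );
}

// Server: only the holder of the private key can read the request.
function decryptRequest(ciphertext: Buffer, serverPrivateKeyPem: string): string {
  return privateDecrypt(
    { key: serverPrivateKeyPem, padding: constants.RSA_PKCS1_OAEP_PADDING, oaepHash: "sha256" },
    ciphertext,
  ).toString("utf8");
}
```

The same pattern in the other direction (server encrypting with the client's public key) covers point 3.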
We are building an android application and one of its features is to book a cab service provider's cab (say an Uber).
We have an application-specific user ID; let us call it AUID. To book the cab, the application would POST a request to the server and send the AUID along with other relevant information (like latitude, longitude, etc.). How do I make sure at the server end that the request is indeed coming from the correct user and that it is safe to book the cab? In the current form, if a third party gets to know another person's AUID, they can book a cab on behalf of that person.
One of the solutions I thought of was using asymmetric encryption. The application would hold the public key and the server would hold the private key. Instead of sending the user ID to the server, we would send an encrypted message consisting of the AUID plus a timestamp, encrypted using the public key. We would then decrypt the message using the private key at the server end to obtain the AUID. If the timestamp at the server does not lie within a certain interval of the timestamp sent by the client, we reject the request.
Is this a safe enough approach? Is there any other practice widely followed for such scenarios?
What you propose is sensible: encrypt the AUID on the client app and verify on the server. As comments suggest, SSL is vital.
The problem is that if the logic for encrypting the AUID is in your app, it can be figured out by anyone dedicated enough.
You can drastically reduce the risk of fake requests by issuing a separate encryption key for each user. This means that if someone cracks your code, they can still only spoof requests from one account. However, once an attacker has decompiled your app, they could theoretically create new accounts, obtain a valid encryption key, and spoof requests.
What you need for 100% reliability is some form of authentication which is not stored in the client app, like a password, Touch ID on iOS, or the fingerprint API on Android M. So when a user orders a cab, they need to enter some piece of information which you also encode with the AUID and check on the server. That secret information is not stored in your app, so no one can fake requests.
Requiring a password from a user is pretty inconvenient. Fingerprint scanning is much easier and probably acceptable. You could also use a trust system: if the user has ordered cabs before and everything was OK, they can order without special authentication. Using trust together with individual encryption keys is pretty effective, because anyone trying to spoof requests would first need to complete a successful order, which is probably too much hassle for them.
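To make the per-user-key idea concrete, here is one possible sketch in TypeScript (Node crypto). It deliberately swaps the question's public-key encryption for an HMAC (a keyed hash) over the request with a per-user secret, which achieves the same goal of binding the request to a user and a time window; the field names and the two-minute window are assumptions for illustration.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const MAX_SKEW_MS = 2 * 60 * 1000;   // accept timestamps within a two-minute window

// Client side: authenticate the booking request with this user's individual secret.
function signBooking(auid: string, lat: number, lng: number, userSecret: string) {
  const timestamp = Date.now();
  const payload = `${auid}|${lat}|${lng}|${timestamp}`;
  const mac = createHmac("sha256", userSecret).update(payload).digest("hex");
  return { auid, lat, lng, timestamp, mac };
}

// Server side: recompute the MAC with the secret stored for this AUID and check the clock.
function verifyBooking(
  req: { auid: string; lat: number; lng: number; timestamp: number; mac: string },
  userSecret: string,
): boolean {
  if (Math.abs(Date.now() - req.timestamp) > MAX_SKEW_MS) return false;
  const payload = `${req.auid}|${req.lat}|${req.lng}|${req.timestamp}`;
  const expected = createHmac("sha256", userSecret).update(payload).digest("hex");
  if (req.mac.length !== expected.length) return false;
  return timingSafeEqual(Buffer.from(expected), Buffer.from(req.mac));
}
```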
There are generally two main methods of encrypting user-uploaded files.
The client can encrypt the file and send it over for the server to store; on request, the server retrieves the encrypted file and the client does the decryption. In this scenario, the server never has access to the keys. There are cons to this: the encryption scheme is visible to anyone in the source of the web app, it means extra processing for the client, and so on.
The second scenario of course is when the client sends the file in plaintext (presumably over SSL), and the server manages the keys and encryption/decryption.
It seems to me that the most common form implemented is the latter, where the server manages the encryption/decryption for the client. This seems ineffectual to me. If the web server is compromised, even if the attacker only has web-app-level privileges, the encryption was pointless in the first place, since the attacker will have access to all the keys just as the web app does, and thereby to the decrypted files. Is there a way to prevent this? Why would people even encrypt files to begin with, unless they do client-side encryption and never store the keys?
Also, as a second part to this question, is it feasible to allow multiple people access to an encrypted file (say a division within a company) if you used the former option (client-side encryption)? I would presume the users would have to share their keys among themselves, which poses another security risk.
SSL only protects the connection. If you want to prevent the server from peeking at your files, the files must be encrypted with a secret key the server does not know (scenario one).
For the second part, there are many papers discussing how to build such systems on top of public-key cryptography. You may also look into more recent cryptographic research, such as broadcast encryption or attribute-based encryption.
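As a rough sketch of the public-key approach to the second part (not the more advanced schemes mentioned above): the file is encrypted once with a random content key, and that key is wrapped separately for each member of the group, so adding a reader never requires re-encrypting the file or sharing anyone's private key. TypeScript with Node crypto, assuming PEM-encoded RSA public keys:

```typescript
import { constants, publicEncrypt, randomBytes } from "node:crypto";

// One random content key encrypts the file (e.g. with AES-256-GCM, not shown here);
// each recipient gets their own wrapped copy of that key.
function wrapFileKeyForGroup(recipients: Map<string, string>) {   // userId -> public key PEM
  const fileKey = randomBytes(32);
  const wrappedKeys = new Map<string, Buffer>();
  for (const [userId, publicKeyPem] of recipients) {
    wrappedKeys.set(
      userId,
      publicEncrypt(
        { key: publicKeyPem, padding: constants.RSA_PKCS1_OAEP_PADDING, oaepHash: "sha256" },
        fileKey,
      ),
    );
  }
  return { fileKey, wrappedKeys };  // store wrappedKeys next to the ciphertext, discard fileKey
}
```

Each user later unwraps their copy with their own private key, so no key sharing between users is needed.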
Last summer, I was working on an application that tested the suitability of a prospective customer's computer for integrating our hardware. One of the notions suggested was to use the HTML report generated by the tool as justification for a refund in certain situations.
My immediate reaction was, "well we have to sign these reports to verify their authenticity." The solution I envisioned involved creating a signature for the report, then embedding it in a meta tag. Unfortunately, this scenario would require the application to sign the report, which means it would need a private key. Once the application is storing the private key, we're back at square one with no guarantee of authenticity.
My next idea was to phone home and have a server sign the report, but then the user needs an internet connection just to test hardware compatibility. Plus, the application would need to authenticate with the server, and an interested party could figure out what credentials it was using to do that.
So my question is this: is there any way, outside of obfuscation, to verify that the application did indeed generate a given report?
As Eugene has rightly pointed out, my initial answer was about authenticating the receiver. Let me propose an alternative approach for authenticating the sender.
Authenticate the sender:
When your application is deployed at your client's end, you generate and deploy a self-signed PFX certificate which holds the private key.
The client's details and the passphrase for the PFX are set by your client, and perhaps you can get them printed and signed by your client on paper to hold them accountable for the keys they have just generated.
Now you have a private key which can sign, and when exporting the HTML report, you can export the certificate along with the report (the signing step is sketched below).
This is a low-cost solution and is not as secure as having your private keys in a cryptotoken, as Eugene indicated in his answer.
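A sketch of that signing step in TypeScript with Node crypto, assuming the private key has already been extracted from the client's PFX into PEM form (parsing the PFX itself would need an extra library such as node-forge); file names are hypothetical:

```typescript
import { createSign, createVerify } from "node:crypto";
import { readFileSync } from "node:fs";

// On the client machine: sign the generated HTML report with the deployed private key.
const report = readFileSync("report.html");                     // hypothetical file name
const privateKeyPem = readFileSync("client-key.pem", "utf8");   // extracted from the PFX
const signature = createSign("sha256").update(report).sign(privateKeyPem, "base64");
// Ship `signature` (e.g. in a <meta> tag) and the client's certificate along with the report.

// On your side: verify the report against the public key in the exported certificate.
const certPem = readFileSync("client-cert.pem", "utf8");
const valid = createVerify("sha256").update(report).verify(certPem, signature, "base64");
console.log(valid);   // true only if the report was not altered after signing
```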
Authenticate the receiver:
Have an RSA 2048 key pair at your receiving end. Export your public key to your senders.
When the sender has generated the report, encrypt the report with a symmetric key, say AES-256. Then encrypt/wrap the symmetric key itself with your public key.
When you receive the encrypted report, use your private key to unwrap/decrypt the symmetric key, and in turn decrypt the encrypted report with the symmetric key.
This way, you make sure that only the intended receiver can view the report (a condensed sketch of this flow follows below).
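A condensed sketch of that flow in TypeScript with Node crypto; AES-256-GCM is assumed as the symmetric mode and RSA-OAEP for the key wrapping, with PEM-encoded keys:

```typescript
import {
  constants, createCipheriv, createDecipheriv,
  privateDecrypt, publicEncrypt, randomBytes,
} from "node:crypto";

// Sender: encrypt the report with a fresh AES-256 key, then wrap that key for the receiver.
function sealReport(report: Buffer, receiverPublicKeyPem: string) {
  const key = randomBytes(32);
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(report), cipher.final()]);
  const wrappedKey = publicEncrypt(
    { key: receiverPublicKeyPem, padding: constants.RSA_PKCS1_OAEP_PADDING, oaepHash: "sha256" },
    key,
  );
  return { ciphertext, iv, tag: cipher.getAuthTag(), wrappedKey };
}

// Receiver: unwrap the AES key with the RSA private key, then decrypt the report.
function openReport(
  sealed: { ciphertext: Buffer; iv: Buffer; tag: Buffer; wrappedKey: Buffer },
  receiverPrivateKeyPem: string,
): Buffer {
  const key = privateDecrypt(
    { key: receiverPrivateKeyPem, padding: constants.RSA_PKCS1_OAEP_PADDING, oaepHash: "sha256" },
    sealed.wrappedKey,
  );
  const decipher = createDecipheriv("aes-256-gcm", key, sealed.iv);
  decipher.setAuthTag(sealed.tag);
  return Buffer.concat([decipher.update(sealed.ciphertext), decipher.final()]);
}
```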
I'd say that you need to re-evaluate the possible risks, and most likely you will find them to be not as important as you might think. The reason is that the report has value for you, but less likely for a customer. So it's more or less a business task, not a programming one.
To answer your concrete question: there's no simple way to protect the private key used for signing from being stolen (if someone really wants to steal it). A more complex solution employing a cryptotoken with the private key stored inside would work, but a cryptotoken is itself a hardware device, and in your scenario it would unnecessarily complicate the scheme.