I am implementing an API that uses a third-party library.
The third-party library provides a key which needs to be passed in as an input. The key is dynamic and can change based on the consumer/business scenario. The Lambda function should be able to decrypt the key.
Can someone suggest a way to decrypt the key? I am exploring the aws-kms approach on the side.
Please note: I have noted down the .env way of achieving it. But today my API is consumed by one consumer, hence one API key. Tomorrow the number will increase (resulting in multiple keys), and I may not be in a position to store/update the function.
Edit: I need to pass some sensitive information through the payload. This can be an alphanumeric value, e.g.
{"sender": "+123", "secret": "encrypted_value"}
The client and server should share a key with which the client can encrypt the info and the server (Lambda function) can decrypt it.
Any suggestion would be great! Thanks!
The standard way of doing something like you described in your "edit" section using KMS is:
The client calls KMS directly to generate a data key. It gets back the key in both encrypted and plaintext form.
The client encrypts the data with the plaintext key, throws the plaintext key away, and sends the encrypted data and the encrypted key to the server.
The server calls the KMS Decrypt operation, gets back the plaintext key, and uses it to decrypt the data. The server then throws away the decrypted key and uses the decrypted data as it wishes.
Please let me know if you meant something different, but this is a fairly standard way to use KMS. Of course, you need to lock down all of the APIs using IAM and KMS policies as your use cases determine.
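For illustration, here is a minimal sketch of that flow in Python, assuming the boto3 and cryptography packages and a hypothetical KMS key alias ("alias/my-api-key"); treat it as a starting point, not a drop-in implementation:

import base64
import boto3
from cryptography.fernet import Fernet

kms = boto3.client("kms")

# Client side: ask KMS for a data key; it comes back in plaintext and encrypted form.
resp = kms.generate_data_key(KeyId="alias/my-api-key", KeySpec="AES_256")  # placeholder alias
plaintext_key, encrypted_key = resp["Plaintext"], resp["CiphertextBlob"]

# Encrypt the secret with the plaintext key, then throw the plaintext key away.
token = Fernet(base64.urlsafe_b64encode(plaintext_key)).encrypt(b"my-secret-value")
del plaintext_key
# Send token and encrypted_key to the server (e.g. in the request payload).

# Server side (Lambda): decrypt the data key with KMS, then decrypt the payload.
plain = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
secret = Fernet(base64.urlsafe_b64encode(plain)).decrypt(token)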
I've uploaded a large zip archive to an Azure Storage blob container, ~9 GB, using the AzCopy utility. Now I'd like to check that it is correct. I can get the "CONTENT-MD5" value for the file from the Azure Portal. Then I need to calculate this on my side, right? Are there any other ways to check validity (other than downloading the file)? It was archived using the 7zip utility, which doesn't have a hash algorithm option for MD5.
"Content-MD5" property of the uploaded blob is not maintained by Azure Storage Blob Service per real-time blob content. Actually, it's calculated by AzCopy during uploading and set to the target blob when AzCopy finishes uploading. Therefore, if you really want to validate the data integrity, you have to download the file using AzCopy with /CheckMD5 option, and then compare the downloaded file with your local original file.
However, given AzCopy has made its best effort to protect data integrity during transferring, the validation step above is probably redundant and strongly not recommended unless data integrity is much more important than performance under your scenario.
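If AzCopy did set the Content-MD5 property, one way to compare it against your local file without downloading the blob is something like the following sketch (assuming the azure-storage-blob v12 Python package; the connection string, container and blob names are placeholders):

import hashlib
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    "<connection-string>", container_name="backups", blob_name="archive.zip"
)

# Content-MD5 is exposed by the SDK as the raw 16-byte digest (base64 on the wire).
remote_md5 = bytes(blob.get_blob_properties().content_settings.content_md5)

# Hash the local file in chunks so the 9 GB archive isn't read into memory at once.
md5 = hashlib.md5()
with open(r"C:\temp\archive.zip", "rb") as f:
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        md5.update(chunk)

print("match" if remote_md5 == md5.digest() else "mismatch")

Note that this only compares against the value AzCopy computed and stored at upload time, per the caveat above.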
Here's how MD5 verification and property setting appear to work for Azure.
Azure, on the server side, calculates the MD5 of every upload.
If that upload happens to represent a "full file" (a full blob; PutBlob is the internal name), then it also stores that MD5 value "for free" for you in the blob properties. It also happens to return the value it computed as a response HTTP header.
If you pass a "Content-MD5" header at upload time, Azure (the server) will also verify the upload against that value and fail the upload if it doesn't match. Again, it stores the MD5 value for you.
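As a small illustration of the behaviour described above, a one-shot upload through the v12 Python SDK ends up with the property populated without any extra work (sketch; connection string and names are placeholders):

from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    "<connection-string>", container_name="demo", blob_name="small.bin"
)
# Small payload, so this goes through a single PutBlob call.
blob.upload_blob(b"hello world", overwrite=True)
# Per the behaviour described above, the service computed and stored the MD5 for us.
print(blob.get_blob_properties().content_settings.content_md5)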
The real weirdness comes if you aren't uploading a "full file" in a "one shot" upload.
If your file is larger than client.SingleBlobUploadThresholdInBytes (typically 32 MB; 256 MB for C#), the Azure client will "break your upload up into 4-MB blocks [the max for PutBlock], upload each block with PutBlock, and then commit all blocks with the PutBlockList method", possibly uploading blocks in parallel. Azure itself has a 100 MB hard limit on a single upload of any kind, so you can't raise client.SingleBlobUploadThresholdInBytes past that other limit (update: it might have changed to 5 GB). You are forced to split the upload into "blocks" (chunks) of 4 MB each. (Unrelated side effect: in Azure, "blocks" are changeable/updatable, but individual bytes are not. A "one shot" upload (up to that limit) basically contains one big "block", so it is essentially immutable. If you go the multiple-block-upload route, you can change a single block within that blob.)
If you are uploading in "chunks", Azure only supports having the server verify the MD5 value of each chunk as it is uploaded. So if you set your client's parameter setComputeMd5(true) (Java) or validate_content=True (Python), what it will do is calculate the MD5 of each 4 MB chunk as it is uploaded and pass it along to be verified with that chunk's upload. The documentation says this is "not needed when using HTTPS" because HTTPS also computes a checksum over the same bytes and includes it with the transfer, so it's somewhat redundant. The Content-MD5 for each chunk is referred to as a transactional (kind of like ephemeral) MD5 and seems to get discarded once that chunk has been verified.
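For example, with the Python v12 client the same flag covers the chunked case. Reusing the placeholder blob client from the sketch above, an upload like the following is a rough illustration:

# Large file: the client splits it into blocks and, with validate_content=True,
# attaches a per-block (transactional) MD5 for the service to verify on each PutBlock.
with open(r"C:\temp\archive.zip", "rb") as f:
    blob.upload_blob(f, overwrite=True, validate_content=True, max_concurrency=4)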
This means that at the end of the day, for a file uploaded with a "chunked" upload, a CONTENT-MD5 property will not be set in Azure, because it would need to apply to the "entire blob". Azure doesn't know what the MD5 of all chunks together in order should be (it was only dealing with per-transfer MD5s as data came in), and it doesn't re-calculate a global one when putting all the pieces together at the end. For all we know, it doesn't actually put them "together" per se; it just links the blocks to each other internally. This means that "sometimes", with the same client calls, an upload will have a CONTENT-MD5 property set and sometimes not (when the file is deemed too large).
So if we have an MD5 for the entire file at upload time, what options are we left with? We can't use it as an upload header for a particular chunk. So Azure's PutBlockList command was changed to accept "another" form of MD5, called x-ms-blob-content-md5. If you use this, it basically sets the blob's CONTENT-MD5 property within Azure and doesn't check or verify it. In fact, if you later update a blob within Azure, it doesn't modify the CONTENT-MD5 property at all, and it can become out of date. You can also set this property with a post-hoc "set blob properties" call after doing an upload, which similarly sets it to an arbitrary value without checking. The C# client has a blob option for this, StoreBlobContentMD5, but it doesn't seem to let you provide the value. The Java client doesn't seem to have an option for it; you could maybe set manual headers to provide it in either case. If you have a client with an option like azcopy's --put-md5, it is probably just setting this property for large files. The only other options are to compute the MD5 of the bytes as you pass them to the client, see if that lines up, and assume they made it (wrapped-InputStream style), or to re-compute the MD5 of the file locally after the upload and "hope" the client read and uploaded the same bytes you just read (it does compute transactional MD5s and/or HTTPS checksums as it goes, for each chunk).
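A hedged sketch of that post-hoc route with the v12 Python client (placeholder names; remember the service stores whatever value you give it without checking):

import hashlib
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    "<connection-string>", container_name="backups", blob_name="archive.zip"
)

# Compute the whole-file MD5 locally.
md5 = hashlib.md5()
with open(r"C:\temp\archive.zip", "rb") as f:
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        md5.update(chunk)

# Copy the existing content settings so we don't wipe content type etc.,
# then write the digest into the blob's Content-MD5 property (unverified by the service).
settings = blob.get_blob_properties().content_settings
settings.content_md5 = md5.digest()
blob.set_http_headers(content_settings=settings)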
Or the painful option: re-download it to verify its MD5. If you want to do it this way, an "easy" approach is to set the Azure CONTENT-MD5 property first (see above), then use an Azure client to perform a file download. On the client side, it will calculate the MD5 as it downloads, compare that to the value "currently set" in Azure (it is sent as a download header if present), and fail the operation if it doesn't match at the end. So basically, Azure supports verifying full-file MD5s of large files on the client side, but not the server side. Or create an Azure Function to do the equivalent of a client-side verify after upload.
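If you do go the re-download route, a rough sketch of the client-side check (same placeholder client as above) is:

import hashlib

# Stream the blob back down and hash it as it arrives.
md5 = hashlib.md5()
for chunk in blob.download_blob().chunks():
    md5.update(chunk)

# Compare against the Content-MD5 property set earlier (if it was set at all).
stored = blob.get_blob_properties().content_settings.content_md5
print("verified" if stored and md5.digest() == bytes(stored) else "mismatch or no MD5 set")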
There is one other MD5-ish thing that Azure supports: if you do a "get blob" with a range of 4 MB or less specified, you can also specify x-ms-range-get-content-md5, and it will return the MD5 of that range in the CONTENT-MD5 HTTP header. FWIW.
From PowerShell you can run the following to get the MD5 hash of a file:
Get-FileHash -Path "C:\temp\somefile.zip" -Algorithm MD5
If you're using C#, you can also use this code snippet:
// Computes the MD5 hash of a file so it can be compared with the blob's Content-MD5.
static byte[] ComputeFileMd5(string filename)
{
    using (var md5 = System.Security.Cryptography.MD5.Create())
    using (var stream = System.IO.File.OpenRead(filename))
    {
        return md5.ComputeHash(stream);
    }
}
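Note that Azure reports Content-MD5 as a base64-encoded value, Get-FileHash prints a hex string, and the C# snippet returns the raw bytes, so convert them to a common representation before comparing.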
I'm 100% new to digital signatures. As far as I understand, a document is signed with a user's private key, and that signature is checked using the public key. My problem is that I have a web application and a file server... Files are created at an earlier stage; then a user of the application checks the files and signs them using his key.
Those files are stored on a file server, and some of their content needs to be stripped before signing (according to the implementation manual for the file format, an HL7 CDA file). So I need some direction on how to do this: should I retrieve the file, then alter it and sign it from the browser, or should I send the private key to the server and do everything there?
Or any other option? Thanks.
There are three possible options:
Transfer the file to the client. Have some client-side module that performs signing. The difficulty is that the files can be huge.
Transfer the key to the server. Sign the data on the server. This can be a problem if the private key is non-exportable (stored in hardware or just flagged as non-exportable in Windows CryptoAPI).
Use a distributed solution which calculates the hash on the server, transfers it to the client, calculates the signature on the client and sends it back to the server (a rough sketch of this flow follows below). An example of such a solution can be found in this SO answer.
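A rough sketch of option 3 in Python, assuming the cryptography package and an RSA key pair; file names and the transport between client and server are placeholders, not a complete solution:

import hashlib
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, utils

# Server side: hash the (already stripped) CDA document.
with open("stripped_document.xml", "rb") as f:  # placeholder file name
    digest = hashlib.sha256(f.read()).digest()
# ...send digest to the client...

# Client side: sign the pre-computed hash with the user's private key.
with open("user_key.pem", "rb") as f:  # placeholder key file
    private_key = serialization.load_pem_private_key(f.read(), password=None)
signature = private_key.sign(
    digest,
    padding.PKCS1v15(),
    utils.Prehashed(hashes.SHA256()),
)
# ...send signature back to the server...

# Server side: verify the signature against the same digest
# (in practice you'd load the user's public key/certificate here).
public_key = private_key.public_key()
public_key.verify(signature, digest, padding.PKCS1v15(), utils.Prehashed(hashes.SHA256()))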
I'm wondering whether a mechanism exists that allows client-to-client encryption. For example, when enabled, any information entered on one client can only be decrypted using a specific key.
Similar to how regular public key transactions work, but server-agnostic.
A use case:
Everything on my Facebook profile is encrypted, and nobody would be able to view that information (not even Facebook). The users I give the key to would be able to decrypt that information.
This would allow complete control of data stored online.
The same idea can be applied for pictures uploaded to the internet.
One issue I see is having a practical mechanism to manage keys and a secure way to distribute keys to other users.
Has anyone done something like this before?
In the case of Facebook, I can imagine encrypting the data with OpenPGP keys into ASCII-armored (text) format. Then you can place the encrypted block on Facebook or anywhere else. Other users would take the block, decrypt it on the client side and see it.
The same applies to other social networks and places where you can store a block of text.
You can easily do the encryption in a client application and even in JavaScript (if you manage to have JavaScript load the local user's keys somehow).
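A minimal sketch of that idea with the python-gnupg wrapper (assumes GnuPG is installed and the recipients' public keys are already in the local keyring; the addresses and passphrase are placeholders):

import gnupg

gpg = gnupg.GPG()

# Encrypt the profile text for the friends who should be able to read it.
encrypted = gpg.encrypt("my private profile text",
                        ["friend1@example.com", "friend2@example.com"])
armored_block = str(encrypted)  # ASCII-armored text, safe to post anywhere

# A recipient decrypts the block locally with their own private key.
decrypted = gpg.decrypt(armored_block, passphrase="their-key-passphrase")
print(decrypted.data)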
The subject says it all; the REST API docs seem to make me think I do (and if I don't, my code doesn't work).
Do:
- hard-code the ProductToken
- ask the user for an authorization key
Do not:
- hard-code (or use) the access key ID
- hard-code (or use) the secret access key
Keep in mind this is for an application that uses DevPay, not a website.
thanks!
Either hard-code it or store it in an encrypted DB or XML file. As for the Secret Key and Key ID, it is better not to hard-code your own keys; as mentioned above, store them in an encrypted file somewhere in your app, and once you successfully get the user's keys, delete the file that holds your own keys or replace your keys with the new user credentials, since you won't need your keys once the customer has successfully activated your product.
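One hedged way to do the "encrypted file" part with the cryptography package (file names and key handling are placeholders, nothing DevPay-specific):

from cryptography.fernet import Fernet

# One-time setup: encrypt the token with a key kept somewhere else (not in the app).
key = Fernet.generate_key()
with open("product_token.enc", "wb") as f:  # placeholder file name
    f.write(Fernet(key).encrypt(b"{ProductToken}..."))

# At runtime: load and decrypt the token only when it's needed.
with open("product_token.enc", "rb") as f:
    product_token = Fernet(key).decrypt(f.read())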
Sure, it's always better to hide the ProductToken from the user rather than asking the user to input it manually, because once your product token goes public, people can easily access your buckets and make whatever changes they like, and you'll probably lose control of your data flow.