Efficiently store base64 image output of chrome.tabs.captureVisibleTab()

How can I efficiently store the base64 screen capture returned by the chrome.tabs.captureVisibleTab() API to disk? I know we can use chrome.storage.local to store the base64 string, but is that efficient? Is there any way to store it on disk and access it using a URL?
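One possibility, sketched below and not a tested answer: decode the data URL into a Blob and keep the binary in IndexedDB instead of keeping a base64 string in chrome.storage.local. The database and store names are illustrative, and an extension page or MV3 service worker context is assumed.

// Sketch: turn the captureVisibleTab() data URL into a Blob and store the
// binary in IndexedDB, avoiding large base64 strings in chrome.storage.local.
chrome.tabs.captureVisibleTab({ format: 'png' }, async (dataUrl) => {
  const blob = await (await fetch(dataUrl)).blob(); // decode base64 to binary

  // 'captures' / 'shots' are illustrative names, not part of any API.
  const db = await new Promise((resolve, reject) => {
    const req = indexedDB.open('captures', 1);
    req.onupgradeneeded = () => req.result.createObjectStore('shots');
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
  db.transaction('shots', 'readwrite').objectStore('shots').put(blob, 'latest');

  // From an extension page (not an MV3 service worker) the stored Blob can
  // later be referenced by a temporary URL instead of a base64 string:
  // const url = URL.createObjectURL(blob);
});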

Related

Encrypt File in Node.js without IV

I am trying to encrypt a file using crypto module in Node.js before uploading it to IPFS. I want to have the same result every time I encrypt the same file so I computed the hash of the file using crypto.createHash().
For encrypting the file, right now I'm using crypto.createCipheriv(). I wanted to know if there are other ways to encrypt a file without an IV? I want to just use the hash I computed to encrypt the file, so that I get the same result every time.
I'm using Node.js v16.14.0
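A minimal sketch of one deterministic convention, assuming the goal really is identical output for identical input: derive the IV from the file's own hash instead of random bytes. Be aware that any deterministic scheme reveals when two files are identical, and the function and parameter names below are just examples.

// Sketch: deterministic encryption by deriving the IV from the file's SHA-256
// hash (same file -> same ciphertext). Not a general recommendation.
const crypto = require('crypto');
const fs = require('fs');

function encryptDeterministically(path, secret) {
  const data = fs.readFileSync(path);
  const fileHash = crypto.createHash('sha256').update(data).digest();

  const key = crypto.createHash('sha256').update(secret).digest(); // 32-byte key from your secret
  const iv = fileHash.subarray(0, 16);                             // 16-byte IV from the content hash

  const cipher = crypto.createCipheriv('aes-256-cbc', key, iv);
  return Buffer.concat([cipher.update(data), cipher.final()]);
}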

How to check Azure Storage BLOB file uploaded correctly?

I've uploaded a large (~9 GB) zip archive to an Azure Storage blob container using the AzCopy utility. Now I'd like to check that it is correct. I can get the "CONTENT-MD5" value for the file from the Azure Portal; then I need to calculate the same hash on my side, right? Are there any other ways to check validity (other than downloading the file)? It was archived with the 7-Zip utility, which doesn't offer MD5 hashing.
"Content-MD5" property of the uploaded blob is not maintained by Azure Storage Blob Service per real-time blob content. Actually, it's calculated by AzCopy during uploading and set to the target blob when AzCopy finishes uploading. Therefore, if you really want to validate the data integrity, you have to download the file using AzCopy with /CheckMD5 option, and then compare the downloaded file with your local original file.
However, given that AzCopy already makes its best effort to protect data integrity during transfer, the validation step above is probably redundant and is not recommended unless data integrity matters much more than performance in your scenario.
Here's how MD5 verification and property setting appear to work in Azure.
Azure, on the server side, calculates the MD5 of every upload.
If that upload happens to represent a "full file" (full blob--PutBlob is the internal name) then it also stores that MD5 value "for free" for you in the blob properties. It happens to also return the value it computed as a response HTTP header.
If you pass a "Content-MD5" header at upload time, Azure (server side) will also verify the upload against that value and fail the upload if it doesn't match. Again, it stores the MD5 value for you.
The real weirdness comes if you aren't uploading a "full file" in a "one shot" upload.
If your file is larger than client.SingleBlobUploadThresholdInBytes (typically 32 MB; 256 MB for C#), then the Azure client will "break your upload up into 4-MB blocks [the max for PutBlock], upload each block with PutBlock, and then commit all blocks with the PutBlockList method," possibly uploading blocks in parallel. Azure itself has a 100 MB hard limit on a single upload of any kind (update: this might have changed to 5 GB), so you can't adjust client.SingleBlobUploadThresholdInBytes past that other limit; you are forced to split the upload into "blocks" (chunks) of 4 MB each. (Unrelated side effect: in Azure, "blocks" are changeable/updatable but individual bytes are not. A "one shot" upload, up to that limit, basically contains one big "block" and so is essentially immutable; if you go the multi-block route, you could later change a single block within that blob.)
If you are uploading in "chunks", Azure only supports having the server verify the MD5 value of each chunk as it is uploaded. So if you set your client's parameter setComputeMd5(true) (Java) or validate_content=True (Python), what it will do is calculate the MD5 of each 4 MB chunk as it is uploaded and pass that along with the chunk upload to be verified. The documentation says this is "not needed when using HTTPS", because HTTPS also computes a checksum over the same bytes and includes it with the transfer, so it is somewhat redundant. The CONTENT-MD5 for each chunk is referred to as a transactional (kind of like ephemeral) MD5 and seems to be discarded once that chunk has been verified.
This means that, at the end of the day, for a file uploaded with a "chunked" upload, a CONTENT-MD5 property will not be set within Azure, because it would need to apply to the "entire blob". Azure doesn't know what the MD5 of all the chunks together in order should be (it was only dealing with per-transfer MD5s as data came in), and it doesn't re-calculate a global one when putting all the pieces together at the end. For all we know it doesn't actually put them "together" per se; it just links the blocks to each other internally. This means that "sometimes" when you upload with the same client calls the blob will have a CONTENT-MD5 property set and sometimes it won't (when the file is deemed too large).
So if we have an MD5 for the entire file at upload time, what options are we left with? We can't use it as an upload header for a particular chunk. So Azure's PutBlockList command was changed to accept "another" form of MD5, called x-ms-blob-content-md5. If you use this, it basically sets the blob's CONTENT-MD5 property within Azure without checking or verifying it. In fact, if you later update a blob within Azure, it doesn't modify the CONTENT-MD5 property at all, so the property can become out of date. You could also set this property with a post-hoc "set blob properties" call after the upload, which similarly sets it to an arbitrary value without checking. The C# client has a blob option to set this, StoreBlobContentMD5, but doesn't seem to let you provide the value yourself; the Java client doesn't seem to have an option for it at all, though you could perhaps set manual headers to provide it in either case. If you have a client with an option like AzCopy's --put-md5, it is probably just setting this property for large files.

The only other options would be to compute the MD5 of the bytes as you hand them to the client (wrapped-InputStream style), see whether it lines up, and assume the bytes made it across; or to re-compute the MD5 of the file locally after the upload and "hope" the client read and uploaded the same bytes you just read (it does compute transactional MD5s and/or HTTPS checksums as it goes, for each chunk).
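As a sketch of the "set the whole-file Content-MD5 yourself" option just described, assuming the @azure/storage-blob Node.js client (the container name and option names here are assumptions, not something the answer above specifies):

// Compute the whole-file MD5 locally and attach it as the blob's Content-MD5
// header at upload time; Azure stores it but does not verify it for chunked uploads.
const crypto = require('crypto');
const fs = require('fs');
const { BlobServiceClient } = require('@azure/storage-blob');

async function uploadWithContentMd5(connectionString, localPath, blobName) {
  const md5 = crypto.createHash('md5').update(fs.readFileSync(localPath)).digest();

  const blobClient = BlobServiceClient
    .fromConnectionString(connectionString)
    .getContainerClient('mycontainer')       // placeholder container name
    .getBlockBlobClient(blobName);

  // uploadFile handles the block splitting; blobContentMD5 maps to the
  // x-ms-blob-content-md5 header discussed above.
  await blobClient.uploadFile(localPath, {
    blobHTTPHeaders: { blobContentMD5: md5 },
  });
}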
Or there's the painful option: re-download the file to verify its MD5. If you want to do it this way, an "easy" approach is to set the Azure CONTENT-MD5 property first (see above) and then use an Azure client to download the file. On the client side it will calculate the MD5 as it downloads, compare that to the one "currently set" in Azure (it is sent as a download header if present), and fail the operation if they don't match at the end. So basically Azure supports verifying full-file MD5s of large files on the client side, but not the server side. Or create an Azure Function to do the equivalent of a client-side verify after upload.
There is one other MD5'ish thing that azure supports: if you do a "get blob" with a range specified of 4MB or less, you can also specify x-ms-range-get-content-md5 and it will return you the MD5 of that range in the CONTENT-MD5 HTTP header. FWIW.
From PowerShell you can run the following to get the MD5 hash of a file
Get-FileHash -Path "C:\temp\somefile.zip" -Algorithm MD5
If you're using C#, you can also use this code snippet:
// Compute the MD5 of a local file; ComputeHash returns the raw bytes,
// which can then be converted to a hex or base64 string for comparison.
using (var md5 = System.Security.Cryptography.MD5.Create())
{
    using (var stream = File.OpenRead(filename))
    {
        return md5.ComputeHash(stream);
    }
}

How secure is data stored using chrome.storage.local.set

I'm storing options data in a chrome extension using chrome.storage.local.set
How secure is that data?
Can it be read easily by anyone who has access to the file it is stored in?
It is not secure: per the official chrome.storage docs, the data is stored unencrypted in the user's profile folder under their Chrome data directory. You will need to add your own encryption if you are storing more sensitive data using these APIs.
They are stored in a LevelDB database in the following location:
C:\Users\<User>\AppData\Local\Google\Chrome\User Data\Default\Local Extension Settings\<Extension id>
It's saved in the following path (for other operating systems the path is similar) and can be easily accessed.
C:\Users\<User>\AppData\Local\Google\Chrome\User Data\Default\Local Extension Settings\<Extension id>
Basically, since the data is saved on the local machine, you can't treat it as secure; there are tons of ways to get at it. For example, other extensions/scripts may override chrome.storage.local.set and get the data first, like what Storage Area Explorer does.
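As a minimal sketch of the "add your own encryption" suggestion above, assuming the Web Crypto API and promise-based chrome.storage (Manifest V3); key management is the hard part and is not shown, and all names are illustrative:

// Encrypt a value with AES-GCM before writing it to chrome.storage.local,
// so the on-disk LevelDB only ever sees ciphertext.
async function storeEncrypted(cryptoKey, value) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh IV per write
  const plaintext = new TextEncoder().encode(JSON.stringify(value));
  const ciphertext = await crypto.subtle.encrypt({ name: 'AES-GCM', iv }, cryptoKey, plaintext);

  // chrome.storage only holds JSON-serializable values, so store raw bytes as arrays.
  await chrome.storage.local.set({
    secret: { iv: Array.from(iv), data: Array.from(new Uint8Array(ciphertext)) },
  });
}

// Example with a throwaway key; a real extension has to derive or obtain
// the key from somewhere safer than local storage.
crypto.subtle.generateKey({ name: 'AES-GCM', length: 256 }, true, ['encrypt', 'decrypt'])
  .then((key) => storeEncrypted(key, { token: 'example' }));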

How to use SSE-S3 on Amazon S3?

I want to enable SSE-S3 on Amazon S3. I click properties and check the encryption box for AES-256. It says encrypting, then done. But I can still read the files without providing a key, and when I check properties again, it shows the radio buttons unchecked. Did I do this correctly? Is it encrypted? So confusing.
You're looking at a view of a bucket in the S3 console that shows more than one file, or shows only one file with that file not selected. The radio buttons let you set all of the items you select to the values you choose, but they remain blank whenever multiple files are shown, because they're only there to let you make a change -- not to show you the values of existing objects.
Click on an individual file and view its properties and you'll see that the file is stored with server-side-encryption = AES256.
Yes, you can download the file without needing to decrypt it, because this feature is server-side encryption of data at rest -- the files are encrypted by S3 prior to storage on the physical media that S3 runs on. This is often done for compliance purposes, where regulatory restrictions or other contractual obligations require that data to be encrypted at rest.
The encryption keys are stored by S3, separately from the object, and are managed by S3. In fact, the encryption keys are themselves stored encrypted: S3 generates a key for each object and stores that key in encrypted form, using a master key.
Decryption of the encrypted data requires no effort on your part. When you GET an encrypted object, we fetch and decrypt the key, and then use it to decrypt your data.
https://aws.amazon.com/blogs/aws/new-amazon-s3-server-side-encryption/
For data in transit, S3 encrypts that whenever you use HTTPS.
Different from the feature that's available in the console, S3 also supports server-side AES-256 encryption with keys you manage. In this scenario, called SSE-C, you still aren't responsible for the actual encryption/decryption, because S3 still does that for you. The difference is that S3 doesn't store the key, and you have to present the key to S3 with a GET request in order for S3 to fetch the object, decrypt it, and return it to you. If you don't provide the correct key, S3 won't bother to return the object -- not even in encrypted form. S3 knows whether you've sent the right key with a GET request because it stores a salted HMAC of the key along with the object, for validating the key you send when you try to fetch the object later.
This capability -- where you manage your own keys -- requires HTTPS (otherwise you'd be sending your encryption key across the Internet unencrypted) and is only accessible through the API, not the console.
You cannot use the Amazon S3 console to upload an object and request SSE-C. You also cannot use the console to update (for example, change the storage class or add metadata) an existing object stored using SSE-C.
http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
And, of course, this method -- with customer-managed keys -- is particularly dangerous if you don't have a solid key-management infrastructure, because if you lose the key you used to upload a file, that file is, for all practical purposes, lost.
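To make the difference concrete, here is a hedged sketch with the AWS SDK for JavaScript v3 (bucket, key, and region are placeholders): an SSE-S3 upload needs only one extra parameter, and the subsequent GET needs nothing at all, which is exactly the behaviour observed in the question.

// SSE-S3: ask S3 to encrypt the object at rest with keys S3 manages.
const { S3Client, PutObjectCommand, GetObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({ region: 'us-east-1' });

async function uploadEncrypted() {
  await s3.send(new PutObjectCommand({
    Bucket: 'my-bucket',
    Key: 'report.zip',
    Body: 'hello',
    ServerSideEncryption: 'AES256', // SSE-S3
  }));

  // No key material is needed to read it back; S3 decrypts transparently.
  const obj = await s3.send(new GetObjectCommand({ Bucket: 'my-bucket', Key: 'report.zip' }));
  console.log(obj.ServerSideEncryption); // "AES256" in the response metadata

  // SSE-C, by contrast, requires SSECustomerAlgorithm / SSECustomerKey
  // parameters on every PUT and GET (not shown here).
}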

How to save the confidential data at server side

I created a web server with Node.js. The database is MongoDB. I'm using a JSON file to save the server configuration, and the Node module 'nconf' is used to read the JSON file.
Currently, all the data saved in the JSON file, including some confidential data, is plain text. I don't think that is secure enough. What should I do to make sure the confidential data is secure?
You could take a look at Node's crypto library.
Here is a link to the documentation: Crypto Node.js
You could use it to encrypt some of the data contained within the file. But you should probably also consider removing the sensitive information and finding another means of storing it elsewhere, perhaps within a database, like your MongoDB.
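A minimal sketch of that idea, assuming the encryption key itself lives outside the JSON file (the CONFIG_KEY environment variable and function names here are just examples):

// Encrypt sensitive values with Node's crypto module before writing them to
// the config file; keep the key out of the file, e.g. in an environment variable.
const crypto = require('crypto');

const key = Buffer.from(process.env.CONFIG_KEY, 'hex'); // 32-byte key, hex-encoded

function encryptValue(plaintext) {
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  const data = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  // Store IV, auth tag and ciphertext together so the value can be decrypted later.
  return [iv, cipher.getAuthTag(), data].map((b) => b.toString('hex')).join(':');
}

function decryptValue(stored) {
  const [iv, tag, data] = stored.split(':').map((h) => Buffer.from(h, 'hex'));
  const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(data), decipher.final()]).toString('utf8');
}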
