I'd like to get a breakdown of how much storage space a web app I'm working on is taking up. More specifically I'm interested in the size of data stored in indexedDB.
I know I can use navigator.storage.estimate() to get a total value, but it reports over 1 MB even with an empty database, and I'm interested in knowing the size of IndexedDB specifically.
Is there a way to check the size?
There is not currently a standard (or even non-standard) API that gives a breakdown of storage usage by type.
In Chrome's DevTools, the Application tab provides a breakdown of storage by type: click the "Clear storage" view in the top left and a graph should appear. This is useful for local debugging, but not for understanding storage usage "in the wild".
There's some discussion about extending navigator.storage.estimate() with this data at https://github.com/whatwg/storage/issues/63.
UPDATE: In Chrome 75 and later, navigator.storage.estimate() now provides a breakdown by storage type.
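For example, here is a minimal sketch of reading that breakdown. The usageDetails property is a non-standard, Chrome-only field, so the code guards against it being absent:

// Query the overall estimate and, where supported, the per-type breakdown.
async function logIndexedDBUsage() {
  if (!navigator.storage || !navigator.storage.estimate) {
    console.log('StorageManager.estimate() is not supported in this browser.');
    return;
  }
  const estimate = await navigator.storage.estimate();
  console.log(`Total usage: ${estimate.usage} of ${estimate.quota} bytes`);

  // usageDetails is non-standard and Chrome-specific (Chrome 75+).
  if (estimate.usageDetails && 'indexedDB' in estimate.usageDetails) {
    console.log(`IndexedDB usage: ${estimate.usageDetails.indexedDB} bytes`);
  } else {
    console.log('No per-type breakdown is available in this browser.');
  }
}

logIndexedDBUsage();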
Yes, you can make use of the Quota Management API to get this information. It is an experimental feature and is currently not supported by Edge.
A browser compatibility table will show you which browsers support it and let you see it in action.
Example
// Query current usage and availability in Temporary storage:
navigator.storageQuota.queryInfo("temporary").then(
  function(storageInfo) {
    // Continue to initialize the local cache using the obtained
    // usage and remaining space (quota - usage) information.
    initializeCache(storageInfo.usage,
                    storageInfo.quota - storageInfo.usage);
  });
More reading is available on the W3C site.
Related
As the title says, I want to sync chrome extension storage data across devices. I was using the storage.sync method till now. But today I got a new error message as follows:
Unchecked runtime.lastError: QUOTA_BYTES_PER_ITEM quota exceeded
After some googling, I found that storage.sync has a limited quota and that we need to use storage.local instead. But the problem is: how do I sync data between devices while using storage.local?
Please help.
Sample Code:
chrome.storage.sync.set({ tasks }, function() {
  // do something
});
Unfortunately you cannot use chrome.storage.local to sync between devices.
However, the issue appears to be that a single item is too large: QUOTA_BYTES_PER_ITEM quota exceeded.
Please note that this per-item quota is significantly smaller than the total amount (in bytes) of data you can store using chrome.storage.sync.
Reference: https://developer.chrome.com/extensions/storage#property-sync
I would recommend breaking up your items into smaller chunks and using chrome.storage.sync for each of the smaller sets of items.
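A rough sketch of that chunking idea, assuming your data can be serialized to a JSON string (the key naming scheme and the chunk-size headroom below are illustrative guesses, not exact figures):

// Split a large value into pieces that each fit under the per-item quota,
// then store them as numbered keys in chrome.storage.sync.
function saveInChunks(key, value, callback) {
  const json = JSON.stringify(value);
  // Conservative chunk size: leave headroom for the key name and the
  // JSON escaping overhead that chrome.storage applies to string values.
  const chunkSize = Math.floor(chrome.storage.sync.QUOTA_BYTES_PER_ITEM / 2);
  const items = {};
  let count = 0;
  for (let i = 0; i < json.length; i += chunkSize) {
    items[key + '_' + count] = json.slice(i, i + chunkSize);
    count++;
  }
  items[key + '_count'] = count;
  chrome.storage.sync.set(items, callback);
}

function loadFromChunks(key, callback) {
  chrome.storage.sync.get(null, function(all) {
    const count = all[key + '_count'];
    if (count === undefined) { callback(undefined); return; }
    let json = '';
    for (let i = 0; i < count; i++) {
      json += all[key + '_' + i];
    }
    callback(JSON.parse(json));
  });
}

Keep in mind that the total sync quota (QUOTA_BYTES) and the maximum number of items still apply, so this only helps when a single item is the problem.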
You could also consider integrating your extension with a cloud database service to allow a larger set of storage capabilities.
I was wondering if it would be possible to save large amounts of data on the client side with node.js.
The data contains some images or videos. On the first page visit all the data should be stored in the browser cache, so that if there is no internet connection it would not affect the displayed content.
Is there a way to do this ?
Thanks for your help :D
You can use the Web Storage API from your client-side (browser-side) Javascript.
If your application stores a megabyte or so, it should work fine. It has limits on size that vary by browser. But, beware, treat the data in Web Storage as ephemeral, that is like a cache rather than permanent storage. (You could seriously annoy your users if you rely on local storage to hold hours of their work.)
You can't really get a node.js program itself to store this information, because those programs run on servers, not in browsers. But you can write your browser-side Javascript code to get data from node.js and then store it.
From browser Javascript you can say stuff like
window.localStorage.setItem('itemName', value)
and
var retrievedValue = window.localStorage.getItem('itemName')
window.localStorage items survive browser exits and restarts. window.sessionStorage items last as long as the browser session continues.
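A minimal sketch of that pattern, assuming a hypothetical /api/content endpoint served by your node.js app (localStorage only stores strings, so objects are serialized with JSON):

// Fetch content from the server once, cache it in localStorage,
// and fall back to the cached copy when the network is unavailable.
async function loadContent() {
  try {
    const response = await fetch('/api/content'); // hypothetical endpoint
    const data = await response.json();
    window.localStorage.setItem('content', JSON.stringify(data));
    return data;
  } catch (err) {
    // Offline or the request failed: use the cached copy if we have one.
    const cached = window.localStorage.getItem('content');
    return cached ? JSON.parse(cached) : null;
  }
}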
I'm trying to log the total size of a request sent out from an ASP.NET Core website hosted in Azure. The goal is to be able to attribute some sense of what the data out cost is for specific functionality within the application. (Specifically we are looking at some functionality that uses Azure Blob Storage and allows downloads of the blobs but I don't think that's relevant to the question.)
I think the solution is some simple middleware that logs out the sizes to a storage mechanism (less concerned with that part) but not sure what to put inside the middleware.
Does anyone know how you would "construct" the total size of the response being sent out of the web app?
I would presume it will include HttpContext.Response.Body.Length, but I'm pretty confident that doesn't include the headers. I'm also not sure whether that's the compressed size of the response body or not.
Thanks
I need to use local storage in an Azure WebJob (continuous if it matters). What is the recommended path for this? I want this to be as long-lasting as possible, so I am not wanting a Temp directory. I am well aware local storage in azure will always need to be backed by Blob storage or otherwise, which I already will be handling.
(To preempt question on that last part: This is a not frequently changing but large file (changes maybe once per week) that I want to cache in local storage for much faster times on startup. When not there or if out of date (which I will handle checking), it will download from the source blob and so forth.)
Related questions like Accessing Local Storage in azure don't specifically apply to a WebJob. However, this question is vitally connected, but 1) the answer relies on using Server.MapPath, which is a System.Web-dependent solution I think, and 2) I don't find this answer to have any research or definitive basis (though it is probably a good guess for the best solution). It would be nice if the Azure team gave more direction on this important issue; we're talking about nothing less than usage of the local hard drive.
Here are some Environment variables worth considering, though I don't know which to use:
Environment.CurrentDirectory: D:\local\Temp\jobs\continuous\webjobname123\idididid.id0
[PUBLIC, D:\Users\Public]
[ALLUSERSPROFILE, D:\local\ProgramData]
[LOCALAPPDATA, D:\local\LocalAppData]
[ProgramData, D:\local\ProgramData]
[WEBJOBS_PATH, D:\local\Temp\jobs\continuous\webjobname123\idididid.id0]
[SystemDrive, D:]
[LOCAL_EXPANDED, C:\DWASFiles\Sites\#1appservicename123]
[WEBSITE_SITE_NAME, webjobname123]
[USERPROFILE, D:\local\UserProfile]
[USERNAME, RD00333D444333$]
[WEBSITE_OWNER_NAME, asdf1234-asdf-1234-asdf-1234asdf1234+eastuswebspace]
[APP_POOL_CONFIG, C:\DWASFiles\Sites\#1appservicename123\Config\applicationhost.config]
[WEBJOBS_NAME, webjobname123]
[APPSETTING_WEBSITE_SITE_NAME, webjobname123]
[WEBROOT_PATH, D:\home\site\wwwroot]
[TMP, D:\local\Temp]
[COMPUTERNAME, RD00333D444333]
[HOME_EXPANDED, C:\DWASFiles\Sites\#1appservicename123\VirtualDirectory0]
[APPDATA, D:\local\AppData]
[WEBSITE_INSTANCE_ID, asdf1234asdf134asdf1234asdf1234asdf1234asdf1234asdf12345asdf12342]
[HOMEPATH, \home]
[WEBJOBS_SHUTDOWN_FILE, D:\local\Temp\JobsShutdown\continuous\webjobname123\asdf1234.pfs]
[WEBJOBS_DATA_PATH, D:\home\data\jobs\continuous\webjobname123]
[HOME, D:\home]
[TEMP, D:\local\Temp]
Using the %HOME% environment variable as a base path works nicely for me. I use a subfolder to store job-specific data, but any other folder structure on top of this base path is also valid. For more details take a look at https://github.com/projectkudu/kudu/wiki/Understanding-the-Azure-App-Service-file-system and https://github.com/projectkudu/kudu/wiki/File-structure-on-azure
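As a small illustration of that layout, here is a sketch assuming a Node.js-based WebJob (the subfolder name is arbitrary); the same idea applies in .NET using Environment.GetEnvironmentVariable("HOME"):

// Resolve a durable cache folder under %HOME% (D:\home), which is backed by
// persistent storage, rather than under %TEMP% (D:\local\Temp), which is not.
const path = require('path');
const fs = require('fs');

const baseDir = process.env.HOME || '.';                      // D:\home on Azure App Service
const cacheDir = path.join(baseDir, 'data', 'myWebJobCache'); // arbitrary subfolder name

if (!fs.existsSync(cacheDir)) {
  fs.mkdirSync(cacheDir, { recursive: true });
}

const cachedFilePath = path.join(cacheDir, 'large-reference-file.dat');
// If cachedFilePath is missing or stale, download it from the source blob here.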
I have written a Cygwin app that uploads (using the REST API PUT operation) Block Blobs to my Azure storage account, and it works well for different blob sizes when using HTTP. However, a PUT using HTTPS fails for blobs greater than 5.5 MB. Blobs less than 5.5 MB upload correctly. Anything greater and I find that the TCP session (as seen by Wireshark) reports a dwindling window size that goes to 0 once the aforementioned number of bytes has been transferred. The failure is repeatable and consistent. As a point of reference, PUT operations against my Google/AWS/HP cloud storage accounts work fine when using HTTPS for various object sizes, which suggests the problem is not my client but something specific to the HTTPS implementation on the MS Azure storage servers.
If I upload the 5.5MB blob as two separate uploads of 4MB and 1.5MB followed by a PUT Block List, the operation succeeds as long as the two uploads used separate HTTPS sessions. Notice the emphasis on separate. That same operation fails if I attempt to maintain an HTTPS session across both uploads.
Any ideas on why I might be seeing this odd behavior with MS Azure? Same PUT operation with HTTPS works ok with AWS/Google/HP cloud storage servers.
Thank you for reporting this and we apologize for the inconvenience. We have managed to recreate the issue and have filed a bug. Unfortunately we cannot share a timeline for the fix at this time, but we will respond to this forum when the fix has been deployed. In the meantime, a plausible workaround (and a recommended best practice) is to break large uploads into smaller chunks (using the Put Block and Put Block List APIs), thus enabling the client to parallelize the upload.
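For illustration, here is a rough sketch of that chunked approach against the REST API, written for Node.js 18+ (which has a global fetch); the SAS URL, block size, and service version header are assumptions for the sketch, not values from this thread:

// Upload a large buffer as a block blob in smaller pieces using the
// Put Block operation, then commit the pieces with Put Block List.
const BLOCK_SIZE = 4 * 1024 * 1024; // 4 MB per block

async function uploadInBlocks(blobSasUrl, data) {
  const blockIds = [];

  for (let offset = 0, n = 0; offset < data.length; offset += BLOCK_SIZE, n++) {
    // Block IDs must be Base64-encoded and the same length for every block.
    const blockId = Buffer.from(String(n).padStart(6, '0')).toString('base64');
    blockIds.push(blockId);

    const chunk = data.subarray(offset, offset + BLOCK_SIZE);
    const res = await fetch(
      `${blobSasUrl}&comp=block&blockid=${encodeURIComponent(blockId)}`,
      { method: 'PUT', headers: { 'x-ms-version': '2020-10-02' }, body: chunk }
    );
    if (!res.ok) throw new Error(`Put Block failed: ${res.status}`);
  }

  // Commit the uploaded blocks in order with Put Block List.
  const blockListXml =
    '<?xml version="1.0" encoding="utf-8"?><BlockList>' +
    blockIds.map(id => `<Latest>${id}</Latest>`).join('') +
    '</BlockList>';

  const commit = await fetch(`${blobSasUrl}&comp=blocklist`, {
    method: 'PUT',
    headers: { 'x-ms-version': '2020-10-02', 'Content-Type': 'application/xml' },
    body: blockListXml
  });
  if (!commit.ok) throw new Error(`Put Block List failed: ${commit.status}`);
}

The individual Put Block calls could also be issued in parallel (e.g. with Promise.all) rather than sequentially, which is what makes this a useful workaround here.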
This bug has now been fixed and the operation should now complete as expected.