Is 5MB the de facto limit for W3C Web Storage?

I am looking into using browser sessionStorage for a web application, and was trying to find current information on size limitations. It appears most desktop browsers have imposed a 5MB limit. However, I am not finding many recent articles nor information on the mobile browsers.
The Disk space section of the W3C Web Storage specification says "A mostly arbitrary limit of five megabytes per origin is recommended. Implementation feedback is welcome and will be used to update this suggestion in the future."
The QuirksMode HTML5 compatibility page for localStorage had its last major update on 12 June 2009 and only includes data for the browsers that were current at the time: IE8, FF 3.5b4, Saf 4, Chrome 2.
According to Introduction to DOM Storage, IE8 "allows Web applications to store nearly 10 MB of user data." Introduction to sessionStorage seems to confirm that "Firefox’s and Safari’s storage limit is 5MB per domain, Internet Explorer’s limit is 10 MB per domain."
Web Storage: easier, more powerful client-side data storage from the Opera developer site states "As of now, most browsers that have implemented Web Storage, including Opera, have placed the storage limit at 5 Mb per domain."
A recent Chromium issue (#42740) put a 5MB quota on session storage.
Chapter 5. Client-Side Data Storage from Building iPhone Apps with HTML, CSS, and JavaScript states "At the time of this writing, browser size limits for localStorage and sessionStorage are still in flux."
Question: Based on this info, should I just assume 5MB is the limit or should I spend time testing different browsers? Does anybody know of an existing test suite (a la Browserscope) that would have these results?

A site with some web storage info: http://dev-test.nemikor.com
As you can see, the quotas are different for each browser!

Assuming that the smallest limit for HTML5 Web Storage is 5MB, it would be sensible to design around that figure, given the information you have presented and what has been published about W3C Web Storage. Do beware that everything is in flux, but I don't think this limit will change drastically.

I've read in some bug report comments that Chrome stores localStorage data as UTF-16, which effectively doubles the size used, leaving you with something more like 2.5MB. I think this may also be the case for other WebKit browsers that impose the 5MB limit.
The fact that, almost a year after this question was asked, it still isn't easy to find the size limits (or even the key/value charset) is crazy.
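If you do decide to test rather than assume, a rough probe is easy to write yourself. Below is a minimal sketch (it assumes the browser throws a quota-exceeded exception on setItem when the limit is hit, and the result is approximate because keys and UTF-16 encoding add overhead):

// Rough, destructive probe of the localStorage quota for the current origin.
// Clears existing data; the returned number is characters stored, which may
// correspond to roughly twice as many bytes if the engine stores UTF-16.
function probeLocalStorageQuota() {
  localStorage.clear();
  var chunk = new Array(1024 + 1).join('a'); // ~1 KB of characters
  var stored = 0;
  try {
    for (var i = 0; i < 100000; i++) {
      localStorage.setItem('probe_' + i, chunk);
      stored += chunk.length;
    }
  } catch (e) {
    // Quota exceeded: stop and report what fit.
  }
  localStorage.clear();
  return stored;
}

The same loop works against sessionStorage if that is the store you care about.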

Related

PageSpeed Insights number of distinct samples to show data for a URL logic

I'm reading the PageSpeed Insights documentation and am wondering if anyone knows how Google is determining what is considered a sufficient number of distinct samples per this FAQ:
Why is the real-world Chrome User Experience Report speed data not available for a URL?
Chrome User Experience Report aggregates real-world speed data from opted-in users and requires that a URL must be public (crawlable and indexable) and have sufficient number of distinct samples that provide a representative, anonymized view of performance of the URL.
I'm building a report centered around Core Web Vitals data and realizing some URLs have few data points with CWV timings, and I'm curious exactly how Google handles these situations. I've been searching through docs and articles but haven't found anything with a specific reference.
The exact threshold is kept secret, so that's why you won't find it documented anywhere. However, as a site owner there are a few things you can do to work around a URL not having sufficient data:
Use the Core Web Vitals report by Search Console, which groups similar pages together, making them more likely to collectively exceed that threshold.
Look at origin-level aggregations in PSI or the CrUX API. These include user experiences from all pages on the origin, so it's much less granular, but it gives you a sense of typical experiences overall.
Instrument your site with your own first-party Core Web Vitals monitoring. web-vitals.js can be integrated with your existing analytics provider to track vitals on all of your pages (a sketch of this is shown after this list). If you're integrating with Google Analytics, you can link your data with the Web Vitals Report to see how your site is doing.
Use your site with the Web Vitals extension enabled to see the Core Web Vitals data for your own experience. Your local experiences may not be representative of most users, but this can be a great tool for validating expectations vs reality.
Use lab data as a proxy. For example, lab data from Lighthouse in PSI can tell you how a mobile user on a slow connection might experience your page. This should really only be used as a last resort when no other field data is available.
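As a rough illustration of the first-party monitoring option above, here is a minimal sketch using the web-vitals library with a gtag.js-based Google Analytics setup (the event parameter names are just one common convention; adapt them to whatever your analytics provider expects):

// Report Core Web Vitals to Google Analytics via gtag.js.
// Assumes web-vitals v3+ (onCLS/onINP/onLCP) and that gtag() is already loaded on the page.
import { onCLS, onINP, onLCP } from 'web-vitals';

function sendToAnalytics(metric) {
  gtag('event', metric.name, {
    value: metric.delta,        // delta since the last report, so values can be summed
    metric_id: metric.id,       // unique ID for this page load, useful for aggregation
    metric_value: metric.value, // the current metric value
    non_interaction: true,      // avoid affecting bounce rate
  });
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);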

Unlimited storage with Chrome extensions chrome.storage.local?

The docs for Chrome extensions chrome.storage.local state:
The maximum amount (in bytes) of data that can be stored in local storage, as measured by the JSON stringification of every value plus every key's length. This value will be ignored if the extension has the unlimitedStorage permission.
However, the docs for Chrome extensions permissions have a note alongside the unlimitedStorage permission which states:
This permission applies only to Web SQL Database and application cache (see issue 58985).
It's unclear to me what this note means in regard to chrome.storage.local, especially considering that the issue referenced is from 2010, which predates the introduction of chrome.storage by 5 years.
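For reference, the quota and current usage can at least be inspected at runtime from the extension itself. A minimal sketch (callback style; it only reports the documented chrome.storage.local quota and does not settle what unlimitedStorage actually changes):

// Log chrome.storage.local usage against its documented quota.
chrome.storage.local.getBytesInUse(null, function (bytesInUse) {
  var quota = chrome.storage.local.QUOTA_BYTES; // 5242880 per the docs
  console.log('Using ' + bytesInUse + ' of ' + quota + ' bytes');
});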

Trouble-shooting slow-loading documents from DocuSign

A customer representative suggested that I try posting these questions here.
We spent some time monitoring issues with DocuSign loading slowly. While it was not slow every time, when it was slow it seemed to hang at a particular point in the process.
Below is a screenshot of a trace we ran in the browser; note the element which took 52 seconds to load. When loading was slow, it seemed to hang on this particular element. Could you offer any reasons why it could sometimes take 52 seconds or more to load this part?
We also have some other questions:
There seems to be continuous font downloading (2 or 3 MB in size) throughout the process of loading the page. This occurs each time. Why is this, and can it be avoided?
Why do we sometimes see Seattle as the connection site when most of the time it is Chicago?
We noticed that DocuSign asks for permission to know our location. Does this location factor into where the document is downloaded from? Is the location also used in embedded signing processes?
Thank you for your assistance.
Unfortunately, without a bit more detail I am not entirely sure I can tell you why the page was loading so slowly. Is this consistent? If so, is it always the same document (perhaps a template?) where you see this slowness?
As for your other three questions:
In doing my own test and decryption of the web traffic via Fiddler, I see the fonts being downloaded for each individual tag rather than once for the entire document. This is most likely because each tag has its own attributes that can be set (font included).
DocuSign data centers are in Seattle, Chicago and Dallas. All DocuSign traffic can come from any of these three data centers, as the system is kept in sync across all three locations. More info can be found here.
DocuSign geolocation simply leverages the location capability of HTML5-enabled browsers, but the signer's IP address is recorded either way. It has no impact on which data center the traffic comes from. It is also included in the embedded signing process. It can be disabled on a per-brand basis in the Signing Resource File by setting the node DocuSign_DisableLocationAwareness to true.

Streaming? Or what is it?

I have 2 web hosts from two different hosting companies. One is for hosting my web page and the second is where I upload the videos (mp4 format). At the moment I'm using http://www.longtailvideo.com/players/jw-flv-player/ because I can use HTML5, and if the client doesn't support HTML5 it falls back to a regular FLV video player.
The videos I receive have .avi or .mpeg extensions. I'm using Miro Video Converter to convert the videos to .mp4, then I upload them to my secondary web host. From there I can easily access the mp4-formatted video via URL. After everything is finished I just copy and paste the URL into my HTML document, something like this:
<video
  src="http://66.55.XXX.XXX/university/students/video1.mp4"
  width="640"
  height="480"
  id="vidi">
</video>
I already did my research about video streaming, but... I don't understand, or... am I doing it right? If I just copy and paste the link, does that mean I'm streaming the video from web host #1 to web host #2? Is that right?
Also, the videos are 1280 x 1024 HD quality, and I know that higher-quality videos take longer to buffer and load. This is why I resize the videos to 640 x 480 and also to be compatible with HTML5.
How much bandwidth am I using? And a client? If one person (a student) is viewing the video, how much bandwidth is he using? I paid for a web host with unlimited storage, because I upload 10-12 GB of data every week.
I'm very worried about the load and buffer time. Currently the web page is used by ~30-40 people, but what if the whole year group or the whole university is using the web page? What am I supposed to do?
Am I doing the streaming right? This is why I opted for 2 different web hosts, to have more bandwidth.
Sorry for the long post and for my English.
Thank you !
If I just copy and paste the link, does that mean I'm streaming the video from web host #1 to web host #2? Is that right?
Firstly, it looks like the media file is being served up via plain HTTP, with no streaming logic, so I'd not call it "streaming" but rather "progressive download". (It's a marketing ploy by hosting companies: if it is video, it must be streaming, right? Ah... no.)
Secondly, no: the video will not go from 66.55.XXX.XXX to the web server that is hosting your website. Rather, it will go straight from 66.55.XXX.XXX to the web browser.
Also, the videos are 1280 x 1024 HD quality, and I know that higher-quality videos take longer to buffer and load. This is why I resize the videos to 640 x 480 and also to be compatible with HTML5.
Resizing the video to reduce the bandwidth means you'll need to transcode the video to a smaller size. Setting the width and height attributes on the <video> tag will only change the displayed size. These two attributes have no impact on the bit rate coming from the server, and hence no impact on the buffer or load times.
How much bandwidth am I using? And a client? If one person (a student) is viewing the video, how much bandwidth is he using?
There are two terms you need to be aware of here:
Traffic: The number of bytes sent (volume)
Bandwidth: The rate at which the bytes are sent (rate)
It's an important distinction. Again, many hosting companies mix up these concepts in the name of marketing. Be careful.
How does this impact your situation? Think about it like this: if you have a 1GB video hosted and it is viewed 10 times, that is 10GB of traffic. The bandwidth depends on the server sending the file, the client's network connection speed, and the network in between. As a rule of thumb you don't need to worry about this, except for two points:
The bit rate of the video needs to be smaller than the bit rate of the network connection between client and server. If not you'll have buffering during video play back.
Your hosting company may (probably does!) restrict how many concurrent users can view a video at once. If 100 people download the video at once, each at an average of 2 Mbps, that is 200 Mbps of bandwidth!
Unless you have more than 10-100 viewers a day, I would not worry about bandwidth too much.
A simple way to calculate the bandwidth of your video is:
bit rate = (bytes * 8) / (time in seconds)
Silly example: 800s long 1GB video (rounded for clarity)
bit rate = (1,000,000,000 bytes * 8 bits per byte) / (800 seconds)
bit rate = (8,000,000,000 bits) / (800 seconds)
bit rate = 10,000,000 bits per second
bit rate = 10,000 kilobits per second
bit rate = 10 megabits per second
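If you want to sanity-check your own files, here is the same arithmetic as a small helper (a sketch; it assumes you already know the file size in bytes and the duration in seconds):

// Average bit rate of a video given its file size and duration.
function bitRateKbps(fileSizeBytes, durationSeconds) {
  var bitsPerSecond = (fileSizeBytes * 8) / durationSeconds;
  return bitsPerSecond / 1000; // kilobits per second
}

// Example: a 1 GB file that runs 800 seconds is about 10,000 kbps (10 Mbps).
console.log(bitRateKbps(1000000000, 800));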
I paid for a web host with unlimited storage, because I upload 10-12 GB of data every week.
"Unlimited storage", ah, maybe. If you upload enough data, at some point someone is going to take notice and tell you that your terms of service have been violated. The hosting market is evil that way.
I'm very worried about the load and buffer time. Currently the web page is used by ~30-40 people, but what if the whole year group or the whole university is using the web page? What am I supposed to do?
At once? Or per day? Is that people watching one video? Or videos viewed? If you have 10-25 concurrent viewers, then you probably should be at least mildly worried about the hosting company.
Frankly, the web/video hosting market is full of bait-and-switch tactics, opaque pricing, gangster terms of service, and convoluted marketing speak. You would probably be better served by using a service like Amazon's AWS. Specifically, use Amazon S3 to store your videos and Amazon CloudFront to stream the videos to the clients. All of this has three distinct advantages over shady hosting companies:
Fair and transparent pricing (including an online calculator)
Pay for what you use (and not more)
Effectively unlimited storage and bandwidth (AWS has terabits of bandwidth and exabytes of storage)
I highly recommend AWS for small but non-trivial projects like you seem to have.
And go full size HD! It is a much more compelling experience for your viewers.
Good luck!

Difference between Ad company statistics, Google Analytics and Awstats on adult sites

I have this problem. I have a web page with adult content, and for the past several months I've had PPC advertisements on it. I've noticed a big difference between the ad company's statistics for my page, the Google Analytics data, and the Awstats data on my server.
For example, the ad company tells me that I have 10K pageviews per day, Google Analytics tells me that I have 15K pageviews, and in Awstats it's around 13K pageviews. Which system should I trust? Should I write my own (and reinvent the wheel again)? If so, how? :)
The funny thing is that I have another web page with "normal" content (an MMORPG fan site), and those numbers are more or less equal in all three systems (ad company, GA, Awstats). Do you think it's because it's not an adult-oriented page?
And a final question, which is totally off-topic: do you know of an ad company that pays per impression and doesn't mind adult sites?
Thanks for the answers!
First, you should make sure not to mix up »hits«, »files«, »visits« and »unique visits«. They each have a different meaning and sometimes go by different names. I recommend looking up some definitions if you are confused about the terms.
Awstats probably has the most accurate statistics, because it has access to the web server's access.log. Unfortunately, a cached page (maybe cached by the browser, an ISP proxy or your own caching server) might not produce a hit on the web server. Especially if your site is served with good caching hints which don't force revalidation and you are running your own web cache (e.g. Squid) in front of your site, the number will be considerably lower, because it only measures the work of the web server.
Google Analytics, on the other hand, is only able to count requests from users who haven't blocked Google Analytics and have JavaScript enabled (but it will count pages served by a web cache). So this count can be influenced by the user, but isn't affected by web caches.
The ad company is probably simply counting the number of requests which they get from your site (probably based on their own access.log). So, to get counted there, the ad must not be cached and must not be blocked by the user.
So, as you can see, it's not that easy to get a single correct value. But as long as you use the measured values in comparison to those from the previous months, you should get at least a (nearly) correct rate of growth.
And your porn site probably serves a large amount of static content (e.g. images from disk), and most web servers are really good at serving caching hints for static files automatically. Your MMORPG site, on the other hand, probably consists mostly of dynamic scripts (PHP?) which don't send any caching hints at all, and web servers aren't able to determine those caching headers for dynamic content automatically. That's at least my explanation, without knowing your application and server configuration :)
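To make the caching-hint point concrete: whether an intermediary cache can serve a page without hitting your server usually comes down to headers like Cache-Control. A minimal sketch, shown here as a Node.js handler purely for illustration (a PHP script would emit the same headers with header()):

// Explicit caching hints on a dynamic response.
// Static files usually get similar headers from the web server automatically;
// dynamic scripts must set them explicitly, or caches will keep revalidating.
const http = require('http');

http.createServer(function (req, res) {
  res.setHeader('Cache-Control', 'public, max-age=300'); // cacheable for 5 minutes
  res.setHeader('Content-Type', 'text/html; charset=utf-8');
  res.end('<p>Hello</p>');
}).listen(8080);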
