We manage HTML content from a data source and write it directly to web pages using ASP.NET C#.
Here is the problem we are facing:
The page does not display the complete content, but when we check the page source and copy/paste it into a static HTML page, all of the content displays.
Is there any browser limitation on the maximum length of a web page?
I googled and found claims that the limit should be 10-30 KB, but in the same project we have pages up to 55 KB.
Can anyone help me out?
I've recently been benchmarking browser load times for very large text files. Here are some data:
IE: dies around 35 megs
Firefox: dies around 60 megs
Safari: dies around 60 megs
Chrome: dies around 50 megs
Again, this is simple browser load time of a basic (if large) English text file. One strange note: Firefox handles close to 60 megs before becoming non-responsive, but it only puts 55.1 megs out on the viewport. (However, I can Ctrl-A to get all 60 megs onto the clipboard.)
Naturally your mileage will vary, and this is all related to network latency, and we'll probably see vast differences if you're talking about downloading pictures etc. This is just for a single very large file of english text.
The limits (if they exist at all) are higher than 50 KB:
$ wget --quiet "http://www.cnn.com" -O- | wc -c
99863
I don't believe there is any particular constant limit on page size. I would guess it depends on how much memory the web browser process can allocate.
Install Firefox with Firebug and try to examine any factors that could be affecting the source code. Unless you are doing something odd in the C#, the output shouldn't be cut off.
I have a web page with an IMG tag pointing to my-server.com/animationID.gif.
When someone opens a web page, my server generates a new GIF animation which appears on the web page.
I'm using the gifencoder node package to generate dynamic 60-frame animations.
The animation updates every second, so I don't really see a good way to cache it...
It takes 1-3 seconds to generate the animation which is very slow.
A few years ago I used services like countdownmail and mailtimers, which generate 60-frame countdown timers. Somehow, they manage to generate them very fast, in less than 0.5-1 second.
After some debugging, it seems that the addFrame method takes most of the time (and it's called 60 times).
encoder.addFrame(ctx);
Is there a way to increase the generation speed or cache the animation?
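One way out, assuming the animation only needs to change once per second, is to generate it at most once per interval and serve the cached buffer to every request that arrives in between. A minimal sketch of that idea (`makeCachedGenerator` and `renderGif` are illustrative names, not part of gifencoder; `renderGif` stands in for your real 60-frame encoding loop):

```javascript
// Wrap an expensive generator so it runs at most once per `ttlMs`;
// every call inside that window gets the cached result.
function makeCachedGenerator(generate, ttlMs = 1000) {
  let cached = null;
  let generatedAt = 0;
  return function () {
    const now = Date.now();
    if (cached === null || now - generatedAt >= ttlMs) {
      cached = generate(); // the slow gifencoder work happens here
      generatedAt = now;
    }
    return cached;
  };
}

// Stand-in for the real 60-frame gifencoder loop.
let renders = 0;
const renderGif = () => { renders += 1; return Buffer.from(`frame-set-${renders}`); };

const getGif = makeCachedGenerator(renderGif, 1000);
getGif(); // generates
getGif(); // served from cache: still only one render
```

In an HTTP handler you would return `getGif()` for every request for the GIF URL, so the 1-3 s encoding cost is paid once per second rather than once per visitor. For a countdown to a fixed deadline this works because every viewer in the same second sees the same remaining time.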
When testing my website performance with webpagetest I get excellent results, with my pages fully loaded in under 1 s, taking around 0.6 s.
Those tests are run from my user base's location (São Paulo, Brazil), so their results should be similar.
But when I check the speed report in Google Search Console it shows around 1.4 s, which is far from the results I get here.
What I am unsure about is:
Is it because the speed report in Google Search Console is still experimental?
Or is there something wrong with how I am running these tests?
The webpage I am testing is:
https://www.99contratos.com.br/contrato-locacao-residencial.php
And a result I get from webpagetest can be seen by clicking the link below:
Results
I appreciate all the help / tips / explanations.
Kind Regards
The data in search console is 'real world data' based on visitor experiences.
It is more accurate than synthetic tests.
What you need to look at is the breakdown, not just the speeds. If you have a small percentage in the "red" and "orange" (less than 20%), then you do not need to worry; those users probably have very cheap phones and/or a poor 3G connection. You cannot do much about that.
What you need to think about also is where people are accessing your site from. If they are all from abroad then you need a CDN as close to them as possible as latency will ruin your site load speed (so look at your visitor stats in Google Analytics).
Look at what devices your users are accessing your site on, if they all have super cheap android phones then expect higher load times while they process the page (hard to determine).
Just to reassure you - your page scores 98 / 100 for me using Developer Tools Audit, which considering I am in the UK is plenty quick enough.
Couple of suggestions to improve the speed
The main thing you haven't done is inline your critical CSS.
This means that your 'above the fold' content can display the second all the HTML has loaded. By not having to wait for the CSS to arrive in a separate request, this can really improve your First Contentful Paint (FCP) and First Meaningful Paint (FMP), especially if someone is on a high-latency connection.
You could also reduce your number of requests by using inline SVGs for your icons, making your page smaller and cutting network requests. This again helps with round-trip latency: since up to 8 requests can be completed at a time, with 26 requests you have at least 5 round trips to the server (1 for the HTML, then 8, 8, 8, 1).
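That round-trip arithmetic can be sketched as a small helper (`roundTrips` is an illustrative name, and 8 parallel requests per round trip is the assumption used above, not a universal constant):

```javascript
// The HTML document costs one round trip on its own; the remaining
// requests then complete up to `concurrency` at a time.
function roundTrips(totalRequests, concurrency = 8) {
  return 1 + Math.ceil((totalRequests - 1) / concurrency);
}

roundTrips(26); // 1 (html) + ceil(25 / 8) = 5
```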
The speed (in seconds, not scores) shown in speed test results is heavily influenced by the test server's region: the closer the test server, the faster the load time.
As an example, here are speed test results for your page using GTmetrix servers in Australia (Sydney), Canada (Vancouver), and your base location, Brazil (São Paulo):
Australia - Sydney: 3.2 s
Canada - Vancouver: 1.9 s
Brazil - São Paulo: 0.8 s
So it is quite possible that the test server region used by Google Search Console is far from your base location.
By the way, when I open your page from Indonesia, it only takes about 0.9-1.2 seconds. So, congratulations, your page is fast!
I have a few questions regarding the report of lighthouse (see screenshot below)
The first is a locale thing: I assume the value 11.930 ms stands for 11 seconds and 930 ms. Is that the case?
The second is the delayed paint compared to the size. The third entry (7.22 KB) delays the paint by 3,101 ms; the fourth entry delays the paint by only 1,226 ms, although that JavaScript file is more than three times the size (24.03 KB versus 7.22 KB). Does anybody know what might be the cause?
Screenshot of Lighthouse
This is an extract of a Lighthouse report. In the screenshot of google-chrome-lighthouse you can see that a few metrics are written with a comma (11,222 ms) and others with a full stop (7.410 ms).
Thank you for discovering quite a bug! An issue has been filed in the Lighthouse GitHub repo.
To explain what's likely going on, it looks like this report was generated with the CLI (or at least a locale that is different from the one that it is being displayed in). Some numbers (such as the ones in the table) are converted to strings ahead of time while others are converted at display time in the browser. The browser numbers are respecting your OS/user-selected locale while the pre-stringified numbers are not.
To answer your questions...
Yes, the value it's reporting is 11930 milliseconds, i.e. 11 seconds and 930 milliseconds (rendered as 11,930 ms in en-US or 11.930 ms in de-DE).
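The two renderings of the same number are easy to reproduce with JavaScript's locale-aware formatting (this snippet assumes a runtime with full ICU data, which current Node.js builds ship by default):

```javascript
// One value, two locale renderings: only the grouping separator differs.
const ms = 11930;
const enUS = ms.toLocaleString('en-US'); // "11,930"
const deDE = ms.toLocaleString('de-DE'); // "11.930"
```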
The delayed paint metric is reporting to you how many milliseconds after the load started the asset finished loading. There are multiple factors that influence this number including when the asset was discovered by the browser, queuing time, server response time, network variability, and payload size. The small script that delayed paint longer likely had a lower priority or was added to your page later than the larger script was and thus was pushed out farther.
I have an Excel file I recently completed. It contains 7 empty TextBoxes, 3 CommandButtons (2 will go away once I sign off this document), and 37 OptionButtons. Everything is working great. The macro code isn't too large (195 KB), but the file takes about 15 seconds to load.
Is there a way to make it load faster?
It seems the same file opens almost immediately on a different PC elsewhere on the network, even with double the OptionButtons. I think the switch that serves my area is going bad, or just my port. It might be something else, too. Either way, it works fine on other PCs, so I am calling it good. I need to test the switch and ports now. Fun fun.
I've found this gem: http://watirwebdriver.com/page-performance/
But I can't seem to understand what this measures:
browser.performance.summary[:response_time]/1000
Does it start measuring from the second i open the browser?
Watir::Browser.new :chrome
or from the last watir-webdriver command written?
And how can i set when it starts the timer?
I've tried several scripts but I keep getting 0 seconds; that's why I'm not sure.
From what I have read (I have not actually used it on a project), the response_time is the time from the start of navigation to the end of the page load - see the answer by Tim (the gem's author) to a previous question. The graphical image on Tim's blog helps to understand the different values: http://90kts.com/2011/04/19/watir-webdriver-performance-gem-released/
The gem is for getting the performance of a single response, rather than the overall usage of a browser during a script, so there is no need to start/stop a timer.
If you are getting 0 seconds, it likely means that the response_time is less than 1000 milliseconds (in Ruby, integer division means 999/1000 gives 0). To make sure you get something non-zero, try:
browser.performance.summary[:response_time]/1000.0
Dividing by 1000.0 ensures that you get decimal values (e.g. 0.013).