We are facing two problems when we use the Facebook conversion pixel script on our site and check its performance with Google PageSpeed Insights.
The following script loads automatically, but it causes trouble in two areas of the PageSpeed Insights report:
https://connect.facebook.net/signals/config/166971126971821
It appears in the unused JavaScript section.
It is flagged for not having an efficient cache policy.
Please explain how these two problems can be addressed.
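One mitigation that usually helps with the unused JavaScript flag is to delay injecting the pixel until the browser is idle or the user first interacts, so it stays off the critical load path (and often out of the Lighthouse trace entirely). The inefficient cache policy comes from the Cache-Control headers Facebook sets on connect.facebook.net, so it can't really be changed from your side. A minimal sketch, assuming the standard inline fbq stub stays in the page; the helper names and the 5-second timeout are illustrative:

```typescript
function injectScriptOnce(id: string, src: string): void {
  if (document.getElementById(id)) return; // avoid double-injection
  const script = document.createElement("script");
  script.id = id;
  script.async = true;
  script.src = src;
  document.head.appendChild(script);
}

function deferUntilIdleOrInteraction(load: () => void, timeoutMs = 5000): void {
  let started = false;
  const run = () => {
    if (started) return;
    started = true;
    load();
  };
  // The first user interaction wins...
  ["scroll", "click", "keydown", "touchstart"].forEach((type) =>
    window.addEventListener(type, run, { once: true, passive: true })
  );
  // ...otherwise fall back to idle time (or a plain timeout where unsupported).
  if ("requestIdleCallback" in window) {
    window.requestIdleCallback(run, { timeout: timeoutMs });
  } else {
    window.setTimeout(run, timeoutMs);
  }
}

// Assumption: the standard inline fbq stub from the pixel snippet is kept in the
// page, so fbq("init", ...) / fbq("track", ...) calls queue until this loads.
deferUntilIdleOrInteraction(() =>
  injectScriptOnce("fb-pixel", "https://connect.facebook.net/en_US/fbevents.js")
);
```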
I am using Yandex Metrika on my site Thegoldlive.com and facing a Core Web Vitals issue because of it. I believe it's the main reason my site is getting slow. Is there any way to get rid of the issue, or should I remove it from the site?
When I remove it, the site speed improves. But I don't want to remove it, because it helps me analyze visitors to the site very well. That's why I'm asking: is there any way to keep both working in parallel?
Running your site through PageSpeed Insights, it appears your issues are with loading time (TTFB, FCP, and LCP) and shifting content (CLS).
I'm not familiar with Yandex Metrika, but it seems unlikely that an analytics solution would slow down these metrics; analytics scripts mostly affect responsiveness metrics like FID and INP.
I can't quite see the reason for the slow TTFBs (the site seems fast to me!), and TTFB directly affects the other loading metrics. You seem to be using a CDN (Cloudflare), and the server response time in lab tests looks fast.
It could be that you simply get a lot of visitors on slow networks or devices. If so, one thing that can help is making sure pages are eligible for the back/forward cache, so visitors at least get a fast (instant!) load when navigating back and forward within the site. Testing your site for this shows it is using an unload handler, which means you can't benefit from this performance gain. It looks like you are using Cloudflare's Rocket Loader - ironically, something that is supposed to improve performance but that might be holding you back here. I'd turn that off.
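If any of your own scripts also register unload handlers, the usual replacement is a pagehide listener, which fires in the same situations without making the page ineligible for the back/forward cache. A minimal sketch; the sendExitBeacon function and the /analytics endpoint are hypothetical stand-ins for whatever the handler currently does:

```typescript
// Hypothetical cleanup/analytics call that an unload handler might be doing.
function sendExitBeacon(): void {
  // sendBeacon is designed to survive the page being unloaded or frozen.
  navigator.sendBeacon("/analytics", JSON.stringify({ event: "page-exit" }));
}

// Instead of: window.addEventListener("unload", sendExitBeacon);
window.addEventListener("pagehide", () => {
  sendExitBeacon();
});

// Optional: observe when a page is restored instantly from the bfcache.
window.addEventListener("pageshow", (event: PageTransitionEvent) => {
  if (event.persisted) {
    console.log("Restored from the back/forward cache");
  }
});
```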
For your layout shift issues (CLS), the cause is much more obvious. You have an advertisement that pops in and out and pushes all the content down. You'd be better off reserving a block of white space for it to slot into, rather than having it dynamically inserted and moving the text around, which is an irritating experience for site visitors.
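A minimal sketch of the idea, with an assumed .ad-slot wrapper and a 280px placeholder height (use whatever your tallest creative needs); the same thing can be done with a plain min-height rule in your stylesheet:

```typescript
// Reserve the ad's space before the ad script injects the creative, so the
// surrounding text does not get pushed down when it arrives.
const AD_SLOT_SELECTOR = ".ad-slot"; // hypothetical wrapper element around the ad
const RESERVED_HEIGHT_PX = 280;      // roughly the height of the tallest creative served

document.querySelectorAll<HTMLElement>(AD_SLOT_SELECTOR).forEach((slot) => {
  slot.style.minHeight = `${RESERVED_HEIGHT_PX}px`;
});
```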
I am currently attempting to complete the AZ-204 training for the Microsoft exam. However, on several of the learning paths, when I go to load an Azure Cloud Shell as part of the module, it can take upwards of 20 minutes to load.
[Screenshot: black screen displayed by the Cloud Shell while loading]
It will eventually complete and allow access to the shell, but it can be upwards of half an hour before it does. Does anyone have an explanation for this? Do training accounts receive a lower priority? I'm fairly confident it cannot be a connection issue, as I'm currently in a professional environment with a secure connection (as in, I'm not working from home).
Thanks in advance for any help.
Not sure which browser you are seeing this issue with, but Azure Cloud Shell opening without bringing up the prompt is documented as a known issue in Edge and Safari. Chrome is the recommended browser for running Learn exercises for the best user experience.
If feasible, you can also consider running Microsoft Learn exercises in your own subscription, if you're still unable to use the Sandbox environment for some reason. (Note that if you use your own subscription, you will be charged for any active resources).
If the issue persists, you can report feedback at the bottom of each unit in a Learn module. This will let you send the team feedback about the content or the Learn experience.
I have a website that can be used by 50 users at the same time. Those users will be in the same room.
My problem is knowing how much bandwidth (in Mb/s) I need to rent for that room so that they can access my website comfortably (both upload and download).
The average page size of my website is 1 MB.
I searched for answers on the internet and all I got was bandwidth used per month (for servers).
Sorry if my question is "vague"; I did my best to make it clear.
Thank you in advance for your answers.
Using https://gtmetrix.com/ you can test your website's speed, page size, and load times.
There are several alternatives; you just have to do the research.
The more important issue to focus on is why your page is 1 MB; resolving that should be your first priority, and tools like GTmetrix can help.
I recommend load testing your site to figure that out. If you're at all familiar with JMeter, you can use it to create a script that simulates a user navigating your site, then run multiple instances of that user (in your case, 50) to see how the site holds up under load.
You can learn more about JMeter here:
https://jmeter.apache.org/
If you're not familiar with creating JMeter scripts, you can record and auto-generate basic scripts using the BlazeMeter Chrome Extension.
For low-load testing (50 users is pretty low), you can upload your JMeter script to BlazeMeter, and with a free-tier BlazeMeter account you can run some basic tests to see how your site holds up. If you go that route, I recommend focusing on average response time and hits/second in order to determine what your bandwidth need truly is under load.
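To complement the load test with a back-of-envelope number for the original question: peak bandwidth is roughly users × page size ÷ acceptable load time. The target load time and overhead factor below are assumptions, not measurements of the site:

```typescript
// Rough worst case: everyone in the room loads the 1 MB page at the same moment.
const CONCURRENT_USERS = 50;      // from the question
const PAGE_SIZE_MB = 1;           // average page weight from the question
const TARGET_LOAD_SECONDS = 5;    // assumed acceptable time for a full page load
const OVERHEAD_FACTOR = 1.3;      // assumed headroom for TLS/HTTP overhead and retransmits

const megabytesPerSecond =
  (CONCURRENT_USERS * PAGE_SIZE_MB) / TARGET_LOAD_SECONDS;
const megabitsPerSecond = megabytesPerSecond * 8 * OVERHEAD_FACTOR;

console.log(`~${megabitsPerSecond.toFixed(0)} Mb/s of downstream bandwidth`);
// 50 users × 1 MB / 5 s = 10 MB/s ≈ 80 Mb/s, about 104 Mb/s with 30% headroom.
```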
I get different results running Google PageSpeed Insights (mobile) from Chrome DevTools and from the Google PageSpeed Insights page.
When I run a Performance audit (mobile, 3G) from Chrome DevTools, I get a higher score than on the "official" page.
Running from Chrome DevTools, it says I've implemented a given optimization, but running the test from the Google PageSpeed Insights page, it still suggests that optimization.
I've tried testing at different times, but the scores on Google PageSpeed Insights are always lower than in Chrome DevTools.
I've implemented some optimizations such as "defer offscreen images" with lazy loading, and I've deferred the CSS loading, but only PageSpeed in Chrome DevTools recognizes these optimizations.
Can someone help me with this?
I've been having the same issue, and I believe the answer is in this guide, in the following FAQ:
Why do the field data and lab data contradict each other? The Field data says the URL is slow, but the Lab data says the URL is fast!
The field data is a historical report about how a particular URL has performed, and represents anonymized performance data from users in the real-world on a variety of devices and network conditions. The lab data is based on a simulated load of a page on a single device and fixed set of network conditions. As a result, the values may differ.
A customer representative suggested that I try posting these questions here.
We spent some time monitoring issues with DocuSign loading slowly. While it was not slow every time, when it was slow it seemed to hang at a particular point in the process.
Below is a screenshot of a trace we ran in the browser; note the element that took 52 seconds to load. When loading was slow, it seemed to hang on this particular element. Could you offer any reasons why it could sometimes take 52 seconds or more to load this part?
We also have some other questions:
There seems to be continuous font downloading (2 or 3 MB in size) throughout the process of loading the page. This occurs each time. Why is this, and can it be avoided?
Why do we sometimes see Seattle as the connection site when most of the time it is Chicago?
We noticed that DocuSign asks for permission to know our location. Does this location factor into where the document is downloaded from? Is the location also used in embedded signing processes?
Thank you for your assistance.
Unfortunately, without a bit more detail I am not entirely sure I can tell you why the page was loading so slowly. Is this consistent? If so, is it always the same document (perhaps the same template?) where you see this slowness?
As for your other three questions:
In doing my own test and decrypting the web traffic via Fiddler, I see the fonts being rendered for each individual tag and not for the entire document. This is most likely because each tag has its own attributes that can be set (font included).
DocuSign data centers are in Seattle, Chicago, and Dallas. All DocuSign traffic can come from any of these three data centers, as the system exists synchronously in all three locations. More info can be found here.
DocuSign geo-location is just used to leverage the location capability of HTML5-enabled browsers, but the signer's IP address is recorded either way. It has no impact on which data center the traffic comes from. It is also included in the embedded signing process. It can be disabled on a per-brand basis in the Signing Resource File by setting the node DocuSign_DisableLocationAwareness to true.