PageSpeed Insights shows no data when I run a website analysis (www.savingwithtalis.com). In January I noticed no data showing for FID. At the same time, a decline in views started. At present there is no data at all (image attached). GA shows visitors, however. My website has never had a high volume of visitors, yet it used to show data. I ran GTmetrix and it shows data (my score is a
I've cleared the cache, run a site security check (I have Wordfence installed), run a site health check, run a Maldet scan, and deactivated what I installed in January (Cloudflare & NitroPack).
I checked this behavior of Google's Lighthouse bot and it seems consistent across the several sites I checked. I will use one page as an example:
https://groupprops.subwiki.org/wiki/Groups_of_order_16
When I load this webpage in Chrome, the main page HTML loads gzipped.
On desktop, when I look up the performance entry using window.performance.getEntriesByName, I see a transferSize of 26866 and a decodedBodySize of 145861.
The corresponding numbers on mobile are a transferSize of 24191 and a decodedBodySize of 133739.
When I run Lighthouse from within Chrome DevTools, I similarly see a transferSize of 26866 and a decodedBodySize of 145861 for desktop, and a transferSize of 24191 and a decodedBodySize of 133739 for mobile emulation.
However, when I visit https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fgroupprops.subwiki.org%2Fwiki%2FGroups_of_order_16 to trigger a Lighthouse score calculation for the page, the Google bot that visits the page does not request it gzipped. In fact, the transferSize for mobile emulation is 134337, a little higher than the decodedBodySize due to headers. (To capture this information I logged the data using JavaScript running on the page and sent it back to a server.)
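For what it's worth, the on-page logging described above could look roughly like the sketch below. The "/log" endpoint is a placeholder, not the actual URL from my setup:

    // Grab the navigation timing entry for the main document; its name is
    // the page's own URL. transferSize is the on-the-wire size (small when
    // gzip is used), decodedBodySize is the size after decompression.
    const [nav] = window.performance.getEntriesByName(location.href);
    if (nav) {
      // "/log" is a placeholder collection endpoint; sendBeacon posts the
      // data without blocking the page.
      navigator.sendBeacon('/log', JSON.stringify({
        url: location.href,
        userAgent: navigator.userAgent,
        transferSize: nav.transferSize,
        decodedBodySize: nav.decodedBodySize,
      }));
    }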
I confirmed across a diverse range of sites with different setups that page loads made by the Lighthouse bot through the PageSpeed Insights UI (https://developers.google.com/speed/pagespeed/insights/), or the corresponding API, do not use gzip compression even when it is enabled on the server and used by all modern clients.
Ideas?
Suppose a website loads ads through a client's (ad service provider's) interface. These ads are only displayed to customers who have an ad blocker enabled, are using the Firefox browser, or have not given consent.
The ad loads only after the entire web page has been completely retrieved. This results in a layout shift.
Q) Is this layout shift not considered in the current PageSpeed Insights lab data? The Google bot (PageSpeed Insights) is probably not running an ad blocker or Firefox.
Can anyone help me on this?
Open the Chrome Developer Tools (Ctrl+Shift+I or F12 on Windows).
Select the Network tab.
Reload the page.
Check whether those ad resources are loading. If they are, Google is taking them into account while calculating the PageSpeed score (see the snippet below).
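You can also check this from the DevTools console; a rough sketch follows, assuming the ad requests contain something like "doubleclick" or "adservice" in their URLs (adjust the pattern to your actual provider's domains):

    // List resource entries whose URL looks ad-related. The substrings
    // below are only examples; swap in your ad provider's domains.
    const adEntries = performance
      .getEntriesByType('resource')
      .filter(e => /doubleclick|adservice/i.test(e.name));
    console.table(adEntries.map(e => ({ url: e.name, start: e.startTime })));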
Is it considered in lab data? Probably not (depending on your timings).
However, as far as the upcoming Core Web Vitals update is concerned, it will still affect your score there.
You see, Google will use the Chrome User Experience dataset (CrUX dataset) to determine your CLS etc.
The CrUX dataset is collected from real world users (who may be using an ad blocker!).
The big difference is that when collecting the Cumulative Layout Shift (CLS) data for the CrUX dataset, Google actually monitors the page for layout shifts until page unload.
So if you have a layout shift at any point while someone is using the site, it will affect your CLS score.
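If you want to see this for yourself, a minimal sketch of observing layout shifts for the whole page lifetime (roughly in the spirit of what the field data captures) might look like this:

    // Accumulate layout shifts for the lifetime of the page. Shifts that
    // happen right after user input (hadRecentInput) don't count toward CLS.
    let cumulativeShift = 0;
    new PerformanceObserver(list => {
      for (const entry of list.getEntries()) {
        if (!entry.hadRecentInput) {
          cumulativeShift += entry.value;
        }
      }
    }).observe({ type: 'layout-shift', buffered: true });

    // Log the running total whenever the page is hidden (e.g. on unload).
    addEventListener('visibilitychange', () => {
      if (document.visibilityState === 'hidden') {
        console.log('CLS so far:', cumulativeShift);
      }
    });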
If you have enough traffic you can see this data under the "Origin Summary" and/or "Field Data" sections of your Lighthouse report.
EDIT: This issue seems to affect only Chrome and Safari on my MacBook Pro. I can't replicate it on other computers or browsers. I thought it might have been malware or a virus, so I reformatted my MacBook, which didn't fix the issue. All of a sudden I am running into this issue when developing on my local server with MAMP as well. Assets are missing everywhere and some pages fail to load altogether.
I've noticed recently when I refresh my Vue SPA with the cache disabled, the page tends to look messed up with missing images/resources.
When I check the console, I see a lot of ERR_CONNECTION_REFUSED for resources that are definitely there. If I refresh the page, the errors go away. It tends to happen after I clear cache and load up the webpage for the first time, or if I disable cache in the developer console.
It turns out there had recently been a DDoS attack against my IP address, so my hosting service force rate-limited connections to my IP address. So if you ever run into the same issue, check with your hosting company first.
We recently migrated from a SQL Server 2008 SSRS server to a new SQL Server 2016. The entire report catalog was restored and upgraded to this new server. Everything is working, except the horrible performance of the web portal.
The performance while connecting to the web portal from a domain joined computer seems decent, but connecting to the web portal over the internet is seriously frustrating. Even simply trying to browse a directory of reports is a wait of several seconds, and running any given report is similarly slow. It's slow in IE11, Edge, Chrome, Safari, you name it. Like 25+ seconds to go from login to viewing the Home directory.
We are using NETWORK SERVICE as the service account and NTLM authentication. We aren't getting any permission errors in the UI or the logs. HOWEVER, in the Edge browser, using the developer tools, I notice several 401 Unauthorized HTTPS GET requests for things like reports/assets/css/app-x-x-x-bundle.min.css. So perhaps there is a permissions issue somewhere? Another interesting item in the developer tools is that requests like reports/api/v1.0/CatalogItemByPath(path=#path)?#path= are taking around 10 seconds. These are JSON responses.
Certain reports that, for example, have one parameter depend on another parameter's selection will sometimes not work. The waiting icon spins, and when the report returns, the selection is not kept, nor is the second parameter filled. Sometimes it works, however, which is part of what makes this so maddening.
There are no explicit error messages, but something is getting bogged down in a major way. There are no resource issues on this server that we can see; it has plenty of headroom in terms of RAM and CPU.
This is not a report optimization problem; the entire UI is slow for everything.
In Google Analytics I am getting hundreds of hits to pages that don't exist on my website, which I assume are some sort of spam or bot-related thing.
I want to make sure that this isn't going to cause any issues to my site or be a security risk.
My website URL is imageworkshop.com, and the links that I am seeing are to the following paths on this domain:
/imagework/ineeta.V1.02.07.php
/imagework/ineeta.V1.02.13.php
/imagework/ineeta.V1.02.15.php
/imagework/ineeta.V1.03.01.php
/imagework/ineeta.V1.02.16.php
/imagework/ineeta.V1.02.08.php
Each of these pages is showing 150-300 page views (they just show 404 errors).
Average time on page shows 2-4 minutes for these.
The source of the link shows as (direct) in Google Analytics.
Is this some kind of attempt at a brute force / SQL injection attack?
The visits have all happened 3-4 days apart through the month of October 2011.
Any suggestions on what this is or if I should be concerned?
The website is built on WordPress and does use a few plugins, so there is always a possibility that these links are related to a plugin, I guess?
I have WordPress up to date with the latest version (currently 3.2.1).
You're probably right. As long as those pages don't exist on your server, there is no security risk.