I get the following message when running Google PageSpeed Insights for a page:
Field Data - Over the previous 28-day collection period, field data shows that this page cannot be assessed due to missing required Core Web Vitals metric(s).
I also note that two of the fields in PSI are greyed out for this page, as per this screenshot:
PSI Field Data - greyed out fields
However, the Lab Data section of PSI shows all the relevant metrics, so the page is clearly accessible, as per this screenshot:
PSI Lab Data
Does anyone know what could be causing the "page cannot be assessed" message and the greying out of certain metrics? Please note that the page has always had sufficient traffic to generate field data in PSI.
Thanks
That message gets shown if there's not enough data, and it can fluctuate over time - for example, if there was more site traffic during one 28-day period than another. It's not uncommon to see data for LCP & FCP but not FID: FID can only be measured if the user interacts with the page, and that doesn't happen on every page load. However, a lack of data for CLS (when there's sufficient LCP & FCP data) is unusual and not expected.
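For context, here's a rough sketch of how field (RUM) data is typically collected in the page with Google's web-vitals library - it shows why FID depends on an interaction while LCP and CLS don't. The "/analytics" endpoint is just a placeholder, not something from your setup:

// Rough sketch of field (RUM) collection using the web-vitals library;
// the "/analytics" endpoint is a made-up placeholder.
import { onCLS, onFID, onLCP } from "web-vitals";

function report(metric: { name: string; value: number }) {
  // sendBeacon keeps working even if the user is leaving the page
  navigator.sendBeacon("/analytics", JSON.stringify({ name: metric.name, value: metric.value }));
}

onLCP(report); // reported on essentially every page view
onCLS(report); // reported on essentially every page view
onFID(report); // reported only after the user's first interaction (tap, click, key press)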
Is there a way to track the amount of data transferred during query refresh?
The size is shown in the 'Queries' pane very briefly (depending on the wait):
The only solution I can think of is saving the pure output to CSV and measuring the filesize.
Is there a more direct, less cumbersome way?
The main reason behind this is that I want to see:
1: If the size could cause issues for users with slow connections
2: The effect of my optimisation attempts
Please note that this would ideally be source-agnostic; however, my primary focus is SharePoint lists & files.
I don't think you can do that, but you could check the file size by connecting to the SharePoint folder, then drilling into the Attributes record and grabbing the Size field.
See
https://learn.microsoft.com/en-us/power-query/connectors/sharepointfolder
or
https://www.myonlinetraininghub.com/get-data-from-onedrive-or-sharepoint-with-power-query
A sample for the desktop (local folder) version, not SharePoint:
let
    Source = Folder.Files("C:\temp\"),
    // drill into the Attributes record of the file you care about
    MyDataAttributes = Source{[#"Folder Path" = "C:\temp\", Name = "mydata.csv"]}[Attributes],
    // grab its Size (in bytes)
    Size = MyDataAttributes[Size]
in
    Size
Similar to the project I am working on, this website has a search bar at the top of its home page:
On the linked website, the search bar works seemingly instantly as soon as you visit. According to their own site, there have been roughly 20K MLB players in MLB history, which is a good estimate for the number of dropdown items in this select widget.
On my project, it currently takes 10-15 seconds to fetch (from MongoDB, using Node + Express) a table of roughly 15 MB that contains the data for the select's dropdown items. This 15 MB is as small as I could make the table, as it includes only two keys per dropdown item (one for the id and one for the name). The table is large because there are more than 150K options to choose from in my project's select widget. I currently have to disable the widget for the first 15 seconds while the data loads, which results in a bad user experience.
Is there any way to make the data required for the select widget immediately available when users visit, so that the widget does not have to be disabled? In particular:
Can I use localStorage to store this table in the user's browser? Is 15 MB too big for localStorage? This table changes / increases in size daily (so it's not very stable), and a copy in localStorage would be outdated the next day, no?
Can I avoid having to do this fetch altogether? Is there perhaps a way to load the correct data into React only when a user searches for it?
Some other approach?
Fetching or loading this 15 MB of data for the select more quickly would improve our React app's user experience by quite a bit.
The data on the site you link to is only about 20 KB in size. It does not contain all the players but fetches the data as needed when you click on a link in the drop-down. So if you have 20 MB of searchable data, you need to find a way to load only what's required. How to do that sensibly depends on the nature of the data. Many search bars with large result sets behind them use a typeahead search, where the user's input is posted back as they type (with a decent debounce interval) and the matching results are sent back in real time (usually limited to, say, the first 20 or 50 results).
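As a rough illustration, here's a minimal client-side sketch of that typeahead pattern in TypeScript. The /api/players/search endpoint, its query parameters and its response shape are assumptions, not taken from your project:

// Debounced typeahead sketch; endpoint and response shape are assumed.
type Option = { id: string; name: string };

function debounce<T extends (...args: any[]) => void>(fn: T, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>) => {
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

async function searchPlayers(term: string): Promise<Option[]> {
  if (term.trim().length < 2) return []; // skip very short inputs
  const res = await fetch(`/api/players/search?q=${encodeURIComponent(term)}&limit=20`);
  if (!res.ok) throw new Error(`Search failed: ${res.status}`);
  return res.json();
}

// Wire this to the select/input's onChange: only the ~20 matching rows
// travel over the wire instead of the whole 15 MB table.
const onType = debounce(async (term: string) => {
  const options = await searchPlayers(term);
  console.log(options); // render into the dropdown here
}, 300);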
So the answer is basically to find a way to serve up only the data the user needs, rather than downloading the entire database to the browser (option 2 in your list). You'll obviously need to provide a search API to allow that to happen.
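Here's a hedged sketch of what that search API might look like with Express and the official MongoDB driver - the database name, the "players" collection, the "name" field and the connection string are all assumptions:

// Sketch of a limited, indexed search endpoint; names and connection string are assumed.
import express from "express";
import { MongoClient } from "mongodb";

const app = express();
const client = new MongoClient("mongodb://localhost:27017"); // assumed connection string

app.get("/api/players/search", async (req, res) => {
  const q = String(req.query.q ?? "").trim();
  const limit = Math.min(Number(req.query.limit) || 20, 50); // cap the page size
  if (q.length < 2) return res.json([]);

  // Escape regex metacharacters so user input can't break the query.
  const escaped = q.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  const players = client.db("appdb").collection("players");

  // Prefix match; to keep this fast with 150K+ documents, index the name field
  // (use a case-insensitive collation or a lowercased copy of the name if you
  // need case-insensitive matching like the "i" option here).
  const results = await players
    .find({ name: { $regex: `^${escaped}`, $options: "i" } }, { projection: { name: 1 } })
    .limit(limit)
    .toArray();

  res.json(results.map(p => ({ id: p._id.toString(), name: p.name })));
});

client.connect().then(() => app.listen(3000));

With something like this in place, the browser never holds the full table, so there's nothing to disable while 15 MB downloads.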
I'm trying to view our peak bytes received/sec count for an Event Hub in order to scale it properly. However, the portal is showing vastly different results between the "daily" view and the "hourly" view.
This is the graph using the "hourly" view:
From here, it looks like I'm peaking at around 2.5 MB/s. However, if I switch to the "daily" view, the numbers are vastly different:
So I can't make sense of this. It's the exact same counter, showing vastly different results. Anyone know if the Azure portal performs any "adding" or similar?
Edit: Notice that the counter is "bytes received per second". It shouldn't matter whether I look at the hourly or daily view; a per-second rate shouldn't be affected by that (though it clearly is).
I have 4 databases, and they contain more than 200,000 documents. A viewPanel that shows all documents in a database does not load correctly. It fails with an error after a short wait. If the view does not contain a lot of documents, no error is given.
I could not find a solution for this situation :(
I added this line to the application properties, but it did not solve my problem:
xsp.domino.view.navigator=ByNoteId
Regards
Cumhur Ata
There are a number of performance "sins" you can commit on Domino. Unfortunately Domino is too forgiving and somehow still works even if you do them. The typical sins:
Using @Yesterday, @Today, @Now or @Tomorrow in a view selection formula or a sorted column in a view. I wrote an article about your options to mitigate that.
Having code that does a view.refresh before opening a page
Using reader fields and accessing a view that is not categorized by that reader field. This hits only users who can see just a few documents. Check this article for possible remedies.
Not having a fast temp location for view rebuilds. Typical errors: not enough disk I/O, or having your transaction log on the same channel as your databases. Make sure you have a high-performance server.
For Windows servers: not taking care of disk fragmentation (includes links to performance troubleshooting).
Not using ODS 51/52 with compression for data and design active. It takes a simple command to fix.
That's what I can check off the top of my head. Loading 200k documents into a panel in one go doesn't look like a good UX approach; paginate it eventually.
In one of my SharePoint sites I have a document library with 7,000 document sets; if I count the files inside, there are 21,000 files in total.
In the beginning we had some views, but as they grew we ran into list view threshold issues. What I did was remove some of those views and use search results web parts to get the results the user wants. For me, increasing the threshold is not a solution because this document library grows fast (about 2,000 items per month).
This solved the problem for some time.
However, some users do need to export to Excel to do pivots based on this data. The only way I can think of is using Reporting Services in SharePoint-integrated mode, because I can export reports to Excel and then they can do their pivots.
The question is: will I have the same threshold problem when I build a report based on list data?
What other options do I have?
I have exported files to Excel with 600,000+ rows. If the data you are pulling reaches a certain size, you will have to switch to .csv files, as Excel has a row limit too. The main issues you will run into on very large datasets are timeouts, which can be managed by configuring your HTTP and SSRS timeouts; however, this will lead to other issues, including long-running reports of 15+ minutes and heavy bandwidth usage.
I would recommend testing your scenario with a simple report that pulls your data to see where the limit is reached. Also, look into filtering mechanisms using parameters to reduce the amount of data returned to the client. If it becomes unmanageable, you may want to look into SSIS or some of the data-warehousing features. SSRS also has cached reports, which can shift the processing burden to off-hours if real-time data is not a necessity.