Does location affect the PSI lab test - pagespeed-insights

I am curious whether your location when running a lab test in PageSpeed Insights determines the version of the site that is tested. For instance, some sites only show a cookie consent modal in certain regions; would a PageSpeed Insights test account for this if the test is run from a region that does not use the cookie consent?
Would the version of the site tested not include the consent modal, since the region we are testing from doesn't use it?

PageSpeed Insights lab tests are run from one of three locations.
PageSpeed Insights runs in a Google datacenter that can vary based on network conditions; you can check the location a test was run from by looking at the Lighthouse report's environment block:
PageSpeed Insights will report running from one of: North America, Europe, or Asia.
Typically it uses the location nearest you.
So yes, if you serve different content based on where the request comes from, then the test will reflect that (depending on the accuracy of your detection).
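To make that concrete, here is a minimal sketch of region-dependent consent serving, assuming an Express server and a geo header such as "x-geo-country" set by your CDN or geo-IP layer (both the header name and the region list are placeholders). A PSI run from a North American datacenter would take the no-consent branch, so the modal and its performance cost would never appear in that lab report:

```typescript
import express from "express";

// Regions that require a consent modal (hypothetical list).
const CONSENT_REGIONS = new Set(["DE", "FR", "NL"]);

const app = express();

app.get("/", (req, res) => {
  // "x-geo-country" is a placeholder; use whatever geo header your CDN actually sets.
  const country = String(req.headers["x-geo-country"] ?? "US");
  const needsConsent = CONSENT_REGIONS.has(country);

  // A lab test run from a region outside CONSENT_REGIONS gets needsConsent === false,
  // so the consent markup is simply never rendered for that run.
  res.send(`<!doctype html>
<html>
  <body>
    <h1>Hello</h1>
    ${needsConsent ? '<div id="consent-modal">Cookie consent…</div>' : ""}
  </body>
</html>`);
});

app.listen(3000);
```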

Related

Azure Spatial Anchor Region Availability

I want to create a wayfinding app for a specific building using ARCore.
Because the Google Cloud Anchors service has a 24-hour limit, I thought the Azure Spatial Anchors service might do the job.
But my location is in East Europe. According to the docs, East Europe is not yet supported.
Has anyone tried these services from my location?
I'd try using either West Europe or North Europe. They're the closest regions to eastern Europe that we have Azure Spatial Anchors in. (You can get a sense for the Azure network's RTT to various points on the globe from the Azure network round-trip latency statistics page. Those figures are between Azure data centers, so they don't take into account things like the app user's cell provider/ISP.)
Also, take a look at the effective anchor experience guidelines for some recommendations about building your UX. For example, consider designing the locating experience assuming the user will spend a few seconds doing it and may need to create a new anchor if an existing one cannot be found. Also, look for opportunities to create a watcher while something else is happening in your app, so that you can overlap multiple operations the user needs to wait for.
For example, in one of our apps, we start loading 3D assets and create a watcher at the same time. When the assets are done loading, we switch to an AR view, and, often, the anchor has also been located by the time the assets have loaded. If not, the UX can handle that case too.
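The Spatial Anchors SDKs are native (Unity/C#, Java, Objective-C), so the sketch below only illustrates the "overlap the waits" pattern described above; the two functions are placeholders standing in for real asset loading and real watcher/locate calls, not actual SDK APIs:

```typescript
// Placeholder for downloading and parsing 3D models.
async function load3dAssets(): Promise<void> {
  // ... fetch and prepare assets here
}

// Placeholder for creating a watcher and resolving once the anchor is found
// (or false after a timeout).
async function locateAnchor(anchorId: string): Promise<boolean> {
  return true;
}

async function enterArView(anchorId: string) {
  // Kick both operations off together instead of one after the other.
  const [, anchorFound] = await Promise.all([
    load3dAssets(),
    locateAnchor(anchorId),
  ]);

  if (anchorFound) {
    // Switch to the AR view with the anchor already placed.
  } else {
    // Fall back to the "keep looking / create a new anchor" UX.
  }
}
```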

Using Application Insights from multiple regions

I'm trying to find any documentation/advisory info on whether or not to use the same App Insights instance from multiple regions. I'm assuming that if I have an API App Service in East US, it's not recommended to use an App Insights instance from the West region, as it would add latency.
I just got feedback from the MS Application Insights team; the answer is that there is no performance issue:
Application Insights sends data to their backend asynchronously, so the actual network round-trip time should not matter.
Also, even though the App Insights resource is in the West region, the 'ingest' endpoints are globally distributed, and telemetry is always sent to the nearest available 'ingest' point.
Details are here.
As for an official document, I'm asking for one (but not sure if they would have one).
Hope it helps.
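For what it's worth, a minimal sketch with the Node.js applicationinsights SDK shows why the cross-region round trip doesn't sit on the request path (the instrumentation key below is a placeholder):

```typescript
import * as appInsights from "applicationinsights";

// Placeholder key; use your own connection string or instrumentation key.
appInsights
  .setup("InstrumentationKey=00000000-0000-0000-0000-000000000000")
  .start();

const client = appInsights.defaultClient;

export function handleRequest() {
  // trackEvent only enqueues the item; the SDK sends telemetry asynchronously
  // in batches, which is why cross-region latency does not slow the request itself.
  client.trackEvent({ name: "order-processed" });
}
```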

Not able to see Google Maps APIs Usage Statistics

The API statistics report in my Google Console project doesn't show data, even though we are making hundreds of requests every day. See the attached screenshot; it always shows blank.
I have billing enabled in the project.
API statistics started showing once I upgraded the plan.
I'm not sure whether it's Google's policy not to collect stats for the free plan in order to improve the performance of paid plans.

How to check which region is quickest from my country

I'm creating an Azure web site. My country lies between Europe and Asia. The Azure region option includes 5 regions (East Asia, West Europe, North Europe, West US, East US).
How do I choose which is quickest from my country?
It all depends on the routing to the Windows Azure datacenters. Have you tried testing the download speed, pings, ... to the Europe and Asia datacenters?
Try a tool like wget or even Visual Studio. With Visual Studio (I think you'll need Ultimate) you can create load tests which can perform different actions on your Web Site (like downloading files, loading pages, ...). Run this test on Web Sites deployed in different datacenters to get a better view of performance from your country (and you can use this information to choose where you want to deploy your Web Site).
Note that performance can still vary between ISPs. If performance is key for your application, consider using Cloud Services together with the Windows Azure Traffic Manager (configured with performance load balancing). The Traffic Manager can redirect your users to the closest datacenter in terms of performance.
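If you don't want to set up full load tests, a rough way to compare regions yourself is to time a small request against an endpoint you deploy in each candidate region. A sketch for Node 18+; the probe URLs are placeholders you would replace with your own test deployments:

```typescript
// Placeholder probe URLs: deploy something tiny in each candidate region first.
const endpoints: Record<string, string> = {
  "East Asia": "https://myprobe-eastasia.azurewebsites.net/ping",
  "West Europe": "https://myprobe-westeurope.azurewebsites.net/ping",
  "North Europe": "https://myprobe-northeurope.azurewebsites.net/ping",
  "West US": "https://myprobe-westus.azurewebsites.net/ping",
  "East US": "https://myprobe-eastus.azurewebsites.net/ping",
};

// Average the wall-clock time of a few requests per region.
async function probe(url: string, runs = 5): Promise<number> {
  let total = 0;
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    await fetch(url);
    total += performance.now() - start;
  }
  return total / runs;
}

async function main() {
  for (const [region, url] of Object.entries(endpoints)) {
    console.log(`${region}: ${(await probe(url)).toFixed(0)} ms average`);
  }
}

main();
```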

Minimize downtime in Azure

We are experiencing a very serious unscheduled downtime of our Azure application today, now coming up to 9 hours. We reported it to Azure support and the ops team is actively trying to fix the problem, and I do not doubt that. We managed to get our application running on another "test" hosted service that we have and redirected our CNAME to point at that instance, so our customers are happy, but the "main" hosted service is still unavailable.
My own "finger in the air" instinct is that the issue is network-related within our data center (West Europe), and indeed, later in the day the service dashboard went red for that region with a message to that effect. (Our application is showing as "Healthy" in the portal, but is unreachable via our cloudapp.net URL. Additionally, threads within our application are logging SQL connection exceptions into our storage account as it cannot contact the DB.)
What is very strange, though, is that the "test" instance I referred to above is also in the same data centre and has no issues contacting the DB, and its external endpoint is fully available.
I would like to ask the community: is there anything I could have done better to avoid this downtime? I obeyed the guidance with respect to having at least 2 role instances per role, yet I still got burned. Should I move to a more reliable data centre? Should I deploy my application to multiple data centres? How would I manage the fact that my SQL Azure DB is in the same datacentre?
Any constructive guidance would be appreciated - being a techie, I've never had a more frustrating day being able to do nothing to help fix the issue.
There was an outage in the European data center today with respect to SQL Azure. Some of our clients got hit and had to move to another data center.
If you are running mission critical applications that cannot be down, I would deploy the application into multiple regions. DNS resolution is obviously a weak link right now in Azure, but can be worked around (if you only run a website it can be done very simply using Response.Redirects or similar)
Now, there is a data synchronization service from Microsoft that will sync up multiple SQL Azure databases. Check here. This way, you can have mirror sites up in different regions and keep them in sync from a SQL Azure perspective.
Also, it would be a good idea to employ a 3rd-party monitoring service that can detect problems with your deployed instances externally. AzureWatch can notify you, or even deploy new nodes if you choose to, when some of the instances turn "Unresponsive".
Hope this helps
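One way to read the "Response.Redirects or similar" workaround mentioned above is a tiny front-door app, hosted independently of either region, that bounces users to whichever deployment is currently healthy. This is only a sketch of that idea (Express; the hostnames and the /health path are placeholders, not anything the original answer specifies):

```typescript
import express from "express";

// Placeholder hostnames for the two regional deployments.
const PRIMARY = "https://myapp-primary.cloudapp.net";
const SECONDARY = "https://myapp-secondary.cloudapp.net";

// Consider a deployment healthy if its health endpoint answers quickly with a 2xx.
async function isHealthy(base: string): Promise<boolean> {
  try {
    const res = await fetch(`${base}/health`, { signal: AbortSignal.timeout(3000) });
    return res.ok;
  } catch {
    return false;
  }
}

const app = express();

// Send every request to whichever deployment is currently reachable.
app.use(async (_req, res) => {
  const target = (await isHealthy(PRIMARY)) ? PRIMARY : SECONDARY;
  res.redirect(302, target);
});

app.listen(8080);
```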
I can offer some guidance based on our experience:
Host your application in multiple data centers, complete with SQL Azure databases. You can connect each application to its data-center-specific SQL Server. You can also cache any external assets (images/JS/CSS) on the data-center-specific Windows Azure machine or leverage Azure Blob Storage. Note: extra costs will be incurred.
Set up one-way SQL replication between your primary SQL Azure DB and the instance in the other data center. If you want to do bi-directional replication, take a look at the MSDN site for guidance.
Leverage Azure Traffic Manager to route traffic to the data center closest to the user. It has geo-detection capabilities, which will also improve the latency of your application. So you can map http://myapp.com to the internal URL of your data center, and a user in Europe should automatically get redirected to the European data center and vice versa for the USA. Note: at the time of writing this post, there is not a way to automatically detect and fail over to a data center. Manual steps will be involved once a failover is detected, and failover is a complete set (i.e. you will fail over both the Windows Azure AND SQL Azure instances). If you want micro-level failover, then I suggest putting all your config in the service config file and encrypting the values so you can edit the connection string to connect instance X to DB Y.
You are all set now. I would create or install a local application to detect the availability of the site. A better solution would be to create a page to check for the availability of application specific components by writing a diagnostic page or web service and then poll it from a local computer.
HTH
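Following up on the suggestion above about polling a diagnostic page from a local machine, here is a minimal polling sketch (Node 18+; the status URLs and the alerting hook are placeholders for your own diagnostic endpoints and notification mechanism):

```typescript
// Placeholder status URLs: point these at the diagnostic page on each deployment.
const deployments = [
  "https://myapp-westeurope.cloudapp.net/status",
  "https://myapp-eastus.cloudapp.net/status",
];

// Hit one status endpoint and report the result.
async function checkOnce(url: string): Promise<void> {
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(5000) });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    console.log(`${new Date().toISOString()} OK   ${url}`);
  } catch (err) {
    console.error(`${new Date().toISOString()} FAIL ${url}: ${err}`);
    // Hook an email/SMS/pager notification in here.
  }
}

// Poll every deployment once a minute.
setInterval(() => {
  for (const url of deployments) {
    void checkOnce(url);
  }
}, 60_000);
```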
As you're deploying to Azure, you don't have much control over how SQL Server is set up. MS have already set it up so that it is highly available.
Having said that, it seems that MS has been having some issues with SQL Azure over the last few days. We've been told that it only affected "a small number of users". At one point the service dashboard had 5 data centres affected by a problem. I had 3 databases in one of those data centres down twice for about an hour each time, but one database in another affected data centre that had no interruption.
If having a database connection is critical to your app, then the only way in the Azure environment to guard against problems that MS haven't prepared for (this latest technical problem, earthquakes, meteor strikes) would be to co-locate your SQL data in another data centre. At the moment the most practical way to do this is to use the Sync Framework. There is an ability to copy SQL Azure databases, but this only works within a data centre. With your data located elsewhere, you could then point your app at the new database if the main one becomes unavailable.
While this looks good on paper though, this may not have helped you with the latest problem as it did affect multiple data centres. If you'd just been making database copies on a regular basis, that might have been enough to get you through. Or not.
(I would have posted this answer on server fault, but I couldn't find the question)
This is just about a programming/architecture issue, but you may also want to ask the question on webmasters.stackexchange.com.
You need to find out the root cause before drawing any conclusions.
However, my guess is that one of two things was the problem:
The ISP connectivity differs for the test system and your production system. Either they use different ISPs, or different lines from the same ISP. When I worked in a hosting company we made sure that our IP connectivity went through at least two different ISPs who did not share fibre to our premises (and where we could, they had different physical routes to the building; the homing ability of backhoes when there's a critical piece of fibre to dig up is well proven).
Your datacentre had an issue with some shared production infrastructure. These might be edge routers, firewalls, load balancers, intrusion detection systems, traffic shapers, etc. These are typically only installed on production systems. Defences here involve understanding the architecture and making sure the provider has a (tested!) DR plan for restoring SOME service when things go pear-shaped. The neatest hack I saw here was persuading an IPS (intrusion prevention system) that its own management servers were malicious, so you couldn't reconfigure it at all.
Just a thought: your DC doesn't host any of the WikiLeaks mirrors, or PayPal/Mastercard/Amazon (who are getting DDoS'd by WikiLeaks supporters at the moment)?
