How high will the resolution be for OpenMapTiles Aerial/Satellite Imagery?

This is more of an inquiry.
I visited https://openmaptiles.com/satellite/ and it is my understanding that "Highres aerial imagery" is still to be added. My first question: will this imagery be comparable to the imagery we see in Google Maps and Bing Maps? I attached some of OpenMapTiles' lower-resolution imagery for comparison. Also, is there a scheduled release date for coverage of the US? Maybe I'm getting ahead of things here, but will the imagery be available for download for a one-time fee, and will it be as affordable as the OpenStreetMap tiles?

I guess this is related to the cost of acquiring high-resolution data. At the moment, free data sources such as Sentinel-2, Landsat-8, and CBERS-4 offer 5 m to 30 m resolution. Higher-resolution data like what you see in Google Maps and Bing Maps is very expensive, as it requires commercial satellite images or aerial photographs. Governments and local municipalities often provide high-resolution geospatial data for free on their websites (as open data), if you only need data for a few urban areas.

Related

Azure live video streaming: how do I calculate the cost by audience size?

I want to run a live event with 100k viewers or more for 3 hours... How can I calculate the cost?
I read the documentation, but the Azure docs are quite ambiguous regarding the pricing.
There is no real shortcut where you can just enter an audience size and get a result - different factors will impact your approach, including bitrates, support level, number of events, redundancy, and the geographic reach required.
A typical approach is to build a model, maybe in Excel, and test different combinations and scenarios in the Azure online cost calculator.
You can then experiment with changing details like regions, support level, etc. to see the impact.
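As a rough illustration of that kind of model, the sketch below estimates the delivery (egress) side of the bill from audience size, average bitrate, and event length. The per-GB price and the assumption that delivery dominates the cost are placeholders to check against the current Azure pricing pages and cost calculator.

```typescript
// Back-of-the-envelope streaming cost model (illustrative only).
// All prices here are placeholder assumptions, not Azure's actual rates.

interface EventParams {
  viewers: number;          // concurrent audience size
  bitrateMbps: number;      // average delivered bitrate per viewer
  durationHours: number;    // event length
  egressPricePerGB: number; // assumed blended CDN/egress price in USD
}

function estimateEgressCostUSD(p: EventParams): number {
  // Data per viewer: Mbps * 3600 s/h * hours -> megabits, then / 8 / 1000 -> GB
  const gbPerViewer = (p.bitrateMbps * 3600 * p.durationHours) / 8 / 1000;
  return gbPerViewer * p.viewers * p.egressPricePerGB;
}

// Example: 100k viewers for 3 hours at ~3 Mbps, with an assumed $0.05/GB egress price.
const estimate = estimateEgressCostUSD({
  viewers: 100_000,
  bitrateMbps: 3,
  durationHours: 3,
  egressPricePerGB: 0.05,
});
console.log(`Estimated delivery cost: ~$${estimate.toFixed(0)}`);
```

Encoding, packaging, storage, and support would be separate line items in the same spreadsheet, each varied per region and scenario in the cost calculator.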

How do I analyze a bunch of profiles

I have a bunch of profiles of my application (200+ profiles per week), and I want to analyze them to get some performance information.
I don't know what information can be extracted from these files - maybe
the slowest function, or the hot-spot code across different release versions?
I have no relevant experience in this field; has anyone done this before?
Well, it depends on the kind of profiles you have; you can capture a number of different kinds of profiles.
In general, most people are interested in the CPU profile and the heap profile, since they let you see which functions/lines use the most CPU and allocate the most memory.
There are a number of ways you can visualize the data and/or drill into it. You should take a look at the help output of go tool pprof and the pprof blog article to get going.
As for the amount of profiles: in general, people analyze them one by one. If you want an average over multiple samples, you can merge multiple samples with pprof-merge.
You can also subtract one profile from another to see relative changes, by using the -diff_base and -base flags.
Yet another approach is to extract the percentage per function from each profile and see how it changes over time. You can do this by parsing the output of go tool pprof -top {profile}. You can correlate this data with known events, like software updates or high demand, and see if one function starts taking up more resources than the others (it might not scale very well).
If you ever get to the point where you want to develop custom tools for analysis, you can use the pprof library to parse the profiles for you.
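A minimal sketch of that "track percentages over time" idea, written here in TypeScript for Node: it parses the text saved from go tool pprof -top for each profile and compares the flat% per function between two snapshots. The column layout is an assumption based on typical pprof output, so adjust the parsing if your version prints something different.

```typescript
// Parse saved `go tool pprof -top <profile> > weekN.txt` output and collect flat% per function.
// Assumes the usual table layout: flat  flat%  sum%  cum  cum%  name
import { readFileSync } from "node:fs";

function parseTopOutput(path: string): Map<string, number> {
  const percentages = new Map<string, number>();
  for (const line of readFileSync(path, "utf8").split("\n")) {
    const columns = line.trim().split(/\s+/);
    // Data rows have at least 6 columns and a percentage in the second column (e.g. "45.50%").
    if (columns.length >= 6 && columns[1].endsWith("%")) {
      percentages.set(columns.slice(5).join(" "), parseFloat(columns[1]));
    }
  }
  return percentages;
}

// Compare two snapshots and print the functions whose share of CPU grew the most.
const before = parseTopOutput("week1.txt");
const after = parseTopOutput("week2.txt");
const deltas = [...after].map(([fn, pct]) => ({ fn, delta: pct - (before.get(fn) ?? 0) }));
deltas.sort((a, b) => b.delta - a.delta);
console.table(deltas.slice(0, 10));
```

Running the same loop over many weekly files gives you a per-function time series you can plot against releases or traffic spikes.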

Will chunk/bundle optimisations help my website if First Input Delay (FID) is already low?

According to Core Web Vitals, there are only 3 core vitals for measuring the user experience of any website: LCP (Largest Contentful Paint), FID (First Input Delay) and CLS (Cumulative Layout Shift). According to PageSpeed Insights and the CrUX dashboard, the FID of my website is within good limits, i.e. 90% of users have an input delay of less than 100 ms.
Will there be any benefit to the user experience of people landing on my website if I do chunk optimisations (splitting, lazy loading)?
I understand that it will affect TBT (Total Blocking Time) and TTI (Time to Interactive), but that doesn't matter if my FID is already very low. Is my understanding correct?
I work on several large sites and we measure FID and TBT across thousands of pages. My work on this shows there is little correlation between TBT and FID. I have lots of pages reporting a TBT of 2s or more that are still in the 90% score for FID. So I would NOT spend money or time optimizing TBT; what I would do instead is optimize for something that you can correlate to a business metric. For instance, add some user timings to measure how fast a CTA button appears and when it becomes interactive. That is a metric that is useful.
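A small sketch of that user-timing idea with the standard User Timing API (performance.mark); the mark names, the #cta selector, and the reporting endpoint are made up for illustration.

```typescript
// Record when the CTA button is rendered and when it becomes interactive.
// Selector, mark names and endpoint are illustrative placeholders.
const cta = document.querySelector<HTMLButtonElement>("#cta");

if (cta) {
  performance.mark("cta-rendered");

  // "Interactive" here simply means the click handler has been attached.
  cta.addEventListener("click", () => {
    /* real click handling goes here */
  });
  performance.mark("cta-interactive");

  // A mark's startTime is milliseconds since navigation start.
  const timings = performance
    .getEntriesByType("mark")
    .filter((m) => m.name.startsWith("cta-"))
    .map((m) => ({ name: m.name, sinceNavStart: Math.round(m.startTime) }));

  // sendBeacon survives page unload; point it at your own analytics endpoint.
  navigator.sendBeacon("/analytics/user-timings", JSON.stringify(timings));
}
```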
Being in the green on the core web vitals report (for one or all metrics) is great, but it doesn't mean that you should not try to improve performance further. In fact, if all your competitors have better FID / CLS / LCP / etc. than you, you will be at a disadvantage. Generally speaking, I think the web vitals report can be used as a guide to continuously prioritise changes to try and improve performance.
While it's impossible to predict the improvements without looking at a current report and the codebase, it seems fair to expect code-splitting to improve FID and LCP, and lazy-loading to help with LCP. Both improvements would benefit users.
Note that TBT and FID are pretty similar.
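If you do go ahead with chunk optimisations, the usual building block is a dynamic import(), which bundlers such as webpack, Rollup and Vite turn into a separate lazily loaded chunk. A minimal sketch - the module path and element IDs are invented for illustration:

```typescript
// Load a heavy charting module only when the user asks for it,
// instead of shipping it in the main bundle. "./heavy-chart" is a placeholder path.
const showChartButton = document.querySelector<HTMLButtonElement>("#show-chart");

showChartButton?.addEventListener("click", async () => {
  const { renderChart } = await import("./heavy-chart");
  renderChart(document.querySelector<HTMLElement>("#chart-root")!);
});
```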

How to get unsampled data from the Google Analytics API for a specific day

I am building a package that uses the Google Analytics API for Python.
But in several cases, when I have multiple dimensions, the extraction by day is sampled.
I know that if I use sampling_level = LARGE it will use a more accurate sample.
But does anybody know if there is a way to shape a request so that you can extract one day without sampling?
Grateful for any help.
Setting the sampling level to LARGE is the only method we have to influence the amount of sampling, but as you already know this doesn't prevent it.
The only way to reduce the chances of sampling is to request less data. A reduced number of dimensions and metrics, as well as a shorter date range, are the best ways to ensure that you don't get sampled data.
This is probably not the answer you want to hear, but one way of getting unsampled data from Google Analytics is to use unsampled reports. However, this requires that you sign up for Google Marketing Platform. With these you can create an unsampled report request using the API or the UI.
There is also a way to export the data to BigQuery, but you lose the analysis that Google provides and will have to do that yourself. This too requires that you sign up for Google Marketing Platform.
There are several tactics for building unsampled reports; the most popular is splitting your report into shorter time ranges, down to hours. Mark Edmondson did great work on anti-sampling in his R package, so you might find it useful. You may start with this blog post: https://code.markedmondson.me/anti-sampling-google-analytics-api/
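The splitting idea is language-agnostic; here it is sketched in TypeScript. fetchReport is a stand-in for whatever client call you already use (for Python, the equivalent would wrap your existing reporting request) - the point is only to issue one request per day and stitch the rows back together.

```typescript
// Split a date range into single-day requests to keep each query under the
// sampling threshold, then concatenate the rows.
type Row = Record<string, string | number>;

// Stand-in for your real Google Analytics client call; replace the body with
// the actual request, passing the one-day date range through.
async function fetchReport(startDate: string, endDate: string): Promise<Row[]> {
  throw new Error(`fetchReport(${startDate}, ${endDate}) is a placeholder - plug in your GA client here`);
}

function* eachDay(start: string, end: string): Generator<string> {
  const day = new Date(start);
  const last = new Date(end);
  while (day <= last) {
    yield day.toISOString().slice(0, 10); // YYYY-MM-DD, the format GA expects
    day.setDate(day.getDate() + 1);
  }
}

async function fetchUnsampled(start: string, end: string): Promise<Row[]> {
  const rows: Row[] = [];
  for (const date of eachDay(start, end)) {
    rows.push(...(await fetchReport(date, date))); // same start and end: a one-day window
  }
  return rows;
}

// Usage: const rows = await fetchUnsampled("2021-01-01", "2021-01-31");
```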

How to embed a basic weather report for the current time and a fixed location in a web page?

What I need:
I need to output a basic weather report based on the current time and a fixed location (a county in the Republic of Ireland).
Output requirements:
Ideally plain text accompanied by a single graphical icon (e.g. sun behind a cloud).
Option to style output.
No adverts; no logos.
Free of charge.
Numeric Celsius temperature and short textual description.
I appreciate that my expectations are high, so treat the list more as a "wish list" than as hard demands.
What I've tried:
http://www.weather-forecast.com - The parameters for the iframe aren't configurable enough. Output is too bloated.
Google Weather API - I've played with PHP solutions to no avail; in any case, the API is apparently dead: http://thenextweb.com/google/2012/08/28/did-google-just-quietly-kill-private-weather-api/
My question:
Can anyone offer suggestions on how to embed a simple daily weather report based on a fixed location with minimal bloat?
Take a look at http://www.zazar.net/developers/jquery/zweatherfeed/
It's pretty configurable, although I'm not sure whether it's still too much info for your needs. I've only tried it with US locations; all you need is a zipcode. The examples show using locations from other countries, so I'm assuming it's a similar setup to get locations added for Ireland.
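If you would rather avoid a third-party widget entirely, a small script that calls a free weather API and renders plain text plus one icon covers most of the wish list. The sketch below assumes a keyless service such as Open-Meteo; the endpoint, the coordinates (roughly County Galway), and the response fields are assumptions to verify against the provider's documentation.

```typescript
// Fetch current conditions for a fixed location and render minimal, styleable output.
// Endpoint, coordinates and field names are assumptions - check your chosen provider's docs.
const LATITUDE = 53.27;  // approx. County Galway, Ireland (placeholder)
const LONGITUDE = -9.05;

// Tiny, deliberately incomplete mapping from provider weather codes to text + icon.
const CONDITIONS: Record<number, { text: string; icon: string }> = {
  0: { text: "Clear sky", icon: "☀" },
  2: { text: "Partly cloudy", icon: "⛅" },
  61: { text: "Light rain", icon: "🌧" },
};

async function renderWeather(target: HTMLElement): Promise<void> {
  const url =
    `https://api.open-meteo.com/v1/forecast?latitude=${LATITUDE}` +
    `&longitude=${LONGITUDE}&current_weather=true`;
  const data = await (await fetch(url)).json();

  const { temperature, weathercode } = data.current_weather;
  const condition = CONDITIONS[weathercode] ?? { text: "See provider docs", icon: "?" };

  // Plain text plus one icon; style it with your own CSS class.
  target.className = "weather-report";
  target.textContent = `${condition.icon} ${Math.round(temperature)}°C - ${condition.text}`;
}

renderWeather(document.querySelector<HTMLElement>("#weather")!);
```

No adverts or logos are injected, and the output is a single text node you can style freely; the trade-off is that you map weather codes to icons yourself.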
