Translator resource in a non-global region

I created a Translator resource in a specific (non-global) region, for example eastus.
When I call the global API "api.cognitive.microsofttranslator.com", is it guaranteed that the request will be processed in that specific region, or at least in the same geography? Or should I use the geography-specific URL (such as "api-nam.cognitive.microsofttranslator.com")?
I'm sending this header as part of the request: "Ocp-Apim-Subscription-Region".
Example URL I'm using: https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=en
Also, what happens in case of a disaster or BCDR event? Is the call guaranteed to stay within the geography, or will it simply fail?

Translator is a non-regional (global) service.
However, it allows you to pin requests to particular datacenters by using the geographical endpoints. Which datacenters handle requests for each endpoint is documented here:
https://learn.microsoft.com/en-us/azure/cognitive-services/translator/reference/v3-0-reference#base-urls
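If you want to keep traffic inside a particular geography, you can call the geographical base URL directly and pass the resource's region in the Ocp-Apim-Subscription-Region header. A minimal sketch; the key and region values are placeholders:

    // Call the North America geographical endpoint instead of the global one.
    const endpoint = "https://api-nam.cognitive.microsofttranslator.com";

    async function translateToEnglish(text: string): Promise<unknown> {
      const res = await fetch(`${endpoint}/translate?api-version=3.0&to=en`, {
        method: "POST",
        headers: {
          "Ocp-Apim-Subscription-Key": "<your-translator-key>",   // placeholder
          "Ocp-Apim-Subscription-Region": "eastus",               // region of the Translator resource
          "Content-Type": "application/json",
        },
        body: JSON.stringify([{ Text: text }]),
      });
      if (!res.ok) throw new Error(`Translator returned ${res.status}`);
      return res.json();   // [{ translations: [{ text, to }] }]
    }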

Related

Creating custom metric descriptors continually results in HTTP 500

I think I've broken my project's custom metrics.
Earlier yesterday, I was playing around with the cloud monitoring api, and I created a metric descriptor and added some time series data to it using the latest python3 cloud monitoring library create_time_series call. Satisfied with the results, I deleted the descriptor using the library, which threw an error as I had incorrectly passed in the descriptor's name. I called it again with the correct name, and it succeeded, but now every call to create_time_series on this project fails with an HTTP 500. The error message included simply says to "Try again in a few seconds," which I have, to no avail.
I have verified that I can create time series data on other projects of mine, and it works as expected. The API Explorer available in google's API documentation for metrics also gets an HTTP 500 back on calls to this project, but works fine on others. CURLing requests yields the same results.
My suspicion is that I erroneously deleted the custom.googleapis.com endpoint in its entirety, and that is why I am unable to create new metric descriptors/time series data. Is there a way to view the state of this endpoint, or recreate it?
It is not possible to delete the data stored in your Google Cloud project, but deleting the metric descriptor renders the data inaccessible. Also, per the data retention policy, the data is deleted once it expires.
To delete your custom metric descriptor, call the metricDescriptors.delete method. You can follow the steps in this guide.
You are calling CreateMetricDescriptor every time you call CreateTimeSeries. Some or all of these calls specify no metric labels, so they overwrite the metric descriptor with one that has no labels. The calls to CreateTimeSeries, on the other hand, do specify metric labels, causing the labels to be auto-added to the descriptor.
Custom metric names typically begin with custom.googleapis.com/, which differs from the built-in metrics.
When you create a custom metric, you define a string identifier that represents the metric type. This string must be unique among the custom metrics in your Google Cloud project and it must use a prefix that marks the metric as a user-defined metric. For Monitoring, the allowable prefixes are custom.googleapis.com/ and external.googleapis.com/prometheus. The prefix is followed by a name that describes what you are collecting. For details on the recommended way to name a custom metric, see Naming conventions.
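Putting the pieces above together, here is a rough sketch using the Node.js client library (@google-cloud/monitoring); the project id and metric type are placeholders, and the original question used the Python client, where the calls are analogous:

    import * as monitoring from "@google-cloud/monitoring";

    const client = new monitoring.MetricServiceClient();
    const projectId = "my-project";                       // placeholder
    const metricType = "custom.googleapis.com/my_metric"; // hypothetical metric

    async function resetDescriptor(): Promise<void> {
      // Delete the (possibly corrupted) descriptor; the time series data becomes
      // inaccessible but is only removed when it expires.
      await client.deleteMetricDescriptor({
        name: client.projectMetricDescriptorPath(projectId, metricType),
      });

      // Recreate it once, with its labels, instead of recreating it on every
      // createTimeSeries call (which would overwrite the labelled descriptor).
      await client.createMetricDescriptor({
        name: client.projectPath(projectId),
        metricDescriptor: {
          type: metricType,
          metricKind: "GAUGE",
          valueType: "DOUBLE",
          labels: [{ key: "store_id", valueType: "STRING", description: "Store ID" }],
        },
      });
    }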

Serving some files from Azure CDN to only certain users

Suppose I have a website that is served by an Azure CDN endpoint (via files that have been uploaded to blob storage).
I want the minified website content to be available to everyone -- that part is easy, since that's what the CDN does by default.
Ideally, I would also have the sourcemaps available on that same CDN (so that the default behavior of //# sourceMappingURL=0-8d1d0e3cc4594b2c2758.js.map within my JS files would "just work"). However, I'd like for those sourcemaps to only be served to a subset of users.
Is there a way of accomplishing this scenario? I'm happy to define "subset" in any way that would make this scenario work (e.g., being connected to a certain VPN, being in a certain IP-address range, using Fiddler to set a secret header, etc.).
Thanks!
I assume that what you need is to build a system that, in production, offers sourcemaps to a certain group of users (for instance, a team of developers) but not to everyone; the sourcemaps should not be publicly accessible.
There are different alternatives that can help achieve this goal.
On the one hand, we can try to use a rules engine that analyzes the received HTTP traffic and offers one or the other response depending on the criteria deemed appropriate.
These rules engines allow you to customize how HTTP requests are handled by defining a set of match conditions on the incoming requests and the actions to be performed when those conditions apply.
Azure CDN provides two types of rules engine: a standard rules engine for Azure CDN from Microsoft, and a premium one for Azure CDN from Verizon, which provides more advanced features.
How you use these rules engines depends largely on how you need to identify your user group and on what you want to do to condition the response your application offers to a sourcemap request.
For instance, one of the standard rules engine's match conditions (also available in the premium rules engine) is the remote IP address the request comes from: maybe it could be a good criterion to discriminate between your different subsets of users.
Or, as you suggested with the use of Fiddler, you can inspect the incoming request headers in search of a custom one.
The Azure CDN from Verizon premium rules engine provides more advanced match conditions based on browser, device type, et cetera.
Once the users have been identified, the system must consider the action to take depending on whether they belong to one or another group.
Both the standard and the Verizon rules engines provide actions that could be relevant for this purpose.
I think the best option, if you can use the Verizon rules engine, would be to deny access to the HTTP requests sent by users who do not belong to the group allowed to access the sourcemaps.
Other options, although I think they are harder to implement if you are working with webpack and a SPA, are to redirect the requests received from one subset of users to files that contain the sourcemaps (or to different index.html pages if your frontend is a SPA, each with different JS and CSS resources, with or without sourcemaps), or to rewrite the URL so that a different set of files is delivered directly.
Another possible action is to not include the inline sourcemap location in your minified files and instead take advantage of the ability to modify response headers, appending a SourceMap header that points to the actual sourcemaps. This header would only be sent for the desired user group. Again, depending on how you build your frontend, this may not be an easy task.
Finally, if you are using webpack and the SourceMapDevToolPlugin to build your frontend, you can use the publicPath option to point your production sourcemaps to a non-public, more developer-oriented URL. This is the approach followed in this article, and I think it is also worth looking into.
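For reference, a minimal webpack sketch of that last approach; the sourcemap host below is a hypothetical internal URL:

    import webpack from "webpack";

    const config: webpack.Configuration = {
      devtool: false,   // disable the default devtool so the plugin takes over
      plugins: [
        new webpack.SourceMapDevToolPlugin({
          filename: "sourcemaps/[file].map",
          // Hypothetical non-public host, reachable only by the developer subset;
          // the appended //# sourceMappingURL comment will point here.
          publicPath: "https://sourcemaps.internal.example.com/",
        }),
      ],
    };

    export default config;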

Progressive web app within the scope of another progressive web app

I wish to make a progressive web app for a domain www.xyz.com and another progressive web app for the domain www.xyz.com/abc. How should I go about it and what will be the behaviour of service workers in both the cases? Will there be 2 service worker registrations or a single one will work for both the PWAs? Also, should I make 2 manifest files for them?
You can use a single PWA for both URLs. When you register your service worker, its scope covers its own folder and the folders below it.
If you place it at the root level, one service worker can serve both URLs, without making things more complex by introducing nested constructions.
You can easily define different caching strategies within the same service worker.
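For example, a single service worker can route requests to different strategies by URL path; a rough sketch (file and path names are illustrative, and it assumes the TypeScript "webworker" lib):

    // sw.ts - served from the site root
    declare const self: ServiceWorkerGlobalScope;

    self.addEventListener("fetch", (event: FetchEvent) => {
      const url = new URL(event.request.url);

      if (url.pathname.startsWith("/abc/")) {
        // Network-first for the nested app, falling back to the cache when offline
        event.respondWith(
          fetch(event.request).catch(async () =>
            (await caches.match(event.request)) ?? Response.error()
          )
        );
      } else {
        // Cache-first for everything else
        event.respondWith(
          caches.match(event.request).then((cached) => cached ?? fetch(event.request))
        );
      }
    });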
If you want to deepen the PWA topic, you can have a look at the articles I wrote about them, starting from basic concepts and then going deeper with more advanced themes.
Update
The scope of the Service Worker (SW) determines which HTTP calls it can intercept.
If the scope is '/', your SW will intercept all HTTP requests made by your app. If it were '/user', the SW could not intercept a request that falls outside its scope (e.g. /orders/code.js), but it would intercept all GET calls under the user root, like /user/details.
In your case you can therefore have two SWs (even on the same page) with different scopes to intercept; you just have to be careful that their scopes do not overlap.
If you also plan to cache static assets, keep in mind that the default scope of a SW is the folder where the SW file is stored, plus its subfolders.
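In practice the registration could look like the sketch below; the file names are illustrative, and both SW files are assumed to be served from the site root so each registration may claim a scope at or below it:

    // main.ts - registration code run from the page
    if ("serviceWorker" in navigator) {
      navigator.serviceWorker.register("/sw-root.js", { scope: "/" });
      navigator.serviceWorker.register("/sw-abc.js", { scope: "/abc/" });
      // A page under /abc/ is controlled by the registration whose scope is the
      // longest matching prefix (here /sw-abc.js); everything else falls to /sw-root.js.
    }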
Regarding multiple manifest files, Jeff gave an answer about that on SO.

What is the difference (if any) between a route and an endpoint in the context of a RESTful API?

Question
I have a probably rather simple question, but I'm unable to find an answer with nice explanations:
What is the difference (if any) between a route and an endpoint in the context of a RESTful API developed within a Node.js / Express application (but these concepts may be broader?!)?
(Does it relate to URLs in some way?)
Example
For example, in this article: https://medium.com/@purposenigeria/build-a-restful-api-with-node-js-and-express-js-d7e59c7a3dfb we can read:
We imported express which we installed at the beginning of the course, app.get makes a get request to the server with the route/endpoint provided as the first parameter, the endpoint is meant to return all the todos in the database.
These concepts are used interchangeably, which makes me confused.
(please note that I'm a 100% beginner with REST API, nodejs and express but I try to do my best to learn).
Edit
The first two answers (chronologically speaking) make me even more confused, as they are perfectly contradictory.
3 different concepts here:
Resource: {id: 42, type: employee, company: 5}
Route: localhost:8080/employees/42
Endpoint: GET localhost:8080/employees/42
You can have different endpoints for the same route, such as DELETE localhost:8080/employees/42. So endpoints are basically actions.
Also you can access the same resource by different routes such as localhost:8080/companies/5/employees/42. So a route is a way to locate a resource.
Read more: Endpoint vs. route
Read more: Endpoint vs. resource
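In Express terms (the question's context), that distinction might look like the sketch below; the route path, port, and handlers are illustrative:

    import express from "express";

    const app = express();

    // One route (path pattern)...
    const route = "/employees/:id";

    // ...with two endpoints (method + route), each performing a different action.
    app.get(route, (req, res) => {
      res.json({ id: Number(req.params.id), type: "employee", company: 5 });
    });

    app.delete(route, (req, res) => {
      res.status(204).send();
    });

    app.listen(8080);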
Route
URI path used to access the available endpoints.
example: http://www.mywebsite.com/
Endpoint
performs a specific action.
has one or more parameter(s).
returns back data.
example: GET http://www.mywebsite.com/Products
A Route is the URI, and the Endpoint is the action performed on the URI.
Routes and endpoints are associated concepts - you can't really have one without the other.
What is an endpoint?
Generally speaking, an "endpoint" is one end of a communication channel where one system interacts with another system. This term is also used similarly in networking.
For a typical web API, endpoints are URLs, and they are described in the API's documentation so programmers know how to use/consume them. For example, a particular web API may have this endpoint:
GET https://my-api.com/Library/Books
This would return a list of all books in the library.
What is a route?
A "route" is typically code that matches incoming request paths to resources. In other words, it defines the URL and what code will be executed. A route path might contain regular expressions, patterns, parameters, and involve validation. For example, consider this route path:
"{controller}/{action}/{id?}"
In ASP.NET, pattern matching is applied, so GET https://my-api.com/Library/Books/341 would call the Books public method on the Library class, passing a parameter of 341. Routing frameworks can be very flexible and versatile.
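For comparison, in Express (the stack from the question) an analogous pattern-matched route might look like this; the path and port are illustrative:

    import express from "express";

    const app = express();

    // A route defined with a path pattern: GET /Library/Books/341 is matched here
    // and the handler runs with req.params.id === "341".
    app.get("/Library/Books/:id", (req, res) => {
      res.json({ requestedBook: req.params.id });
    });

    app.listen(3000);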
The simplest example of an endpoint is to put a file you want to be consumed (say data.json) inside the public_html folder of your web server. It can be reached by GET https://my-api.com/data.json. The routing is handled by the web server out of the box and no routing code is required.
Some good things to read next:
Express.js - Routing
Wordpress Developer Resources - Routes and Endpoints
When to use "client-side routing" or "server-side routing"?
Endpoints are basically used to perform a specific task and return data; an endpoint is, in a way, part of a route.
For example, the two URLs listed under "route" below are both routes, but they return different data, so we can say that the last two path segments (the question id and the slug) act as the endpoint.
endpoints:
/56075017/difference-between-route-and-endpoint
/56040846/how-to-use-the-classweight-option-of-model-fit-in-tensorflow-js
route:
https://stackoverflow.com/questions/56075017/difference-between-route-and-endpoint
https://stackoverflow.com/questions/56040846/how-to-use-the-classweight-option-of-model-fit-in-tensorflow-js
In this example: http://example.com/my-website/comments/123:
Route:
my-website/comments/123
Endpoints: (a fancy word for a URL with an action)
GET http://example.com/my-website/comments/123 returns the comment data.
DELETE http://example.com/my-website/comments/123 deletes the comment and returns the now-deleted comment data.

Customizing Zuul url endpoints for services in JHipster

I have a group of microservices, called "client-Foo", where Foo is the name for some particular third-party client.
Using those names as-is creates some really ugly endpoint URLs, so I want to translate them to a much nicer hierarchical form.
I added a custom PatternServiceRouteMapper that takes the serviceId client-Foo and turns it into the URL path client/Foo.
This gives the URL I want, but it also breaks the service mapping, because it changes the registered serviceId to client/Foo as well; when Zuul goes to route the request it fails, because there is no client/Foo service: its id is client-Foo!
I cannot hardcode any paths because the application requires an arbitrary number of different "client-*" services.
By looking at ZuulProxyAutoConfiguration, you can see that you can replace some beans to achieve your goal; in particular, you should consider providing your own implementation of the RouteLocator interface or extending the DiscoveryClientRouteLocator class.
Your service instances could also register in the Eureka server with additional data in their metadataMap, which your RouteLocator could then use.
I would simply add a configuration-defined Zuul route:
    zuul:
      routes:
        client-foo: /foo/**
Also, I would advise against having a dash in a service id, as it can confuse the config server API (in /config/foo-profile.yml, where profile is the Spring profile for which you want to get the config).

Resources