Progressive web app within the scope of another progressive web app - nested

I wish to make a progressive web app for the domain www.xyz.com and another progressive web app for the path www.xyz.com/abc. How should I go about it, and what will be the behaviour of the service workers in both cases? Will there be 2 service worker registrations, or will a single one work for both PWAs? Also, should I make 2 manifest files for them?

You can have only one PWA for both URLs. When you create your service worker, its scope covers the folder it is served from and that folder's children.
If you place it at the root level, you can use one service worker for both URLs, without the need to make things more complex by introducing nested constructions.
You can define different caching strategies within the same service worker with ease.
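For example (a minimal sketch; the /abc path and the cache behaviour chosen for each branch are just illustrative), a single service worker placed at the root can branch on the request path:

    // sw.js, served from the site root so its scope defaults to '/'
    self.addEventListener('fetch', (event) => {
      const url = new URL(event.request.url);

      if (url.pathname.startsWith('/abc/')) {
        // Hypothetical strategy for the /abc app: network first, cache as fallback.
        event.respondWith(
          fetch(event.request).catch(() => caches.match(event.request))
        );
      } else {
        // Hypothetical strategy for the rest of the site: cache first, then network.
        event.respondWith(
          caches.match(event.request).then((hit) => hit || fetch(event.request))
        );
      }
    });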
If you want to dig deeper into the PWA topic, you can have a look at the articles I wrote about it, starting from the basic concepts and then moving on to more advanced themes.
Update
The scope property of the manifest file determines which HTTP calls can be intercepted by the Service Worker (SW).
If you have scope: '/', your SW will intercept all HTTP requests made by your app. If the scope were /user, the SW could not intercept a request that falls outside it (e.g. /orders/code.js), but it would intercept all GET calls under the /user root, like /user/details.
In your case you can therefore have two SWs (even on the same page), each with its own scope to intercept; you just have to be careful not to overlap the scopes.
If you also plan to cache static assets, keep in mind that the scope of a SW is given by the folder where the SW file is stored, plus its subfolders.
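As a minimal registration sketch (file names are just examples; the scopes mirror the layout in the question), each app can register its own SW from a folder that matches the scope it should control:

    // Registered from a page of the main app (www.xyz.com)
    navigator.serviceWorker.register('/sw.js');       // default scope: '/'

    // Registered from a page of the nested app (www.xyz.com/abc)
    navigator.serviceWorker.register('/abc/sw.js');   // default scope: '/abc/'

Note that when scopes overlap like this, a page under /abc/ is controlled by the registration with the most specific matching scope.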
About multiple manifest files, Jeff gave an answer on SO.


Serving some files from Azure CDN to only certain users

Suppose I have a website that is served by an Azure CDN endpoint (via files that have been uploaded to blob storage).
I want the minified website content to be available to everyone -- that part is easy, since that's what the CDN does by default.
Ideally, I would also have the sourcemaps available on that same CDN (so that the default behavior of //# sourceMappingURL=0-8d1d0e3cc4594b2c2758.js.map within my JS files would "just work"). However, I'd like for those sourcemaps to only be served to a subset of users.
Is there a way of accomplishing this scenario? I'm happy to define "subset" in any way that would make this scenario work (e.g., being connected to a certain VPN or being in a certain IP-address range; or using Fiddler to set a secret header; etc.)
Thanks!
I assume that what you need is a system that, in production, offers sourcemaps to a certain group of users, for instance a team of developers, but not to everyone; the sourcemaps should not be publicly accessible.
There are different alternatives that can help achieve this goal.
On the one hand, we can try to use a rules engine that analyzes the incoming HTTP traffic and serves one response or another depending on the criteria deemed appropriate.
These rules engines allow you to customize how HTTP requests are handled by defining match conditions on the incoming requests and actions to be performed when those conditions apply.
Azure CDN provides two types of rules engine: a standard rules engine for Azure CDN from Microsoft, and a premium one from Verizon, which provides more advanced features.
How you use these rules engines depends largely on how you need to identify your user group and how you want to condition the response your application gives to a sourcemap request.
For instance, one of the standard rules engine's match conditions - also available in the premium rules engine - is the remote IP address the request comes from: it could be a good criterion for discriminating between your different subsets of users.
Or, as you suggested with the use of Fiddler, you can inspect the incoming request headers in search of a custom one.
The Azure CDN Verizon Premium rules engine provides more advanced match conditions based on browser, device type, and so on.
Once the users have been identified, the system must take an action depending on whether they belong to one group or the other.
Both the standard and the Verizon rules engines provide actions that could be relevant for this purpose.
I think that the best option, if you can use the Verizon rules engine, would be to deny access to the HTTP requests sent by users who do not belong to the group allowed to access the sourcemaps.
Other options, although I think harder to implement if you are working with webpack and an SPA, are to redirect the requests received from one subset of users to the files that contain the sourcemaps - or to different index.html pages if your frontend is an SPA, each referencing different js and css resources, with or without sourcemaps - or to rewrite the URL to deliver a different set of files directly.
Another possible action could be to not include the sourcemap location comment in your minified files at all and instead take advantage of the ability to modify response headers, appending a SourceMap header that points to the actual sourcemaps. This header would only be sent for the desired user group. Again, depending on how you are building your frontend, this may not be an easy task.
Finally, if you are using webpack and the SourceMapDevToolPlugin to build your frontend, you can use the publicPath option to point your sourcemaps, in production, to a non-public, more developer-oriented URL. This is the approach followed in this article, and I think it is also worth looking into.
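A rough webpack sketch of that idea (the internal host name below is made up):

    // webpack.config.js
    const webpack = require('webpack');

    module.exports = {
      // Disable the default devtool so the plugin below controls sourcemap emission.
      devtool: false,
      plugins: [
        new webpack.SourceMapDevToolPlugin({
          filename: 'sourcemaps/[file].map',
          // sourceMappingURL comments will point at this non-public location
          // instead of the CDN that serves the minified bundles.
          publicPath: 'https://dev-internal.example.com/',
        }),
      ],
    };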

Storing assets within the AWS ecosystem

I am a newbie to AWS development (but have extensive experience in traditional development).
I need to build a web app with a ReactJS frontend, a NodeJs/Express backend, and MySQL. It's a SaaS app, possibly with thousands of clients. There will be a use case where we have a parent client with hundreds of child clients.
So, a parent-child relationship within the clients themselves. A child's settings supersede its parent's. Each client (child or parent) will have its own logo and style. A child may or may not override logos and styles; if it doesn't, it inherits them from its parent client, and so on.
I can handle logos/styles/settings at the time of a client's onboarding using some configuration tool. That is, I will upload/change the logos/styles/settings for parent and/or child clients at the time of the client's implementation. I also need the ability to change these logos/styles/settings later, whenever clients demand it.
What are my options for designing the app (again, I am a newbie to AWS)?
Storage-wise, what's the best place to store logos/styles/settings? If AWS S3, will it provide some folder layout to handle parent-child, or should I dump all images/styles (css) in a single folder with the client's prefix on each item?
The other option is pulling images/styles/settings at runtime when the site renders. I would then have to determine parent-child relationships for every click on the web app and work out where to grab the resources from. That's a little overhead at runtime, since I am pushing the parent-child logic to runtime instead of configuration time (one time).
Any thoughts/alternate designs/suggestions/pros & cons with respect to the AWS environment?
Assets are definitely best placed in Amazon S3; each asset is referred to as an object within Amazon S3. You give the object a key such as client/main.css. By doing this you can separate each client into its own prefix (which may look like a subfolder within the GUI).
With settings it depends how sensitive they are: if they are simply for your frontend, you could store a JSON file in S3 within the same prefix as your assets. Otherwise, if there should be some security over the settings, you can use DynamoDB, which boasts "consistent single-digit millisecond latency".
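A rough sketch of that per-client prefix layout with a child-to-parent fallback, using the AWS SDK for JavaScript v3 (the bucket name, key layout, and error handling here are assumptions, not a prescription):

    import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';

    const s3 = new S3Client({ region: 'us-east-1' });

    // Keys like clients/<clientId>/settings.json give each client its own prefix.
    async function loadSettings(childId: string, parentId: string): Promise<string> {
      for (const clientId of [childId, parentId]) {
        try {
          const res = await s3.send(new GetObjectCommand({
            Bucket: 'my-saas-assets',
            Key: `clients/${clientId}/settings.json`,
          }));
          return await res.Body!.transformToString();
        } catch (err: any) {
          if (err.name !== 'NoSuchKey') throw err; // only fall back when the child has no file
        }
      }
      throw new Error(`No settings found for ${childId} or ${parentId}`);
    }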
As Chris Williams has already mentioned, use S3 as your raw data store for images, js, css, html, and other assets. Additionally, you can set up a CloudFront distribution in front of these assets to serve them quickly to your customers. CloudFront has edge support as well, so your website will be performant globally.
There are a lot of great resources online on serving website content with S3 + CloudFront.

Angular6 micro front-ends routing

I have an Angular 6 micro front-end application. It has 4 different applications inside the main application. How do I implement routing between those applications? And how do I implement routing in the main application (I have many child routes in the main application) and in the sub-applications too? I am using "@angular/elements".
Please find my code in this repository: https://github.com/nagaraju123/microfrontend
Routing for a "true" microfrontend architecture should follow this pattern:
Each microfrontend is a separate service in your infrastructure
You have an ingress/reverse proxy in front of these services that allows routing to a specific service based on path
You have a single domain name: app.yoursite.com
You configure the ingress to route to the correct microfrontend based on path (e.g. /namespace/accounting goes to the accounting frontend)
The microfrontends themselves control how they make requests (e.g. the accounting frontend serves some accountingPage.js, and code within that page will make all fetch requests with prefix: /namespace/accounting)
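A minimal sketch of the last item above, for a hypothetical accounting microfrontend (the prefix and endpoint names are made up):

    // Code bundled with the accounting microfrontend (e.g. accountingPage.js).
    const PREFIX = '/namespace/accounting';

    // Every request stays under the microfrontend's own prefix, so the
    // ingress/reverse proxy routes it back to the accounting service.
    async function getInvoices(): Promise<unknown> {
      const res = await fetch(`${PREFIX}/api/invoices`);
      return res.json();
    }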
Summary:
It really depends on what you mean by "microfrontend" though. Often when people say microfrontend, they refer to creating separate JS bundles, but still sharing a single backend.
A "true" microfrontend architecture achieves total encapsulation of both the static assets/javascript and the backend/request handlers. Separation of concerns, not separation of technologies. Code served by one microfrontend is totally isolated from code served by another... stitched together by a common "platform" service.

IIS worker threads issue

I have my site hosted on IIS. The site has a feature that calls a WCF service and then returns the result. The issue is that while the site is processing the call to the WCF service, a request to another page freezes and does not return content quickly (even though it is just static content). I set up two Chrome instances with different iMacros scripts: one calls the page that requests the WCF service, and the other calls a page that is just static content. I can see that when the first page (the one that requests the WCF service) freezes, the other page also freezes, and when the first is released, the second is too.
Do I need to reconfigure something in my Web.config, or should I do something else to make it possible to get static content immediately?
I think that there are two separate problems here:
Why does the page that uses the WCF service freeze
Why does the static content page freeze
On the page that calls the WCF service, a common problem is that the WCF client is not closed. By default there are 10 WCF connections with a timeout of 1 minute. The first 10 calls go fine (say they execute in 2 seconds each); then the 11th call comes, there are no free WCF connections, and it must therefore wait 58 seconds for a connection to time out and become available.
As for why your static page freezes: it could be that your client only allows one connection to the site, so the request for the static page is not sent until the request for the page with the WCF service is complete.
You should check the IIS logs to see how much time IIS reports the request is taking.
I would say that this is a threading issue. This MSDN KB article has some suggestions on how to tune your ASP.NET threading behavior:
http://support.microsoft.com/kb/821268
From the article - ...you can tune the following parameters in your Machine.config file to best fit your situation:
maxWorkerThreads
minWorkerThreads
maxIoThreads
minFreeThreads
minLocalRequestFreeThreads
maxconnection
executionTimeout
To successfully resolve these problems, do the following:
Limit the number of ASP.NET requests that can execute at the same time to approximately 12 per CPU.
Permit Web service callbacks to freely use threads in the ThreadPool.
Select an appropriate value for the maxconnections parameter. Base your selection on the number of IP addresses and AppDomains that are used.
etc...
Consider this scenario: when you make a request to IIS, your app changes, deletes, or creates some file outside the App_Data folder. This often tends to be a log file that was mistakenly put in the app's bin folder. The file system changes lead to IIS reloading the AppDomain, as it thinks the app was changed, hence the delay you experience. This may or may not apply to your issue, but it is a common mistake in ASP.NET apps.
Well, maybe there is no problem...
It may just be the browser's limit on simultaneous requests to the same domain.
Until the browser has finished the request to the first page (the WCF page), it won't send the request to the second page (the static one).
Try this:
Use different browsers for each page (for example chrome/firefox).
Or open the second page in chrome in incognito window (Ctrl + Shift + N).
Or try to access each page from a different computer.
You could try to use AppFabric and see what is wrong with your WCF services http://msdn.microsoft.com/en-us/windowsserver/ee695849

How is a web application started? Where is the entry point (if there's one)?

I am using IIS to develop some web applications. I used to believe that every application should have an entry point. But it seems a web application doesn't have one.
I have read many books and articles addressing how to build an ASP.NET application under IIS, but they are just not addressing the most obvious and basic thing that I want to know.
So could anyone tell me how a web application is started? What's the difference between a traditional desktop application and a web application in terms of their working paradigm, such as the starting and terminating logic?
Many thanks.
Update - 1 - 23:14 2011/1/4
My current understanding is:
When a request arrives, the URL contained in the request will be extracted by IIS. I guess IIS must maintain some kind of internal table which maps a URL to the corresponding physical directory on disk. Let's take the following URL as an example:
http://myhost/webapp/page1.aspx
With the help of the aforementioned internal table, IIS will locate the page1.aspx file on disk. Then this file is checked and the code-behind code file is located. Then the proper page class instance will be constructed and the methods defined in the code-behind file will be invoked in a pre-defined order. The output of this series of method invocations will be the response sent to the client.
Update - 2 - 23:32 2011/1/4
The URL is nothing but an identifier that serves as an index into the aforementioned internal table. With this index, IIS (or any web server technology) can find the physical location of the resource. Then, with some hint (such as a file extension like *.aspx), the web server knows which handler (such as the ASP.NET ISAPI handler) should be used to process that resource. That chosen handler knows how to parse and execute the resource file.
So this also explains why a web server should be extensible.
It depends what language and framework you are using, but broadly there are a number of entry points that will be bound to HTTP requests (e.g. by URL). When the server receives a request that matches one of these bindings, the bound code is executed.
There may also be various filter chains and interceptors that are executed based on other conditions of the request. There will probably also be some set-up code that the server executes when it starts up. Ultimately, there is still a single entry-point - the main() function of the server - but from the web application's perspective it is the request bindings that matter.
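As one illustration (an Express app, chosen only to make the idea concrete; the route and port are hypothetical), the request bindings are the entry points the web application cares about, while the traditional single entry point still exists at the server level:

    import express from 'express';

    const app = express();

    // Set-up code: runs once when the server process starts.
    app.use(express.json());

    // A request-bound entry point: this runs only when GET /webapp/page1 arrives.
    app.get('/webapp/page1', (req, res) => {
      res.send('<h1>Page 1</h1>');
    });

    // The traditional single entry point of the whole process.
    app.listen(8080, () => console.log('listening on 8080'));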
Edit in response to question edits
I have never used IIS, but I would assume there is no "lookup table", but instead some lookup rules. I shall talk you through the invocation of a .jsp page on an Apache server, which should be basically the same process.
The webapp is written and placed in the file system - e.g. C:/www/mywebapp
The web server is given a configuration rule telling it that the URL path /webapp/ should be mapped to C:/www/mywebapp
The web server is also configured to recognise .jsp files as being JSP servlets
The web server receives a request for /webapp/page1.jsp, this is dispatched to a worker thread
The web server uses its mapping rules to locate C:/www/mywebapp/page1.jsp
The web server wraps the code in the JSP file in a class with a method serveRequest(request, response) and compiles it (if it has not already done so)
The web server calls the serveRequest function, which is now the entry point of the user code
When the user code is finished, the web server sends the response to the client, and the worker thread terminates
This is the most basic system - resource-based servlets (i.e. .jsp or .aspx files). The binding rules become much more complicated when using technologies like MVC frameworks, but the essential concepts are the same.
Similar to what OrangeDog mentioned in his answer, there is plenty that goes on when serving these pages before you even get to your code.
Not only in asp.net mvc, but in asp.net in general there are various pieces that come into play when you're executing a request.
There is code such as modules, handlers, etc. that again does processing before the request gets to the code of the page. Additionally, you can map the same page to process different URLs.
The concept of a handler in asp.net is important, as there are various handlers that are responsible for processing requests that match extensions and/or HTTP verbs (GET, HEAD, POST). If you take a look into %systemroot%\Microsoft.NET\Framework64\v4.0.30319\Config\web.config, you can see the handlers section. You can also see the handlers in IIS (these can be changed per site).
For example, the HttpForbiddenHandler is one that simply rejects the request. It is configured to be called for special files like the sources "*.cs".
You can define your own handler, which is nothing more than a class that implements the IHttpHandler interface. It has two members: a ProcessRequest method and an IsReusable property. This is more similar to your cgi program, as the implementation is mainly a method that produces HTML or any other type of output based on the information in the request.
Asp.net pages build on top of that, and have plenty of extra features meant to make it easier for you to develop pages. You implement a class that inherits from Page, and there are 2 code files associated with it (.aspx and .cs). The same can be said for asp.net mvc, but it is structured differently. There is much more to it; if you want to take advantage of it, you'll need to learn about it.
The downside of those abstractions is that they make some developers lose track of the context they're in and of the underlying model. The context is still the same: you're producing an application that takes a request and produces an output. The difference is that there is plenty more code in place intended to make it easier.
In terms of a more detailed list of IIS's ASP.NET request lifecycle, there are quite a few stages in the HttpApplication pipeline. Fairly recently there was a good blog post that I thought summarized them very concisely and well: "HTTP Request Lifecycle Events in IIS Pipeline that every ASP.NET Developer Should Know" by Suprotim Agarwal.
For a more detailed explanation you should check out the MSDN article on the subject. This will also go into information on what happens before that pipeline.
