How can a web app work offline using a Service Worker?

I know there are many documents about Service Workers, and many questions have already been asked.
But today has been a long day for me, so I'm too tired to read through lots of docs right now.
I just want to explain my understanding of Service Workers and how they help us serve a web app offline, and I hope somebody can tell me whether it's right or not.
Everything I know about Service Workers is that they intercept the browser's network requests and can do something with them. So I guess that when a Service Worker intercepts a request, it caches it, and when the network isn't available, the Service Worker serves users the data it has cached.
Thanks for all replies.

Yes, your thinking is right. Here I will provide some more details about how the whole thing works.
A service worker (SW), like a web worker, runs on a different thread than the one used by the main web app. This allows the SW to keep running even when the web app is not open, which makes it possible, for instance, to receive and show web notifications.
Unlike a web worker, which is used for generic purposes, a SW acts specifically as a proxy between our web application and the network. However, it is up to us to define and implement what the SW has to cache locally and how; by default, the SW doesn't know what to store in the cache.
For this we have to implement caching strategies that target static assets (.js or .css files, for instance) or even URLs (but keep in mind that the Cache API, used by the SW, can only cache GET requests, not PUT/POST).
Once the assets or URLs we are interested in are covered by a specific strategy, the SW will intercept all outgoing requests, check for a match, and, if there is one, serve the data from the local cache instead of going over the network.
Of course, this depends on the strategy we choose/implement.
Since the requested data is already available locally, the SW can deliver it even when the user is offline.
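To make that concrete, here is a minimal sketch of a cache-first strategy; the cache name and the pre-cached asset list are placeholders, not anything taken from the question:

```js
// sw.js - minimal cache-first strategy (illustrative sketch).
const CACHE_NAME = 'static-v1';
const PRECACHE_URLS = ['/', '/index.html', '/app.js', '/styles.css'];

self.addEventListener('install', (event) => {
  // Pre-cache the static assets when the service worker is installed.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE_URLS))
  );
});

self.addEventListener('fetch', (event) => {
  // Only GET requests can be stored in the Cache API.
  if (event.request.method !== 'GET') return;

  event.respondWith(
    caches.match(event.request).then((cached) => {
      // Serve from the cache if we have a match (works offline),
      // otherwise fall back to the network.
      return cached || fetch(event.request);
    })
  );
});
```

With something like this registered, previously cached GET requests keep working when the network is down; anything not in the cache still needs connectivity.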
If you're interested, I wrote an article describing service workers in detail, along with some of the most common caching strategies applied to different scenarios.

Related

Scaling nodejs app with pm2

I have an app that receives data from several sources in realtime using logins and passwords. After data is received it's stored in an in-memory store and replaced when new data is available. I also use sessions backed by MongoDB to authenticate user requests. The problem is that I can't scale this app using pm2, since I can only use one connection to my data source per login/password pair.
Is there a way to use a different login/password for each cluster instance, or to get the cluster ID inside the app?
Are memory values/sessions shared between cluster instances, or are they separate? Thank you.
So if I understood this question correctly, you have a Node.js app that connects to a 3rd party using HTTP or another protocol, and since you only have a single credential, you cannot connect to said 3rd party from more than one instance. To answer your question: yes, it is possible to set up your cluster instances to use a unique user/password combination; the tricky part is how to assign these credentials to each instance (assuming you don't want to hard-code them). You'd have to do this assignment when the servers start up, perhaps using a data store to hold the credentials and introducing some sort of locking mechanism for each credential (so that each credential is unique to a particular instance).
If I were in your shoes, however, I would create a new server whose sole job is to fetch this "realtime data" and store it somewhere available to the cluster, such as Redis or some other persistent store. It would be a standalone server that just collects the data. You can also attach a RESTful API to it, so that if your other servers need to communicate with it, they can do so via HTTP or a message queue (again, Redis would work fine there as well).
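If you do want each pm2 cluster instance to pick its own credentials, one rough approach is to key off the instance id that pm2 exposes via the NODE_APP_INSTANCE environment variable. A sketch, where the credentials array and the connection helper are hypothetical:

```js
// Illustrative only: pm2 sets NODE_APP_INSTANCE (0, 1, 2, ...) for each
// instance started in cluster mode, so it can be used to select a
// per-instance login/password pair.
const CREDENTIALS = [                       // hypothetical credential list
  { login: 'user0', password: 'pass0' },
  { login: 'user1', password: 'pass1' },
  // ...one entry per pm2 instance
];

const instanceId = parseInt(process.env.NODE_APP_INSTANCE || '0', 10);
const { login, password } = CREDENTIALS[instanceId % CREDENTIALS.length];

connectToDataSource(login, password);       // hypothetical connection helper
```

(Note that in-memory values are not shared between cluster instances; each process has its own memory, so any shared state needs an external store such as Redis or your MongoDB-backed sessions.)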
'Realtime' is vague; are you using WebSockets? If HTTP requests are being made often enough, that could also be considered 'realtime'.
Possibly your problem is similar to something we encountered when scaling SocketStream (WebSocket) apps, where the persistent connection requires the same client's requests to be routed to the same process. (There are other network topologies/architectures which don't require this, but that's another topic.)
You'll need to run pm2 in fork mode with 1 process only, plus a solution to make sessions sticky, e.g.:
https://www.npmjs.com/package/sticky-session
I have some example code but need to find it (it's been over a year since I deployed it).
Basically you wind up just using pm2 for its 'always-on' feature; the sticky-session module handles the Node clusterisation stuff.
I may post an example later.
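In the meantime, a rough sketch of that kind of setup (not the original deployment code; it assumes the sticky-session package and a plain http server, with your Express/Socket.IO app doing the real work):

```js
// pm2 runs this file in fork mode (a single pm2 process); sticky-session
// forks the workers itself and routes each client back to the same worker
// based on the client's IP, which keeps WebSocket/session state consistent.
const http = require('http');
const sticky = require('sticky-session');

const server = http.createServer((req, res) => {
  // Your Express/Socket.IO request handling would go here.
  res.end('handled by worker ' + process.pid);
});

if (!sticky.listen(server, 3000)) {
  // Master process: sticky-session has forked the workers for us.
  server.once('listening', () => console.log('listening on port 3000'));
} else {
  // Worker process: normal request handling happens here.
  console.log('worker ' + process.pid + ' started');
}
```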

How to detect and possibly ignore requests from a bad/hung client browser

I'm developing a Node web application, and while testing, one of the client Chrome browsers went into a hung state. The browser entered an infinite loop where it continuously downloaded all the JavaScript files referenced by the HTML page. I rebooted the web server (Node.js), but once the web server came back online, it continued receiving tons of requests per second from the same browser.
Obviously, I went ahead and terminated the client browser so that the issue went away.
But I'm concerned about how to handle such problem client connections from the server side once my web application goes live/public, since I will have no access to the clients.
Is there anything (an npm module/code?) that can make a best guess at detecting such bad client connections from within my web server code and, once detected, ignore any future requests from that particular client? I understand that handling this within the Node server might not be the best approach, but at least I can save CPU/network by not responding to the bad requests.
P.S.
By the way, I'm planning to deploy my Node web application to Heroku on a small budget, so if you know of any firewall/configuration that could handle the above scenario, please do recommend it.
I think it's important to know that this is a pretty rare case. If your application has a very large user base, or there is some other reason to be concerned about DoS/DDoS-style attacks, it looks like Heroku provides some DDoS protection for you. If you run your own server, I would suggest looking into Nginx or HAProxy as load balancers for your app, combined with fail2ban. See this tutorial.
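If you also want a best-effort guard inside the Node process itself (as the question asks), a simple per-IP rate limiter is one option; packages such as express-rate-limit package this up, but a hand-rolled sketch looks roughly like this (the window and threshold values are arbitrary placeholders):

```js
// Very rough in-process mitigation (illustrative only, not a substitute for
// a proxy- or platform-level solution): count requests per IP in a fixed
// window and reject clients that exceed a threshold.
const hits = new Map();
const WINDOW_MS = 10 * 1000;   // 10-second window (assumed value)
const MAX_HITS = 100;          // max requests per window per IP (assumed value)

function rateLimit(req, res, next) {
  const now = Date.now();
  const ip = req.ip;
  const entry = hits.get(ip) || { count: 0, start: now };

  if (now - entry.start > WINDOW_MS) {
    // Reset the window (a real implementation would also evict stale entries).
    entry.count = 0;
    entry.start = now;
  }
  entry.count += 1;
  hits.set(ip, entry);

  if (entry.count > MAX_HITS) {
    res.statusCode = 429;       // Too Many Requests
    return res.end();
  }
  next();
}

// app.use(rateLimit); // mount before your routes in Express
```

This only saves the work of building full responses; the bad requests still reach the process, so a proxy- or platform-level control remains the better first line of defense.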

Developing XPages for Clusters

I am about to start a new XPages project which will be used world-wide. I am a bit concerned because they are worried about performance and are therefore thinking about running this application behind a load balancer or in a cluster. I have been looking around and I have seen that there can be issues with scoped variables (for example, if the user starts the session on one server and is then sent to another, certain scoped variables go missing). I have also seen this wonderful article which focuses on performance, but it does not really mention anything about a clustered environment.
Just a bit of extra info: concurrent users should not exceed 600 but may grow over time, and there are about 3,000 users in total. The XPages application will be a portal for two data sources (an active database and its archive).
My question is this: as a developer, what must I pay very close attention to when developing an application that may run behind a load balancer or in a clustered environment?
Thanks for your time!
This isn't really an answer...but I can't fit this in a comment.
We faced a very similar problem.
We have an XPages SPA (single-page application) that has been in production for 2-3 years, with a variable user load of up to 300-400 concurrent users who log in for 8-hour sessions. We have 4 clustered Domino servers: 1 is a "workhorse" running all scheduled jobs, and 3 are dedicated HTTP servers.
We use SSO in Domino and the 3 HTTP servers participate, so a user only has to authenticate once and can access all HTTP servers. We use a reverse proxy, so all users go to www.ourapp.com but get redirected to servera.ourapp.com, serverb.ourapp.com, etc. Once they are directed to a server, the reverse proxy issues a cookie to the client. This provides a "sticky" session to whichever server they have been directed to, and the reverse proxy will only move them to a different server if the one they are on becomes unavailable.
We use "user" managed session beans to store configuration for each user, so if a user moves to another server and the user's bean does not exist there, it will be created. But the key point is: because of sticky sessions, a user will only move if we bring a server down or the server fails. Since our app is an SPA, a lot of the user "config" is stored client side, so if they get booted to a different server (to the user, they are still pointed at www.ourapp.com), nothing really changes.
This has worked really well for us so far.
The app is also accessed by an "offline" external app that points to the reverse proxy (www.ourapp.com), but we did initially run into problems because this app was not passing back the reverse proxy's "sticky" cookie token. So one device was sending a request to the proxy that got routed to server A, then 1 second later to server B, then A..B..C, which caused all sorts of headaches: since the cluster can be a few seconds out of sync, requests for the same doc hitting different servers produced conflicts. As soon as we got the external app to pass the reverse proxy token back for each session, problem solved.
The one bit of your question I don't quite understand is: "...The XPage application will be a portal for a single database (no replicas) and an archive database (no replicas)." Does that mean the portal will be clustered, but the databases users are connecting to will not be?
We haven't really coded any differently than if the app were on 1 server, since the user's session is "stuck" to one server. We did need persistent document locking across all the servers. We initially used the native document locking, but $Writers does not cluster, so we had to implement our own: we set a field on the doc so that the "lock" clustered (we also had to then implement a single lock store... sigh, we can talk about that another time). Because of requirements, we have to maintain close to 1 million docs in 3 of the app databases, and we generate huge amounts of audit data, but we push that out to SQL.
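For illustration only, the field-based part of that locking might look something like this in SSJS (the field name and the surrounding logic are assumptions, not the original implementation):

```javascript
// Claim or refresh a "lock" by writing an ordinary field on the document,
// so the lock state replicates/clusters like any other item (unlike $Writers).
var doc:NotesDocument = currentDocument.getDocument();
var lockHolder = doc.getItemValueString("LockHolder");   // assumed field name
var me = session.getEffectiveUserName();

if (lockHolder == "" || lockHolder == me) {
    doc.replaceItemValue("LockHolder", me);
    doc.save(true, false);
    // ...allow editing
} else {
    // Someone else holds the lock; treat the document as read-only.
}
```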
So I'd say this is more of an admin headache (I'm also the admin for this project, so in our case I can confirm this!) than a designer headache.
I'd also be interested to hear what anyone else has to say on this subject.

Load test a Backbone App

I've got an NGinx/Node/Express3/Socket.io/Redis/Backbone/Backbone.Marionette app that proxies requests to a PHP/MySQL REST API. I need to load test the entire stack as a whole.
My app takes advantage of static asset caching with NGinx and clustering with Node/Express, and Socket.IO is multi-core enabled using Redis. All that's to say, I've gone through a lot of trouble to try to make sure it can stand up to the load.
I hit it with 50,000 users in 10 seconds using blitz.io and it didn't even blink... which concerned me, because I wanted to see it crash, or at least breathe a little heavy; but 50k was the max you could throw at it with that tool, indicating to me that they don't expect you to reasonably be able to, or need to, handle more than that... which is when I realized it wasn't actually incurring the load I was expecting, because the real load only starts after the page loads and the Backbone app starts up, opens the socket connection, and requests the data from the REST API endpoint (on a different server).
So, here's my question:
How can I load test the entire app as a whole? I need the load test to tax the server in the same way that the clients actually will, which means:
Request the single page Backbone app from my NGinx/Node/Express server
Kick off requests for the static assets from NGinx (simulating what the browser would do)
Kick off requests to the REST API (PHP/MySQL running on a different server)
Create the connection to the Socket.io service (running on NGinx/Node/Express, utilizing Redis to handle multi-core junk)
If the testing tool uses a browser-like environment to load the page, parse the JS, and run it, everything will be copacetic (the NGinx/Node/Express server will get hit and so will the PHP/MySQL server). Otherwise, the testing tool will need to simulate this by firing off at least a dozen different kinds of requests nearly simultaneously. Otherwise it's like stress testing a door by looking at it 10,000 times (that is to say, it's pointless).
I need to ensure my app can handle 1,000 users hitting it in under a minute all loading the same page.
You should learn to use Apache JMeter: http://jmeter.apache.org/
You can perform stress tests with it; see this tutorial: https://www.youtube.com/watch?v=8NLeq-QxkSw
As you said, "I need the load test to tax the server in the same way that the clients actually will."
That means the test is agnostic to the technology you are using.
I highly recommend JMeter; it is widely used and you can integrate it with Jenkins and do a lot of cool stuff with it.
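If you want to sanity-check what one "full" client actually does before (or instead of) building a JMeter plan, a rough Node sketch of a single simulated client might look like the following; the hosts, asset paths, and API endpoint are placeholders, and it assumes the socket.io-client package:

```js
// Simulates the sequence a real browser/Backbone client would perform:
// page load, static assets, REST API call, then a Socket.IO connection.
const http = require('http');
const io = require('socket.io-client');

const APP_HOST = 'http://your-nginx-node-host';   // placeholder
const API_HOST = 'http://your-php-api-host';      // placeholder

function get(url) {
  return new Promise((resolve, reject) => {
    http.get(url, (res) => {
      res.resume();                // drain the response body
      res.on('end', resolve);
    }).on('error', reject);
  });
}

async function simulateClient() {
  await get(APP_HOST + '/');                  // 1. the single-page app shell
  await Promise.all([                         // 2. static assets from NGinx
    get(APP_HOST + '/js/app.js'),
    get(APP_HOST + '/css/app.css'),
  ]);
  await get(API_HOST + '/api/some-endpoint'); // 3. the PHP/MySQL REST API
  const socket = io.connect(APP_HOST);        // 4. the Socket.IO connection
  socket.on('connect', () => socket.disconnect());
}

simulateClient().catch(console.error);
```

Running many of these in parallel (or porting the same sequence into a JMeter test plan) exercises the whole stack the way real clients would.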

How do I maintain state across multiple web servers?

Can I have multiple web servers hooked up to a SQL Server cluster and still maintain a user's session?
I've thought of various approaches. The one suggested by the Microsoft site is to use Response.Redirect to send the user to the "correct" server. While I can understand the reasoning for this, it seems kind of short-sighted.
If the load balancer is sending you to the server currently under the least strain, surely as a developer you should honor that?
Are there any best practices to follow in this instance? If so, I would appreciate knowing what they are and any insights into the pros/cons of using them.
Some options:
The load balancer can be configured to use sticky sessions. Make sure your app's session timeout is less than the load balancer's, or you'll get bounced around with unpredictable results.
You can use a designated state server to handle sessions. Then it won't matter where the LB bounces them.
You can use SQL Server to manage sessions.
Check this on Server Fault:
https://serverfault.com/questions/19717/load-balanced-iis-servers-with-asp-net-inproc-session
I'm drawing here on my experience with Java app servers, some of which have very sophisticated balancing algorithms.
A reasonable general assumption is that "session affinity" is preferable to balancing every request. If we allocate the initial request for each user with some level of workload knowledge (or even on a random basis), and the population comes and goes, then we end up with reasonable behaviour. Remember that the objective is to give each user a good experience, not to end up with evenly used servers!
In the event of a server failing, we can then see our requests move elsewhere, and we expect to see our session transferred. There are lots of ways to achieve that (session in a DB, session state propagated via high-speed messaging, ...).
This probably isn't the answer you're looking for, but can you eliminate the NEED for session state? We've gone to great lengths to encode whatever we might need between requests in the page itself. That way I have no concern for state across a farm, or scalability issues from having to hang onto something owned by someone who might never come back.
While you could use "sticky" sessions in your load balancer, a more optimal path is to have your session use a State Server instead of InProc. At that point, all of your web servers can point to the same state server and share sessions.
MSDN has plenty to say on the subject: http://msdn.microsoft.com/en-us/library/ms972429.aspx :D
UPDATE:
The State Server is a Windows service running on your server boxes, but yes, it introduces a single point of failure.
Additionally, you could specify serialization of the session to a SQL Server, which wouldn't be a single point of failure if you had it farmed.
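For reference, the switch is just a sessionState element in web.config; both variants below are sketches with placeholder host names/connection strings:

```xml
<!-- Inside <configuration><system.web> -->

<!-- StateServer mode: sessions live in the ASP.NET State Service
     (default port 42424); the host name is a placeholder. -->
<sessionState mode="StateServer"
              stateConnectionString="tcpip=state-server-host:42424"
              timeout="20" />

<!-- SQLServer mode: sessions are serialized to SQL Server, which can
     itself be made highly available; the connection string is a placeholder. -->
<sessionState mode="SQLServer"
              sqlConnectionString="Data Source=sql-host;Integrated Security=SSPI;"
              timeout="20" />
```

Note that anything stored in session must be serializable once you move off InProc.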
I'm not sure how "heavy" the workload is for a state server; does anyone else have any metrics?

Resources