Technical Differences Between Service and Web Workers - multithreading

I've studied Web and Service Workers and I know that they are intended for different purposes. This thread describes them in more detail. However, what I don't understand is the technical difference between the two. While a Service Worker is meant to be a proxy between a server and a client-side application, a Web Worker can be that too: it has access to XMLHttpRequest, so you can use it as a proxy as well.
What is the technical difference between a Web Worker and a Service Worker?

The key difference between the two is that a Service Worker is intended to intercept network requests that would normally go straight to a remote service and handle them so that the front-end client code keeps working even when the network is unavailable. In other words, it provides the basis of an offline mode for a web app. The front-end code makes standard fetch() requests as if it were talking to the server, and the Service Worker intercepts them.
A Web Worker is just a general-purpose background thread. The intention here is to run background code so that long-running tasks do not block the main event loop and cause a sluggish UI. Web Workers do not intercept network requests; instead, the front-end code explicitly sends messages to the Web Worker.
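The contrast above can be sketched in browser JavaScript. This is a minimal illustration, not a complete app; the file names, cache name, and message payload are all made up for the example.

```javascript
// sw.js — a Service Worker transparently intercepts fetches made by the page
self.addEventListener('install', (event) => {
  event.waitUntil(caches.open('app-v1').then((c) => c.addAll(['/index.html'])));
});
self.addEventListener('fetch', (event) => {
  // Serve from cache first, fall back to the network — the page just calls fetch()
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});

// main.js — a Web Worker only sees what the page explicitly posts to it
const worker = new Worker('worker.js');
worker.postMessage({ numbers: [1, 2, 3] });        // explicit hand-off
worker.onmessage = (e) => console.log(e.data.sum); // result comes back as a message

// worker.js — long-running work kept off the main thread
self.onmessage = (e) => {
  const sum = e.data.numbers.reduce((a, b) => a + b, 0);
  self.postMessage({ sum });
};
```

Note the asymmetry: the page never talks to the Service Worker directly for fetches, while nothing reaches the Web Worker unless the page posts it.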

Related

Access an Azure Web App from a Web Job using the localhost endpoint?

If I have a Web App (ASP.NET MVC) deployed in Azure and I also have a Web Job configured to run alongside the web app, my understanding is that the Web Job is a console application (of sorts) that runs and waits on messages from a queue.
When a message arrives, can the WebJob call the WebApp using a local address:
http://localhost:4564/api/myFunc
as opposed to:
http://mynewapp.azurewebsites.net/api/myFunc
(1) Can it be done? (2) Does it make sense to do so?
Thanks!
No, it is not possible for the WebJob to directly send requests to the site via localhost. This limitation is documented on the sandbox page.
Apart from the fact that communicating with localhost without proper setup (described here) may be blocked, I see no other potential blockers. But I would still avoid such an implementation to avoid any hiccups, and instead go with one of the approaches described in the answer here (consider the shared-storage option).

Is it safe to host NServiceBus subscribers (message/event handlers) under IIS (including Azure Web Roles)

We are designing a system that is web based but also uses NServiceBus and Azure Service Bus to communicate. We have an on-premises server running IIS for the application and also several cloud services running web roles for communicating with external parties (those parties call a RESTful interface in the cloud and the message is put on the bus, or vice versa).
Both the cloud solutions and the on-premises server need to subscribe to and publish messages. Publishing does not seem to be an issue, but what happens to those subscriptions if IIS shuts down the processes? Do they get woken again when a message arrives, or is the service bus subscription really pull based, requiring an active listener?
I have seen questions on here about hosting publishers but nothing about the safety of subscribers.
Extra Info:
Quite by chance we noticed that sometimes in our development environment the applications would need to be started a couple of times before the messages would start arriving. It occurred to me that if there were messages already in the queue when the application started, they would be processed, and otherwise not; the restart just means the listener sees older messages, processes them, and then gets on fine once running. However, another colleague noted that NSB-related startup log files were only being generated after he visited the website hosted in the same web application for the first time. I have just had a similar problem: messages were in the queue but the breakpoint on the handler was not being hit. When I hit a Web API method on the same IIS application instance, messages suddenly started being processed. So my conclusion is that no, it is not safe to host a subscriber under IIS, or in this case even a web role.
I am answering my own question based on my experience rather than any deep knowledge of NServiceBus or Azure Service Bus.
It seems that it is not safe to rely on the service bus listener being active under IIS or a web role. Either can shut down the process, and because the bus listener relies on polling the service rather than having messages pushed to it, no further messages will be received until the process starts again.
The good news is that the process will be restarted when the associated web site or web service is hit. So if you know for certain that you will receive more traffic on the site than on the bus, you might take the risk that the bus listener will remain active. For our project, however, we are in the process of splitting the listener out into a separate Windows Service.

Kafka as Messaging queue in Microservices

To give you background on the question, I am considering Kafka as a channel for inter-service communication between microservices. Most of my microservices are web facing (a web server, REST server, or SOAP server communicating with existing endpoints).
At some point I need an asynchronous channel between microservices, so I am considering Kafka as the message broker.
In my scenario, I have a RESTful microservice that, once its job is done, pushes messages to a Kafka queue. Another microservice, which is also a web server (embedded Tomcat) with a small REST layer, would consume those messages.
The reason for considering a messaging queue is that even if my receiving microservice is down for some reason, all incoming messages will be added to the queue and the data flow won't be disturbed. Another reason is that Kafka is a persistent queue.
Typically, Kafka consumers are multi-threaded.
The question is: the receiving microservice being a web server, I am concerned about creating user threads in a servlet-container-managed environment. It might work, but considering that user-created threads are generally considered bad practice within web applications, I am confused. What would your approach be in this scenario?
Please suggest.
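For illustration, the consume side of the pipeline described above can be sketched with a long-lived consumer loop owned by the application rather than by ad-hoc servlet threads. The question's stack is Java/Tomcat, but the same pattern is shown here with the kafkajs client to keep the examples in one language; the broker address, topic, and group id are assumptions.

```javascript
const { Kafka } = require('kafkajs');

const kafka = new Kafka({
  clientId: 'order-consumer',          // illustrative name
  brokers: ['localhost:9092'],         // assumed broker address
});
const consumer = kafka.consumer({ groupId: 'order-service' });

async function start() {
  await consumer.connect();
  await consumer.subscribe({ topic: 'orders', fromBeginning: false });
  // kafkajs manages the polling loop; messages queued while this service
  // was down are delivered once it reconnects
  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      console.log(`${topic}[${partition}]: ${message.value.toString()}`);
    },
  });
}
```

Started once at application boot (the equivalent of a ServletContextListener or a container-managed executor in Java), this keeps the consumer's lifecycle tied to the app rather than to individual requests.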

Connection pooling for REST calls made from Bluemix Node.js apps into data-center services via DataPower

Hi, we have a UI component deployed to Bluemix on Node.js which makes REST service calls (JSON/XML) to services deployed in the data center. These calls go through the IBM DataPower gateway as a security proxy.
Data Power establishes an HTTPS Mutual Authentication connection (using certs that are exchanged offline) to the caller.
Although this method is secure, it is time consuming to set up, and if this connection is established for each service request it will mean a slow response for the end user.
To optimize response time we are looking for any solution that can pool connections between the Node.js app deployed on Bluemix and the DataPower security proxy. Does anyone have experience in this area?
In regards to "it is time-consuming to set up": in DataPower you can create a multi-protocol gateway (MPGW) in front of your services to act as a router. The MPGW will match service calls based on their URI and route them accordingly. In this scenario, you will only need to configure a single endpoint in the Bluemix Cloud Integration service in order to work with all your services. One downside to this approach is that it will be harder to control access to specific on-premises services, because they will all be exposed to your Bluemix app as a single service.
In regards to optimizing response times, where are you seeing the bottleneck?
If the establishment of the tcp connections is causing too much overhead, you should be able to configure your Node.js app to use or re-use persistent connections via keepalive settings or you can look into setting up a connection pool that manages that for you (e.g. https://www.npmjs.com/package/generic-pool seems a popular choice).
On the DataPower side, make sure the front/back persistent timeout is set according to your requirements: http://www-01.ibm.com/support/knowledgecenter/SS9H2Y_7.2.0/com.ibm.dp.doc/mpgw_availableproperties_serviceview.html?lang=en
Other timeout values in DataPower can be found at http://www-01.ibm.com/support/docview.wss?uid=swg21469404

Communication between 2 web apps running in an Azure web role instance

I have 2 web applications running in a web role and I only run a single instance in the Azure cloud. I would like to send and receive notifications between these 2 applications, and no outsider should have access to them.
Options I have considered:
Web services in both of them are out, unless there is a way to block outsiders from accessing a web service so that only a request from the same system would succeed (maybe a VIP and request-IP comparison would do; anything beyond that?).
File system watchers: create a LocalStorage resource, use it in both web apps, and have webappA and webappB watch for each other's files.
Azure Storage Queues.
MSMQ: not interested, as it's not supported in Azure.
Could you please list other options available to me in an Azure web role? Thanks in advance.
Note: Please avoid suggesting an Internal Endpoint, as I am running only a single instance with 2 web applications running in it.
You can set up "private" web services to listen on Internal endpoints. These are not accessible via the outside world. You could have a WebAppOne endpoint and WebAppTwo endpoint, both marked Internal. You then just query the role environment to discover the assigned port for each, and fire up your ServiceHost.
Or... you could use a queue to pass information, as long as:
You're ok with it being asynchronous
You're ok with messages being processed "at least once" (the same message may be seen more than once)
You're ok with messages possibly being looked at out of order
Or... your apps could write information to an Azure table. No need to expose the table to the outside world.
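The queue option can be sketched with the @azure/storage-queue client. This is a hedged sketch, not a drop-in solution: the queue name, connection-string environment variable, and handle function are all assumptions, and the semantics match the caveats above (asynchronous, at-least-once, possibly out of order).

```javascript
const { QueueServiceClient } = require('@azure/storage-queue');

const service = QueueServiceClient.fromConnectionString(
  process.env.AZURE_STORAGE_CONNECTION_STRING // assumed to be configured
);
const queue = service.getQueueClient('webapp-notifications'); // illustrative name

// webappA: enqueue a notification
async function notify(payload) {
  await queue.createIfNotExists();
  await queue.sendMessage(JSON.stringify(payload));
}

// webappB: poll, process, then delete (deleting only after success
// is what gives at-least-once behavior)
function handle(notification) {
  console.log('received', notification); // placeholder handler
}

async function poll() {
  const { receivedMessageItems } = await queue.receiveMessages({ numberOfMessages: 10 });
  for (const msg of receivedMessageItems) {
    handle(JSON.parse(msg.messageText));
    await queue.deleteMessage(msg.messageId, msg.popReceipt);
  }
}
```

Because the storage account credentials never leave your apps, nothing here is reachable from the outside, which satisfies the "no outsider" requirement without internal endpoints.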
