Should an app have a single Gateway for all users and switch to the relevant identity for requests, or multiple Gateways (one per user) with identity initially set and never changed?
Should the Gateway be connected and disconnected after each request, or should it be initially connected once and left open?
The following applies to v1.4 of the Node.js implementation of Gateway in the fabric-network package.
You should have a single gateway for each individual user. Once you disconnect a gateway you should not use it again (i.e., don't attempt to call connect on it again).
A basic pattern for long-running multi-user apps would be to create a gateway for each user and cache it, using a stale policy to disconnect and discard a gateway that hasn't been used by its user for a while.
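The cache-with-stale-policy pattern above can be sketched as follows. This is a minimal, library-agnostic sketch: the connect/disconnect lifecycle mirrors the fabric-network v1.4 Gateway API, but the gateway factory is injected, so nothing here depends on the SDK itself.

```javascript
// Per-user Gateway cache with a stale-eviction policy (sketch).
// createGateway is an injected async factory (userId -> connected gateway);
// with fabric-network it would call gateway.connect(...) with that user's identity.
class GatewayCache {
  constructor(createGateway, maxIdleMs) {
    this.createGateway = createGateway;
    this.maxIdleMs = maxIdleMs;
    this.entries = new Map(); // userId -> { gateway, lastUsed }
  }

  async get(userId) {
    this.evictStale();
    let entry = this.entries.get(userId);
    if (!entry) {
      // One gateway per user; connect once and leave it open.
      entry = { gateway: await this.createGateway(userId), lastUsed: Date.now() };
      this.entries.set(userId, entry);
    }
    entry.lastUsed = Date.now();
    return entry.gateway;
  }

  evictStale() {
    const now = Date.now();
    for (const [userId, entry] of this.entries) {
      if (now - entry.lastUsed > this.maxIdleMs) {
        // A disconnected gateway must never be reused, so discard it entirely.
        entry.gateway.disconnect();
        this.entries.delete(userId);
      }
    }
  }
}
```

Repeated `get` calls for the same user reuse the one open gateway; an idle user's gateway is disconnected and a fresh one is created on their next request.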
I want to build a secure Hyperledger Fabric infrastructure to manage all nodes running on physical devices.
The front-end user application writes to Hyperledger. It asks for a random node, and if that node answers, the application sends the request and payload.
What is the best way to guarantee private communication between off-chain frontend app and hyperledger?
I have already created a private domain secured by an SSL certificate for every node, but this method doesn't sound scalable - what if we have 10k nodes? Is there a better approach?
If your intent is to communicate directly with the Peer, the endpoint can already be secured with TLS.
However, in the ideal setup, your web app would communicate with your back-end server (let's say a Node.js Express server). Your Express server would be TLS-secured, and your web app would communicate with it via HTTPS. Your Express server would then use the Fabric Node SDK to communicate with your network, which is also TLS-secured communication. You're not configuring anything more extensive than you would when building a TLS-secured web server in the first place.
To your last point, who owns the 10k nodes? An organization would only be expected to own a few nodes, and those few nodes would handle your transactions; you wouldn't be submitting to other organizations' peers. Owning that many peers in a network would defeat the purpose of Fabric's consensus, since it would let you compromise the network by always being able to provide a policy quorum.
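The layering described above (browser → TLS-secured Express → Fabric SDK) can be sketched as a thin route handler that delegates to the SDK. This is an illustration only: `submitTransaction` is a stand-in for the fabric-network call and is injected so the handler can be exercised without a running network; the route name and arguments are hypothetical.

```javascript
// Express-style handler that forwards a request to Fabric (sketch).
// submitTransaction: async (fnName, ...args) -> result, e.g. wrapping
// contract.submitTransaction(...) from the Fabric Node SDK.
function makeTransferHandler(submitTransaction) {
  return async (req, res) => {
    try {
      const result = await submitTransaction('transfer', req.body.from, req.body.to, req.body.amount);
      res.status(200).json({ result });
    } catch (err) {
      // Surface SDK failures as a server error to the web app.
      res.status(500).json({ error: err.message });
    }
  };
}

// Wiring (assumes Express and TLS termination are configured as usual):
//   app.post('/transfer', makeTransferHandler(realSubmit));
```

The browser only ever speaks HTTPS to the Express endpoint; TLS to the peers is handled inside the SDK connection.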
Given an Azure-hosted Web Role with a highly available WebAPI application (say 99.95%, as per https://azure.microsoft.com/en-us/documentation/articles/resiliency-disaster-recovery-high-availability-azure-applications/) that has ~1000 clients. The client is a ReactJS application. The WebAPI application will push notifications tailored to specific client groups (e.g. not all client users are interested in all events, but more than one user may be interested in the same event).
From reading the SignalR documentation and playing with some samples, it feels like SignalR Groups will help us flow the right events to the right ReactJS application instances. Additionally, we would use one of the SignalR scale-out providers to make sure that we push to the clients from the right WebAPI server instance.
Question: How do applications recover when the "right WebAPI" instance becomes unavailable?
I can imagine a server-side active/passive scheme with some complexity around making sure there is at least one 'server' for each Hub Client... but can a server connect (in an unsolicited way) to a Hub Client? Would we have each Hub Client connect (when registering for a Group) to more than one server?
How have applications solved this issue with SignalR?
I think I missed the obvious point: the scale-out providers and the backplane provide exactly the protection clients need against servers that go away. Clients don't connect to a specific server, but to a load-balanced name.
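From the client's side, recovery then reduces to reconnecting to the same load-balanced URL and letting the backplane route group messages to whichever healthy instance the client lands on. A sketch of that reconnect loop with exponential backoff, where `connect` is an injected async function standing in for the SignalR client's start-connection call:

```javascript
// Retry connecting to the load-balanced hub name (sketch).
// connect: async () -> connection; e.g. a function calling the SignalR
// client's start() against the load-balanced endpoint.
async function connectWithRetry(connect, { maxAttempts = 5, baseDelayMs = 100 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await connect(); // lands on any healthy server behind the LB
    } catch (err) {
      if (attempt === maxAttempts - 1) throw err; // give up after the last attempt
      // Exponential backoff: 100ms, 200ms, 400ms, ...
      await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}
```

After reconnecting, the client re-registers for its Groups; the backplane ensures events reach it regardless of which WebAPI instance now holds the connection.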
I'm looking into migrating an application to Service Fabric running on Azure. It's a realtime chat-style application using SignalR. I'd like to have an instance of a service running, self-hosting a SignalR hub (via OWIN) for each "affinity group" in which users are communicating. This is so I can avoid having to scale out SignalR with a backplane. I'd like to be able to spin these services up and down as groups of users enter and leave my application. I would expect I could host tens of these services per VM with a typical load of hundreds of users per group.
My idea is that I'd have a service locator that clients connect to initially to discover which service (port) is hosting their group. I would also have a service that spun up an instance of the chat service when the first request for that group came in.
How would I architect this in Service Fabric on Azure so that a) each of the services/actors is accessible with a SignalR client from the internet, and b) I'm only running as many services as necessary to serve m active groups out of n total groups? The demand for this app is very transient and spiky, so I'm hoping to take advantage of the fact that services are simply processes and can be provisioned in a matter of seconds, vs. my current scenario where I have to spin up entire cloud services and wait tens of minutes to handle spikes (at which point it's too late).
You would do a few things:
Have a "service manager service" that intercepts initial join requests and creates new Service Fabric services on the fly if they don't already exist; if they do exist, it resolves the service's current location. Either way, it returns that address to the client.
Alternatively, it could just pass back the internal service name (if you're OK exposing that information) and the client could do the resolution and then connect. To some degree this depends on how much information you want to expose to the client, and whether you can or want to modify it to "know about" Service Fabric.
The client would then connect to the actual backing service directly
You would have to come up with some mechanism for the chat services themselves to know when nobody is left, and either delete themselves or report back through the manager.
You would probably be best off modeling the chat service as a Reliable Service rather than an actor, as the Reliable Services stack allows more flexibility around communication protocols/stacks.
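The manager's join flow described above can be sketched like this. `createService` and `resolveAddress` are hypothetical stand-ins for the Service Fabric client operations (creating a named service instance and resolving its current endpoint); the manager only decides whether a group's chat service exists and hands the address back so the client connects to the backing service directly.

```javascript
// "Service manager service" join flow (sketch, dependencies injected).
class ServiceManager {
  constructor({ createService, resolveAddress }) {
    this.createService = createService;   // async groupId -> void (spins up a service)
    this.resolveAddress = resolveAddress; // async groupId -> current endpoint
    this.known = new Set();               // groups whose service already exists
  }

  async join(groupId) {
    if (!this.known.has(groupId)) {
      // First request for this group creates its chat service on the fly.
      await this.createService(groupId);
      this.known.add(groupId);
    }
    // Resolve the service's current location and return it to the client.
    return this.resolveAddress(groupId);
  }
}
```

A teardown path (the service reporting itself idle so the manager can delete it and forget the group) would complete the lifecycle.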
We have a UI component deployed to Bluemix on Node.js which makes REST service calls (JSON/XML) to services deployed in a data center. These calls go through the IBM DataPower gateway as a security proxy.
DataPower establishes an HTTPS mutual-authentication connection (using certs that are exchanged offline) to the caller.
Although this method is secure, it is time-consuming to set up, and if this connection is established for each service request it will create a slow response for the end user.
To optimize response time we are looking for any solution that can pool connections between the Node.js app deployed on Bluemix and the DataPower security proxy. Does anyone have experience in this area?
Regarding "it is time-consuming to set up": in DataPower you can create a multi-protocol gateway (MPGW) in front of your services to act as a router. The MPGW will match service calls based on their URI and route them accordingly. In this scenario, you will only need to configure a single endpoint in the Bluemix Cloud Integration service in order to work with all your services. One downside to this approach is that it will be harder to control access to specific on-premise services, because they will all be exposed to your Bluemix app as a single service.
Regarding optimizing response times: where are you seeing the bottleneck?
If establishing the TCP connections is causing too much overhead, you should be able to configure your Node.js app to reuse persistent connections via keep-alive settings, or you can look into setting up a connection pool that manages that for you (e.g. https://www.npmjs.com/package/generic-pool seems a popular choice).
On the DataPower side, make sure the front/back persistent timeout is set according to your requirements: http://www-01.ibm.com/support/knowledgecenter/SS9H2Y_7.2.0/com.ibm.dp.doc/mpgw_availableproperties_serviceview.html?lang=en
Other timeout values in DataPower can be found at http://www-01.ibm.com/support/docview.wss?uid=swg21469404
I have two flows, both of which have gateways. One flow has an outbound gateway to a web service. The second flow should connect to the first one. The problem is that I don't know how to do it correctly.
Once your provider is up and running, on the consumer side what you need is a web service inbound gateway.
Bare minimum example (assuming you have imported ws namespace):
<ws:inbound-gateway id="myInbound" request-channel="myRequestChannel"/>
You will also need to configure your web.xml file so that incoming SI-specific requests are handled by the MessageDispatcherServlet.
For application-layer abstraction, I would also add a generic gateway listening on the same channel.
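Putting the pieces together, a minimal configuration might look like the following. The servlet name, URL pattern, config location, and the gateway's service interface are illustrative; only the MessageDispatcherServlet class and the `ws:inbound-gateway`/`int:gateway` elements come from Spring WS / Spring Integration.

```xml
<!-- web.xml: route incoming SOAP requests to the Spring WS dispatcher -->
<servlet>
    <servlet-name>spring-ws</servlet-name>
    <servlet-class>org.springframework.ws.transport.http.MessageDispatcherServlet</servlet-class>
    <init-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>classpath:si-config.xml</param-value>
    </init-param>
</servlet>
<servlet-mapping>
    <servlet-name>spring-ws</servlet-name>
    <url-pattern>/ws/*</url-pattern>
</servlet-mapping>
```

```xml
<!-- si-config.xml: the inbound gateway from above plus a generic gateway
     on the same channel for application-layer calls -->
<ws:inbound-gateway id="myInbound" request-channel="myRequestChannel"/>

<int:gateway id="appGateway"
             service-interface="com.example.MyGateway"
             default-request-channel="myRequestChannel"/>
```

With this in place, the second flow can reach the first either over SOAP (through the inbound gateway) or in-process through the generic gateway's service interface.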