It's my understanding that if I connect to a Windows Azure web role over HTTPS, there is an initial handshake to exchange certificates and then another connection is made to get data.
Can someone explain whether the connection is persisted, or whether, if the user requests another page a few minutes later, there would be another exchange of handshakes? And would it be the same if the web role were serving data via Web API?
It depends on the client's capabilities, but for modern web browsers I wouldn't be too worried about a full connection (handshake) per request:
HTTP 1.1 - Persistent connection
Modern browsers use HTTP 1.1 by default, which according to RFC 2616 makes connections persistent by default. HTTP 1.1 also defines pipelining, which lets a client send multiple requests to the same endpoint on the same connection without waiting for each response, although most browsers keep pipelining disabled. Browsers also limit the number of parallel connections per server (RFC 2616 suggests two; modern browsers typically allow around six) and reuse those connections.
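To illustrate the same effect from a non-browser client, here is a minimal C# sketch (the URL is a placeholder): as long as you reuse the same HttpClient instance, only the first request pays the handshake cost.

```csharp
using System;
using System.Net.Http;

class PersistentConnectionDemo
{
    // One shared HttpClient: with HTTP 1.1 keep-alive, the underlying
    // TCP (and TLS) connection is pooled and reused across requests.
    static readonly HttpClient Client = new HttpClient();

    static void Main()
    {
        // First request pays for the TCP connect + TLS handshake.
        var first = Client.GetAsync("https://myapp.cloudapp.net/api/values").Result;
        Console.WriteLine(first.StatusCode);

        // Second request reuses the pooled connection - no new handshake,
        // as long as the server hasn't dropped it for being idle.
        var second = Client.GetAsync("https://myapp.cloudapp.net/api/values").Result;
        Console.WriteLine(second.StatusCode);
    }
}
```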
Azure: It looks like Azure will drop a connection that has been idle for 4 minutes.
Handshake
Every first connection requires a full handshake, but subsequent connections can reuse a session ticket (or session ID), depending on the client. Microsoft introduced TLS session resumption some time ago - see What's New in TLS/SSL (Schannel SSP) in Windows Server and Windows. As long as you have only one host serving HTTPS connections it should resume sessions; according to this blog post:
There’s also a warning about session resumption. This is due to the
Azure load balancer and non-sticky sessions. If you run a single
instance in your cloud service, session resumption will turn green
since all connections will hit the same instance.
It should not make any difference whether it's Web API or a website. You can always test it using SSLyze.
Related
We have a WebApi in Azure that sends requests to a VM cluster that is load balanced via an Azure Cloud Service. We see occasional timeouts where requests are working, then one times out for no reason. Reissuing the request immediately succeeds.
In Fiddler I see:
[Fiddler] The connection to '[myApp].cloudapp.net' failed. Error: TimedOut (0x274c). System.Net.Sockets.SocketException A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 40.122.42.33:9200
I can't find any telemetry in the portal that shows any kind of error, and all is fine when the request is issued from my API. Also, I don't see anything in the Event Logs on my VMs.
I am thinking it might have something to do with TCP port closure, but I am unfamiliar with this. My requests are specifying 'Connection: keep-alive', so I assume that subsequent requests to the same protocol/domain would attempt to use the same connection. It usually works, however.
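If it is idle-connection closure by the load balancer, I assume something like the following (an untested sketch, reusing the endpoint from my Fiddler trace) would keep the socket alive between requests:

```csharp
using System;
using System.Net;

class KeepAliveConfig
{
    static void Main()
    {
        // Send TCP keep-alive probes on idle connections to this endpoint,
        // so an idle-timeout (Azure's load balancer uses ~4 minutes)
        // doesn't silently drop them.
        var sp = ServicePointManager.FindServicePoint(
            new Uri("http://myApp.cloudapp.net:9200/"));
        sp.SetTcpKeepAlive(enabled: true,
                           keepAliveTime: 30000,      // first probe after 30s idle
                           keepAliveInterval: 5000);  // then every 5s
    }
}
```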
Is there any kind of throttling on the number of active connections that can come into my Cloud Service? It is possible that these timeouts happen during peak load (though we don't have enough consistent traffic to verify this).
thanks!
I am using a RabbitMQ broker in one of the mobile apps we are developing, and I am a bit puzzled about the security aspects. We are using cloud-hosted RabbitMQ, and the hosting platform gave us a username and password (which have been changed since). We are using an SSL connection, so I'm not too worried about man-in-the-middle attacks or eavesdropping.
My concern is that anybody who knows the host and port can make a connection to RabbitMQ. Since we have a mobile app, we are storing the RabbitMQ username and password on the device (although encrypted), so I guess anybody who gets physical access to the device and somehow decrypts the username and password can log in to RabbitMQ, and once you are logged in you can pretty much do anything, like deleting queues, etc.
How are message queues like RabbitMQ used in mobile environments? Is there a better / more secure way of using RabbitMQ?
In my experience, it is best not to have your mobile app connect to RabbitMQ directly. Use a web server in between the app and RabbitMQ: have your mobile app connect to your web server via HTTP-based API calls. The web server will connect to RabbitMQ, and you won't have to worry about the mobile app carrying the connection information.
There are several advantages of this, on top of the security problem:
better management of RabbitMQ connections
easier to scale number of mobile users
ability to add more logic and processing to the back-end, as needed, without changing the mobile app
Creating a connection to RabbitMQ is an expensive operation: it requires a TCP/IP connection, and once that connection is open it stays open until you close it. If you open a connection from your mobile app and leave it open, you are reducing the number of connections available to RabbitMQ. If you open and close the connection quickly, you are incurring a lot of extra cost in constantly creating and tearing down connections.
With a web server in the middle, you can open a single connection and have it serve multiple mobile devices. The web server handles the HTTP requests and uses the one connection to RabbitMQ to push messages to it.
Since an HTTP web request is a short-lived connection, you'll be able to handle more users in a given period of time than you would with direct RabbitMQ connections.
This ultimately leads to better scalability: you can add another web server to handle thousands more mobile app instances, while only adding one new RabbitMQ connection.
This also lets you add middle-tier logic inside the web server. You can add additional layers of processing as needed, without changing the mobile app; just change the web server code and redeploy.
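For illustration, here's a minimal sketch of that middle tier in C# using the RabbitMQ.Client library (host, credentials, and queue names are placeholders): one long-lived connection owned by the web server, with cheap channels created on top of it to publish on behalf of HTTP callers.

```csharp
using System;
using System.Text;
using RabbitMQ.Client;

public class MessagePublisher : IDisposable
{
    private readonly IConnection _connection;

    public MessagePublisher()
    {
        // One TCP connection, created once when the web server starts
        // and shared by all incoming HTTP requests.
        var factory = new ConnectionFactory
        {
            HostName = "rabbitmq.internal.example.com", // placeholder
            UserName = "webserver",                     // placeholder
            Password = "secret"                         // placeholder
        };
        _connection = factory.CreateConnection();
    }

    // Called from an HTTP API action; the mobile app never sees
    // the RabbitMQ credentials or endpoint.
    public void Publish(string queue, string message)
    {
        // Channels are cheap compared to connections; create one per
        // operation (or pool them) on top of the shared connection.
        using (var channel = _connection.CreateModel())
        {
            channel.QueueDeclare(queue, durable: true, exclusive: false,
                                 autoDelete: false, arguments: null);
            channel.BasicPublish(exchange: "", routingKey: queue,
                                 basicProperties: null,
                                 body: Encoding.UTF8.GetBytes(message));
        }
    }

    public void Dispose() => _connection.Dispose();
}
```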
If you must do this without a server in the middle, you likely won't be able to get around the security issue: the mobile device will have to contain the information necessary to make the connection.
Hi, we have a UI component deployed to Bluemix on Node.js which makes REST service calls (JSON/XML) to services deployed in a data center. These calls go through the IBM DataPower gateway as a security proxy.
DataPower establishes an HTTPS mutual authentication connection (using certs that are exchanged offline) to the caller.
Although this method is secure, it is time-consuming to set up, and if this connection is set up for each service request it will create a slow response for the end user.
To optimize response time we are looking for any solution that can pool connections between the Node.js app deployed on Bluemix and the DataPower security proxy. Does anyone have experience in this area?
In regards to "it is time-consuming to set up", in datapower you can create a multi-protocol gateway (MPGW) in front of your services to act as router. The MPGW will match services calls based on their URI and route them accordingly. In this scenario, you will only need to configure a single endpoint in the Bluemix Cloud Integration service in order to work with all your services. One downside to this approach is that it will be harder to control access to specific on-premise services because they will all be exposed to your Bluemix app as a single service.
In regards to optimizing response times, where are you seeing the bottleneck?
If the establishment of the TCP connections is causing too much overhead, you should be able to configure your Node.js app to use or re-use persistent connections via keepalive settings, or you can look into setting up a connection pool that manages that for you (e.g. https://www.npmjs.com/package/generic-pool seems to be a popular choice).
On the DataPower side, make sure the front/back persistent timeout is set according to your requirements: http://www-01.ibm.com/support/knowledgecenter/SS9H2Y_7.2.0/com.ibm.dp.doc/mpgw_availableproperties_serviceview.html?lang=en
Other timeout values in DataPower can be found at http://www-01.ibm.com/support/docview.wss?uid=swg21469404
I'm currently working on a Windows Azure application using WebAPI and SignalR for communication. Both services are hosted via OWIN on a Worker role with multiple instances.
Current solution
Currently we start one OWIN host with WebAPI on port 443 and one SignalR OWIN host on the instance input endpoint port (e.g. 10106-1010x) on every machine.
Everything works fine, but some of our customers sit behind a firewall where all ports except 80/443 are blocked -> so no WebSocket communication there (WebAPI works fine).
New solution
We are starting one OWIN host with WebAPI and SignalR on every instance, so both HTTP and WebSocket traffic will be routed through the load balancer over port 443 -> no more instance input endpoints (and no more firewall problems).
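For reference, our combined OWIN startup looks roughly like this (simplified sketch; the route template is illustrative):

```csharp
using Microsoft.Owin.Hosting;
using Owin;
using System.Web.Http;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // SignalR and WebAPI share the same OWIN pipeline, so both
        // HTTP and WebSocket traffic go through the one endpoint.
        app.MapSignalR();

        var config = new HttpConfiguration();
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
        app.UseWebApi(config);
    }
}

// In the worker role's Run/OnStart:
// WebApp.Start<Startup>("https://+:443/");
```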
The problem
The problem now is that sometimes the WebSocket connection can be established and sometimes not (browser independent). If the connection can't be established the following error appears in the console:
Error during WebSocket handshake: Unexpected response code: 400
No transport could be initialized successfully. Try specifying a different transport or none at all for auto initialization.
I've already added the role instance ID to the WebSocket response messages from the server, but couldn't find any (ir)regularities (e.g. a single instance not responding, ...). All SignalR servers seem to be up and running, but sometimes the connection can't be established.
You can test it yourself by going to the following link. If you don't get an error dialog ("Connection to server lost") it is working, otherwise try to refresh the page several times.
-
I'm not looking for a scaleout feature for SignalR (as described here or here). The client just connects to one (random) server (worker role instance) and communicates with that server until a close message is sent. If it connects again, it can be routed to any other server. Also, there is no communication between the servers.
Update/Solution
halter73 was right: each instance generates its own anti-CSRF token. To avoid this I implemented my own IDataProtector/IDataProtectionProvider, similar to these two SO questions (see here and here).
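For anyone hitting the same problem, here is a minimal sketch of what such a provider can look like (assuming a symmetric key distributed to all instances; this is simplified - it ignores the purposes and does not authenticate the ciphertext, which a production implementation should do, e.g. with an HMAC):

```csharp
using System;
using System.Security.Cryptography;
using Microsoft.Owin.Security.DataProtection;

public class SharedKeyProtectionProvider : IDataProtectionProvider
{
    private readonly byte[] _key; // same 256-bit key on every instance

    public SharedKeyProtectionProvider(byte[] key) { _key = key; }

    // Simplification: the purposes are ignored here.
    public IDataProtector Create(params string[] purposes)
        => new SharedKeyProtector(_key);

    private class SharedKeyProtector : IDataProtector
    {
        private readonly byte[] _key;
        public SharedKeyProtector(byte[] key) { _key = key; }

        public byte[] Protect(byte[] userData)
        {
            using (var aes = Aes.Create())
            {
                aes.Key = _key; // a random IV is generated automatically
                using (var encryptor = aes.CreateEncryptor())
                {
                    var cipher = encryptor.TransformFinalBlock(userData, 0, userData.Length);
                    // Prepend the IV so Unprotect can recover it.
                    var result = new byte[aes.IV.Length + cipher.Length];
                    Buffer.BlockCopy(aes.IV, 0, result, 0, aes.IV.Length);
                    Buffer.BlockCopy(cipher, 0, result, aes.IV.Length, cipher.Length);
                    return result;
                }
            }
        }

        public byte[] Unprotect(byte[] protectedData)
        {
            using (var aes = Aes.Create())
            {
                aes.Key = _key;
                var iv = new byte[aes.BlockSize / 8];
                Buffer.BlockCopy(protectedData, 0, iv, 0, iv.Length);
                aes.IV = iv;
                using (var decryptor = aes.CreateDecryptor())
                    return decryptor.TransformFinalBlock(
                        protectedData, iv.Length, protectedData.Length - iv.Length);
            }
        }
    }
}

// Registered in Startup.Configuration, before MapSignalR:
// app.SetDataProtectionProvider(new SharedKeyProtectionProvider(sharedKey));
```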
If you can look at content of the 400 response (this may be difficult since it is an SSL encrypted response to a WebSocket request), you will probably see a message similar to "The ConnectionId is in the incorrect format."
SignalR uses the server's machine key to create an anti-CSRF token, but this requires that all the servers in your farm share a machine key for the token to be properly decrypted when SignalR requests hop between servers. /negotiate is the request that retrieves the anti-CSRF token. When the SignalR client then uses the anti-CSRF token to make a /connect request, it sometimes fails when the /connect request is processed by a different server, which didn't create the token and is therefore unable to decrypt it.
Here is an issue filed on GitHub by someone who experienced a similar problem: https://github.com/SignalR/SignalR/issues/2292.
We are running a self-hosted AppService with ServiceStack 3.x.
We would like to have an automatic failover mechanism on the clients if the current service running as master fails.
Clients at the moment are strongly typed C# clients using the default ServiceStack JSON client, but we will add web-based clients (AngularJS) in the future.
Does anybody have an idea, how that could be done?
Server side redundancy & failover:
That's a very broad question. A self-hosted ServiceStack application is no different from any other web-facing resource, so you can treat it like a website.
Website Uptime Monitoring Services:
You can monitor it with regular website monitoring tools. These could be as simple as an uptime monitoring service that pings your web service at regular intervals to determine whether it is up and, if not, takes an action, such as triggering a restart of your server or simply sending you an email to say it's not working.
Cloud Service Providers:
If you are using a cloud provider such as Amazon EC2, they provide CloudWatch services that can be configured to monitor the health of your host machine and the Service. In the event of failure, it could restart your instance, or spin up another instance. Other providers provide similar tools.
DNS Failover:
You can also consider DNS failover. Many DNS providers can monitor service uptime, and in the event of a failure their service will change the DNS record to point to another standby service, so the failover is transparent to the client.
Load Balancers:
Another option is to put your service behind a load balancer and have multiple instances running your service. The likelihood of all the nodes behind the load balancer failing is usually low, unless there is something catastrophically wrong with your service design.
Watchdog Applications:
As you are using a self-hosted application, you may consider writing another application on your system that simply checks that your service application host is running and, if not, restarts it. This will handle cases where an exception has caused your app to terminate unexpectedly - of course this is not a long-term solution; you will need to fix the exception.
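A bare-bones example of such a watchdog (a sketch; the process name and executable path are placeholders):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class Watchdog
{
    static void Main()
    {
        while (true)
        {
            // If the self-hosted service process is gone, start it again.
            if (Process.GetProcessesByName("MyServiceHost").Length == 0)
            {
                Console.WriteLine("{0}: service down, restarting", DateTime.Now);
                Process.Start(@"C:\Services\MyServiceHost.exe");
            }
            Thread.Sleep(TimeSpan.FromSeconds(10));
        }
    }
}
```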
High Availability Proxies (HAProxy, NGINX etc):
If you run your ServiceStack application using Mono on a Linux platform, there are many high-availability solutions, including HAProxy and NGINX. If you run on Windows Server, it provides its own failover mechanisms.
Considerations:
The right solution will depend on your environment, your project budget, and how quickly you need the failover to happen. The ultimate consideration should be: where will the service fail over to?
Will you have another server running your service, simply on standby - just in case?
Will you use the cloud to start up another instance on demand?
Will you try and recover the existing application server?
Resources:
There are lots of articles out there about failover of websites; as your web service uses HTTP like a website, they also apply here. You should research High Availability.
Amazon AWS has a lot of solutions to help with failover. Their Route 53 service is very good in this area, as are their load balancers.
Client side failover:
Client side failover is rarely practical. In your clients you can ultimately only ever test for connectivity.
Connectivity Checking:
When connectivity to your service fails you'll get an exception. Upon getting the exception, the only solution would be to change the target service URL, and retry the request. But there are a number of problems with this:
It can be as expensive as server side failover, as you have to keep the failover service(s) online all the time for the just-in-case moments. Some server side solutions would allow you to start up a failover service on demand, thus reducing cost significantly.
All clients must be aware of the URL(s) to fail over to. If you managed the failover at the DNS level, i.e. server side, then clients wouldn't have to worry about this complexity.
Your client can only see connectivity failures; there may not be an issue with the server, it may be the client's own connectivity. Imagine the client's WiFi goes down for a few seconds while your request to the primary service server is in flight. During that time the client gets a connectivity exception and you send the request to the failover secondary service server, at which point the WiFi comes back online. Now you have clients using both the primary and secondary services, so their network connectivity issues become your data consistency problems.
If you are planning web-based clients, then you will have to set up CORS support on the server (see the sketch after this list), and all clients will require compatible browsers so they can change the target service URL. CORS requests have the disadvantage of more overhead than regular requests, because the client also has to send preflight OPTIONS requests.
Connectivity error detection in clients is rarely fast. Sometimes it can take in excess of 30 seconds before a client times out a request as having failed.
If your service API is public, then you rely on the end user implementing the failover mechanism. You can't guarantee they will do so, or that they will do so correctly, or that they won't take advantage of knowing your other service URLs and send requests there instead. Besides, it looks very unprofessional.
You can't guarantee that the failover will work when needed. It's difficult to guarantee that for any system; even big companies have issues with failover. Server-side failover solutions sometimes fail to work properly, but this is even more true for client-side solutions, because you can't test the failover ahead of time under all the different client-side environmental factors. Just because your implementation of failover in the client worked in your deployment, will it work in all deployments? The point of a failover solution, after all, is to minimise risk. The risk of server-side failover not working is far less than client-side, because it's a smaller, controllable environment which you can test.
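Regarding the CORS point above: in ServiceStack 3.x this is, if I remember the v3 plugin correctly, a one-line registration in your AppHost (the origin shown is a placeholder):

```csharp
// In your AppHost.Configure():
Plugins.Add(new CorsFeature(
    allowedOrigins: "https://client.example.com", // placeholder
    allowedMethods: "GET, POST, PUT, DELETE, OPTIONS",
    allowedHeaders: "Content-Type",
    allowCredentials: false));
```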
Summary:
So while my considerations may not be favourable to client-side failover, if you were going to do it, it's a case of catching connectivity exceptions and deciding how to handle them. You may want to wait a few seconds and retry your request to the primary server before swapping to the secondary, just in case it was an intermittent error.
So:
Catch the connectivity exception
Retry the request (maybe after a small delay)
If it's still failing, change the target host and retry
If that fails, it's probably a client connectivity issue.
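Put together as a rough sketch using the ServiceStack 3.x C# client (host URLs, retry counts, and delays are illustrative):

```csharp
using System;
using System.Net;
using System.Threading;
using ServiceStack.ServiceClient.Web;

public static class FailoverClient
{
    static readonly string[] Hosts =
    {
        "http://primary.example.com",   // placeholder
        "http://secondary.example.com"  // placeholder
    };

    public static TResponse Get<TResponse>(string relativeUrl)
    {
        foreach (var host in Hosts)
        {
            // A couple of attempts per host, in case the failure
            // was just an intermittent network error.
            for (var attempt = 0; attempt < 2; attempt++)
            {
                try
                {
                    var client = new JsonServiceClient(host);
                    return client.Get<TResponse>(relativeUrl);
                }
                catch (WebException) // connectivity failure
                {
                    Thread.Sleep(TimeSpan.FromSeconds(2));
                }
            }
        }
        // Both hosts unreachable: most likely the client's own connectivity.
        throw new WebException("Service unreachable on all known hosts.");
    }
}
```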