We want to receive signals in a Windows Azure Cloud Service and would like some feedback on our strategy.
Our current project:
Physical GPS unit that runs as a client.
Windows Azure Cloud Service that runs as a server.
1) Physical GPS unit: We are using the XT-4000 [Xirgo Technologies], a powerful tracking, monitoring, and control gateway device. The device requires a UDP or TCP port to communicate.
2) Windows Azure Cloud Service: Here we need to open a TCP listener that will receive the incoming data pushed by the device [XT-4000].
Here's what we are thinking our strategy should be. All advice is appreciated.
Use a Worker Role.
Set up a TCP listener to receive incoming signals from the device.
[But the question is: what should the IP address and port number of the Windows Azure Cloud Service be, since the device needs both in order to send its data?]
The following command will be set on the device to push data.
Command for the device:
"+XT:1001,<Port no>,<IPaddress>,<1>"
1) You can use any port number you want (UDP or TCP). Just set it in the config file for the service (see the listener sketch after this list).
2) The IP address will be a little tricky. Once you deploy your service you will be assigned a VIP. This will not change SO LONG AS you do not delete the service. If you do, you will lose the VIP and the devices will stop working. It would be better if the device could accept a URL, which would eliminate the problem entirely. Regardless, once deployed you can still update the service, but you will need to use an in-place upgrade or a VIP swap rather than delete/redeploy.
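For illustration, here is a minimal sketch of that listener inside a Worker Role, assuming an instance input endpoint named "GpsIn" has been declared in ServiceDefinition.csdef (the endpoint name and the plain-text framing are assumptions; the XT-4000 payload format is not handled here):

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        public override void Run()
        {
            // Resolve the local IP/port Azure mapped to the declared endpoint.
            IPEndPoint local = RoleEnvironment.CurrentRoleInstance
                .InstanceEndpoints["GpsIn"].IPEndpoint;

            var listener = new TcpListener(local);
            listener.Start();

            while (true)
            {
                using (TcpClient device = listener.AcceptTcpClient())
                using (NetworkStream stream = device.GetStream())
                {
                    var buffer = new byte[4096];
                    int read;
                    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        // Parse the device payload here; logged as text for the sketch.
                        Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, read));
                    }
                }
            }
        }
    }

The device then gets the VIP and the public port from the endpoint configuration rather than anything hard-coded in the role.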
Pat
I want to test my data-plane application, and I want to find out whether there is a way to do it using containers.
After I bring up a container with my app in it, can I direct all of my machine's Internet-bound traffic to the container, process that traffic in my application, and send it back down to the host network namespace and out the physical interface (say, eth0)?
Example:
I access Facebook, and all traffic for this (DNS/UDP, HTTPS/TCP) should go to my container app on the same machine, get processed by my application, and then be sent out via eth0. Return traffic from the Internet comes back into my app first and is then sent to the host client (the browser here).
I have many IoT clients that will soon be in the field. I want some way to have full access to the Device Portal (currently on port 8080) without it being publicly exposed.
My thoughts are to develop a management server that accepts connections from multiple clients with keep-alives. The connection from the IoT device could use plain network sockets, but that is open for feedback.
The management server would show the connection status of each IoT device and would be able to launch a browser session with a connected device. The IoT device would serve the local Device Portal on :8080 through the socket to the management server's browser session, and interaction from that browser session would be transmitted back through the socket to the Device Portal.
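At its core, the tunnel described above is just a bidirectional byte copy between the browser-facing socket on the management server and the device's outbound socket. A minimal sketch of that relay (all names hypothetical; auth, keep-alive, and multiplexing would sit on top):

    using System.Net.Sockets;
    using System.Threading.Tasks;

    static class PortalTunnel
    {
        // Pump bytes in both directions until either side closes.
        public static async Task RelayAsync(NetworkStream browserSide,
                                            NetworkStream deviceSide)
        {
            Task toDevice = browserSide.CopyToAsync(deviceSide);
            Task toBrowser = deviceSide.CopyToAsync(browserSide);
            await Task.WhenAny(toDevice, toBrowser); // one side closing ends the session
        }
    }

On the device side, the same relay would run between the persistent socket to the management server and a local connection to 127.0.0.1:8080.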
I have looked over information for a few days and can't find examples of website interaction through sockets. I'd appreciate feedback on this approach, and I also ask whether there are any open-source projects that may assist in reaching this goal.
Thank you
Have a look at https://openport.io. It does exactly what you ask.
A socket is just a software representation of a TCP connection, so at least one port would still be required. If you are accessing all those devices on the same network, you can use a reverse proxy, or a VPN for external access into your network and those devices. Always use an SSL cert or an IPsec tunnel for the proxy or VPN connection. If you open up your firewall on 80 and/or 443 to your Apache web server, Apache can proxy to the backend 8080 port. Alternatively, OpenVPN could be used to give you access to the entire network by opening just port 1194 and setting up the configuration. If using OpenVPN, you would still reach the Device Portal on 8080 as usual, using your internal IP or hostname.
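To make the reverse-proxy idea concrete without Apache, here is a GET-only sketch in C# that forwards requests arriving on port 80 to the Device Portal on 8080 (a hypothetical stand-in for the Apache approach; a real deployment would add TLS, other HTTP verbs, and header passthrough):

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    class MiniProxy
    {
        static async Task Main()
        {
            var backend = new HttpClient();
            var listener = new HttpListener();
            listener.Prefixes.Add("http://+:80/");
            listener.Start();

            while (true)
            {
                HttpListenerContext ctx = await listener.GetContextAsync();
                // Rewrite the incoming path onto the internal Device Portal.
                string target = "http://localhost:8080" + ctx.Request.Url.PathAndQuery;
                byte[] body = await backend.GetByteArrayAsync(target); // GET only
                await ctx.Response.OutputStream.WriteAsync(body, 0, body.Length);
                ctx.Response.Close();
            }
        }
    }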
If all your devices are on the same network, the nice thing about using a VPN to get in is that you can connect to OpenVPN on your cell phone and then connect via SSH with an app like Termius on iOS, or any other SSH-capable iOS app, to your IoT device and get things done quickly: rebooting IoT devices, setting permissions, checking logs on the go.
Lastly, if you're planning to pay for Azure, you could use Azure IoT Hub, I guess ($$$):
https://azure.microsoft.com/en-us/pricing/details/iot-hub/
I have created a simple service for receiving UDP packets and am trying to deploy it in Service Fabric.
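For context, the receive loop of such a service is essentially the following (a simplified sketch, not the actual service code; the port number is assumed):

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    class UdpListenerSketch
    {
        static void Main()
        {
            // Bind the UDP port the Service Fabric endpoint exposes (assumed 9100).
            using (var udp = new UdpClient(9100))
            {
                var remote = new IPEndPoint(IPAddress.Any, 0);
                while (true)
                {
                    byte[] datagram = udp.Receive(ref remote);
                    Console.WriteLine($"{remote}: {Encoding.UTF8.GetString(datagram)}");
                }
            }
        }
    }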
When running locally, I am able to spam packets to the service (running in a local SF cluster), but when deployed to Azure the service, and even the VM, does not receive the UDP packets.
I even RDPed into the VM and installed Wireshark; my packets weren't present.
I did the same with a standard Windows DC VM, and was able to see the packets arrive.
Clearly there is an issue with the firewall that is configured when an SF cluster is created.
PS: I have followed the steps here https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-connect-and-communicate-with-services to ensure that the LB rule is set to UDP (as UDP is not an option at creation time).
EDIT: Note I also followed the advice in this question: Service Fabric Stateless Server Custom UDP Listener.
So it turns out that modifying an existing Load Balancer rule does not change the protocol on the firewall (speculation).
I created a NEW Load Balancer rule with the appropriate protocol/port, and traffic started flowing.
Modifying an existing Load Balancer rule to switch it to the appropriate protocol does not work; traffic stops at the firewall.
I have the following setup in Azure Resource Manager:
1 scale set with 2 virtual machines running Windows Server 2012.
1 Azure Redis cache (C1 Standard).
1 Azure load balancer (layer 4 in the OSI network reference stack).
The load balancer is basically configured following https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-get-started-internet-portal. Session persistence is set to None for the rules.
On both VMs in the scale set I have deployed a test web app that uses SignalR 2 on .NET 4.5.2.
The test web app uses the Azure Redis cache as a backplane (typical wiring sketched below).
The web app project can be found here on GitHub: https://github.com/gaclaudiu/SignalrChat-master.
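For reference, wiring a SignalR 2 Redis backplane typically looks like this in the OWIN startup class (connection details are placeholders; the linked project may differ):

    using Microsoft.AspNet.SignalR;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Route SignalR messages through the shared Redis cache so both
            // scale-set instances see every message.
            GlobalHost.DependencyResolver.UseRedis(
                "yourcache.redis.cache.windows.net", // host (placeholder)
                6379,                                // port (placeholder)
                "your-access-key",                   // key (placeholder)
                "SignalrChat");                      // backplane event key
            app.MapSignalR();
        }
    }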
During the tests I noticed that after a SignalR connection is opened, all the data sent from the client in subsequent requests arrives at the same server in the scale set; it seems the SignalR connection knows which server in the scale set to go to.
I am curious to know more about how this works; I tried to do some research on the internet but couldn't find anything clear on this point.
I am also curious what happens in the following case:
Client 1 has an open SignalR connection to Server A.
The next request from client 1 through SignalR goes to Server B.
Will this cause an error?
Or will the client just be notified that no connection is open, and will it try to open a new one?
Well, I am surprised that it is working at all. The problem is: SignalR performs multiple requests until the connection is up and running, and there is no guarantee that all requests go to the same VM, especially if there is no session persistence enabled. I had a similar problem. You can activate session persistence in the Load Balancer, but as you pointed out, acting on OSI layer 4 it will do this using the client IP (imagine everyone in the same office hitting your API from the same IP). In our project we use Azure Application Gateway, which works with cookie affinity at the OSI application layer. So far it seems to work as expected.
I think you misunderstand how the load balancer works. Every TCP connection must send all of its packets to the same destination VM and port. A TCP connection would not work if, after sending several packets, it then suddenly has the rest of the packets sent to another VM and/or port. So the load balancer makes a decision on the destination for a TCP connection once, and only once, when that TCP connection is being established. Once the TCP connection is setup, all its packets are sent to the same destination IP/port for the duration of the connection. I think you are assuming that different packets from the same TCP connection can end up at a different VM and that is definitely not the case.
So when your client creates a WebSocket connection, the following happens. An incoming request for a TCP connection is received by the load balancer. It decides, based on the distribution mode, which destination VM to send the request to, and records this decision internally. Any subsequent incoming packets for that TCP connection are automatically sent to the same VM, because the load balancer looks up the appropriate VM in that internal table. Hence, all the client messages on your WebSocket will end up at the same VM and port.
If you create a new WebSocket it could end up at a different VM, but all the messages from the client will end up at that same different VM.
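To make the "decided once per connection" point concrete, a hash-based distribution mode behaves conceptually like this (illustrative only, not Azure's actual algorithm):

    using System;

    static class FiveTupleHash
    {
        // The backend is derived from the connection's 5-tuple, so every
        // packet of a given TCP connection maps to the same VM.
        public static int PickBackend(string srcIp, int srcPort,
                                      string dstIp, int dstPort,
                                      string protocol, int backendCount)
        {
            int hash = (srcIp, srcPort, dstIp, dstPort, protocol).GetHashCode();
            return (hash & int.MaxValue) % backendCount; // stable for the connection's lifetime
        }
    }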
Hope that helps.
On your Azure Load Balancer you'll want to configure session persistence. This will ensure that when a client request gets directed to Server A, any subsequent requests from that client also go to Server A.
Session persistence specifies that traffic from a client should be handled by the same virtual machine in the backend pool for the duration of a session. "None" specifies that successive requests from the same client may be handled by any virtual machine. "Client IP" specifies that successive requests from the same client IP address will be handled by the same virtual machine. "Client IP and protocol" specifies that successive requests from the same client IP address and protocol combination will be handled by the same virtual machine.
SignalR only knows the URL you provided when starting the connection, and it uses it to send requests to the server. I believe Azure App Service uses sticky sessions by default. You can find some details about this here: https://azure.microsoft.com/en-us/blog/azure-load-balancer-new-distribution-mode/
When using multiple servers and scaling out, the client can send messages to any server.
Thank you for your answers, guys.
Doing a bit of reading, it seems that the Azure load balancer uses 5-tuple distribution by default.
Here is the article: https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-distribution-mode
The problem with 5-tuple is that it is sticky per transport session.
And I think this is what causes client requests using SignalR to hit the same VM in the scale set: the balancer interprets the SignalR connection as a single transport session.
Application Gateway wasn't an option from the beginning because it has many features we do not need (so it doesn't make sense to pay for something we don't use).
But now it seems that Application Gateway is the only balancer in Azure capable of doing round robin when balancing traffic.
I am new to Azure and trying to set up our company's testing environment in Azure.
As I understand it, for two machines to talk to each other in Azure they need to be in the same cloud service, i.e. our web server and DB server.
So I have created a service, then created each of the VMs in that service. They are both running. In the endpoints I can see:
web server:
NAME PROTOCOL PUBLIC PORT PRIVATE PORT LOAD-BALANCED SET NAME
HTTP TCP 80 80 -
HTTPS TCP 443 443 -
PowerShell TCP 5986 5986 -
Remote Desktop TCP 50232 3389 -
db server:
NAME PROTOCOL PUBLIC PORT PRIVATE PORT LOAD-BALANCED SET NAME
MSSQL TCP 1433 1433 -
PowerShell TCP 54327 5986 -
Remote Desktop TCP 52459 3389 -
In the cloud service, the input endpoints area shows:
INPUT ENDPOINTS
protoApp : 123.456.789.227:80
protoApp : 123.456.789.227:443
protoApp : 123.456.789.227:5986
protoApp : 123.456.789.227:50232
protodb : 123.456.789.227:1433
protodb : 123.456.789.227:54327
protodb : 123.456.789.227:52459
I can connect to the protodb server but not the protoapp server (on the given ports).
There are two or three questions really.
Should they both be in the same cloud service?
Should the live DB and web server be in a separate cloud service (I have not created them yet)?
Can anyone think of a reason why I can no longer MSTSC/RDP to one of the machines, even though the endpoints say it's all fine, the machine is running, and the cloud service lists it as an endpoint?
No reason why not, though you should look at creating a Virtual Network to connect them.
You should consider separating them if:
Performance dictates it.
You want extra security: consider that if somebody hacks the web server, they then immediately have access to the same server that hosts the data. Really, you should restrict the incoming IPs for MSSQL to something trusted anyway, or to the same subnet if you use a Virtual Network.
Cost is not an issue.
I've sometimes had trouble using mstsc to connect directly to Azure VMs via RDP. If you go to http://manage.windowsazure.com and navigate to your VM, there will be a "Connect" option at the bottom. This will download an .rdp file, which might help.
Something else worth noting: if you're using Azure VMs, you won't qualify for Microsoft's uptime SLA unless you have two or more VMs per cloud service configured as part of an Availability Set. So straight away you should consider that the number of VMs you're planning will double if you want a production/highly available environment, and you should consider the impact this will have on your application architecture too.