Non-blocking service to receive messages on a port via UDP - Linux

I want to build a service on my Linux VPS which listens on a certain UDP port and does something with the (text) message it captures. The processing consists of appending the message to a locally stored text file and sending it on over HTTP, as a POST variable, to another server.
I've looked into Nginx, but as far as I can see that server can only be bound to receive HTTP traffic, even though it is asynchronous.
What is the best way to achieve this listening service on Linux? And what has the capabilities to do the above-mentioned processing?
Is, for instance, node.js a possibility? It looks great.

For simplicity, you can use xinetd, and for the app you can use any scripting language that reads the packet from stdin and saves it to the file.
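If you do go the Node.js route you mention, the built-in dgram module covers exactly this in a few lines. A minimal sketch, assuming placeholder values for the port, the file name, and the target server:

    // Listen on a UDP port, append each message to a local file, and
    // forward it as a POST variable to another server.
    const dgram = require('dgram');
    const fs = require('fs');
    const http = require('http');
    const querystring = require('querystring');

    const socket = dgram.createSocket('udp4');

    socket.on('message', (msg) => {
      const text = msg.toString('utf8');

      // Append the message to the locally stored text file.
      fs.appendFile('messages.txt', text + '\n', (err) => {
        if (err) console.error('write failed:', err);
      });

      // Send it on as a POST variable.
      const body = querystring.stringify({ message: text });
      const req = http.request({
        host: 'other-server.example.com', // placeholder target
        path: '/receive',                 // placeholder path
        method: 'POST',
        headers: {
          'Content-Type': 'application/x-www-form-urlencoded',
          'Content-Length': Buffer.byteLength(body),
        },
      });
      req.on('error', (err) => console.error('forward failed:', err));
      req.end(body);
    });

    socket.bind(5000); // placeholder port

The event loop keeps the process non-blocking throughout: the socket, the file append, and the HTTP request are all asynchronous.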

Related

Communication between REST and UDP server

I have a REST server to handle communication between my database server and Android/iOS devices; the REST server is also able to send push messages via Firebase. My second server is a UDP server that receives and sends messages to an IoT device. Both servers are written in Node.js and run on different EC2 instances.
When my UDP server receives a message from the IoT device, let's say some GPS data: is there a good way to call some methods on my REST server from the UDP server? Or to send the data to it? Are there any ways the two servers can communicate with each other?
You could implement a separate API on your REST server that is called from your UDP server.
Interprocess communication is a wide topic and there are plenty of ways to do it; it all depends on your needs. For example (the first option is sketched after this list):
via HTTP
via TCP/IP or UDP
via a database (or even a file)
using named sockets (on Unix/Linux)
using a pub-sub library
using a message queue library
by piping standard input/output
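For illustration, a minimal sketch of the first option (via HTTP): the UDP server forwards whatever the IoT device sent to an internal endpoint on the REST server. The hostname, port, route, and payload shape here are all assumptions:

    const dgram = require('dgram');
    const http = require('http');

    const udpServer = dgram.createSocket('udp4');

    udpServer.on('message', (msg) => {
      // Pass the raw GPS payload on to the REST server.
      const body = JSON.stringify({ gps: msg.toString('utf8') });
      const req = http.request({
        host: 'rest-server.internal', // hypothetical internal hostname
        port: 3000,                   // hypothetical port
        path: '/internal/gps',        // hypothetical internal route
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Content-Length': Buffer.byteLength(body),
        },
      });
      req.on('error', console.error);
      req.end(body);
    });

    udpServer.bind(41234);

The REST server would expose that internal route somewhere not reachable from the public internet (for example, restricted by an EC2 security group) and react to the data there.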

Run shell script on client remotely from server

I have one server and multiple clients. The server wants to run a shell script on whichever device it chooses. That's surely not possible with a simple socket, because we may have thousands of devices, and the server and devices would have to stay connected via sockets the whole time. After a lot of searching I found that the solution might be NAT-T, but I still don't know how to use it, or whether there is another solution.
Please advise what I should do on the clients and the server.
If you don't know the clients' addresses and ports upfront, you need the clients to connect to the server. Thousands of devices are no problem; you run into a socket limit at around 65,000 open ports (check ulimit). Build an object stream between client and server and execute the script based on the object the client receives (a minimal sketch follows the links below). You could also set an interval on the clients and let them check, via simple HTTP(S) every n seconds, whether there is something for them to do.
See for example here: Node Stream Docs
Or here: Node HTTP Docs
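A minimal sketch of that client side, assuming newline-delimited JSON as the object stream and a hypothetical server address:

    const net = require('net');
    const { execFile } = require('child_process');

    const client = net.connect({ host: 'server.example.com', port: 9000 });

    let buffer = '';
    client.on('data', (chunk) => {
      buffer += chunk.toString('utf8');
      let idx;
      // Handle every complete newline-terminated JSON object.
      while ((idx = buffer.indexOf('\n')) !== -1) {
        const line = buffer.slice(0, idx);
        buffer = buffer.slice(idx + 1);
        const msg = JSON.parse(line);
        if (msg.type === 'run') {
          // Run the requested script and report the result back.
          execFile('/bin/sh', [msg.script], (err, stdout, stderr) => {
            client.write(JSON.stringify({ ok: !err, stdout, stderr }) + '\n');
          });
        }
      }
    });

Since the clients dial out to the server, no NAT traversal is needed; the server just writes an object like {"type":"run","script":"/opt/scripts/update.sh"} (a hypothetical path) down the socket of whichever device it wants.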

How to control source IP or port for UDP packets with Node.js

I'm working on an application that interfaces with embedded equipment via the SNMP protocol. To facilitate testing, I've written a simulator for the embedded equipment with Node.js and the snmpjs library. The simulator responds to SNMP gets/sets and sends traps to the managing application. The trap messages are constructed by the snmpjs library, but sent manually using Node's standard UDP sockets.
This works well when simulating a single piece of equipment, but I've run into an issue when attempting to simulate several at once. Specifically, the managing application identifies the source equipment of SNMP traps by analyzing the source IP/port of the UDP packet carrying the trap. This precludes simulating multiple pieces of equipment simultaneously, which is the most common use case for the application.
So, my question is: is there some way to control/spoof the source IP or port of the UDP packet with Node.js? Or, perhaps, would it be possible to use some kind of proxy to achieve the desired result?
(Note: Running the simulators on a single machine is a strict requirement. Also, it is not sufficient that I have unique IPs/ports for each simulator, I must be able to know their values ahead of time so that I can configure the managing application to interface with them correctly.)
The solution was simple. I overlooked this line from the node documentation for the send method of udp sockets, "If the socket has not been previously bound with a call to bind, it's assigned a random port number..." I just needed to bind the socket to a port first. I've verified this with a test script.
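A minimal sketch of that fix: each simulator binds its trap socket to its own known port before sending, so the managing application sees a predictable source port. The ports and addresses are placeholders, and the trap payload itself is elided:

    const dgram = require('dgram');

    const trapSocket = dgram.createSocket('udp4');

    // Without this bind, Node assigns a random source port on first send.
    trapSocket.bind(16200, () => {
      const trap = Buffer.from('...encoded SNMP trap...'); // placeholder
      // The manager now sees source port 16200 and can tell this
      // simulator apart from the others on the same machine.
      trapSocket.send(trap, 162, 'manager.example.com');
    });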

Does data passed across a unix domain socket cross the kernel boundary?

We're writing a proxy for a network server where instead of connecting directly over TCP, the client program will connect to a local unix domain socket to send its data, and the proxy application will then forward it over TCP.
My question is this: does the data the application sends over the unix domain socket cross the kernel boundary before the proxy receives it? The reason I ask is that if so, we could expect to see a benefit from using splice(2). If not, we wouldn't.
Of course data sent over a Unix domain socket goes via the kernel, but your question is founded on a misconception: you wouldn't see a benefit from introducing another copy step via splice.
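For reference, a minimal sketch of the proxy shape described in the question, with placeholder socket path and backend address; each pipe() here is the userspace copy loop being discussed:

    const net = require('net');

    const proxy = net.createServer((local) => {
      const remote = net.connect({ host: 'backend.example.com', port: 8080 });
      // pipe() reads the data from the kernel into userspace and writes
      // it back out again, in both directions.
      local.pipe(remote);
      remote.pipe(local);
    });

    // Listening on a path creates a unix domain socket server.
    proxy.listen('/tmp/proxy.sock');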

How to efficiently transfer a file between two Node.js instances?

I'm developing a chat application using app.js, which is a WebKit + Node.js framework.
So I have Node.js plus a bridged web-browser environment on both sides.
I want to make a file-transfer feature somewhat similar to Skype's.
So, the initial idea is to:
1. Connect the clients to the main server.
2. Each client gets the IP of the opposite one.
3. Start a socket or WebSocket server on both clients and connect them to each other.
4. The sender reads the file and transmits it to the receiver.
The questions are:
1. I'm not really sure that one client can "see" the other.
2. A file is binary data, but WebSockets are made for text messages, so I need some kind of encoding/decoding. I thought about Base64, but it carries roughly 33% overhead, so I need something more efficient (Base128?).
3. If it is not efficient to use WebSockets, should I use TCP sockets instead? What problems can appear if I decide to use them?
Yeah, I know about node2node and BinaryJS, I just don't know whether I should use them or not. And I really want to do something myself.
OK, with your communication looking like this:
(C->N)<->N<->(N->C)
(...) is installed on one client's machine. N's are node servers, C's are web clients.
This is out of your control. Some file-sharing apps send test packets from the central server to the clients to check whether ports are open, NAT rules are configured correctly, etc. Your clients will start their own servers on some port; your master server can potentially create a test connection to those servers to see whether they have started correctly and are open to the web, BEFORE telling other clients that they can send files.
WebSockets are great for status messages from your servers to the web GUIs and for general client-to-client communication. For the actual file transfers I would use TCP sockets; see the next point. On the other hand, Base64 encoding is really not a slow process: play with it, benchmark its performance, then decide with some data to back up your decision.
You could use a combination: WebSockets from your servers to the web GUIs, but TCP communication between the servers themselves, as sketched below. TCP servers (and streams) aren't hard to set up in Node, and I see no disadvantages. It might actually be less complicated than installing node2node on those servers, since TCP support is already built in.
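A minimal sketch of that server-to-server transfer: plain TCP with Node's built-in streams, no Base64 needed since TCP carries raw bytes. File names, host, and port are placeholders:

    const net = require('net');
    const fs = require('fs');

    // Receiving side: write every incoming byte straight to disk.
    net.createServer((socket) => {
      socket.pipe(fs.createWriteStream('received.bin'));
    }).listen(9100);

    // Sending side: stream the file over a TCP connection.
    const conn = net.connect({ host: 'peer.example.com', port: 9100 }, () => {
      fs.createReadStream('file-to-send.bin').pipe(conn);
    });

In a real transfer you would prefix some metadata (file name, size) before the raw bytes, but the streaming itself is just two pipe() calls.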
