I need to connect an Arduino Uno R3 with a Tinysine GSM Shield (SIM900 module) to a Node.js socket server using TCP/IP sockets, so the embedded system is a TCP client. I need the Arduino to send a message and receive the answer, using the received data to blink a different color LED. The TCP socket is working: I can send the message and process it in my socket server, but I can't receive the answer (socket.write) on the embedded side (actually, I receive some fuzzy, varying characters).
My socket server works fine; using Hercules as a TCP client, I could complete the whole process.
I am using the SIM900 and InetGSM libraries with AT commands for the TCP connection (for now I don't want to use the AT HTTP commands, because that would mean changing my system).
How can I receive a legible message from my socket server? Thanks.
I solved it!
When I send the AT+CIPSEND command, what I receive is a sequence of characters containing both the AT response for the command and the data sent from my server.
So what I need to do is read the response characters one by one and pick the answer out of the right position, as in the example below.
For example:
// I'm waiting for the char '1'
// After sending the message with AT commands...
char answer;
for (int i = 0; i < 15; i++) { // 15 is an arbitrary limit that worked for me; I don't know why
    answer = (char)gsm.read();
    if (answer == '1') {
        Serial.println("I found the answer!");
    }
}
I am developing a Windows desktop console application in C++ in Visual Studio 2013, which acts as a client and tries to connect to a server. Once the connection with the server is successful, it sends a handshaking signal to the server and waits for a response from the server. I am using Winsock2 in this application.
The receive function I am using is a blocking call:
iResult = recv(ConnectSocket, recvbuf, recvbuflen, 0);
So until my server sends a response, the client is blocked.
What I need is for the client to check for a response only for a certain time (say 10 seconds). If there is no response within this time frame, I want the client to disconnect from the server.
How to achieve this?
Call setsockopt on your socket with the SO_RCVTIMEO option and a timeout value in milliseconds. For example:
DWORD timeoutMs = 10000; // 10-second receive timeout
iResult = setsockopt(ConnectSocket, SOL_SOCKET, SO_RCVTIMEO,
                     (const char *)&timeoutMs, sizeof(timeoutMs));
...
If no data arrives within the timeout, recv fails with SOCKET_ERROR and WSAGetLastError() returns WSAETIMEDOUT; at that point you can close the socket to disconnect, as you wanted.
More details about setsockopt on MSDN: https://msdn.microsoft.com/en-us/library/windows/desktop/ms740476%28v=vs.85%29.aspx
Running on Node in OS X, I am trying to use node-serialport to talk to an Arduino. All communication with the Arduino works as expected when using the Arduino IDE's Serial Monitor, or the OS X utility SerialTools. However, when just running my Node app, node-serialport tells me the connection is successful, but I get no communication. If I first make a connection to the Arduino with the Arduino IDE's Serial Monitor or SerialTools, then run my Node app, the Node app sends and receives data just fine using node-serialport.
I'm not familiar with serial communication, but it seems like the other serial utilities are able to properly start a connection (which is then available to node-serialport), while node-serialport is not able to establish one on its own.
Is there a way to get absolutely all connection information, so I can compare the utilities' successful connections to node-serialport's non-working connection?
Any other ideas as to why this would be happening?
I have a working solution, but unfortunately not a complete explanation. Reviewing some related issues such as What's going on after DTR/RTS is sent to an FTDI-based Arduino board?, I determined that even just restarting the node app (rather than requiring another serial connection app) gave node the ability to communicate through the serial port. I'm beyond my depth, but I suspect that initially establishing the RTS connection restarts the arduino, and only after that happens can node-serialport communicate through the connection.
My workaround is to simply give the Arduino some time to reset before attempting a second serialport connection, which works.
var firstConnect = true;
serialPort.open(function (error) {
    if (firstConnect) {
        firstConnect = false;
        // First connection, letting the Arduino reset
        setTimeout(function () { serialPort.open(); }, 4000);
    } else {
        // Second connection, which will work
        serialPort.on('data', function (data) {
            // data parsing function
            // ...
        });
    }
});
I'm building a TCP message server with Node.js.
It just sits on my server waiting to accept a connection from, let's say, an Arduino.
Since the device identifies itself at connection time with a unique ID (not an IP), I'm able to write data from server > Arduino without knowing the IP address of the client device.
But for that to be efficient, I want the connection to stay open as long as possible, preferably until the client device itself closes it (e.g. on an IP change or something).
This is the (relevant part of) the server:
var net = require('net'),
    sockets = {};

var tcp = net.createServer(function (soc) {
    soc.setKeepAlive(true); //- 1
    soc.on('connect', function (data) {
        soc.setKeepAlive(true); //- 2
    });
    soc.on('data', function (data) {
        //- do stuff with the data, and after identification
        //  store in sockets{}
    });
}).listen(1111);
Is soc.setKeepAlive(true) the right method to keep the connection alive?
If so, what is the right place to put it? In the connect event (1), or right in the callback (2).
If this is not the way to do it, what is?
Your best bet is to periodically send your own heartbeat messages.
Also, you don't need soc.on('connect', ...) because the client socket is already connected when that callback is executed.
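For illustration, here is a minimal sketch of that heartbeat approach, building on the server above. The sockets{} registry keyed by device ID, the 'ping' payload, and the 30-second interval are assumptions for the example, not part of your code:

var net = require('net'),
    sockets = {}; // deviceId -> socket, filled in after identification

var HEARTBEAT_INTERVAL = 30000; // 30 s; tune to your network (assumed value)

var tcp = net.createServer(function (soc) {
    soc.setKeepAlive(true);
    soc.on('data', function (data) {
        // ... identify the device, then: sockets[deviceId] = soc;
    });
    soc.on('close', function () {
        // remove this socket from sockets{} here
    });
}).listen(1111);

// Periodically ping every identified device; a dead connection will
// eventually error/close, and the handlers above clean up the registry.
setInterval(function () {
    Object.keys(sockets).forEach(function (id) {
        if (sockets[id].destroyed) {
            delete sockets[id];
        } else {
            sockets[id].write('ping\n');
        }
    });
}, HEARTBEAT_INTERVAL);

The client device should answer (or at least tolerate) the ping; if a write never gets through, TCP eventually surfaces an error and the close handler drops the stale entry.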
I'm testing communication between two NodeJS instances over TCP, using the net module.
Since TCP has no notion of message boundaries (socket.write()), I'm wrapping each message in a string like msg "{ json: 'encoded' }"; in order to handle messages individually (otherwise I'd receive packets with a random number of concatenated messages).
I'm running two Node.js instances (server and client) on a CentOS 6.5 VirtualBox VM with bridged networking and a Core i3-based host machine. The test consists of the client emitting a request to the server and waiting for the response:
Client connects to the server.
Client outputs current timestamp (Date.now()).
Client emits n requests.
Server replies to n requests.
Client increments a counter on every response.
When finished, client outputs the current timestamp.
The code is quite simple:
Server
var net = require('net');

var server = net.createServer(function (socket) {
    socket.setNoDelay(true);
    socket.on('data', function (packet) {
        // Split packet into messages.
        var messages = packet.toString('utf-8').match(/msg "[^"]+";/gm);
        for (var i in messages) {
            // Get message content (msg "{ content: 'json' }";). Actually useless for the test.
            //var message = messages[i].match(/"(.*)"/)[1];
            // Emit response:
            socket.write('msg "PONG";');
        }
    });
});

server.listen(9999);
Client
var net = require('net');

var WSClient = new net.Socket();
WSClient.setNoDelay(true);

WSClient.connect(9999, 'localhost', function () {
    var req = 0;
    var res = 0;
    console.log('Start:', Date.now());
    WSClient.on('data', function (packet) {
        var messages = packet.toString('utf-8').match(/msg "[^"]+";/gm);
        for (var i in messages) {
            // Get message content (msg "{ content: 'json' }";). Actually useless for the test.
            //var message = messages[i].match(/"(.*)"/)[1];
            res++;
            if (res === 1000) console.log('End:', Date.now());
        }
    });
    // Emit requests (exactly 1000; req < 1000, not req <= 1000, which sent 1001):
    for (req = 0; req < 1000; req++) WSClient.write('msg "PING";');
});
My results are:
With 1 request: 9 - 24 ms
With 1000 requests: 478 - 512 ms
With 10000 requests: 5021 - 5246 ms
My pings (ICMP) to localhost are between 0.1 and 0.6 ms. There's no intense network traffic or CPU usage (the box is running SSH, FTP, Apache, Memcached, and Redis).
Is this normal for Node.js and TCP, or is it just my CentOS VM or my low-performance host? Should I move to another platform like Java or a native C/C++ server?
I think that a 15 ms delay (average) per request on localhost is not acceptable for my project.
Wrapping the messages in some text and searching for a Regex match isn't enough.
The net.Server and net.Socket interfaces have a raw TCP stream as an underlying data source. The data event will fire whenever the underlying TCP stream has data available.
The problem is, you don't control the TCP stack. The timing of its data events has nothing to do with the logic of your code. So you have no guarantee that a given data event carries exactly one message; it may contain less than one, more than one, or several messages plus a remainder. In fact, you can pretty much guarantee that the underlying TCP stack WILL break up your data into chunks, and the listener only fires when a chunk is available. Your current code has no shared state between data events.
You only mention latency, but I expect if you check, you will also find that the count of messages received (on both ends) is not what you expect. That's because any partial messages that make it across will be lost completely. If the TCP stream sends half a message at the end of chunk 1, and the remainder in chunk 2, the split message will be totally dropped.
The easy and robust way is to use a messaging protocol like ØMQ. You will need to use it on both endpoints. It takes care of framing the TCP stream into atomic messages.
If for some reason you will be connecting to or receiving traffic from external sources, they will probably use something like a length header. In that case, what you want to do is create a Transform stream that buffers incoming traffic and only emits data once the number of bytes identified in the header has arrived.
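For example, here is a minimal sketch of such a Transform stream. The 4-byte big-endian length header is an assumption for illustration; adapt it to whatever framing the external source actually uses:

var stream = require('stream');
var util = require('util');

// Parses a stream framed as: 4-byte big-endian length, then payload.
function MessageParser() {
    // readableObjectMode so each push() comes out as exactly one 'data' event
    stream.Transform.call(this, { readableObjectMode: true });
    this.buffer = Buffer.alloc(0);
}
util.inherits(MessageParser, stream.Transform);

MessageParser.prototype._transform = function (chunk, encoding, done) {
    this.buffer = Buffer.concat([this.buffer, chunk]);
    // Emit every complete message; keep any partial tail for the next chunk.
    while (this.buffer.length >= 4) {
        var length = this.buffer.readUInt32BE(0);
        if (this.buffer.length < 4 + length) break; // message not complete yet
        this.push(this.buffer.slice(4, 4 + length));
        this.buffer = this.buffer.slice(4 + length);
    }
    done();
};

// Usage:
// socket.pipe(new MessageParser()).on('data', function (msg) {
//     // msg is one complete message, however TCP chunked it
// });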
Have you done a network dump? You may be creating network congestion due to the overhead introduced by enabling the 'no delay' socket property. This property sends data down to the TCP stack as soon as possible, and if you have very small chunks of information it leads to many TCP packets with small payloads, decreasing transmission efficiency and eventually making TCP pause transmission due to congestion. If you want to use 'no delay' on your sockets, try increasing your receiving socket buffer so that data is pulled from the TCP stack faster. Let us know if that helped.
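As a related experiment on the sending side, you can coalesce the burst of small writes instead of giving up 'no delay', using the standard writable-stream cork()/uncork() API. This is a sketch against the test client above, not part of the original code, and whether it helps here is something you'd have to measure:

// Buffer the 1000 small writes and flush them as fewer, larger TCP segments.
WSClient.cork();
for (var req = 0; req < 1000; req++) {
    WSClient.write('msg "PING";');
}
WSClient.uncork(); // flushes everything buffered since cork()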
I have a problem with a TCP socket receiving messages with the wrong destination port.
The OS is Ubuntu Linux 10.10 and the kernel version is 2.6.31-11-rt, but this happens with other kernels, too. The C/C++ program with this problem does the following:
A TCP server socket is listening for connections on INADDR_ANY at port 9000.
Messages are received with recv(2) by a TCP message receiver thread. The connection is not closed after reading a message; the thread keeps reading from the same connection forever.
The error: messages destined for ports other than 9000 are also received by the TCP message receiver. For example, when a remote SFTP client connects to the PC where the TCP message receiver is listening, the receiver also gets the SFTP messages. How is this EVER possible? How can TCP ports "leak" this way? I think SFTP should use port 22, right? Then how is it possible those messages are visible on port 9000?
More info:
At the same time there's a raw socket listening on another network interface, and the interface is in promiscuous mode. Could this have an effect?
The TCP connection is not closed in between message receptions. The message listener just keeps reading data from the socket. Is this really a correct way to implement a TCP message receiver?
Has anyone seen this kind of problem? Thanks in advance.
EDIT:
OK, here is some code. The code looks to be all right, so the main strange thing remains: how can a TCP socket ever receive data sent to another port?
/// Create TCP socket and make it listen on the defined port
void TcpSocket::listen() {
    m_listenFd = socket(AF_INET, SOCK_STREAM, 0);
    ...
    bzero(&m_servaddr, sizeof(sockaddr_in));
    m_servaddr.sin_family = AF_INET;
    m_servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
    m_servaddr.sin_port = htons(9000);
    bind(m_listenFd, (struct sockaddr *)&m_servaddr, sizeof(sockaddr_in));
    ...
    ::listen(m_listenFd, 1024); // :: so the C library function is called, not this method
    ...
    m_connectFd = accept(m_listenFd, NULL, NULL);
}
/// Receive message from TCP socket.
void TcpSocket::receiveMessage() {
    Uint16 receivedBytes = 0;
    // Get the common fixed-size message header (this is an own message structure).
    Uint16 numBytes = recv(m_connectFd, msgPtr + receivedBytes, sizeof(SCommonTcpMSGHeader), MSG_WAITALL);
    ...
    receivedBytes = numBytes;
    expectedMsgLength = commonMsgHeader->m_msgLength; // commonMsgHeader is mapped to the received header bytes
    ...
    // OK to get the message body.
    numBytes = recv(m_connectFd, msgPtr + receivedBytes, expectedMsgLength - receivedBytes, MSG_WAITALL);
}
The TCP connection is not closed in between message receptions. The message listener just keeps reading data from the socket. Is this really a correct way to implement a TCP message receiver?
Yes, but it must close the socket and exit when it receives the EOS indication (recv() returns zero).
I think it's ten to one your raw socket and TCP socket FDs are getting mixed up somewhere.
Umm... it appears that it was the raw socket after all that received the messages. I can see from the log that it's the raw message handler printing out the message receptions, not the TCP message handler. Duh...! :S Sorry. So it seems the binding of the raw socket to the other network interface doesn't work correctly; I need to fix that. Funny, though, that sometimes it works with SSH/SFTP and sometimes it doesn't.