I have the following code:
if (net.connect(host, port)) {
    String req = "GET /curTemp?temp=" + String(temperatureFahrenheit) + " HTTP/1.1\r\n" +
                 "Host: " + host + "\r\n" +
                 "Authorization: Basic xxxxxxxxxxxxxxxxx\r\n" +
                 "Connection: close\r\n\r\n";
    net.print(req);
    // Get headers
    while (net.connected()) {
        String line = net.readStringUntil('\r');
        if (line == "\r") {
            break;
        }
    }
    // Get temperature
    StaticJsonBuffer<50> jsonBuffer;
    JsonObject& root = jsonBuffer.parseObject(net.readStringUntil('\r'));
    const char* temp = root["temp"];
    Serial.print("Temp:");
    Serial.println(temp);
} else {
    Serial.println("connection failed");
}
This takes 15 seconds to complete and I'm not sure why. I can make the same request in a web browser and it comes back immediately. It is specifically the net.readStringUntil() calls that seem to be taking the time.
UPDATE: I found a workaround by calling setTimeout(1000), but I don't understand why this is necessary. Shouldn't the request close the connection and readStringUntil() terminate when complete? Maybe I don't understand WiFiClientSecure?
UPDATE2: I found the problem. See answer below.
OK, I'm adding this to clarify. Every example uses '\r' as the terminator for readStringUntil(), which can be confusing. If you use '\r', the next read begins with the leftover '\n', so if you are looking for the end of the headers you are actually looking for '\n\r\n' because of the carry-over from the previously read line, and your data ends up out of sync with the CR/LF pairs. I found that using '\n' as the terminator works better, because the standard header format ends each line with '\r\n'. You can then test for the end of the headers by looking for just '\r' (the '\n' terminator gets eaten by readStringUntil()), and start another loop to read your data. After that, everything works fine with no need for setTimeout().
Example:
if (net.connect(host, port)) {
    String req = "GET /curTemp?temp=" + String(temperatureFahrenheit) + " HTTP/1.1\r\n" +
                 "Host: " + host + "\r\n" +
                 "Authorization: Basic dXNlcjpkYWYxMjM0\r\n" +
                 "Connection: close\r\n\r\n";
    net.print(req);
    delay(1000);
    // Get headers
    while (net.available()) {
        // Note \n for terminator
        String line = net.readStringUntil('\n');
        // Only \r because \n is eaten by readStringUntil()
        if (line == "\r") {
            break;
        }
    }
    // Now look for data (in my case only one line of JSON)
    // No \r\n on this one.
    String line = net.readStringUntil('}');
    // Put back the character eaten by readStringUntil()
    line = line + "}";
    Serial.println(line);
}
I want to use a function in a loop condition and reuse its result inside the loop, like this Java code, but I want it in Kotlin:
while ((data = in.readLine()) != null) {
    System.out.println("\r\nMessage from " + clientAddress + ": " + data);
}
I tried pasting this code into Android Studio to convert it to Kotlin automatically, and the result looks like this:
while (reader.readLine().also { line = it } != null) {
    Log.d("Line", line)
    lines.add(line)
}
I managed to get the lines by declaring var line = "" before the loop, but the loop doesn't stop when the reader has finished receiving the message sent over the socket.
I want to send 2 messages through the socket, so I try to read all the lines of the first message, and when the second one arrives I need to clear my lines variable to collect the next message, but I can't get that to work.
Thanks!
You can use the reader.lineSequence() function:
reader.lineSequence().forEach {
    Log.d("Line", it)
    lines.add(it)
}
Or, as suggested in the documentation, you can improve on this by using File.useLines(), which automatically closes the file's reader so you don't have to do it manually.
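For a socket there is no File to call useLines() on, but the standard library has the equivalent Reader.useLines(), which closes the reader when the block returns. A minimal sketch, assuming reader is the BufferedReader you already created from the socket's input stream:
reader.useLines { seq ->
    // The sequence ends when readLine() would return null; the reader is closed automatically afterwards.
    seq.forEach { line ->
        Log.d("Line", line)
        lines.add(line)
    }
}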
I was able to print raw ZPL commands from PHP directly to the printer, but after the Windows update to Windows 10 (and my first time installing WebClientPrint Processor (WCPP) on Windows 10) I can't print more than 1 label at a time on the TLP 2844-Z printer. The same thing happened when I emulated a ZPL printer in the ZPL Printer app. The only exception is when I try this in Safari on a Mac, where it works fine.
Working request script (still working in Safari, and previously in all other browsers):
for (var i = 0; i < rows.length; i++) {
    javascript: jsWebClientPrint.print('useDefaultPrinter=' + $('#useDefaultPrinter').attr('checked') + '&printerName=' + $('#installedPrinterName').val() + '&param=' + rows[i].value);
}
What's blocking me is the permission prompt: on Chrome it wasn't shown as many times as there were requests (which isn't a problem on Safari). For example, when there were 2 requests it only asked for permission once, resulting in only 1 label being printed when it should have printed 2 labels.
I was able to reproduce the above by using the following script:
for (var i = 0; i < rows.length; i++) {
    var url = ('useDefaultPrinter=' + $('#useDefaultPrinter').attr('checked') + '&printerName=' + $('#installedPrinterName').val() + '&param=' + rows[i].value);
    window.open('webclientprint:' + domain + url);
}
This isn't ideal, since many tabs are opened just to print, whereas previously no new tab was needed to do the same thing.
Any idea how to solve this, so that it prints as many labels as the requests ask for?
What I did to solve this was to make each request in a separate tab and close the tab once it has executed. To keep it simple, I put it into a separate function.
The request script changed to:
for (var i = 0; i < rows.length; i++) {
    if (i > 0)
        delayPrint(rows[i], i); // separate function
    else
        javascript: jsWebClientPrint.print('useDefaultPrinter=' + $('#useDefaultPrinter').attr('checked') + '&printerName=' + $('#installedPrinterName').val() + '&param=' + rows[i].value);
}
The separate function delays each request, opens it in its own tab, and closes the tab once it has executed:
function delayPrint(data, interval) {
    setTimeout(function() {
        var wnd = window.open('webclientprint:' + domain + ('useDefaultPrinter=' + $('#useDefaultPrinter').attr('checked') + '&printerName=' + $('#installedPrinterName').val() + '&param=' + data.value));
        setTimeout(function() {
            wnd.close(); // close once it's done
        }, 1000);
    }, interval * 3000);
}
I've got 2 processes communicating over TCP sockets. Side A sends a string to Side B, sometimes encrypted with the standard crypto/cipher package. The resulting string may include a newline character, but Side B's bufio scanner interprets that as the end of the request. I want Side B to keep accepting lines, append them, and wait for a known end-of-command character before processing further. Side B will return a response to Side A, so the connection remains open, and I therefore cannot use a connection-close event as the command delimiter.
Everything works fine for single-line commands, but the newline characters in the encrypted output cause issues (about 10% of the time).
Side A sends in the following formats (the third is a legitimate example of a problem string I'm trying to process correctly):
callCommand()
callCommand("one","two","three")
callCommand("string","encrypted-data-to-follow","[7b��Cr��l��G���bH�#x��������� �(z�$�a��0��ڢ5Y7+��U�QT�ΐl�K�(�n�U��J����QK�BX�+�l\8H��-g�y.�.�1�f��I�C�Ȓ㳿���o�xz�8?��c�e ��Tb��?4�hDW���
�<���Е�gc�������N�V���ۓP8 �����O3")
We can fairly reliably say the end-of-command markers are a closing parenthesis ")" and a newline character.
Side A's function to send to side B:
func writer(text string) string {
    conn, err := net.Dial("tcp", TCPdest)
    t := time.Now()
    if err != nil {
        if _, t := err.(*net.OpError); t {
            fmt.Println("Some problem connecting.\r\n")
        } else {
            fmt.Println("Unknown error: " + err.Error() + "\r\n")
        }
    } else {
        conn.SetWriteDeadline(time.Now().Add(1 * time.Second))
        _, err = conn.Write([]byte(text + "\r\n"))
        if err != nil {
            fmt.Println("Error writing to stream.\r\n")
        } else {
            timeNow := time.Now()
            if timeNow.Sub(t.Add(time.Duration(5*time.Second))).Seconds() > 5 {
                return "timeout"
            }
            scanner := bufio.NewScanner(conn)
            for {
                ok := scanner.Scan()
                if !ok {
                    break
                }
                if strings.HasPrefix(scanner.Text(), "callCommand(") && strings.HasSuffix(scanner.Text(), ")") {
                    conn.Close()
                    return scanner.Text()
                }
            }
        }
    }
    return "unspecified error"
}
Side B's handling of incoming connections:
src := "192.168.68.100:9000"
listener, _ := net.Listen("tcp", src)
defer listener.Close()
for {
    conn, err := listener.Accept()
    if err != nil {
        fmt.Printf("Some connection error: %s\r\n", err)
    }
    go handleConnection(conn)
}
func handleConnection(conn net.Conn) {
    remoteAddr := conn.RemoteAddr().String()
    fmt.Println("Client connected from " + remoteAddr + "\r\n")
    scanner := bufio.NewScanner(conn)
    wholeString := ""
    for {
        ok := scanner.Scan()
        if !ok {
            break
        }
        // Trying to find the index of a newline character, to help me understand how it's being processed
        fmt.Println(strings.Index(scanner.Text(), "\n"))
        fmt.Println(strings.Index(wholeString, "\n"))
        // For the first line received, add it to wholeString
        if len(wholeString) == 0 {
            wholeString = scanner.Text()
        }
        re := regexp.MustCompile(`[a-zA-Z]+\(.*\)\r?\n?`)
        if re.Match([]byte(wholeString)) {
            fmt.Println("Matched command format")
            handleRequest(wholeString, conn)
        } else if len(wholeString) > 0 && !re.Match([]byte(wholeString)) {
            // Since we didn't match the regex, assume there's a newline mid-string, so append to wholeString
            wholeString += "\n" + scanner.Text()
        }
    }
    conn.Close()
    fmt.Println("Client at " + remoteAddr + " disconnected.\r\n")
}

func handleRequest(request string, conn net.Conn) {
    fmt.Println("Received: " + request)
}
I'm not really sure this approach on Side B is correct, but I've included my code above. I've seen a few implementations, but a lot seem to rely on the connection closing to begin processing the request, which doesn't suit my scenario.
Any pointers appreciated, thanks.
Your communication "protocol" (one line is one message, which is not quite a protocol) clearly cannot handle binary data. If you want to keep sending text, you could convert your binary data to text, using a Base64 encoding for example. You would also need some convention to indicate that a given piece of text was converted from binary.
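As a minimal sketch of that idea with the standard encoding/base64 package (the cipherBytes/encoded names are only illustrative):
// Side A: turn the raw cipher bytes into plain ASCII before building the command string.
encoded := base64.StdEncoding.EncodeToString(cipherBytes)

// Side B: recover the original bytes after parsing the command.
decoded, err := base64.StdEncoding.DecodeString(encoded)
if err != nil {
    // the argument was not valid Base64
}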
Or you could change your protocol to handle binary data natively. You could prepend the length of the binary data to follow, so that you know you have to read this data as binary and not interpret a newline character as the end of the message.
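As a sketch of that approach, a simple length-prefixed frame could look like this (writeFrame/readFrame are hypothetical helpers, not part of the code above):
// writeFrame sends a 4-byte big-endian length header followed by the payload itself.
func writeFrame(w io.Writer, payload []byte) error {
    var hdr [4]byte
    binary.BigEndian.PutUint32(hdr[:], uint32(len(payload)))
    if _, err := w.Write(hdr[:]); err != nil {
        return err
    }
    _, err := w.Write(payload)
    return err
}

// readFrame reads the length header, then exactly that many bytes,
// so newlines inside the payload are never mistaken for message delimiters.
func readFrame(r io.Reader) ([]byte, error) {
    var hdr [4]byte
    if _, err := io.ReadFull(r, hdr[:]); err != nil {
        return nil, err
    }
    payload := make([]byte, binary.BigEndian.Uint32(hdr[:]))
    if _, err := io.ReadFull(r, payload); err != nil {
        return nil, err
    }
    return payload, nil
}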
There are many protocols that do this very well; perhaps you don't need to come up with a custom one. If you want to send text messages, HTTP is very simple to use, and you could format your data as JSON, using Base64 to convert your binary data to text:
{
    "command": "string",
    "args": [
        "binaryDataAsBase64"
    ]
}
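For what it's worth, Go's encoding/json already Base64-encodes []byte values for you, so building such a message can be as simple as this sketch (the Command type is illustrative, not taken from the code above):
type Command struct {
    Command string   `json:"command"`
    Args    [][]byte `json:"args"` // []byte values are marshalled as Base64 strings
}

body, err := json.Marshal(Command{Command: "string", Args: [][]byte{cipherBytes}})
if err != nil {
    // handle the error
}
// send body as the HTTP request body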
I'm attempting to build a Handler that performs a put to a log file over SFTP for every logging event. Currently, I've built a new logger inline in my Groovy script that uses an SftpConnector artifact's ChannelSftp to put and append to a log file. When I run my code I receive an error message saying "Invalid Type=105". If I instead append to a string and upload the string's contents to the log file at the end of my script, there are no issues. I'm guessing the error is due to multiple put operations on the same file in rapid succession?
def LOG = Logger.getLogger(Logger.GLOBAL_LOGGER_NAME)
def handler = new Handler() {
    //String outputBuffer

    @Override
    void publish(LogRecord record) {
        ProgressMonitor monitor = new ProgressMonitor()
        String aRecord = new SimpleDateFormat("MMM dd, YYYY hh:mm:ss aaa").format(new Date()).toString() + " " + record.level.toString() + ":" + " " + record.message.toString()
        def stream = IOUtils.toInputStream(aRecord, "UTF-8")
        connector.getChannelSftp().put(stream, props.getProperty("sftp.log"), monitor, 2)
        while (!monitor.isFinished()) {
            // just pause until logging is done.
        }
        stream.close()
        //outputBuffer = outputBuffer + new SimpleDateFormat("MMM dd, YYYY hh:mm:ss aaa").format(new Date()).toString() + " " + record.level.toString() + ":" + " " + record.message.toString() + "\n"
    }

    @Override
    void flush() {
    }

    @Override
    void close() throws SecurityException {
    }

    void push() {
        connector.getChannelSftp().put(IOUtils.toInputStream(outputBuffer, "UTF-8"),
                props.getProperty("sftp.log"), 2)
        connector.getChannelSftp().put(IOUtils.toInputStream('\n'),
                props.getProperty("sftp.log"), 2)
    }
}
//SftpHandler handler = new SftpHandler(props)
//def handler = new FileHandler(new File(props.getProperty("log.location")).absolutePath, true) //provides a writer for the log file
handler.setFormatter(new SimpleFormatter()) //defines the log file format. must be declared
LOG.addHandler(handler) //adding the handler.
I figured out the problem:
A put on the open ChannelSftp works fine as long as the previous put operation has completed. If there are multiple puts on the same channel over the same session, the server executes the first one and locks the file.
The way to get around this is to open a new channel, execute the put, and close the channel afterwards, inside the custom handler. Each put command is then executed over its own channel and will be queued if the file is locked, rather than simply rejected.
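As a rough sketch of that pattern with JSch, publish() could open and close its own ChannelSftp per record. This assumes the connector exposes the underlying Session via a getSession() accessor, which is not shown in the code above:
@Override
void publish(LogRecord record) {
    String aRecord = new SimpleDateFormat("MMM dd, YYYY hh:mm:ss aaa").format(new Date()) + " " + record.level + ": " + record.message + "\n"
    // Open a fresh channel for this single put, append to the remote log, then disconnect the channel.
    ChannelSftp channel = (ChannelSftp) connector.getSession().openChannel("sftp") // getSession() is an assumed accessor
    channel.connect()
    def stream = IOUtils.toInputStream(aRecord, "UTF-8")
    channel.put(stream, props.getProperty("sftp.log"), ChannelSftp.APPEND)
    stream.close()
    channel.disconnect()
}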
The only solution I can think of is to do it with JS, but I can't pass any variables from the phantom script to the JS I'm trying to execute.
You should take a look at CasperJS. It's a very nice scripting utility built on top of PhantomJS that lets you easily perform this kind of web automation.
As far as communicating with your PhantomJS script goes, as things stand today, you have a few reliable options:
Pass your data in via command-line args (see the sketch after this list).
Exchange data via reading/writing of files.
Have your PhantomJS script call your Node.js script via GETs/POSTs.
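A minimal sketch of the first option using PhantomJS's system module (the JSON payload shape is just an example):
// invoked as: phantomjs script.js '{"user":"foo","count":2}'
var system = require('system');

// system.args[0] is the script name; the payload, if present, is the second argument.
var payload = system.args.length > 1 ? JSON.parse(system.args[1]) : {};
console.log('user is ' + payload.user + ', count is ' + payload.count);
phantom.exit();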
Yes, there are issues in the QtWebKit bridge between C++ and JS when passing things up and down. It works, but better from JS to C++ than the other way around. We have a number of issues to address, and this is one of the biggest, number one in terms of demand.
In the meantime, I usually "decorate" the page object like this:
var page = require("webpage").create();

page.evaluateWithParams = function(func) {
    var args = [].slice.call(arguments, 1),
        str = 'function() { return (' + func.toString() + ')(',
        i, ilen, arg;
    for (i = 0, ilen = args.length; i < ilen; ++i) {
        arg = args[i];
        if (/object|string/.test(typeof arg)) {
            str += 'JSON.parse(' + JSON.stringify(JSON.stringify(arg)) + '),';
        } else {
            str += arg + ',';
        }
    }
    str = str.replace(/,$/, '); }');
    return this.evaluate(str);
};
And then you can call a function within the scope of the page like this:
var a = 1, b = 2;
page.evaluateWithParams(function(arg1, arg2) {
    // your code that uses arg1 and arg2
}, a, b);
Hope this helps.
Ivan