Convert HTTP payload to String for debugging

I am using this C++ code on an ESP8266 to poll several HTTP-accessible data sources. Once in a while this device acts strangely, and I would like to be able to look at the data it gets. So I collect all payloads so that I can check them via a REST API. The following code ALMOST works:
String payloadAll = "start ###";
http.begin(client, getLink2);
httpCode = http.GET();
if (httpCode > 0) {
  payload = http.getString();
  payloadAll = payloadAll + payload.c_str() + "### ENDE-PAYLOAD1 ###";
}
The output looks like this:
start ###Internals: DEF 1 60 192.168.40.1:1502 TCP DeviceName 192.168.40.1:1502 EXPECT idle FD 6 FUUID 5xxxxx-fxxx-9xxx-fxxx-exxxxxxxxxxx IODev SolarEdge Interval 60 LASTOPEN 1647580317.55657 MODBUSID 1 MODE master MODEL SE8K-XXXXXXXXXX### ENDE-PAYLOAD1 ###
That is only the first line of the payload. I don't know much about the format the requested data is in, but once it is in the String I can search and find keywords and such from the lines I don't see in the output, so I know the data is there. If I leave out the ".c_str()" part, I will only get "start ###### ENDE-PAYLOAD1 ###".
How can I get all the lines? Formatting is not important - it can all be one line, I just need to be able to see it as plain text.
Thanks!
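For reference, a minimal sketch of one thing worth trying, assuming the truncation comes from String reallocations failing on the ESP8266's small heap: reserve the capacity up front and append with +=, which avoids the temporary copies that chained + creates.

http.begin(client, getLink2);
httpCode = http.GET();
if (httpCode > 0) {
  payload = http.getString();
  // Reserve room for the old content, the payload, and the marker up front,
  // so the appends below do not need to reallocate on a fragmented heap.
  payloadAll.reserve(payloadAll.length() + payload.length() + 32);
  payloadAll += payload;                  // String += String keeps the full body
  payloadAll += "### ENDE-PAYLOAD1 ###";
}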

Related

Spark stream data from IBM MQ

I want to stream data from IBM MQ. I have tried out this code I found on Github.
I am able to stream data from the queue, but each time it streams it takes all the data from the queue. I just want to take the data currently being pushed into the queue. I looked on many sites but didn't find a solution.
In Kafka we had something like KafkaStreamUtils for streaming near-real-time data. Is there anything similar in IBM MQ so that it streams only the latest data?
The sample in the link you provided shows that it calls the following method to receive from IBM MQ:
CustomMQReciever(String host, int port, String qm, String channel, String qn)
If you review CustomMQReciever here, you can see that it is only browsing the messages on the queue. This means the messages will still be on the queue, and the next time you connect you will receive the same messages:
MQQueueBrowser browser = (MQQueueBrowser) qSession.createBrowser(queue);
If you want to remove the messages from the queue, you need to call a method that consumes them instead of browsing them. Below is an example of changes to CustomMQReciever.java that should accomplish what you want:
Under initConnection(), change the code above to the following so that messages are removed from the queue:
MQMessageConsumer consumer = (MQMessageConsumer) qSession.createConsumer(queue);
Get rid of:
enumeration = browser.getEnumeration();
Under receive() change the following:
while (!isStopped() && enumeration.hasMoreElements())
{
    receivedMessage = (JMSMessage) enumeration.nextElement();
    String userInput = convertStreamToString(receivedMessage);
    //System.out.println("Received data :'" + userInput + "'");
    store(userInput);
}
To something like this:
while (!isStopped() && (receivedMessage = (JMSMessage) consumer.receiveNoWait()) != null)
{
    String userInput = convertStreamToString(receivedMessage);
    //System.out.println("Received data :'" + userInput + "'");
    store(userInput);
}
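If polling with receiveNoWait() spins too fast for your taste, a blocking receive with a timeout is a common JMS alternative (a sketch; the 1000 ms value is arbitrary):

// Wait up to one second for the next message instead of returning immediately
while (!isStopped() && (receivedMessage = (JMSMessage) consumer.receive(1000)) != null)
{
    String userInput = convertStreamToString(receivedMessage);
    store(userInput);
}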

Node-RED + DB2 - msg : string[18] "No response object"

So, I'm a beginner in Node-RED and need to make a simple API with DB2 queries through flows. I'm using node-red-contrib-db2 to accomplish that. The thing is, I managed to get the results as payloads into the debug node, triggered either by a timestamp or by an HTTP request. However, I can't get these results into the HTTP reply and can't find the reason. Is it a problem with the db2 plugin, or just me?
Exported nodes below:
[{"id":"96197abb.fd4098","type":"http in","z":"b4aa8db5.217028","name":"","url":"/wastes","method":"get","upload":false,"swaggerDoc":"","x":150,"y":140,"wires":[["9affb306.caf7e"]]},{"id":"bda39d37.edb418","type":"http response","z":"b4aa8db5.217028","name":"","statusCode":"200","headers":{},"x":940,"y":100,"wires":[]},{"id":"41708443.e4670c","type":"inject","z":"b4aa8db5.217028","name":"","topic":"","payload":"","payloadType":"date","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":220,"y":40,"wires":[["22a6e217.ead65e"]]},{"id":"9d1e6783.eb246","type":"ibmdb","z":"b4aa8db5.217028","mydb":"3a218407.1cca74","name":"IOCDATA","x":560,"y":40,"wires":[["80e51c1b.23b378"],[]]},{"id":"80e51c1b.23b378","type":"debug","z":"b4aa8db5.217028","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","x":730,"y":40,"wires":[]},{"id":"22a6e217.ead65e","type":"function","z":"b4aa8db5.217028","name":"SQL Query","func":"msg.database = \"iocdata\";\nmsg.payload = \"select * from viseu.waste_view\";\nreturn msg;","outputs":1,"noerr":0,"x":390,"y":40,"wires":[["9d1e6783.eb246"]]},{"id":"4a6bd014.f39868","type":"ibmdb","z":"b4aa8db5.217028","mydb":"3a218407.1cca74","name":"IOCDATA","x":500,"y":140,"wires":[["bda39d37.edb418","74e28d3e.039be4"],[]]},{"id":"9affb306.caf7e","type":"function","z":"b4aa8db5.217028","name":"SQL Query","func":"msg.database = \"iocdata\";\nmsg.payload = \"select * from viseu.waste_view where id = 1\";\nreturn msg;","outputs":1,"noerr":0,"x":330,"y":140,"wires":[["4a6bd014.f39868"]]},{"id":"74e28d3e.039be4","type":"debug","z":"b4aa8db5.217028","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","x":950,"y":180,"wires":[]},{"id":"3a218407.1cca74","type":"IbmDBdatabase","z":"","host":"10.102.0.62","port":"50002","db":"iocdata"}]
This is an issue with the ibmdb node you are using - it is not reusing the received message when it sends its results. That means the msg.req and msg.res properties provided by the HTTP In node are no longer set on the message by the time it reaches the HTTP Response node, so the response node doesn't know which request to respond to.
To work around the issue, one approach, which isn't ideal, is to store msg.req and msg.res in flow context using a Change node before the ibmdb node, and then copy them back onto the msg after the ibmdb node. This isn't ideal because it can only handle one request at a time.
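To picture the workaround, here is a sketch of the same idea using a pair of Function nodes instead of Change nodes (untested; it shares the same one-request-at-a-time limitation):

// Function node placed before the ibmdb node: stash the HTTP objects
flow.set("req", msg.req);
flow.set("res", msg.res);
return msg;

// Function node placed after the ibmdb node: restore them for HTTP Response
msg.req = flow.get("req");
msg.res = flow.get("res");
return msg;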
It would be best to raise an issue against the ibmdb node.
I managed to reach success in my flow, at the cost of many workarounds and variable juggling. But it IS working now: select count + select rows + join rows, where msg.complete is set when the count value is reached. Here is the code:
[{"id":"96197abb.fd4098","type":"http in","z":"b4aa8db5.217028","name":"","url":"/wastes","method":"get","upload":false,"swaggerDoc":"","x":90,"y":140,"wires":[["d5f42a96.83f688"]]},{"id":"bda39d37.edb418","type":"http response","z":"b4aa8db5.217028","name":"","statusCode":"200","headers":{},"x":980,"y":260,"wires":[]},{"id":"4a6bd014.f39868","type":"ibmdb","z":"b4aa8db5.217028","mydb":"3a218407.1cca74","name":"SELECT waste_view","x":360,"y":200,"wires":[["35f99a5a.c7f87e"],[]]},{"id":"9affb306.caf7e","type":"function","z":"b4aa8db5.217028","name":"SQL Query","func":"msg.database = \"iocdata\";\nmsg.payload = \"select count(*) from viseu.waste_view\";\n\nreturn msg;","outputs":1,"noerr":0,"x":170,"y":200,"wires":[["4a6bd014.f39868"]]},{"id":"74e28d3e.039be4","type":"debug","z":"b4aa8db5.217028","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","x":890,"y":380,"wires":[]},{"id":"d5f42a96.83f688","type":"change","z":"b4aa8db5.217028","name":"","rules":[{"t":"set","p":"req","pt":"flow","to":"req","tot":"msg"},{"t":"set","p":"res","pt":"flow","to":"res","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":260,"y":140,"wires":[["9affb306.caf7e"]]},{"id":"c3ebb136.aa8988","type":"change","z":"b4aa8db5.217028","name":"","rules":[{"t":"set","p":"req","pt":"msg","to":"req","tot":"flow"},{"t":"set","p":"res","pt":"msg","to":"res","tot":"flow"}],"action":"","property":"","from":"","to":"","reg":false,"x":800,"y":260,"wires":[["bda39d37.edb418"]]},{"id":"ca59ece2.844b3","type":"join","z":"b4aa8db5.217028","name":"","mode":"custom","build":"array","property":"payload","propertyType":"msg","key":"topic","joiner":"\\n","joinerType":"str","accumulate":false,"timeout":"","count":"","reduceRight":false,"reduceExp":"","reduceInit":"","reduceInitType":"","reduceFixup":"","x":630,"y":260,"wires":[["c3ebb136.aa8988","74e28d3e.039be4"]]},{"id":"35f99a5a.c7f87e","type":"function","z":"b4aa8db5.217028","name":"SQL Query","func":"msg.rowcount = msg.payload[1];\nmsg.database = \"iocdata\";\nmsg.payload = \"select * from viseu.waste_view\";// fetch first \" + msg.count[1] + \" rows only\";\n\nreturn msg;","outputs":1,"noerr":0,"x":550,"y":200,"wires":[["327a8ae.a8ce2f6"]]},{"id":"2666e2ba.41dc8e","type":"ibmdb","z":"b4aa8db5.217028","mydb":"3a218407.1cca74","name":"SELECT waste_view","x":800,"y":200,"wires":[["9008e06f.bf6d7"],[]]},{"id":"ec61a7f3.68cf8","type":"debug","z":"b4aa8db5.217028","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"count","x":650,"y":320,"wires":[]},{"id":"327a8ae.a8ce2f6","type":"change","z":"b4aa8db5.217028","name":"","rules":[{"t":"set","p":"rowcount","pt":"flow","to":"rowcount","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":670,"y":140,"wires":[["2666e2ba.41dc8e"]]},{"id":"90204f2d.8bafe8","type":"change","z":"b4aa8db5.217028","name":"","rules":[{"t":"set","p":"rowcount","pt":"msg","to":"rowcount","tot":"flow"}],"action":"","property":"","from":"","to":"","reg":false,"x":310,"y":320,"wires":[["6888cd0d.d00064"]]},{"id":"9008e06f.bf6d7","type":"counter","z":"b4aa8db5.217028","name":"","init":"0","step":"1","lower":"","upper":"","mode":"increment","outputs":"1","x":220,"y":260,"wires":[["90204f2d.8bafe8"]]},{"id":"6888cd0d.d00064","type":"function","z":"b4aa8db5.217028","name":"if rowcount === count","func":"if (msg.count === msg.rowcount) {\n msg.complete = true;\n}\n\nreturn 
msg;","outputs":1,"noerr":0,"x":440,"y":260,"wires":[["ca59ece2.844b3","ec61a7f3.68cf8","a63f6ad6.26f08"]]},{"id":"a63f6ad6.26f08","type":"debug","z":"b4aa8db5.217028","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"rowcount","x":660,"y":380,"wires":[]},{"id":"3a218407.1cca74","type":"IbmDBdatabase","z":"","host":"10.102.0.62","port":"50002","db":"iocdata"}]
EDIT 21/02/2018: the previous solution is not very good, because the counter node mysteriously keeps its value and I can't reset it the way I want, so the counter ends up surpassing the desired rowcount value. I had to implement my own counter in a function node instead. New code below:
[{"id":"96197abb.fd4098","type":"http in","z":"b4aa8db5.217028","name":"","url":"/wastes","method":"get","upload":false,"swaggerDoc":"","x":90,"y":60,"wires":[["d5f42a96.83f688"]]},{"id":"bda39d37.edb418","type":"http response","z":"b4aa8db5.217028","name":"","statusCode":"200","headers":{},"x":720,"y":220,"wires":[]},{"id":"4a6bd014.f39868","type":"ibmdb","z":"b4aa8db5.217028","mydb":"3a218407.1cca74","name":"SELECT waste_view","x":740,"y":60,"wires":[["35f99a5a.c7f87e"],[]]},{"id":"9affb306.caf7e","type":"function","z":"b4aa8db5.217028","name":"SQL Query","func":"msg.database = \"iocdata\";\nmsg.payload = \"select count(*) from viseu.waste_view\";\n\nreturn msg;","outputs":1,"noerr":0,"x":530,"y":60,"wires":[["4a6bd014.f39868"]]},{"id":"74e28d3e.039be4","type":"debug","z":"b4aa8db5.217028","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","x":550,"y":280,"wires":[]},{"id":"d5f42a96.83f688","type":"change","z":"b4aa8db5.217028","name":"save req and res","rules":[{"t":"set","p":"req","pt":"flow","to":"req","tot":"msg"},{"t":"set","p":"res","pt":"flow","to":"res","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":290,"y":60,"wires":[["9affb306.caf7e"]]},{"id":"ca59ece2.844b3","type":"join","z":"b4aa8db5.217028","name":"","mode":"custom","build":"array","property":"payload","propertyType":"msg","key":"topic","joiner":"\\n","joinerType":"str","accumulate":false,"timeout":"","count":"msg.count","reduceRight":false,"reduceExp":"","reduceInit":"","reduceInitType":"","reduceFixup":"","x":390,"y":220,"wires":[["74e28d3e.039be4","c3ebb136.aa8988"]]},{"id":"35f99a5a.c7f87e","type":"function","z":"b4aa8db5.217028","name":"SQL Query","func":"msg.rowcount = msg.payload[1];\nmsg.database = \"iocdata\";\nmsg.payload = \"select * from viseu.waste_view\";\n\nreturn msg;","outputs":1,"noerr":0,"x":950,"y":60,"wires":[["327a8ae.a8ce2f6"]]},{"id":"2666e2ba.41dc8e","type":"ibmdb","z":"b4aa8db5.217028","mydb":"3a218407.1cca74","name":"SELECT waste_view","x":380,"y":140,"wires":[["90204f2d.8bafe8"],[]]},{"id":"327a8ae.a8ce2f6","type":"change","z":"b4aa8db5.217028","name":"save rowcount","rules":[{"t":"set","p":"rowcount","pt":"flow","to":"rowcount","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":160,"y":140,"wires":[["2666e2ba.41dc8e"]]},{"id":"90204f2d.8bafe8","type":"change","z":"b4aa8db5.217028","name":"get rowcount and count","rules":[{"t":"set","p":"rowcount","pt":"msg","to":"rowcount","tot":"flow"},{"t":"set","p":"count","pt":"msg","to":"count","tot":"flow"}],"action":"","property":"","from":"","to":"","reg":false,"x":630,"y":140,"wires":[["6888cd0d.d00064"]]},{"id":"6888cd0d.d00064","type":"function","z":"b4aa8db5.217028","name":"if count === rowcount","func":"//fix: msg.count ultrapassa msg.rowcount\nmsg.count = msg.count+1 || 1;\n\nif (msg.count === msg.rowcount) {\n msg.complete = true;\n msg.count = 0;\n}\n\nreturn msg;","outputs":1,"noerr":0,"x":880,"y":140,"wires":[["82ecfa98.9473d8"]]},{"id":"c3ebb136.aa8988","type":"change","z":"b4aa8db5.217028","name":"get req, res","rules":[{"t":"set","p":"req","pt":"msg","to":"req","tot":"flow"},{"t":"set","p":"res","pt":"msg","to":"res","tot":"flow"}],"action":"","property":"","from":"","to":"","reg":false,"x":550,"y":220,"wires":[["bda39d37.edb418"]]},{"id":"82ecfa98.9473d8","type":"change","z":"b4aa8db5.217028","name":"save 
count","rules":[{"t":"set","p":"count","pt":"flow","to":"count","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":210,"y":220,"wires":[["ca59ece2.844b3"]]},{"id":"3a218407.1cca74","type":"IbmDBdatabase","z":"","host":"10.102.0.69","port":"50002","db":"iocdata"}]

Storing Value in Arduino Sent From Python via Serial

I have been trying to send a value from a Python program via serial to an Arduino, but I have been unable to get the Arduino to store the value and echo it back to Python. My code seems to match what I've found in examples online, but for whatever reason it's not working.
I am using Python 3.5 on Windows 10 with an Arduino Uno. Any help would be appreciated.
Arduino code:
// Note: the snippet references these two variables; declarations added here
// (long matches what Serial.parseInt() returns) so the fragment compiles.
long incomingIntegrationTime = 0;
long integration_time = 0;

void readFromPython() {
  if (Serial.available() > 0) {
    // Collect the incoming integer from PySerial
    incomingIntegrationTime = Serial.parseInt();
    // Set the integration time to what you just collected
    // (a zero/failed parse is clamped away below)
    integration_time = incomingIntegrationTime;
    Serial.print("X");  // start marker the Python side waits for
    Serial.print("The integration time is now ");
    Serial.println(integration_time);
    Serial.print("\n");
    // Clamp the integration time so it stays within the signed 32-bit range
    // (2147483647 is the maximum; the original 2147483648 overflows it)
    integration_time = min(incomingIntegrationTime, 2147483647L);
    integration_time = max(integration_time, 1L);
  }
}

void loop() {
  readFromPython();  // Check for incoming data from PySerial
  delay(1);          // Pause the program for 1 millisecond
}
Python code:
(Note: this is used with a PyQt button, but any value could be typed in instead of self.IntegrationTimeInputTextbox.text(); the value is still not received and echoed back by the Arduino.)
def SetIntegrationTime(self):
    def main():
        # global startMarker, endMarker
        # This sets the com port in PySerial to the port with the Genuino as the variable arduino_ports
        arduino_ports = [
            p.device
            for p in serial.tools.list_ports.comports()
            if 'Genuino' in p.description
        ]
        # Set the proper baud rate for your spectrometer
        baud = 115200
        # This prints out the port that was found with the Genuino on it
        ports = list(serial.tools.list_ports.comports())
        for p in ports:
            print('Device is connected to: ', p)
        # --------------------------- Error Handling ---------------------------
        # Tell the user if no Genuino was found
        if not arduino_ports:
            raise IOError("No Arduino found")
        # Tell the user if multiple Genuinos were found
        if len(arduino_ports) > 1:
            warnings.warn('Multiple Arduinos found - using the first')
        # ---------------------------- Error Handling ---------------------------
        # =====================================
        spectrometer = serial.Serial(arduino_ports[0], baud)
        integrationTimeSend = self.IntegrationTimeInputTextbox.text()
        print("test value is", integrationTimeSend.encode())
        spectrometer.write(integrationTimeSend.encode())
        for i in range(10):  # Repeat the following 10 times
            startMarker = "X"
            xDecoded = "qq"
            xEncoded = "qq"
            while xDecoded != startMarker:  # Wait for the start marker (X)
                xEncoded = spectrometer.read()  # Read until 'startMarker' is found so the right amount of data is read every time
                xDecoded = xEncoded.decode("UTF-8")
                print(xDecoded)
            line = spectrometer.readline()
            lineDecoded = line.decode("UTF-8")
            print(lineDecoded)
        # =====================================
        spectrometer.close()
        # ===========================================
        # WaitForArduinoData()
    main()
First, this is a problem:
incomingValue = Serial.read();
read() returns only the first byte of incoming serial data. On the Arduino an int is a signed 16-bit integer, so reading only one byte of it with Serial.read() is going to give you unintended results.
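For comparison, a minimal fragment that pulls a whole integer out of the buffer rather than a single byte (a sketch; it assumes the sender terminates the number with a non-digit such as a newline):

long value = 0;
if (Serial.available() > 0) {
  value = Serial.parseInt();  // consumes digits until a non-digit or timeout
}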
Also, don't put writes in between checking if data is available and actual reading:
if (Serial.available() > 0) {
  Serial.print("X");                 // Print your start marker
  Serial.print("The value is now ");
  incomingValue = Serial.read();     // Collect the incoming value from
That is bad. Instead, do your read immediately, as this example shows:
if (Serial.available() > 0) {
  // read the incoming byte:
  incomingByte = Serial.read();
}
Those are two big issues. Take care of them, and let's look at it again once those fundamental issues are corrected.
PART 2
Once those are corrected, the next thing to do is determine which side of the serial communication is at fault. Generally what I like to do is confirm that one side is sending properly by having its output show up in a terminal emulator. I like TeraTerm for this.
Set your Python code to send only and see whether the sent values show up properly in a terminal emulator. Once that is working and you have confidence in it, you can attend to the Arduino side.
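A minimal send-only test might look like this (a sketch; the 'COM3' port name and the test value are assumptions):

import serial
import time

ser = serial.Serial('COM3', 115200)  # 'COM3' is a placeholder; use your Arduino's port
time.sleep(2)                        # the Uno resets when the port opens; give it a moment
ser.write(b'100')                    # send a test integer as bytes
ser.close()

If the value arrives intact, the Python side is fine and you can move on to the Arduino side.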

Firebase... Add/Update Firebase Using node.js Script

I have arbitrary JSON that is sensibly laid out like this:
[
  {
    "id": 100,
    "name": "Buckeye, AZ",
    "status": "OPEN",
    "address": {
      "street": "416 S Watson RD",
      "city": "Buckeye"
      ...
    }
  }
]
I've written a node.js script like this as a proof of concept (the reason I'm using Node is that the JS API seems better supported than REST or Ruby for this; I could be wrong):
http = require('http')
Firebase = require('firebase')

all_sites_url = "http://supercharge.info/service/supercharge/allSites"
firebase_url = "https://tesla-supercharger.firebaseio.com/"

http.get(all_sites_url, (res) ->
  body = ""
  res.on "data", (chunk) ->
    body += chunk
    return
  res.on "end", ->
    response = JSON.parse(body)
    all_sites = response
    send_to_firebase(response)
    return
  return
).on "error", (e) ->
  console.log "Got error: ", e
  return

send_to_firebase = (response) ->
  firebase_ref = new Firebase(firebase_url)
  for charger in response
    console.log charger
    new_child = firebase_ref.push()
    new_child.set {id: charger.id, data: charger}, (error) ->
      if error
        console.log "Data could not be saved #{error}"
      else
        console.log "Data saved successfully"
The result is a Firebase-generated unique id whose children are an id and a data child. The data child has the expected information like name, status, etc.
What I'd prefer is to key each record by its id. E.g., for an id of 100:
- 100
  - name
  - address
    - street
    - city
etc. So my first question is how to accomplish this, or whether it is even sensible.
After the first time around, this data (call it the data from the external server) will already be there, and a mobile app will have added some fields of its own that are not present in the server's data. The next time I fetch data from the external server, I want to update the things that have changed that the server would know about, like status. I don't want to tamper with things that only the mobile devices would know about, like remote_observations.
I know I'm seeming a bit dense here, but I'm trying to put together a sensible data model that will be updatable from that server using a CRON job and incrementally updatable from a bunch of mobile devices.
Any help is much appreciated.
UPDATE: I have found that this works for getting the structure I want:
send_to_firebase = (response) ->
  firebase_ref = new Firebase(firebase_url)
  for charger in response
    firebase_ref.child(charger.id).update charger, (error) ->
      if error
        console.log "Data could not be saved #{error}"
      else
        responses_pending += 1
        console.log "Data saved successfully : #{responses_pending} pending"
  firebase_ref.on 'value', ->
    console.log "value received rp=#{responses_pending}"
    process.exit() if (responses_pending -= 1) < 1
So the code I settled on is this:
http = require('http')
Firebase = require('firebase')

firebase_url = '/path/to/your/firebase'

# code to get JSON of the form:
# {
#   "id": 100,
#   "name": "Buckeye, AZ",
#   "status": "OPEN",
#   "address": {
#     "street": "416 S Watson RD",
#     "city": "Buckeye",
#     "state": "AZ",
#     "zip": "85326",
#     "country": "USA"
#   },
#   ... etc.
# }

# Asynchronous get of JSON hash from some server or other.
get_my_fine_JSON().on 'complete', (response) ->
  send_to_firebase(response)

send_to_firebase = (response) ->
  firebase_ref = new Firebase(firebase_url)
  length = response.length
  for charger in response
    firebase_ref.child(charger.id).update charger, (error) ->
      if error
        console.log "Data could not be saved #{error}"
      else
        console.log "Data saved successfully"
      process.exit() if (length -= 1) is 0
Discussion:
The idea was to have a Firebase structure like this:
- 100
  - address
    - street: "123 Main Street"
    etc.
That's reason 1 why id is pulled up to be the primary key. Reason 2 is so that I can uniquely identify an object pulled off the external server as the "same" one in my Firebase and apply any updates necessary.
Epiphany 1: Update is more like upsert. If the key is there, whatever hash you supply replaces matching values. If it's not there, then Firebase happily adds it. Which is way cool because it covers both the push and patch cases.
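For instance (a sketch; the status value is made up):

ref = new Firebase(firebase_url)
# If child 100 exists, only the keys supplied here are replaced and the
# rest survive; if it doesn't exist, the child is created.
# Push and patch in one call.
ref.child(100).update {status: "CLOSED"}, (error) ->
  console.log "upsert failed: #{error}" if error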
Epiphany 2: This process will hang waiting for events if nothing tells it to stop. That's why the countdown index, length, is decremented until the code has upserted (for lack of a better term) each item.
Observation 1: Doing this in node.js is super fast compared with REST using Python or Ruby. And this upsert stuff is wicked cool if I'm understanding it right.
Observation 2: There isn't a ton of wisdom out there as of this writing regarding writing node shell scripts to do this kind of stuff. Maybe it's a good idea, maybe a bad one. I don't know.
Observation 3: Because of the asynchronous nature of node and the Firebase Javascript API (both GOOD THINGs), terminating a process before the last bit is done can be tricky because your process has to hang on just long enough to complete its last request/response with Firebase. This is, as mentioned before, done in the completion handler of the update. Otherwise we wouldn't necessarily be complete when the process exited.
Caveat 1: Related to observation 2, this could be a bad idea, but I haven't been able to find resources that speak to the problem.
Caveat 2: This could be a horrid abuse or misunderstanding of the Firebase update API. I am reporting observed behavior in the limited case of my specific data. YMMV.
Caveat 3: I'm hoping the process lifetime is as I suggest it is in observation 3.
A note to the decaffeinated: The Javascript for this is so trivially different that it shouldn't be too tough to translate. Or go to js2coffee and paste the Coffeescript into the right pane to get real Javascript in the left pane that you can tune.

How to capture probe request data or probe response data sent from wireless router

I have a tplink-wr703n wireless router with OpenWrt.
I know I can capture all kinds of data when the adapter is in monitor mode.
I want the adapter to work in master mode, and I also want to capture probe request data sent from clients or probe response data sent from my router.
I have tried to use libpcap to capture data, but I failed.
Can you tell me how I can get that data?
You can set up several modes on one radio card simultaneously.
Using the "iw" command you should be able to create a secondary wifi device interface with type monitor, I guess you could read all frame types from this one.
See http://wireless.kernel.org/en/users/Documentation/iw/vif/
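For example, something like this should add a monitor interface alongside the existing one (a sketch; wlan0 and mon0 are assumed interface names, and tcpdump must be installed):

iw dev wlan0 interface add mon0 type monitor
ip link set mon0 up
tcpdump -i mon0 -e type mgt subtype probe-req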
I am also trying to prepare a Scapy script to capture probe requests only.
There is a nice video demonstrating this: https://www.youtube.com/watch?v=Z1MbpIkzQjU
The script seems to work in the author's environment, but for some reason I can't get it to work for me. I would appreciate your assistance.
The script is:
#!/usr/bin/python
import sys
from scapy.all import *

clientprobes = set()

def PacketHandler(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        if len(pkt.info) > 0:
            # NOTE: the separator must match the split() below; the original
            # used '_ _ _' here but split on '---', which can never match
            testcase = pkt.addr2 + '---' + pkt.info
            if testcase not in clientprobes:
                clientprobes.add(testcase)
                print "New Probe Found: " + pkt.addr2 + ' ' + pkt.info
                print "\n-----------Client Probes Table-------------\n"
                counter = 1
                for probe in clientprobes:
                    [client, ssid] = probe.split('---')
                    print counter, client, ssid
                    counter = counter + 1
                print "\n--------------------------------------------\n"

sniff(iface = sys.argv[1], count = int(sys.argv[2]), prn = PacketHandler)
