Why is the first byte of the raw data shown as ASCII? - python-3.x

I've got the following raw data in bytes:
b'\r\xdc\xc9\x00\x00\x00\x00\x00\x9f\x03\xdf\x00\x9f\x03\xdfI\x82W\x00\x00\x97\x00'
b'\x16\xd9\xc9\x00\x00\x00\x00\x00\x9f\x03\xdf\x00\x9f\x03\xdfI\xf2d\x02\x00\x97\x00'
b'K\xde\xc9\x01\x00\x00\x00\x00\x9f\x03\xdf\x00\x9f\x03\xdfI\x82W\x00\x00\x97\x00'
b':\xda\xc9\x02\x00\x00\x00\x00\x9f\x03\xdf\x00\x9f\x03\xdfI\x82W\x00\x00\x97\x00'
b'B\xda\xc9\x00\x00\x00\x00\x00\x9f\x03\xdf\x00\x9f\x03\xdfI\x82W\x00\x00\x97\x00'
b'\x15\xdb\xc9\x01\x00\x00\x00\x00\x9f\x03\xdf\x00\x9f\x03\xdfI\x82W\x00\x00\x97\x00'
As you can see, the first value is shown as an ASCII character. For example, in this packet the first byte is printed as B:
b'B\xda\xc9\x00\x00\x00\x00\x00\x9f\x03\xdf\x00\x9f\x03\xdfI\x82W\x00\x00\x97\x00'
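For context, this is just how Python displays a bytes object: any byte whose value corresponds to a printable ASCII character is shown as that character rather than as a \xNN escape. A quick check in the REPL:

>>> bytes([0x42, 0xda, 0xc9])   # 0x42 is the ASCII code for 'B'
b'B\xda\xc9'
>>> b'B' == b'\x42'
True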
Looking into the code of pyshark I see this:
def get_raw_packet(self):
    assert "FRAME_RAW" in self, "Packet contains no raw data. In order to contains it, " \
                                "make sure that use_json and include_raw are set to True " \
                                "in the Capture object"
    raw_packet = b''
    byte_values = [''.join(x) for x in zip(self.frame_raw.value[0::2], self.frame_raw.value[1::2])]
    for value in byte_values:
        raw_packet += binascii.unhexlify(value)
    return raw_packet
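To see what that loop does, here is a minimal standalone sketch with a made-up hex string (the real frame_raw.value is a much longer hex dump of the whole frame):

import binascii

hex_value = '0ddcc9'  # made-up example: two hex digits per byte
byte_values = [''.join(x) for x in zip(hex_value[0::2], hex_value[1::2])]
raw_packet = b''
for value in byte_values:
    raw_packet += binascii.unhexlify(value)
print(raw_packet)  # b'\r\xdc\xc9' -- note that 0x0d is rendered as \r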
And my code is the following:
import pyshark
from pyshark.capture.pipe_capture import PipeCapture
import os

FIFO = 'informacion.pcap'

def print_callback(pkt):
    print(pkt.get_raw_packet())

with open(FIFO) as fifo:
    capture = PipeCapture(pipe=fifo, use_json=True, include_raw=True)
    capture.apply_on_packets(print_callback)


Groovy JSONArray : org.json.JSONException: when decoding string with \"

I can find no way to parse this JSON, which seems valid to me:
def diffOfApi='''[{"op":"replace","path":"/data/0/messageBody","value":"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<WCS reference=\"336042800000431486\"><PREPARATION_REQUEST requestId=\"WDP_PICKVRAC3\" date=\"2022-02-08 18:33:39\"><CONTAINERS><CONTAINER containerId=\"336042800000431486_21_21121\" handle=\"CREATE\" wmsOrderId=\"WDP_PICKVRAC3\" site=\"CPEEPF\" status=\"CREATED\" referenceDate=\"2022-02-08 18:33:39\" purgeAuthorized=\"false\"><BARCODES><BARCODE main=\"false\" reusable=\"false\">336042800000431486</BARCODE></BARCODES><PACKAGING>TX</PACKAGING><WEIGHT_CHECKING mode=\"NO\"/></CONTAINER></CONTAINERS></PREPARATION_REQUEST></WCS>"}]'''
JSONArray jsonArray = new JSONArray(diffOfApi);
I get this error:
Reason:
org.json.JSONException: Expected a ',' or '}' at 71 [character 72 line 1]
at org.json.JSONTokener.syntaxError(JSONTokener.java:433)
at org.json.JSONObject.<init>(JSONObject.java:229)
at org.json.JSONTokener.nextValue(JSONTokener.java:363)
at org.json.JSONArray.<init>(JSONArray.java:115)
at org.json.JSONArray.<init>(JSONArray.java:144)
at TEST.run(TEST:32)
at com.kms.katalon.core.main.ScriptEngine.run(ScriptEngine.java:194)
at com.kms.katalon.core.main.ScriptEngine.runScriptAsRawText(ScriptEngine.java:119)
at com.kms.katalon.core.main.TestCaseExecutor.runScript(TestCaseExecutor.java:442)
at com.kms.katalon.core.main.TestCaseExecutor.doExecute(TestCaseExecutor.java:433)
at com.kms.katalon.core.main.TestCaseExecutor.processExecutionPhase(TestCaseExecutor.java:412)
at com.kms.katalon.core.main.TestCaseExecutor.accessMainPhase(TestCaseExecutor.java:404)
at com.kms.katalon.core.main.TestCaseExecutor.execute(TestCaseExecutor.java:281)
at com.kms.katalon.core.main.TestCaseMain.runTestCase(TestCaseMain.java:142)
at com.kms.katalon.core.main.TestCaseMain.runTestCase(TestCaseMain.java:133)
at com.kms.katalon.core.main.TestCaseMain$runTestCase$0.call(Unknown Source)
at TempTestCase1644483473725.run(TempTestCase1644483473725.groovy:25)
I think it is due to the escaped " characters of the XML encapsulated in the item value.
Any idea how to make it work?
You can replace the \" characters with ' to allow for JSON- and later XML-processing:
import groovy.json.*
def diffOfApi='''[{"op":"replace","path":"/data/0/messageBody","value":"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<WCS reference=\"336042800000431486\"><PREPARATION_REQUEST requestId=\"WDP_PICKVRAC3\" date=\"2022-02-08 18:33:39\"><CONTAINERS><CONTAINER containerId=\"336042800000431486_21_21121\" handle=\"CREATE\" wmsOrderId=\"WDP_PICKVRAC3\" site=\"CPEEPF\" status=\"CREATED\" referenceDate=\"2022-02-08 18:33:39\" purgeAuthorized=\"false\"><BARCODES><BARCODE main=\"false\" reusable=\"false\">336042800000431486</BARCODE></BARCODES><PACKAGING>TX</PACKAGING><WEIGHT_CHECKING mode=\"NO\"/></CONTAINER></CONTAINERS></PREPARATION_REQUEST></WCS>"}]'''
diffOfApi = diffOfApi.replaceAll( /=.{1}([^"]+)"/, "='\$1'" )
def jsonArray = new JsonSlurper().parseText diffOfApi
assert 3 == jsonArray.first().size()
def xml = new XmlSlurper().parseText jsonArray.first().value // root element points to <WCS/>
assert 'WDP_PICKVRAC3' == xml.PREPARATION_REQUEST.@requestId.text()
You have an incorrectly encoded JSON string.
If the JSON contains a double quote inside a value, it must be encoded as \", but inside Groovy/Java code each \ must itself be escaped as \\.
So when you have the following in code:
def x = '''{"x":"abc\"def"}'''
the actual value of x at runtime is
{"x":"abc"def"}
which is invalid JSON.
So x should be defined as
def x = '''{"x":"abc\\"def"}'''
to get the runtime value
{"x":"abc\"def"}

How to decode payload from modbus using NodeJS?

I have a Schneider power meter with RS485 support. I'm using Python with pymodbus to read registers and decode the payload from it (successfully). Now I want to do the same with NodeJS; I can get the raw data, but I don't know how to decode it. I tried some methods but the result is wrong!
This is my Python code:
from pymodbus.client.sync import ModbusSerialClient as ModbusClient
from pymodbus.constants import Endian
from pymodbus.payload import BinaryPayloadDecoder

def validator(instance):
    if not instance.isError():
        '''.isError() implemented in pymodbus 1.4.0 and above.'''
        decoder = BinaryPayloadDecoder.fromRegisters(
            instance.registers,
            byteorder=Endian.Big, wordorder=Endian.Little
        )
        return float(decoder.decode_32bit_float())
    else:
        # Error handling.
        return None

validator([5658, 17242])  # Result is 218.1
When I use NodeJS it returns a buffer, and I tried to decode it with:
let buf = Buffer.from([0xd6, 0xd4, 0x42, 0x47]);
payload = buf.readFloatBE(0); // It return other float number not 218.1
Can anyone help me? Thanks!
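For reference, here is a minimal Python sketch (not part of the original question) of what byteorder=Endian.Big / wordorder=Endian.Little amounts to: the bytes within each 16-bit register are big-endian, but the low-order register comes first, so the two registers have to be swapped before the four bytes are read as a big-endian float. The equivalent NodeJS decode would presumably swap the 16-bit words in the buffer before calling readFloatBE.

import struct

def decode_32bit_float(registers):
    # wordorder=Little: least-significant 16-bit register first, so swap the
    # two registers, then interpret the resulting 4 bytes as a big-endian float.
    raw = struct.pack('>HH', registers[1], registers[0])
    return struct.unpack('>f', raw)[0]

print(decode_32bit_float([5658, 17242]))  # ~218.1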

How can I return a string from a Google BigQuery row iterator object?

My task is to write a Python script that takes results from BigQuery and emails them out. I've written code that can successfully send an email, but I am having trouble including the results of the BigQuery query in the actual email. The query results are correct, but the object I am returning from the query (results) always ends up as NoneType.
For example, the email should look like this:
Hello,
You have the following issues that have been "open" for more than 7 days:
-List issues here from bigquery code
Thanks.
The code reads in contacts from a contacts.txt file, and it reads in the email message template from a message.txt file. I tried to make the BigQuery object into a string, but it still results in an error.
from google.cloud import bigquery
import warnings
warnings.filterwarnings("ignore", "Your application has authenticated using end user credentials")
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from string import Template

def query_emailtest():
    client = bigquery.Client(project=("analytics-merch-svcs-thd"))
    query_job = client.query("""
        select dept, project_name, reset, tier, project_status, IssueStatus, division, store_number, top_category,
        DATE_DIFF(CURRENT_DATE(), in_review, DAY) as days_in_review
        from `analytics-merch-svcs-thd.MPC.RESET_DETAILS`
        where in_review IS NOT NULL
        AND IssueStatus = "In Review"
        AND DATE_DIFF(CURRENT_DATE(), in_review, DAY) > 7
        AND ready_for_execution IS NULL
        AND project_status = "Active"
        AND program_name <> "Capital"
        AND program_name <> "SSI - Capital"
        LIMIT 50
        """)
    results = query_job.result()  # Waits for job to complete.
    return results  # THIS IS A NONETYPE

def get_queryresults(results):  # created new method to put query results into a for loop and store it in a variable
    for i, row in enumerate(results, 1):
        bq_data = (i, '. ' + str(row.dept) + " " + row.project_name + ", Reset #: " + str(row.reset) + ", Store #: " + str(row.store_number) + ", " + row.IssueStatus + " for " + str(row.days_in_review) + " days")
        print(bq_data)

def get_contacts(filename):
    names = []
    emails = []
    with open(filename, mode='r', encoding='utf-8') as contacts_file:
        for a_contact in contacts_file:
            names.append(a_contact.split()[0])
            emails.append(a_contact.split()[1])
    return names, emails

def read_template(filename):
    with open(filename, 'r', encoding='utf-8') as template_file:
        template_file_content = template_file.read()
    return Template(template_file_content)

names, emails = get_contacts('mycontacts.txt')  # read contacts
message_template = read_template('message.txt')
results = query_emailtest()
bq_results = get_queryresults(query_emailtest())

import smtplib

# set up the SMTP server
s = smtplib.SMTP(host='smtp-mail.outlook.com', port=587)
s.starttls()
s.login('email', 'password')

# For each contact, send the email:
for name, email in zip(names, emails):
    msg = MIMEMultipart()  # create a message
    # bq_data = get_queryresults(query_emailtest())

    # add in the actual person name to the message template
    message = message_template.substitute(PERSON_NAME=name.title())
    message = message_template.substitute(QUERY_RESULTS=bq_results)  # SUBSTITUTE QUERY RESULTS IN MESSAGE TEMPLATE. This is where I am having trouble because the Row Iterator object results in Nonetype.

    # setup the parameters of the message
    msg['From'] = 'email'
    msg['To'] = 'email'
    msg['Subject'] = "This is TEST"

    # body = str(get_queryresults(query_emailtest()))  # get query results from method to put into message body
    # add in the message body
    # body = MIMEText(body)
    # msg.attach(body)
    msg.attach(MIMEText(message, 'plain'))

    # query_emailtest()
    # get_queryresults(query_emailtest())

    # send the message via the server set up earlier.
    s.send_message(msg)

    del msg
Message template:
Dear ${PERSON_NAME},
Hope you are doing well. Please find the following alert for Issues that have been "In Review" for greater than 7 days.
${QUERY_RESULTS}
If you would like more information, please visit this link that contains a complete dashboard view of the alert.
ISE Services
The BQ result() function returns an iterator (a RowIterator), so I think you need to change your return to yield from.
I'm far from a python expert, but the following pared-down code worked for me.
from google.cloud import bigquery
import warnings
warnings.filterwarnings("ignore", "Your application has authenticated using end user credentials")

def query_emailtest():
    client = bigquery.Client(project=("my_project"))
    query_job = client.query("""
        select field1, field2 from `my_dataset.my_table` limit 5
    """)
    results = query_job.result()
    yield from results  # NOTE THE CHANGE HERE

results = query_emailtest()
for row in results:
    print(row.field1, row.field2)
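To plug those rows into the ${QUERY_RESULTS} placeholder of the original message template, one hypothetical follow-up (not part of the posted answer) is to join them into a plain string rather than printing them:

rows = list(query_emailtest())
bq_results = "\n".join(
    "%d. %s %s" % (i, row.field1, row.field2) for i, row in enumerate(rows, 1)
)
# bq_results is now an ordinary string, so
# message_template.substitute(QUERY_RESULTS=bq_results, ...) no longer receives None.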

load .npy file from google cloud storage with tensorflow

I'm trying to load .npy files from my Google Cloud Storage into my model. I followed this example here: Load numpy array in google-cloud-ml job
but I get this error:
'utf-8' codec can't decode byte 0x93 in position 0: invalid start byte
Can you help me, please?
Here is a sample of the code.
Here I read the file:
with file_io.FileIO(metadata_filename, 'r') as f:
    self._metadata = [line.strip().split('|') for line in f]
and here I start processing it:
if self._offset >= len(self._metadata):
    self._offset = 0
    random.shuffle(self._metadata)
meta = self._metadata[self._offset]
self._offset += 1

text = meta[3]
if self._cmudict and random.random() < _p_cmudict:
    text = ' '.join([self._maybe_get_arpabet(word) for word in text.split(' ')])

input_data = np.asarray(text_to_sequence(text, self._cleaner_names), dtype=np.int32)
f = StringIO(file_io.read_file_to_string(
    os.path.join('gs://path', meta[0])))
linear_target = tf.Variable(initial_value=np.load(f), name='linear_target')
s = StringIO(file_io.read_file_to_string(
    os.path.join('gs://path', meta[1])))
mel_target = tf.Variable(initial_value=np.load(s), name='mel_target')
return (input_data, mel_target, linear_target, len(linear_target))
This is likely because your file doesn't contain utf-8 encoded text.
It's possible you may need to initialize the file_io.FileIO instance as a binary file using mode = 'rb', or set binary_mode = True in the call to read_file_to_string.
This will cause data that is read to be returned as a sequence of bytes, rather than a string.
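A minimal sketch of that suggestion (assuming TF 1.x's tensorflow.python.lib.io.file_io module and a placeholder gs:// path):

from io import BytesIO
import numpy as np
from tensorflow.python.lib.io import file_io

# Read the .npy file as raw bytes instead of decoding it as utf-8 text,
# then let numpy parse it from an in-memory buffer.
raw = file_io.read_file_to_string('gs://path/example.npy', binary_mode=True)
arr = np.load(BytesIO(raw))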

rdata field of a DNS-SD PTR packet sent via scapy results in an unknown extended label in wireshark

Currently I'm trying to mock a DNS-SD reply for a query that asked for _printer._tcp.local.
What I'm sending with scapy is the following:
send(IP(dst="224.0.0.251")/UDP(sport=5353,dport=5353)/DNS(aa=1, qr=1,rd=0,an=DNSRR(rrname='_printer._tcp.local', type="PTR", rclass=1, ttl=100, rdata="Devon's awesome printer._printer._tcp.local")))
However, when I check in Wireshark I get the following erroneous packet
_printer._tcp.local: type PTR, class IN, <Unknown extended label>
I assume I have some wrong parameters in my send call. However, I tried some variations and I can't seem to get it to work properly (I compared it with an actual reply from a printer and to me it looks the same).
Could anyone help me out with the correct parameters? Thanks in advance!
This is an outstanding issue as can be seen on scapy's issue tracker.
Therefore, you have to encode the rdata field yourself, as follows:
from scapy.all import *
import struct
label = "Devon's awesome printer._printer._tcp.local"
sublabels = label.split(".") + [""]
label_format = ""
for s in sublabels:
    label_format = '%s%dp' % (label_format, len(s) + 1)
label_data = struct.pack(label_format, *sublabels)  # see edit for Python 3 below
send(IP(dst="224.0.0.251")/UDP(sport=5353,dport=5353)/DNS(aa=1,qr=1,rd=0,an=DNSRR(rrname='_printer._tcp.local',type="PTR",rclass=1,ttl=100,rdata=label_data)))
EDIT for Python 3:
from scapy.all import *
import struct
label = "Devon's awesome printer._printer._tcp.local"
sublabels = label.split(".") + [""]
label_format = ""
for s in sublabels:
    label_format = '%s%dp' % (label_format, len(s) + 1)
label_data = struct.pack(label_format, *(bytes(s, encoding="ascii") for s in sublabels))  # this line was edited for Python 3
send(IP(dst="224.0.0.251")/UDP(sport=5353,dport=5353)/DNS(aa=1,qr=1,rd=0,an=DNSRR(rrname='_printer._tcp.local',type="PTR",rclass=1,ttl=100,rdata=label_data)))
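As a quick illustrative check of what this produces, each label ends up length-prefixed, and the trailing empty label supplies the terminating zero byte that the DNS wire format requires:

print(label_data[:8])    # b"\x17Devon's" -- 0x17 == 23, the length of the first label
print(label_data[-1:])   # b'\x00' -- the empty root label that terminates the name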
