Getting an OSError: [Errno 121] Remote I/O error - python-3.x

I am facing problems with code that had been working perfectly fine and did everything I needed. This happens from time to time, but this time I don't know what the problem is. I recently tried to add a sampling frequency so I could control how many times per second my data is read, but since making those changes I have had nothing but errors, so I deleted the changes. Now I still get errors even though I am using the original code I was using beforehand.
My electrical connections are fine, so that is not the issue. I am also not getting any errors in the terminal when using i2cget -y 1.
This is my Python code (using the INA219 sensor):
#Importing libraries
import csv
from ina219 import INA219
from ina219 import DeviceRangeError

SHUNT_OHMS = 0.1
read_ina = INA219(SHUNT_OHMS)
read_ina.configure()

def read_all():
    data = {}
    data['Bus Voltage'] = read_ina.voltage()
    data['Bus Current'] = read_ina.current()
    data['Power'] = read_ina.power()
    data['Shunt Voltage'] = read_ina.shunt_voltage()
    return data

with open('SensorData.csv', 'w') as f:
    data = read_all()
    writer = csv.DictWriter(f, fieldnames=list(data.keys()))
    writer.writeheader()
    exit = False
    while not exit:
        try:
            writer.writerow(data)
            data = read_all()
        except KeyboardInterrupt:
            exit = True
It is supposed to create a CSV file that logs the voltage and other readings in a loop. The code is pretty straightforward. Can anyone help me fix this issue?
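As an aside, the sampling-frequency change the question mentions can be kept separate from the read logic. A minimal sketch of a rate-limited loop (hypothetical helper, not the original code; `read_fn` and `writer` stand in for `read_all` and the `DictWriter`):

```python
import time

def sample_loop(read_fn, writer, hz=1.0, max_samples=None):
    """Call read_fn roughly hz times per second, writing each reading.

    read_fn: zero-argument callable returning a dict (e.g. read_all)
    writer:  anything with a writerow() method (e.g. csv.DictWriter)
    """
    period = 1.0 / hz
    count = 0
    while max_samples is None or count < max_samples:
        started = time.monotonic()
        writer.writerow(read_fn())
        count += 1
        # sleep only for whatever is left of this period
        time.sleep(max(0.0, period - (time.monotonic() - started)))
```

Capping with `max_samples` is only there so the loop can terminate without a KeyboardInterrupt; passing `max_samples=None` reproduces the original endless loop.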
This is the error that I keep facing:
Traceback (most recent call last):
File "/home/pi/Downloads/scripts/Assignment2 CreateCSV/SensorData.py", line 40, in <module>
data = read_all()
File "/home/pi/Downloads/scripts/Assignment2 CreateCSV/SensorData.py", line 20, in read_all
data['Bus Voltage'] = read_ina.voltage()
File "/usr/local/lib/python3.5/dist-packages/ina219.py", line 180, in voltage
value = self._voltage_register()
File "/usr/local/lib/python3.5/dist-packages/ina219.py", line 363, in _voltage_register
register_value = self._read_voltage_register()
File "/usr/local/lib/python3.5/dist-packages/ina219.py", line 367, in _read_voltage_register
return self.__read_register(self.__REG_BUSVOLTAGE)
File "/usr/local/lib/python3.5/dist-packages/ina219.py", line 394, in __read_register
register_value = self._i2c.readU16BE(register)
File "/usr/local/lib/python3.5/dist-packages/Adafruit_GPIO/I2C.py", line 190, in readU16BE
return self.readU16(register, little_endian=False)
File "/usr/local/lib/python3.5/dist-packages/Adafruit_GPIO/I2C.py", line 164, in readU16
result = self._bus.read_word_data(self._address,register) & 0xFFFF
File "/usr/local/lib/python3.5/dist-packages/Adafruit_PureIO/smbus.py", line 226, in read_word_data
ioctl(self._device.fileno(), I2C_RDWR, request)
OSError: [Errno 121] Remote I/O error
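Errno 121 means the I2C transaction got no acknowledgement from the device, so the read itself is what fails, not the CSV logic. If the fault is intermittent (loose wiring, device briefly resetting), one common workaround is to retry the read a few times before giving up. A sketch of such a wrapper (hypothetical helper, not part of the ina219 library):

```python
import time

def read_with_retry(read_fn, retries=3, delay=0.05):
    """Call read_fn, retrying on transient I2C errors such as Errno 121.

    read_fn is any zero-argument callable, e.g. read_ina.voltage.
    Re-raises the OSError if every attempt fails.
    """
    for attempt in range(retries):
        try:
            return read_fn()
        except OSError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)
```

Usage inside `read_all` would then look like `data['Bus Voltage'] = read_with_retry(read_ina.voltage)`. If every retry fails, the original traceback still surfaces, which keeps a genuinely dead bus visible.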


I am getting a KeyError in my Python script. How can I resolve this?

I am hoping someone can help me with this. After a nightmare installing numpy on a Raspberry Pi, I am stuck again!
The gist of what I am trying to do: I have an Arduino that sends numbers (bib race numbers entered by hand) over LoRa to the RX of the Raspberry Pi.
This script is supposed to read the incoming data and print it so I can see it in the terminal. Pandas is then supposed to compare the number against a txt/csv file, and if it matches in the bib number column, append the matched row to a new file.
The first bit works (capturing and printing the data), and on my Windows PC the second bit worked when I was testing with a fixed number rather than incoming data.
I have basically tried my best to mash the two together so the comparison uses the incoming number instead.
I should also state that the error happened after I pressed 3 on the Arduino (which printed in the Raspberry Pi terminal before erroring), which is probably why it is KeyError: '3'.
My code is here
#!/usr/bin/env python3
import serial
import csv
import pandas as pd
#import numpy as np

if __name__ == '__main__':
    ser = serial.Serial('/dev/ttyS0', 9600, timeout=1)
    ser.flush()
    while True:
        if ser.in_waiting > 0:
            line = ser.readline().decode('utf-8').rstrip()
            print(line)
            with open("test_data.csv", "a") as f:
                writer = csv.writer(f, delimiter=",")
                writer.writerow([line])
            df = pd.read_csv("data.txt")
            #out = (line)
            filtered_df = df[line]
            print('Original Dataframe\n---------------\n', df)
            print('\nFiltered Dataframe\n------------------\n', filtered_df)
            filtered_df.to_csv("data_amended.txt", mode='a', index=False, header=False)
            #print(df.to_string())
And my error is here:
Python 3.7.3 (/usr/bin/python3)
>>> %Run piserialmashupv1.py
3
Traceback (most recent call last):
File "/home/pi/.local/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 3361, in get_loc
return self._engine.get_loc(casted_key)
File "pandas/_libs/index.pyx", line 76, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/hashtable_class_helper.pxi", line 5198, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas/_libs/hashtable_class_helper.pxi", line 5206, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: '3'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/pi/piserialmashupv1.py", line 20, in <module>
filtered_df = df[line]
File "/home/pi/.local/lib/python3.7/site-packages/pandas/core/frame.py", line 3455, in __getitem__
indexer = self.columns.get_loc(key)
File "/home/pi/.local/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 3363, in get_loc
raise KeyError(key) from err
KeyError: '3'
>>>
I was asked to post the first few lines of data.txt:
_id,firstname,surname,team,info
1, Peter,Smith,,Red Walk (70 miles- 14 mile walk/run + 56 mile cycle)
2, Samantha,Grey,Team Grey,Blue walk (14 mile walk/run)
3, Gary,Parker,,Red Walk (70 miles- 14 mile walk/run + 56 mile cycle)
I think it must be the way I am referencing the incoming RX number?
Any help very much appreciated!
Dave
I have it working; see the final code below.
Pandas just didn't like the way the data was being fed in originally.
This fixes it. I also had to make sure it treated the value as an integer when filtering; on my first attempt I didn't, and it couldn't filter the data properly.
import serial
import csv
import time
import pandas as pd

if __name__ == '__main__':
    ser = serial.Serial('/dev/ttyS0', 9600, timeout=1)
    ser.flush()
    while True:
        if ser.in_waiting > 0:
            line = ser.readline().decode('utf-8').rstrip()
            print(line)
            with open("test_data.txt", "w") as f:
                writer = csv.writer(f, delimiter=",")
                writer.writerow([line])
            time.sleep(0.1)
            ser.write("Y".encode())
            df = pd.read_csv("data.txt")
            out = df['_id'] == int(line)
            filtered_df = df[out]
            print('Original Dataframe\n---------------\n', df)
            print('\nFiltered Dataframe\n---------\n', filtered_df)
            filtered_df.to_csv("data_amended.txt", mode='a', index=False, header=False)
            time.sleep(0.1)
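The key difference between the failing and working versions can be shown in a few lines. This sketch uses inline data standing in for data.txt; the serial value arrives as a string, so `df[line]` looks up a column named "3" (hence the KeyError), while comparing `_id` against `int(line)` builds a boolean mask:

```python
from io import StringIO

import pandas as pd

csv_text = "_id,firstname,surname\n1,Peter,Smith\n2,Samantha,Grey\n3,Gary,Parker\n"
df = pd.read_csv(StringIO(csv_text))

line = "3"  # what ser.readline() gives after decoding: a string

# df[line] would raise KeyError: '3', because there is no column named "3"

mask = df["_id"] == int(line)  # cast first, then compare against the column
filtered_df = df[mask]         # one row, the one whose _id is 3
```
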

Multiprocessing + PyMongo leads to [Errno 111]

Good day!
I've just started playing around with pymongo and multiprocessing. I received a multicore unit for my experiments, which runs Ubuntu 18.04.4 LTS (bionic). For the sake of experiment I tried it with both Python 3.8 and Python 3.10; unfortunately the results are similar:
>7lvv_E mol:na length:29 DNA (28-MER)
ELSE 7lvv_E
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/usr/lib/python3.8/multiprocessing/pool.py", line 48, in mapstar
return list(map(*args))
File "LoadDataOnSequence.py", line 54, in createCollectionPDB
x = newCol.insert_one(dict2Write)
File "/home/username/.local/lib/python3.8/site-packages/pymongo/collection.py", line 698, in insert_one
self._insert(document,
File "/home/username/.local/lib/python3.8/site-packages/pymongo/collection.py", line 613, in _insert
return self._insert_one(
File "/home/username/.local/lib/python3.8/site-packages/pymongo/collection.py", line 602, in _insert_one
self.__database.client._retryable_write(
File "/home/username/.local/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1497, in _retryable_write
with self._tmp_session(session) as s:
File "/usr/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/home/username/.local/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1829, in _tmp_session
s = self._ensure_session(session)
File "/home/username/.local/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1816, in _ensure_session
return self.__start_session(True, causal_consistency=False)
File "/home/username/.local/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1766, in __start_session
server_session = self._get_server_session()
File "/home/username/.local/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1802, in _get_server_session
return self._topology.get_server_session()
File "/home/username/.local/lib/python3.8/site-packages/pymongo/topology.py", line 496, in get_server_session
self._select_servers_loop(
File "/home/username/.local/lib/python3.8/site-packages/pymongo/topology.py", line 215, in _select_servers_loop
raise ServerSelectionTimeoutError(
pymongo.errors.ServerSelectionTimeoutError: 127.0.0.1:27017: [Errno 111] Connection refused, Timeout: 30s, Topology Description: <TopologyDescription id: 60db2071e53de99692268c6f, topology_type: Single, servers: [<ServerDescription ('127.0.0.1', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('127.0.0.1:27017: [Errno 111] Connection refused')>]>
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "LoadDataOnSequence.py", line 82, in <module>
myPool.map(createCollectionPDB, listFile("datum/pdb_seqres.txt"))
File "/usr/lib/python3.8/multiprocessing/pool.py", line 364, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/usr/lib/python3.8/multiprocessing/pool.py", line 771, in get
raise self._value
pymongo.errors.ServerSelectionTimeoutError: 127.0.0.1:27017: [Errno 111] Connection refused, Timeout: 30s, Topology Description: <TopologyDescription id: 60db2071e53de99692268c6f, topology_type: Single, servers: [<ServerDescription ('127.0.0.1', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('127.0.0.1:27017: [Errno 111] Connection refused')>]>
I have tried multiple times, modifying my code in different ways, with no luck.
I have also tried both running the code from PyCharm via SSH and creating a local folder (on the multicore machine) with all the necessary files.
I count the number of cores and create my MongoClient:
from multiprocessing import *
from pymongo import MongoClient
#Number of cores
x = cpu_count()
print(x)
myClient = MongoClient('mongodb://127.0.0.1:27017/')
I prepare a list to pass, using that function:
def listFile(fileName):
    fOpen = open(fileName)
    listFile = fOpen.readlines()
    arrOfArrs = []
    tmp1 = []
    for i in listFile:
        # print(i)
        if i.startswith(">"):
            if len(tmp1) > 1:
                arrOfArrs.append(tmp1)
                tmp1 = []
            tmp1.append(i.strip())
        else:
            tmp1.append(i.strip())
    #print(listFile)
    return arrOfArrs
That's how I prepare a big text file (in reality it will be an even larger one; I am just testing with one of the PDB files from https://www.wwpdb.org/ftp/pdb-ftp-sites, the seqres file. I am not linking the exact file, as it would download immediately). I suppose everything works up to this point.
Next is the function, which will be used in Pool:
def createCollectionPDB(fP):
    lineName = ""
    lineFASTA = ""
    colName = ""
    PDBName = ""
    chainIDName = ""
    typeOfMol = ""
    molLen = ""
    proteinName = ""
    for i in fP:
        print("test", i)
        print(lineName)
        if ">" in i:
            lineName = i.strip()
            print("LINE NAME")
            colName = lineName.split(" ")[0].strip()[1:]
            print("COLNAME", colName)
            PDBName = lineName.split("_")[0].strip()
            chainIDName = colName.split("_")[-1].strip()
            typeOfMol = lineName.split(" ")[1].strip().split(":")[1].strip()
            molLen = lineName.split(" ")[2].strip().split(":")[-1].strip()  #[3].split(" ")[0].strip()
            proteinName = lineName.split(" ")[-1].strip()
            print(colName, PDBName, chainIDName, typeOfMol, molLen, proteinName)
        else:
            print("ELSE", colName)
            lineFASTA = i.strip()
            dict2Write = {"PDB_ID": PDBName, "Chain_ID": chainIDName, "Molecule Type": typeOfMol, "Length": molLen, "Protein_Name": proteinName, "FASTA": lineFASTA}
            myNewDB = myClient["MyPrjPrj_PDBs"]
            newCol = myNewDB[colName]
            x = newCol.insert_one(dict2Write)
            print("PDB", x.inserted_id)
That one used to work as well. Finally I multiprocess:
f1 = listFile("datum/pdb_seqres.txt")
myPool = Pool(processes=x)
myPool.map(createCollectionPDB, f1)
myPool.join()
myPool.close()
I have looked through various solutions: changing the Python version, trying different MongoDB versions (5.0 and 4.x), and restarting mongod. I have also tried changing the number of processes, which leaves me with pretty much the same error, though stopping at a different line. Another option I tried was ssh_pymongo, with no luck either.
The code also works without multiprocessing, though in that case I use it on a smaller file.
Each process needs to have its own client, so you most likely need to create the client in each process instead of creating one before invoking multiprocessing.
"Forked process: Failure during socket delivery: Broken pipe" contains general information on how MongoDB drivers handle forking.
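A per-process client is easiest to arrange with a Pool initializer, which runs once inside each worker after it starts. This sketch uses a plain string as a stand-in for the client so it runs without a MongoDB server; in the real script the marked line would be `MongoClient('mongodb://127.0.0.1:27017/')`:

```python
import os
from multiprocessing import Pool

_client = None  # per-process handle; stays unset in the parent

def init_worker():
    global _client
    # real version: _client = MongoClient('mongodb://127.0.0.1:27017/')
    _client = "client-in-pid-%d" % os.getpid()  # stand-in, no server needed

def insert_one(item):
    # each task uses the client its own process created,
    # never one inherited across the fork
    assert _client is not None
    return _client

def run_demo():
    with Pool(processes=2, initializer=init_worker) as pool:
        return pool.map(insert_one, range(4))

if __name__ == '__main__':
    print(run_demo())
```

In the original script this would mean moving the `MongoClient(...)` call out of module scope and into `init_worker`, then building the Pool with `Pool(processes=x, initializer=init_worker)`.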

Python Multiprocessing (TypeError: cannot serialize '_io.BufferedReader' object)

I'm trying to run a dictionary attack on a zip file, using Pool to increase speed.
But I get the following error in Python 3.6, while the same code works in Python 2.7:
Traceback (most recent call last):
File "zip_crack.py", line 42, in <module>
main()
File "zip_crack.py", line 28, in main
for result in results:
File "/usr/lib/python3.6/multiprocessing/pool.py", line 761, in next
raise value
File "/usr/lib/python3.6/multiprocessing/pool.py", line 450, in _handle_tasks
put(task)
File "/usr/lib/python3.6/multiprocessing/connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/usr/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
TypeError: cannot serialize '_io.BufferedReader' object
I searched for the same error but couldn't find an answer that helps here.
The code looks like this:
import time
import zipfile
from functools import partial
from multiprocessing import Pool

def crack(pwd, f):
    try:
        key = pwd.strip()
        f.extractall(pwd=key)
        return True
    except:
        pass

z_file = zipfile.ZipFile("../folder.zip")
with open('words.dic', 'r') as passes:
    start = time.time()
    lines = passes.readlines()
    pool = Pool(50)
    results = pool.imap_unordered(partial(crack, f=z_file), lines)
    pool.close()
    for result in results:
        if result:
            pool.terminate()
            break
    pool.join()
I also tried another approach using map
with contextlib.closing(Pool(50)) as pool:
    pool.map(partial(crack, f=z_file), lines)
which worked great and found passwords quickly in Python 2.7, but it throws the same exception in Python 3.6.
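The ZipFile object wraps an open file handle (the `_io.BufferedReader` from the message), and Python 3 refuses to pickle that when the Pool ships `partial(crack, f=z_file)` to the workers. One way around it is to pass only the path and let each worker open its own ZipFile in a Pool initializer. A sketch under those assumptions (paths are the question's; the extraction target is a throwaway temp dir so workers don't clobber each other):

```python
import tempfile
import zipfile
from multiprocessing import Pool

_zfile = None  # one ZipFile per worker process

def init_worker(zip_path):
    global _zfile
    _zfile = zipfile.ZipFile(zip_path)  # opened inside the worker: nothing to pickle

def crack(pwd):
    try:
        _zfile.extractall(path=tempfile.mkdtemp(), pwd=pwd.strip().encode())
        return pwd
    except (RuntimeError, zipfile.BadZipFile):
        return None  # wrong password

if __name__ == '__main__':
    with open('words.dic') as passes:
        lines = passes.readlines()
    with Pool(50, initializer=init_worker, initargs=("../folder.zip",)) as pool:
        for result in pool.imap_unordered(crack, lines):
            if result:
                print("password:", result.strip())
                pool.terminate()
                break
```

The same idea works for the `contextlib.closing` variant; the only requirement is that the ZipFile never crosses the process boundary.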

SerialException not caught at pySerial inWaiting()

I work with Python 3. My goal is a simple script that can handle connect/disconnect and read data. I use the pySerial library.
At the initial stage of the script I have:
ser = serial.Serial('/dev/rfcomm0')
Later I have the following code:
def readAndPrint():
    try:
        waitingChar = ser.inWaiting()
    except serial.SerialException as e:
        print("Got serial exception")
        portOpenFlag = False
        readException = True
    if (waitingChar > 0):
        print("Got some data")
        data_str = ser.read(ser.inWaiting())
        print(data_str)
Everything is fine until I read data, but when I close the Bluetooth connection from the other side I get
s = fcntl.ioctl(self.fd, TIOCINQ, TIOCM_zero_str)
OSError: [Errno 5] Input/output error
and never actually arrive in the except serial.SerialException branch.
What is wrong?
EDIT:
This is the traceback:
Traceback (most recent call last):
File "python_scripts/serialTest.py", line 43, in <module>
readAndHandleException()
File "python_scripts/serialTest.py", line 36, in readAndHandleException
readAndPrint()
File "python_scripts/serialTest.py", line 22, in readAndPrint
waitingChar = ser.inWaiting()
File "/usr/lib/python3/dist-packages/serial/serialposix.py", line 435, in inWaiting
s = fcntl.ioctl(self.fd, TIOCINQ, TIOCM_zero_str)
OSError: [Errno 5] Input/output error
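The traceback shows the raw OSError from the ioctl escaping inWaiting() without being wrapped in SerialException, so an except clause for SerialException alone never fires in this pySerial version. Catching OSError covers it; in pySerial 3.x, SerialException itself subclasses OSError, so one clause handles both cases. A sketch (function and port names hypothetical):

```python
def read_and_print(ser):
    """Poll a serial port, treating a raw OSError as a disconnect.

    ser is any pySerial-like object with inWaiting() and read().
    Returns bytes read, b"" when idle, or None after a disconnect.
    """
    try:
        waiting = ser.inWaiting()
    except OSError as e:  # also catches SerialException in pySerial 3.x
        print("Serial port lost:", e)
        return None
    if waiting > 0:
        return ser.read(waiting)
    return b""
```

Checking `waiting` only on success also avoids the original code's other pitfall: `waitingChar` being unbound when the except branch runs.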

Contradictory errors?

So I'm trying to edit a CSV file by writing to a temporary file and eventually replacing the original with the temp file. I'm going to have to edit the CSV file multiple times, so I need to be able to reference it. I've never used NamedTemporaryFile before and I'm running into a lot of difficulties. The most persistent problem is writing out the edited lines.
This part goes through and writes out rows, unless specific values are in a specific column, in which case it just passes over them.
I have this:
office = 3
temp = tempfile.NamedTemporaryFile(delete=False)
with open(inFile, "rb") as oi, temp:
    r = csv.reader(oi)
    w = csv.writer(temp)
    for row in r:
        if row[office] == "R00" or row[office] == "ALC" or row[office] == "RMS":
            pass
        else:
            w.writerow(row)
and I get this error:
Traceback (most recent call last):
File "H:\jcatoe\Practice Python\pract.py", line 86, in <module>
cleanOfficeCol()
File "H:\jcatoe\Practice Python\pract.py", line 63, in cleanOfficeCol
for row in r:
_csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)
So I searched for that error, and the general consensus was that "rb" needs to be "rt", so I tried that and got this error:
Traceback (most recent call last):
File "H:\jcatoe\Practice Python\pract.py", line 86, in <module>
cleanOfficeCol()
File "H:\jcatoe\Practice Python\pract.py", line 67, in cleanOfficeCol
w.writerow(row)
File "C:\Users\jcatoe\AppData\Local\Programs\Python\Python35-32\lib\tempfile.py", line 483, in func_wrapper
return func(*args, **kwargs)
TypeError: a bytes-like object is required, not 'str'
I'm confused because the errors seem to be saying opposite things.
If you read the tempfile docs you'll see that by default it opens the file in 'w+b' mode. If you take a closer look at your errors, you'll see you're getting one on read and one on write. What you need to do is make sure you're opening your input and output files in the same mode.
You can do it like this:
import csv
import tempfile

office = 3
with open(inFile, 'r') as oi, tempfile.NamedTemporaryFile(delete=False, mode='w') as temp:
    reader = csv.reader(oi)
    writer = csv.writer(temp)
    for row in reader:
        if row[office] == "R00" or row[office] == "ALC" or row[office] == "RMS":
            pass
        else:
            writer.writerow(row)
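The mode mismatch is easy to demonstrate in isolation. By default NamedTemporaryFile opens in 'w+b' (binary), so writing a str to it raises exactly the second error; passing a text mode gives a file that csv.writer can use:

```python
import tempfile

# Default mode is 'w+b': a str write raises TypeError
# ("a bytes-like object is required, not 'str'")
with tempfile.NamedTemporaryFile() as f:
    try:
        f.write("some text")
        binary_rejects_str = False
    except TypeError:
        binary_rejects_str = True

# Text mode accepts str and round-trips it
with tempfile.NamedTemporaryFile(mode='w+') as f:
    f.write("some text")
    f.seek(0)
    round_tripped = f.read()
```
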
