Read a dtoverlay controlled device via Python3?

How do you read a dtoverlay controlled device, i.e. a sensor, via python3?
I can read the device via a simple cat, for example...
# cat /sys/bus/i2c/devices/1-0077/iio\:device0/in_temp_input
27130
So I know the basic setup is good: the sensor is at address 0x77, it is a BME280 sensor, etc.
I can also read the sensor via the various python3 libraries for such sensors, say the python library from Adafruit.
But I want to use the dtoverlay method of sensor control, i.e. reading, and read the sensor from python3. This seemed obvious and straightforward, but apparently it is not; I tried the following code and got the following error.
#!/usr/bin/python3
#
#
import os
#
theSensor=os.open('/sys/bus/i2c/devices/1-0077/iio\:device0/in_temp_input', os.O_RDONLY)
os.lseek(theSensor, 0, os.SEEK_SET)
print(os.read(theSensor, 2))
theSensor.close()
And the error...
# python3 BME280-OverLay.py
Traceback (most recent call last):
  File "/root/BME280-OverLay.py", line 17, in <module>
    theSensor=os.open('/sys/bus/i2c/devices/1-0077/iio\:device0/in_temp_input', os.O_RDONLY)
FileNotFoundError: [Errno 2] No such file or directory: '/sys/bus/i2c/devices/1-0077/iio\\:device0/in_temp_input'
Is there some trick to reading this specific device path via python3? The simple cat works.

In your initial cat command, you may have noticed that there's a \ inside the path. That's an escape character. It is probably there because you used autocompletion with the Tab key; bash adds it automatically in that case. cat doesn't actually need it: the shell strips it before cat ever sees the path.
Python doesn't strip it: the string you pass is used verbatim, backslash included. You'll have to feed open() the plain path syntax.
By the way, you can use a plain open() call and the with syntax:
with open('/sys/bus/i2c/devices/1-0077/iio:device0/in_temp_input', 'r') as fd:
    temp = fd.read()
print(temp)
This way, the file gets closed before the print() call.
PS: the fact that the file you are trying to read is on a virtual filesystem has no impact.
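Putting the two fixes together, a minimal sketch of the corrected read, assuming (per the IIO sysfs convention, which the BME280 driver follows) that in_temp_input reports millidegrees Celsius:
with open('/sys/bus/i2c/devices/1-0077/iio:device0/in_temp_input') as f:
    millideg = int(f.read().strip())

# 27130 -> 27.13 degrees Celsius
print(millideg / 1000)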

Related

LIBUSB_ERROR_BUSY [-6] Sending a basic packet with Python 3.9 using LibUSB1

I am developing some software in Python 3.9 and I am at the point where I have a device connected to my USB port and would like to send a basic packet to test the interface before I proceed. I am using this example to try to get my interface to work. I am not bothered about speed or byte count; I would like to see any response on the interface (though on reflection I'm wondering if USB speed could be the issue):
import usb1
import usb.util
import os
import sys
import libusb
import usb.core
from usb import util
import math

dev = usb.core.find(idVendor=0x11ac, idProduct=0x317d)
with usb1.USBContext() as context:
    handle = context.openByVendorIDAndProductID(
        0x11ac,
        0x317d,
    )
    handle.claimInterface(0)
    handle.setInterface(0)
    data = bytearray(b"\xf0\x0f" * int(math.ceil(0xb5db91 / 4.0)))
    handle.controlWrite(0x40, 0xb0, 0xb5A6, 0xdb91, b"")
    handle.bulkWrite(2, data, timeout=5000)
https://github.com/vpelletier/python-libusb1/issues/21
I have had a look in various forums for several days and cannot seem to get an answer that works. Here is the trace. It's worth noting that from time to time this .py file does run without error, but it does nothing, and I see no traffic travelling to the USB interface.
Can someone please help me configure a working example of how to send a packet to the interface? I have tried various things like detaching the kernel driver, setting the configuration, etc.
For four days I struggled with libusb 0.1 and 1.0; after discovering libusb1, I changed my wrapper and had a lot more success.
I also see a lot of examples in forums like this one, and I always get the same response. Also, I'm curious as to why it appears that 0x40 is the (out) endpoint.
Traceback (most recent call last):
  File "/home/jbgilbert/Desktop/Packets/Backend_replace.py", line 16, in <module>
    handle.claimInterface(0)
  File "/usr/lib/python3/dist-packages/usb1/__init__.py", line 1213, in claimInterface
    libusb1.libusb_claim_interface(self.__handle, interface),
  File "/usr/lib/python3/dist-packages/usb1/__init__.py", line 133, in mayRaiseUSBError
    __raiseUSBError(value)
  File "/usr/lib/python3/dist-packages/usb1/__init__.py", line 125, in raiseUSBError
    raise __STATUS_TO_EXCEPTION_DICT.get(value, __USBError)(value)
usb1.USBErrorBusy: LIBUSB_ERROR_BUSY [-6]
My device is a laptop; using lsmod reveals all drivers linked to that particular endpoint. In this case, because of the presence of a webcam, I was unable to write to an available endpoint. Disabling the driver was to no avail, and I had to try the code on a machine with fewer onboard accessories, which proved more successful.
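As a general note on LIBUSB_ERROR_BUSY: it usually means another driver (often a kernel module, as with the webcam above) already holds the interface. python-libusb1 can ask libusb to detach the kernel driver automatically before claiming. A minimal sketch, reusing the question's vendor/product IDs:
import usb1

with usb1.USBContext() as context:
    handle = context.openByVendorIDAndProductID(0x11ac, 0x317d)
    if handle is None:
        raise RuntimeError('device not found or not openable')
    # let libusb detach any kernel driver while we hold the interface
    handle.setAutoDetachKernelDriver(True)
    handle.claimInterface(0)
    try:
        # same control transfer as in the question
        handle.controlWrite(0x40, 0xb0, 0xb5A6, 0xdb91, b"")
    finally:
        handle.releaseInterface(0)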

PyTorch Fashion-MNIST (ETL)

I'm new to Deep Learning and PyTorch, so please do bear with me if some questions seem silly or I'm not asking in the correct format.
I was watching this video as part of a PyTorch series on Deep Learning: https://www.youtube.com/watch?v=8n-TGaBZnk4 . This video specifically is about ETL (using Fashion-MNIST dataset).
I have a few questions on the video at 7:05.
Question 1: In the Fashion-MNIST subclass constructor we passed it the argument:
‘root’, where the instructor mentioned: this is the location on disk where the data is located. Sorry, maybe this is a silly question, but is this where the data is located on the source server (from the URL), or is it the path where you want to save the data locally on your computer?
Question 2: Also, for Fashion-MNIST, is the 'root' always the same location path, i.e. './data/FashionMNIST'?
Question 3: If the 'root' defines the path where the data is located on the source server, then where would it be downloaded locally? I checked my 'Downloads' folder (I'm using a Windows 7 laptop) and couldn't find the files there.
Question 4: The video mentioned that, in subsequent calls, we should check whether the data has already been downloaded (i.e. we pass download=True in the arguments).
4(a): What's a good approach to doing this? Do we put an if statement in place to check for it, or is there a smarter way of checking for downloaded data?
4(b): Also, what is meant by "subsequent calls"? Does it mean when we need to call the 'FashionMNIST' constructor again for the test_data download?
Question 5: Finally, I tried running the code below (which is the one in the video) on Spyder IDE (Python 3.5):
import torch
import torchvision
import torchvision.transforms as transforms

train_set = torchvision.datasets.FashionMNIST(
    root='./data/FashionMNIST'
    ,train=True
    ,download=True
    ,transform=transforms.Compose([
        transforms.ToTensor()
    ])
)
I got the output:
Traceback (most recent call last):
  File "<ipython-input-3-3ac000b9e90a>", line 10, in <module>
    transforms.ToTensor()
  File "C:\Program Files\Anaconda3\lib\site-packages\torchvision\datasets\mnist.py", line 68, in __init__
    self.download()
  File "C:\Program Files\Anaconda3\lib\site-packages\torchvision\datasets\mnist.py", line 136, in download
    makedir_exist_ok(self.raw_folder)
  File "C:\Program Files\Anaconda3\lib\site-packages\torchvision\datasets\utils.py", line 41, in makedir_exist_ok
    os.makedirs(dirpath)
  File "C:\Program Files\Anaconda3\lib\os.py", line 241, in makedirs
    mkdir(name, mode)
FileNotFoundError: [WinError 206] The filename or extension is too long: './data/FashionMNIST\\FashionMNIST\\raw'
Not sure why I got that error at the end. In addition, I ran the code in a Jupyter Notebook, as per the video, and it worked fine. But I'm wondering why it throws that error in the Spyder IDE.
Many thanks in advance.
No genuine question is a silly question. Answering the questions one by one:
Ans 1 & 2:
root is the path on your local disk where the data will be saved. You can give any path according to your liking; it will not cause an issue.
Ans 3:
The URLs are defined within the torchvision source files; the local path is all you need to supply. To look at the URLs from which the data is downloaded, here is a link.
Ans 4: download=True merely gives it permission to download. The downloader automatically checks whether the data already exists; if it does, it will not download again, even if download is set to True. This happens in the background, so you don't have to worry about it.
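As a minimal sketch of such "subsequent calls": the second constructor call below (for the test split) finds the files already on disk and skips the download:
import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose([transforms.ToTensor()])

# first call downloads into ./data/FashionMNIST
train_set = torchvision.datasets.FashionMNIST(
    root='./data/FashionMNIST', train=True, download=True, transform=transform)

# this call sees the existing files and does not download again
test_set = torchvision.datasets.FashionMNIST(
    root='./data/FashionMNIST', train=False, download=True, transform=transform)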
Ans 5: The issue isn't exactly a torch issue; it has more to do with how it is being run on Windows. The issue is discussed at length here & here.
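One workaround often suggested in such threads (a hedged sketch, not confirmed as the fix in the links above) is to pass an absolute root, so the dataset path does not depend on the IDE's working directory:
import os
import torchvision
import torchvision.transforms as transforms

# resolve the relative root against the current directory explicitly
root = os.path.abspath('./data/FashionMNIST')
train_set = torchvision.datasets.FashionMNIST(
    root=root, train=True, download=True,
    transform=transforms.Compose([transforms.ToTensor()]))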

Python 3 combining file open and read commands - a need to close a file and how?

I am working through "Learn Python 3 the Hard Way" and am making the code more concise. Lines 11 to 18 of the program below (line 1 starts at # program: p17.py) are relevant to my question. Opening and reading a file are very easy, and it is easy to see how you close the file you open when working with files. The original section is commented out, and I include the concise code on line 16. I commented out the line of code that causes an error (on line 20):
$ python3 p17_aside.py p17_text.txt p17_to_file_3.py
Copying from p17_text.txt to p17_to_file_3.py
This is text.
Traceback (most recent call last):
  File "p17_aside.py", line 20, in <module>
    indata.close()
AttributeError: 'str' object has no attribute 'close'
Code is below:
# program: p17.py
# This program copies one file to another. It uses the argv function as well
# as exists - from sys and os.path modules respectively
from sys import argv
from os.path import exists
script, from_file, to_file = argv
print(f"Copying from {from_file} to {to_file}")
# we could do these two on one line, how?
#in_file = open(from_file)
#indata = in_file.read()
#print(indata)
# THE ANSWER -
indata = open(from_file).read()
# The next line was used for testing
print(indata)
# indata.close()
So my question is: should I just avoid the practice of combining commands as done above, or is there a way to properly deal with that situation so files are closed when they should be? Is it necessary to close the file at all in this situation?
A context manager and the with statement are a comfortable way to make sure your file is closed as needed:
with open(from_file) as fobj:
    indata = fobj.read()
Nowadays, you can also use Path-like objects and their read_text and read_bytes methods:
from pathlib import Path

indata = Path(from_file).read_text()
The error you were seeing is because you were not trying to close the file, but the str into which you had read its content. You'd need to assign the object returned by open() a name, and then read from and close that one:
fobj = open(from_file)
indata = fobj.read()
fobj.close() # This is OK
Strictly speaking, you would not need to close that file, as dangling file descriptors are cleaned up when the process exits. Especially in a short example like this, it is of relatively little concern.
I hope I've understood the follow-up question in the comments correctly, so as to extend on this a bit more.
If you wanted a single command, look at the pathlib.Path example above.
With open() as such, you cannot perform read and close in a single operation without assigning the result of open() to a variable, as both read() and close() have to be performed on the same object returned by open(). If you do:
var = fobj.read()
Now, var refers to the content read out of the file (so not something you could close; a str has no close() method).
If you did:
open(from_file).close()
After it (but also before; at any point), you would simply open that file again and close it immediately. By the way, this returns None, just in case you wanted to get the return value. It would not affect previously opened file handles and file-like objects, and it would not serve any practical purpose except perhaps making sure you can open the file.
But again: it's good practice to perform the housekeeping, though strictly speaking (and especially in short code like this), if you did not close the file and relied on the OS to clean up after your process, it would work fine.
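Putting it together for the exercise's copy task, a minimal sketch that reads, writes and closes both files in one statement (reusing from_file and to_file from the question's argv unpacking):
with open(from_file) as src, open(to_file, 'w') as dst:
    dst.write(src.read())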
How about the following:
# to open the file and read it
indata = open(from_file).read()
print(indata)
# this opens the file a second time and closes that new handle immediately;
# note it does not close the file object opened and read above
open(from_file).close()

Read NetCDF file from Azure file storage

I have uploaded a file to my Azure file storage account and created a SAS (shared access signature). Let's pretend the file in question is called fileA.nc
Now, with Python3, I am attempting to read fileA.nc:
from netCDF4 import Dataset
url ='https://<my-azure-resource-group>.file.core.windows.net/<some-file-share>/fileA.nc<SAS-token>';
dataset = Dataset(url)
print(dataset.variables.keys())
The above code does not work, instead giving me the following error:
Traceback (most recent call last):
  File "yadaYadaYada/test.py", line 8, in <module>
    dataset = Dataset(url)
  File "netCDF4/_netCDF4.pyx", line 1848, in netCDF4._netCDF4.Dataset.__init__ (netCDF4/_netCDF4.c:13983)
OSError: NetCDF: Malformed or unexpected Constraint
This is line 8:
dataset = Dataset(url)
I know the URL provided works. If I paste it into the browser, the file downloads...
I have checked the netCDF4 documentation, which says this:
Remote OPeNDAP-hosted datasets can be accessed for reading over http if a URL is provided to the Dataset constructor instead of a filename. However, this requires that the netCDF library be built with OPeNDAP support, via the --enable-dap configure option (added in version 4.0.1).
However, I have no idea how to tell whether, when PyCharm installed netCDF4, it used the --enable-dap option, though I cannot imagine why it would not. Besides, if I pass in a URL which points to some HTML, I get the HTML back in the error dump, so from that I would think netCDF4 is actually trying to load a remote dataset and the problem is somewhere else.
I'd really appreciate some help here. Maybe someone knows of another Python 3 netCDF library that will allow me to load my datasets from Azure?
UPDATE
Okay, I can now confirm that the Python netCDF4 library does come with OPeNDAP support enabled:
Hello again, netCDF4 1.0.4 with OpenDAP support is now available in the conda repository on Unix. To install: $ conda install netcdf4
Ilan
I have found a solution. It turns out that you cannot read directly from an Azure File share, even though when you paste the link to a file in the browser, the file begins to download.
What I needed to do was to mount the File Share on my OS. In my case, I was using Windows but this can be done with Linux, too. The following code should be modified accordingly and then put into Command Prompt:
net use <drive-letter>: \\<storage-account-name>.file.core.windows.net\<share-name>
example :
net use z: \\samples.file.core.windows.net\logs
Once the File Share is mounted, you can read from it as if it were an external HDD. You may need to add permission, but I didn't.
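After mounting, reading the file is just the local-file case again. A minimal sketch, assuming the share was mapped to drive Z: as in the example above and that fileA.nc sits at the share root:
from netCDF4 import Dataset

# the mounted share now behaves like a local disk
dataset = Dataset('Z:/fileA.nc')
print(dataset.variables.keys())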
Here is the link to the documentation for mounting the File Share: Documentation

shelve archives not openable on other computer

I have saved some objects with shelve. In another file I was able to restore these objects. But when I copy the archive to another computer, shelve gives me a _gdbm.error: File read error. The packages which hold the classes of the stored objects are directly accessible on both computers (but they are stored in different locations and added via PYTHONPATH). Both machines run Ubuntu 13.10; one is 32-bit and the other 64-bit.
Shouldn't these archives be machine independent?
On the 64-bit machine I get
>>> import shelve
>>> shelve.open('arch.db')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.3/shelve.py", line 232, in open
    return DbfilenameShelf(filename, flag, protocol, writeback)
  File "/usr/lib/python3.3/shelve.py", line 216, in __init__
    Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback)
  File "/usr/lib/python3.3/dbm/__init__.py", line 94, in open
    return mod.open(file, flag, mode)
_gdbm.error: File read error
and on the 32-bit machine it works.
When I create an archive on the 64-bit machine, it is openable on the 32-bit machine, but the interactive Python prompt crashes:
>>> import shelve
>>> s = shelve.open('arch.db')
>>> for i in s.items(): print(i)
...
gdbm fatal: lseek error
I don't even get a traceback.
It is really annoying: I intended to work on both computers, but at the moment I am bound to my slow 32-bit Eee PC, because I've already saved a lot into the archive.
The problem is that shelve uses gdbm (provided as dbm.gnu) as the default backend to store the serialized objects. The files created with gdbm depend on the architecture of the system and are therefore usable only on the architecture (32-bit or 64-bit) that created them.
There are some tools (gdbmexport or gdbm_dump) which allow you to convert the gdbm files; however, the workflow becomes error-prone if you want to access the files from both systems on a regular basis.
Fortunately, python provides different backends: dbm.gnu, dbm.ndbm and dbm.dumb. The latter two are platform independent.
import shelve
import dbm
import dbm.ndbm  # import the submodule so dbm.ndbm is available

# make newly created shelve files use the ndbm backend instead of gdbm
dbm._defaultmod = dbm.ndbm

db = shelve.open('somename')
A database created with the above code can be used on both 64-bit and 32-bit systems.
The default backend has to be set only when you create the file. dbm checks the file type of your database before opening and uses the correct backend.
Please note that the above code changes the default dbm for the whole python process. Another component might break if it relies on gdbm being the default.
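If you would rather not change the process-wide default at all, a sketch of an alternative (using the platform-independent dbm.dumb backend explicitly) is to open the database yourself and wrap it in a Shelf; dbm.whichdb shows which backend an existing file uses:
import shelve
import dbm
import dbm.dumb

# wrap an explicitly chosen backend in a Shelf instead of using shelve.open
db = shelve.Shelf(dbm.dumb.open('somename', 'c'))
db['key'] = ['some', 'object']
db.close()

# report the backend a database file was created with, e.g. 'dbm.dumb'
print(dbm.whichdb('somename'))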
