Reading from a text file in a USB device (C++/Python3) - python-3.x

texttobecopied = open('mtp://%5Busb%3A001,015%5D/Internal%20stora/AppInventor/data/Scan_result.txt', 'r').readlines()
# opening the text file which I want to read. It is located in the USB-tethered smartphone; the mtp://... part is the path to that file.
appendFile = open('destinationFile.txt', 'a')
# opened the destination file into which I want to write.
appendFile.write('\n')
appendFile.writelines(texttobecopied)
# tried to write the text from the source file into my destination file.
appendFile.close()
Beginner here.
I need a program that reads text from a .txt file located on my USB-tethered phone (internal storage) and writes that text to a file on my system.
I tried the conventional way of specifying the path (the traditional open('path/name', 'r') way), but it didn't work.
Is there a way I can do that? I do not wish to copy the file containing the text; I just need the text inside.

It's the GNOME virtual file system daemon (gvfsd) that works behind your back...
First, the phone's file system has to be mounted somewhere. It's not in the usual places (/mnt/, /media/), but it should be registered in /etc/mtab.
In my case it's at the end, and it's mounted in a runtime directory named after my UID:
$ grep "/$UID/" /etc/mtab
gvfsd-fuse /run/user/1000/gvfs fuse.gvfsd-fuse rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
At this point I know the mount point, within the computer's file system, of the phone's file system, and I can list its contents:
$ ls /run/user/1000/gvfs
'mtp:host=%5Busb%3A001%2C006%5D'/
I know the contents of my phone, so I can open, using Python, one of the files on my device:
$ python -c "open('/run/user/1000/gvfs/mtp:host=%5Busb%3A001%2C006%5D/Internal shared storage/Ringtones/08_River.mp3', 'rb')"
$
As you can see, there are no errors opening the file.
I hope you can adapt this recipe to your specific situation and accept my answer.
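For completeness, here is a minimal sketch of how the original snippet could be adapted once the gvfs mount point is known. The mount path, storage folder and file names below are only placeholders pieced together from the question and the examples above; they will differ on your machine:
# Minimal sketch: read a text file through the gvfs MTP mount and append
# its contents to a local file. The source path is a placeholder; look up
# the real one under /run/user/$UID/gvfs as shown above.
src_path = ('/run/user/1000/gvfs/mtp:host=%5Busb%3A001%2C006%5D/'
            'Internal shared storage/AppInventor/data/Scan_result.txt')

with open(src_path, 'r') as src, open('destinationFile.txt', 'a') as dst:
    dst.write('\n')
    dst.write(src.read())  # copy only the text, not the file itself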

Related

Batch replacing unidentified characters in Unix that were created by macOS

On a Linux volume that is part of a NAS with many TB of data, some files were created from macOS, and some of those files uploaded from macOS seem to include characters in their names that cannot be reproduced via the FTP or SMB file protocols. These files appear as e.g. "picture_name001.jpg", where the "" probably stands for a colon or slash.
I can search for "" and have found that it applies to 2171 files in distributed locations on the volume. Way too many to find and correct each file name manually.
I thought I could connect to the NAS via SSH and simply loop through each directory doing an automated replacement of the "" with "_", but this doesn't work:
for file in **; do mv -- "$file" "${file///_}"; done
This attempt throws an error on the first matching item:
mv: can't rename '120422_LAXJFK': No such file or directory
So obviously this substitute character displayed as "" is not the way to address the file or directory, as it refers to a name that doesn't actually exist in the volume index.
(A) How do I find out whether "120422_LAX:JFK" or "120422_LAX/JFK" is meant here, and (B) how do I escape these invalid characters to eventually be able to rename all those files automatically to, for example, "120422_LAX_JFK"?
Is there, for example, a way to get a numerical file ID from the name and then rename the file by that number in case its name contains ""?
I think the problem is that different character codes can hide behind this "". When the system can't represent some character (for example, because the given encoding is not supported), it automatically replaces it with some default character (in your case ""). But there is still an actual character code that belongs in the name, and when you try for file in **; do mv -- "$file" "${file///_}"; done the system can't recognize which code the symbol displayed as "" stands for.
I think this problem can be solved by making the character encodings compatible, ideally identical, on both devices (Mac and NAS).
Hope this helps.
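If changing the encodings is not an option, one possible workaround (a rough sketch only, untested against this particular NAS) is to handle the file names as raw bytes in Python, so the undisplayable character never has to be typed. Here every byte outside printable ASCII is replaced with an underscore, and the root directory is a placeholder:
import os

# Sketch: walk the tree with bytes paths so names that are invalid in the
# current locale can still be handled; replace every byte outside printable
# ASCII with an underscore. ROOT is a placeholder for the affected share.
ROOT = b'/volume1/data'

for dirpath, dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        cleaned = bytes(b if 0x20 <= b < 0x7f else ord('_') for b in name)
        if cleaned != name:
            os.rename(os.path.join(dirpath, name),
                      os.path.join(dirpath, cleaned))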

Output redirection should recreate the destination file

I can redirect the output of a process to a file
./prog > a.txt
But if I delete a.txt and do not restart prog, then no more output will get into a.txt. The same is the case if I use the append-redirect >>.
Is there a way to make my redirection recreate the file when it is deleted during the runtime of prog?
Redirection is part of the OS, I think, and not of prog. So maybe there are some tools or settings.
Thanks!
At the OS level, a file is made up of several components:
the content, stored somewhere on the storage device;
an i-node that keeps all file information except the name;
the name, listed in a directory (also stored on the storage device);
and, when the file is open, memory buffers in each application that has it open, holding some of the file's content.
All of these are linked, and the OS keeps track of them.
If you delete the file while it is open by another application (the redirect operator > keeps it open until ./prog completes), only the name is removed from the directory. The other pieces of the puzzle are still there, and they keep working until the last application that has the file open closes it. That is when the file's content is discarded on the storage medium.
If you delete the file while ./prog keeps running and producing output, the file still grows and uses space on the storage medium, but it cannot be opened again because there is no way to reach it by name. Only the programs that already had it open when it was deleted can still access the file, until they close it.
Even if you re-create the file, it is a different file that happens to have the same name as the deleted one. ./prog is not affected; its output goes to the old, deleted file.
When its output is redirected, apart from restarting ./prog there is no way to persuade it to store its output in a different file once a.txt is deleted.
There are several ways to make this happen if ./prog writes into a.txt itself, but they all require changing the code of ./prog.
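As an illustration of the "change the code of ./prog" route, here is a hedged sketch of a writer that notices its log file has been deleted or replaced (by comparing i-node numbers) and reopens it; the file name and the work loop are invented for the example:
import os, time

LOG = 'a.txt'  # placeholder name, as in the question

def open_log():
    return open(LOG, 'a')

log = open_log()
for i in range(100):                      # stands in for the program's real work
    try:
        if os.fstat(log.fileno()).st_ino != os.stat(LOG).st_ino:
            raise FileNotFoundError       # the name now points to a different file
    except FileNotFoundError:             # a.txt was deleted (or replaced)
        log.close()
        log = open_log()                  # recreate it and keep writing
    log.write('line %d\n' % i)
    log.flush()
    time.sleep(0.1)
log.close()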
You can use gdb to redirect the output of a program to a file when the original file has been deleted.
Refer to this post.
For later reference, here is an excerpt from the post:
Find the files that are opened by the process using /proc/<pid>/fd.
Attach gdb to the PID of the program.
Close the file descriptor of the deleted file through gdb session.
Redirect the program output to another file using gdb calls.
Example
Suppose that the PID of the program is 19080 and the file descriptor of the deleted file is 2.
ls -l /proc/19080/fd
gdb attach 19080
(gdb) p close(2)
$1 = 0
(gdb) p fopen("/tmp/file", "w")
$2 = 20746416
(gdb) p fileno($2)
$3 = 7
(gdb) quit
N.B.: If the data in the deleted file is required, recover the deleted text file before closing the file descriptor:
cp -pv /proc/19080/fd/2 recovered_file.txt

Recover deleted file stuck in Linux shell process

I have a background process that has been running for a long time and writing its logs to a file. The file's size has grown too large. I deleted the file and created a new one with the same name, permissions and ownership, but the new file does not get any entries.
The old file is marked as deleted and is still being used by the process, which can clearly be seen with the lsof command.
Please let me know: is there any way I can recover that file?
Your positive response will be much appreciated.
If the file is still open by some process, you can recover it using the /proc filesystem.
First, check the file descriptor number under which that file is opened in that process. If the file is opened in a process with PID X, use the lsof command as follows:
lsof -p X
This will show a list of files that are currently open in process X. The 4th column shows the file descriptors and the last column shows the name of the mount point and file system where the file lives (ignore the u, r and other flags after the file descriptor number; they just indicate whether the file is open for reading, writing, etc.).
If the file descriptor number is Y, you can access its contents in /proc/X/fd/Y. So, something like this would recover it:
cp /proc/X/fd/Y /tmp/recovered_file
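The same recovery can also be scripted; a small sketch in Python, where the PID and descriptor number are placeholders you would read off the lsof output:
import shutil

pid, fd = 12345, 3                    # placeholders taken from the lsof output
src = '/proc/%d/fd/%d' % (pid, fd)    # the kernel still exposes the deleted file here
shutil.copyfile(src, '/tmp/recovered_file')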

Program to list files of a process in Linux

I need a program to list all the files that are accessed/opened by a process in Linux.
It should work like this:
Output: the full paths of the files that the process is accessing.
I don't want to use the 'lsof' utility or any other utility.
Is there any way to achieve this programmatically?
If you want just the files which are accessible through open file descriptors by the process of pid 1234, list the /proc/1234/fd/ directory (most of the entries are symlinks). You'll also get additional details through /proc/1234/fdinfo/.
Try
ls -l /proc/self/fd/
to get an idea of what these files contain.
Programmatically, you could use readdir(3) after opendir(3) on these directories (and also readlink(2), at least for the entries in /proc/1234/fd/ ...). See also proc(5).
Notice that /proc/ is Linux-specific. Some other Unixes have it (e.g. Solaris), but with very different contents, properties and semantics.
If you also care about files which have been opened and closed in the past by some process, it is much more difficult. See also inotify(7) and ptrace(2)...
To convert a file path to a "canonical" absolute file path, use realpath(3).
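The same idea can be sketched in Python instead of C (os.listdir and os.readlink wrap the opendir/readdir and readlink calls mentioned above); the PID is a placeholder:
import os

pid = 1234                                # placeholder: the process to inspect
fd_dir = '/proc/%d/fd' % pid

for fd in os.listdir(fd_dir):             # one entry per open file descriptor
    target = os.readlink(os.path.join(fd_dir, fd))
    # regular files show their full path; pipes and sockets show pseudo-names
    print(fd, '->', target)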

Mounting Raw Disk Image File in Nautilus

I'm running Ubuntu 10.10. As part of SD card creation, I have a script that successfully creates a raw disk image file, correctly formatted with an ext2 file system. I have built SD cards from the raw disk image file with dd.
Now, I'd like to mount it and browse the files using Nautilus.
I know I can use mount -o loop to mount it at a mount point. I would like to get it mounted by GNOME at an automatically created /media/xxx mount point. I have used partprobe /dev/loopn to get the file noticed. It appears in my Places menu, and if I select it from there, Nautilus opens the disk just fine.
What I would like to do is have my script kick Nautilus so its file browser window opens at the image file's root, without having to select it from the Places menu.
You could also use gvfs-mount.
List mountable devices:
gvfs-mount -li
Mount the device file found above:
gvfs-mount -d /dev/sdaX
Nautilus uses the same underlying library (gvfs).
After that you can use:
nautilus /media/LABEL
If you know the path to the directory, you can use gnome-open, like:
gnome-open /media/xxx
You can use gnome-disk-image-mounter (probably with the --writable option) to mount an image using GNOME; the mount will then be available in Nautilus.
If you want a graphical application like Nautilus to browse the files, why not configure it to mount images by itself?
With 'right-click | Properties | Open With' you can just use gnome-disk-image-mounter to do the task you want, including opening the folder.
See my answer to another question on opening the image writable.
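Since the goal is to drive this from a script, here is a hedged Python sketch combining the two suggestions above (gnome-disk-image-mounter plus nautilus); the image name, the mount label and the fixed wait are assumptions for the example:
import subprocess, time

IMAGE = 'sdcard.img'          # placeholder: the raw image created by the script
MOUNT = '/media/LABEL'        # placeholder: where GNOME auto-mounts it, by volume label

subprocess.run(['gnome-disk-image-mounter', '--writable', IMAGE], check=True)
time.sleep(2)                 # crude wait for the auto-mount to appear
subprocess.run(['nautilus', MOUNT], check=True)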
