From WSL, how can I determine if a file is hidden, according to Windows? - python-3.x

I'm running Python 3.5 on the Windows Subsystem for Linux on Windows 10, Ubuntu 16.04. When finding files, e.g. with os.walk, I want to filter out hidden files (and directories) such as "desktop.ini" and "Thumbs.db", but they look like regular files from Ubuntu.
Because I'm running Ubuntu, ctypes.windll doesn't load so those solutions aren't an option.

You will need to parse the output with something like sed or awk, but the command cmd.exe /c dir /S /A:H will do what you are after.
It displays all files with the hidden attribute (/A:H) in the current directory and, with /S, recursively below it.
EDIT
The following is the output when I run the command from inside my Windows user directory:
➜ cd /mnt/c/Users/damo
➜ cmd.exe /c dir /A:H
Volume in drive C is OSDisk
Volume Serial Number is B8E3-7234
Directory of C:\Users\damo
25/11/2019 10:04 <DIR> AppData
20/02/2020 08:16 <DIR> IntelGraphicsProfiles
25/11/2019 15:42 <DIR> MicrosoftEdgeBackups
17/02/2020 10:04 7,864,320 NTUSER.DAT
25/11/2019 10:04 696,320 ntuser.dat.LOG1
20/02/2020 08:16 1,048,576 NTUSER.DAT{c17b7660-0d10-11ea-a41b-88b111e240a6}.TxR.0.regtrans-ms
25/11/2019 10:50 524,288 NTUSER.DAT{c17b7661-0d10-11ea-a41b-88b111e240a6}.TMContainer00000000000000000001.regtrans-ms
25/11/2019 10:50 524,288 NTUSER.DAT{c17b7661-0d10-11ea-a41b-88b111e240a6}.TMContainer00000000000000000002.regtrans-ms
25/11/2019 10:04 20 ntuser.ini
17/02/2020 10:08 21,126 ntuser.pol
12 File(s) 13,824,666 bytes
3 Dir(s) 325,835,501,568 bytes free
➜
Notice that in this example I did not include /S, and I ran this directly from inside WSL.
Could you also check which cmd.exe you are using from WSL by issuing which cmd.exe, which for me returns /mnt/c/Windows/System32/cmd.exe.
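To avoid parsing dir output, you can also query each file individually with attrib.exe from Python. Below is a sketch (not a definitive implementation): it assumes wslpath and attrib.exe are reachable from WSL, which is the default, and that the attrib output line has no colon before the drive letter.

```python
import subprocess

def parse_attrib_flags(line):
    # attrib.exe prints attribute letters (A, H, S, R, ...) in the
    # columns before the full Windows path, e.g.
    # "A    H       C:\Users\damo\NTUSER.DAT"
    drive_colon = line.index(":")            # ':' right after the drive letter
    return set(line[: drive_colon - 1].split())

def is_hidden(wsl_path):
    # Translate the WSL path to a Windows path, then ask attrib.exe.
    win_path = subprocess.check_output(
        ["wslpath", "-w", wsl_path], universal_newlines=True).strip()
    out = subprocess.check_output(
        ["attrib.exe", win_path], universal_newlines=True)
    return "H" in parse_attrib_flags(out)
```

You could then call is_hidden inside your os.walk loop to drop hidden entries; universal_newlines is used instead of text= so it still works on Python 3.5.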

Related

tar command with -zxvf not extracting contents as expected

(Ubuntu 18.04)
I'm attempting to extract an odbc driver from a tarball and following these instructions with command:
tar --directory=/opt -zxvf /SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux.tar.gz
This results in the following output:
root@08ba33ec2cfb:/# tar --directory=/opt -zxvf SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux.tar.gz
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/GoogleBigQueryODBC.did
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/docs/
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/docs/release-notes.txt
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/docs/Simba Google BigQuery ODBC Connector Install and Configuration Guide.pdf
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/docs/OEM ODBC Driver Installation Instructions.pdf
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/setup/
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/setup/simba.googlebigqueryodbc.ini
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/setup/odbc.ini
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/setup/odbcinst.ini
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/SimbaODBCDriverforGoogleBigQuery32_2.4.6.1015.tar.gz
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015.tar.gz
The guide linked to above says:
The Simba Google BigQuery ODBC Connector files are installed in the
/opt/simba/googlebigqueryodbc directory
Not for me, but I do see:
ls -l /opt/
total 8
drwxr-xr-x 1 1000 1001 4096 Apr 26 00:39 SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux
And:
ls -l /opt/SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/
total 52324
-rwxr-xr-x 1 1000 1001 400 Apr 26 00:39 GoogleBigQueryODBC.did
-rw-rw-rw- 1 1000 1001 26688770 Apr 26 00:39 SimbaODBCDriverforGoogleBigQuery32_2.4.6.1015.tar.gz
-rw-rw-rw- 1 1000 1001 26876705 Apr 26 00:39 SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015.tar.gz
drwxr-xr-x 1 1000 1001 4096 Apr 26 00:39 docs
drwxr-xr-x 1 1000 1001 4096 Apr 26 00:39 setup
I was specifically looking for the .so driver file. All of the above is in a Docker container. I tried extracting the tarball locally on Ubuntu 18.04 (same as my Docker container), and when I use the Ubuntu desktop GUI to extract, by double-clicking the tar.gz file and then clicking 'Extract', I do indeed see the expected files.
It seems my tar command (tar --directory=/opt -zxvf /SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux.tar.gz) is not extracting the tarball as expected.
How can I extract the contents of the tarball properly? The tarball in question is the linux one on this link.
[edit]
Adding screenshots of the contents of the tarball per the comments. I had to click down two levels of nesting to arrive at the actual files:
The instructions you linked to do not match the contents of the file I found from here. The first .tar.gz contains two other .tar.gz files. I looked into the 64 bit one and it has:
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/ErrorMessages/
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/ErrorMessages/en-US/
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/ErrorMessages/en-US/SimbaBigQueryODBCMessages.xml
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/ErrorMessages/en-US/ODBCMessages.xml
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/ErrorMessages/en-US/SQLEngineMessages.xml
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/ErrorMessages/en-US/DSMessages.xml
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/ErrorMessages/en-US/DSCURLHTTPClientMessages.xml
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/third-party-licenses.txt
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/lib/
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/lib/libgooglebigqueryodbc_sb64.so
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/lib/cacerts.pem
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/lib/EULA.txt
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/Tools/
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/Tools/get_refresh_token.sh
Your .so is in the lib directory. Based on the instructions, it looks like you need to extract this inner archive (or the 32-bit one if appropriate) and rename the resulting directory, in this case SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015, to simba/googlebigqueryodbc. The tar command is doing what it is told, but the instructions are way off.
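The two-level nesting can be reproduced with a throwaway mock archive; all file names below are placeholders, not the real Simba ones:

```shell
set -e
cd "$(mktemp -d)"
mkdir -p inner/lib && touch inner/lib/libdriver.so
tar -czf inner.tar.gz inner
mkdir outer && mv inner.tar.gz outer/
tar -czf outer.tar.gz outer && rm -r outer inner
tar -xzf outer.tar.gz            # first extraction: no .so anywhere yet
ls outer                         # -> inner.tar.gz
tar -xzf outer/inner.tar.gz      # second extraction reveals the payload
ls inner/lib                     # -> libdriver.so
```

For the real download, a second tar -xzf on the inner SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015.tar.gz is likewise what exposes lib/libgooglebigqueryodbc_sb64.so.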

How to use rsync properly to keep all file permissions and ownership?

I am trying to use rsync to backup some data from one computer (PopOS! 21.04) to another (Rocky 8.4). But no matter which flags I use with rsync, file permissions and ownership never seem to be saved.
What I do, is run this command locally on PopOS:
sudo rsync -avz /home/user1/test/ root@192.168.10.11:/root/ttt/
And as a result I get something like this:
[root@rocky_clone0 ~]# ls -ld ttt/
drwxrwxr-x. 2 user23 user23 32 Dec 17 2021 ttt/
[root@rocky_clone0 ~]# ls -l ttt/
total 8
-rw-rw-r--. 1 user23 user23 57 Dec 17 2021 test1
-rw-rw-r--. 1 user23 user23 29 Dec 17 2021 test2
So all the file ownership changed to user23, which is the only regular user on Rocky. I don't understand how this happens: with rsync I am connecting as root on the remote host, yet the resulting files are owned by user23. Why doesn't the -a flag work properly in this case?
I have also tried these flags:
sudo rsync -avz --rsync-path="sudo -u user23 rsync -a" /home/user1/test root@192.168.10.11:/home/user23/rrr
This command couldn't copy to the root directory, so I had to change the remote destination to user23's home folder. But the result is the same.
If someone could explain to me what I am doing wrong, and how to back up files with rsync so that permissions and ownership stay the same as on the local computer, I would very much appreciate it.
Have a look at how the target filesystem is mounted on the Rocky (target) system.
Some mounted filesystems (such as many FUSE mounts) do not support the classical unix permissions, and simply use the name of the user who mounted the filesystem as owner/group.
Any attempt to chown/chmod/etc. (either by you or by rsync) will just be silently ignored, yet appear to "succeed" (no errors reported).
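One way to check, sketched below, assuming GNU coreutils' stat; the path / here is a placeholder for the actual rsync destination on the receiving (Rocky) machine:

```shell
# Filesystem types such as fuseblk, vfat, exfat, or ntfs cannot store
# unix ownership, and chown on them is silently ignored — which would
# explain files landing as user23 despite connecting as root.
stat -f -c '%T' /      # prints the filesystem type name, e.g. xfs or ext2/ext3
```

If this reports a native Linux filesystem (ext4, xfs, btrfs), the mount is not the culprit and the rsync invocation itself needs a closer look.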

NodeJS ZIP symlink and can't read them on Windows 10

I'm using Archiver in a NodeJS environment (running on linux) to create a ZIP with a structure like this:
/root
/documents
/doc1.pdf
/doc2.pdf
/doc3.pdf
/clientA
/doc1.pdf < symlink to ../documents/doc1.pdf
/clientB
/doc3.pdf < symlink to ../documents/doc3.pdf
Using these functions of ArchiverJS:
archiverInstance.append(filestream, {name: '/root/documents/doc1.pdf'})
archiverInstance.symlink('/root/clientA/doc1.pdf', '../documents/doc1.pdf')
When I download this ZIP on linux, I can open the symlinks.
# linux ubuntu 19.04
ls -l ~/root/clientA
lrwxrwxrwx 1 usr usr 28 oct 11 11:51 doc1.pdf -> ../documents/doc1.pdf
But when I download this ZIP on Windows 10, symlinks are broken, using the standard "Extract" button from the windows explorer.
# windows 10
cd root/clientA
dir
10/11/2019 02:49 AM <DIR> .
10/11/2019 02:49 AM <DIR> ..
10/11/2019 02:49 28 doc1.pdf < click on it = PDF corrupted
1 File(s) 28 bytes
2 Dir(s)
Why does this not work on Windows 10? And is there an alternative that makes it work?
Thanks

How to access the desktop with cygwin

I want to access my Windows 10 desktop with Cygwin. I tried this:
$cd cygdrive/c/
$ls -al
But I can't see any folder named "Desktop".
As you discovered, the Desktop for any user is located on the disk at
C:\Users\[username]\Desktop
and its equivalent for Cygwin is
$ cygpath -u 'C:\Users\[username]\Desktop'
/cygdrive/c/Users/[username]/Desktop
It is NOT a Cygwin-specific issue. Also in the Windows Command Prompt there is no
Desktop folder at the root of the C: drive:
Microsoft Windows [Version 10.0.17134.472]
(c) 2018 Microsoft Corporation. All rights reserved.
C:\Users\Marco>cd \
C:\>dir
Volume in drive C is Windows
Volume Serial Number is 98EE-C713
Directory of C:\
15.07.2018 12:13 <DIR> inetpub
19.06.2018 16:34 <DIR> Intel
12.04.2018 00:38 <DIR> PerfLogs
05.10.2018 12:15 <DIR> Program Files
30.12.2018 04:27 <DIR> Program Files (x86)
18.07.2018 16:39 <DIR> SWSetup
18.07.2018 16:51 <DIR> temp
19.09.2018 10:44 <DIR> Users
03.01.2019 23:07 <DIR> Windows
0 File(s), 0 bytes
9 Dir(s), 174,164,725,760 bytes free
C:\>
Explorer is showing a virtual structure, putting the user data folders and the disks together at the same level.
I think that OneDrive is automatically "eating" the Desktop folder. Look here:
/cygdrive/c/users/[user-name]/OneDrive/Desktop
For me it worked.
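The Windows-to-Cygwin path mapping that cygpath -u performs can be sketched for simple paths in plain shell (the real cygpath also handles UNC paths, symlinks, and custom mount points, so treat this only as an illustration of the mapping):

```shell
# Sketch of what `cygpath -u` computes for a plain C:\... path.
win_to_cyg() {
  p=$(printf '%s' "$1" | tr '\\' '/')                 # backslashes -> slashes
  drive=$(printf '%s' "${p%%:*}" | tr 'A-Z' 'a-z')    # lowercase drive letter
  printf '/cygdrive/%s%s\n' "$drive" "${p#*:}"
}
win_to_cyg 'C:\Users\Marco\Desktop'    # -> /cygdrive/c/Users/Marco/Desktop
```

In a real Cygwin shell, prefer `cd "$(cygpath -u "$USERPROFILE")/Desktop"`, which also follows the OneDrive redirection when Windows has moved the folder.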

FTP backup folder downloads as files, I can't find my .tar.gz files

I created a backup folder on the FTP server and sent all my .tar.gz files into the /backup folder
using (put file.tar.gz backup).
When I retrieve backup, I get the backup folder as backup files. How do I convert the files back to a folder?
On the FTP server:
ls
227 Entering Passive Mode (10,21,131,105,76,56)
150 Accepted data connection
drwxr-xr-x 6 100 ftpgroup 7 Oct 20 19:57 .
drwxr-xr-x 6 100 ftpgroup 7 Oct 20 19:57 ..
-r-------- 1 100 ftpgroup 84 Oct 21 11:15 .banner
drwxrwxrwx 3 100 ftpgroup 4 Oct 20 18:28 backup
drwxrwxrwx 2 100 ftpgroup 3 Oct 20 19:45 dailybackup
drwxrwxr-x 2 100 ftpgroup 3 Oct 20 19:57 hi5songs
drwxrwxr-x 2 100 ftpgroup 3 Oct 20 19:49 whole
226-Options: -a -l
226 7 matches total
I tried:
ftp> mget backup
mget .? y
227 Entering Passive Mode (10,21,131,105,62,8)
550 I can only retrieve regular files
mget ..? y
Warning: embedded .. in .. (changing to !!)
227 Entering Passive Mode (10,21,131,105,46,39)
550 Can't open !!: No such file or directory
mget backup? y
227 Entering Passive Mode (10,21,131,105,72,24)
550 I can only retrieve regular files
mget cpanelbackup? y
227 Entering Passive Mode (10,21,131,105,73,69)
550 Can't open cpanelbackup: No such file or directory
Whereas when I use (get backup home), it retrieves successfully, but as the files shown below.
On the server:
root@azar [/home]# ls
./ backup.2* .cpan/ dailybackup hi5songs.4 oldeserver
../ backup.3* cPanelInstall/ hi5songs/ hi5songs.5 oldserver/
0_README_BEFORE_DELETING_VIRTFS backup.4* .cpanm/ hi5songs.1 home quota.user
backup/ backup.5* .cpcpan/ hi5songs.2 latest virtfs/
backup.1* .banner cpeasyapache/ hi5songs.3 lost+found/ whole
I got that backup as green, executable files like backup.1* (note: I can't open or extract those files). What should I do?
How do I get my .tar.gz files back?
Please guide me.
Thanks in advance.
Updated Answer
If you want to get all files from /some/place on your server, to /home/here on your local machine, you would either do this:
cd /home/here # change directory before starting FTP
ftp server ... # connect
cd /some/place # go to desired folder on server
bi # ensure no funny business with line-endings
mget * # get all files
or you can change directory locally, within FTP like this:
ftp server ... # connect
cd /some/place # go to desired folder on server
lcd /home/here # LOCALLY change directory to where you want the files to 'land'
bi # ensure no funny business
mget * # get all files
Original Answer
I cannot understand your question at all, but you are doing some things wrong.
You cannot use GET or MGET to get a folder (directory) like you are trying to do with mget backup. You can only GET a file. Now your file may be a tar-file with more than one file in it, but it is still a file.
If you are getting tar-files and binary files, you should use BINARY mode to ensure that line-end characters which may occur in binary files are not translated between Windows and Unix line-endings. So, as a matter of course, you should issue the BI command before you get files.
If you have several files in your backup directory, you should probably do cd backup, then bi, then mget *.
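Since FTP's GET can only transfer regular files, the usual workaround is to bundle the directory into a single archive on the server, transfer that one file (in binary mode), and unpack it locally. A sketch with placeholder names, with the FTP transfer itself shown as a comment:

```shell
set -e
cd "$(mktemp -d)"
mkdir -p backup && echo data > backup/site.tar.gz   # stand-in for the server-side folder
# On the server: bundle the whole directory into one regular file
tar -czf backup_bundle.tar.gz backup/
# Over ftp you would now do:  bi  then  get backup_bundle.tar.gz
# Back on your machine: unpack to recover the directory and its contents
mkdir restored && tar -xzf backup_bundle.tar.gz -C restored
ls restored/backup                                  # -> site.tar.gz
```

This sidesteps the "550 I can only retrieve regular files" error entirely, because only one regular file ever crosses the wire.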
