train_data=torchvision.datasets.CIFAR100(root='',....,download=True)
When I download the CIFAR100 dataset and specify the root, is the dataset downloaded into the root that I selected?
I cannot find any files or images.
How can I see the files?
The root argument you pass to torchvision.datasets.CIFAR100 is relative to your current working directory. It will download the data there:
>>> import torchvision
>>> torchvision.datasets.CIFAR100(root='.', download=True)
In the current directory (since root='.'), you will find the .tar.gz archive and the uncompressed directory cifar-100-python/, which contains the dataset:
$ ls -al
drwxr-xr-x 2 1000 4096 Feb 20 2010 cifar-100-python/
-rw-r--r-- 1 root 169001437 Jan 21 16:02 cifar-100-python.tar.gz
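If you want the files somewhere specific, pass an explicit path as root and look for the archive there (the /tmp/cifar path below is just an example):
>>> import torchvision
>>> torchvision.datasets.CIFAR100(root='/tmp/cifar', download=True)
$ ls /tmp/cifar
cifar-100-python  cifar-100-python.tar.gz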
(Ubuntu 18.04)
I'm attempting to extract an ODBC driver from a tarball, following these instructions, with the command:
tar --directory=/opt -zxvf /SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux.tar.gz
This results in the following output:
root@08ba33ec2cfb:/# tar --directory=/opt -zxvf SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux.tar.gz
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/GoogleBigQueryODBC.did
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/docs/
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/docs/release-notes.txt
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/docs/Simba Google BigQuery ODBC Connector Install and Configuration Guide.pdf
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/docs/OEM ODBC Driver Installation Instructions.pdf
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/setup/
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/setup/simba.googlebigqueryodbc.ini
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/setup/odbc.ini
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/setup/odbcinst.ini
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/SimbaODBCDriverforGoogleBigQuery32_2.4.6.1015.tar.gz
SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015.tar.gz
The guide linked to above says:
The Simba Google BigQuery ODBC Connector files are installed in the
/opt/simba/googlebigqueryodbc directory
Not for me, but I do see:
ls -l /opt/
total 8
drwxr-xr-x 1 1000 1001 4096 Apr 26 00:39 SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux
And:
ls -l /opt/SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux/
total 52324
-rwxr-xr-x 1 1000 1001 400 Apr 26 00:39 GoogleBigQueryODBC.did
-rw-rw-rw- 1 1000 1001 26688770 Apr 26 00:39 SimbaODBCDriverforGoogleBigQuery32_2.4.6.1015.tar.gz
-rw-rw-rw- 1 1000 1001 26876705 Apr 26 00:39 SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015.tar.gz
drwxr-xr-x 1 1000 1001 4096 Apr 26 00:39 docs
drwxr-xr-x 1 1000 1001 4096 Apr 26 00:39 setup
I was specifically looking for the .so driver file. All of the above is in a Docker container. I tried extracting the tarball locally on Ubuntu 18.04 (same as my Docker container), and when I use the Ubuntu desktop GUI to extract by double-clicking the tar.gz file and then clicking 'Extract', I do indeed see the expected files.
It seems my tar command (tar --directory=/opt -zxvf /SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux.tar.gz) is not extracting the tarball as expected.
How can I extract the contents of the tarball properly? The tarball in question is the Linux one at this link.
[edit]
Adding screenshots of the contents of the tarball per the comments. I had to click down two levels of nesting to arrive at the actual files:
The instructions you linked to do not match the contents of the file I found from here. The first .tar.gz contains two other .tar.gz files. I looked into the 64-bit one and it contains:
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/ErrorMessages/
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/ErrorMessages/en-US/
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/ErrorMessages/en-US/SimbaBigQueryODBCMessages.xml
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/ErrorMessages/en-US/ODBCMessages.xml
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/ErrorMessages/en-US/SQLEngineMessages.xml
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/ErrorMessages/en-US/DSMessages.xml
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/ErrorMessages/en-US/DSCURLHTTPClientMessages.xml
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/third-party-licenses.txt
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/lib/
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/lib/libgooglebigqueryodbc_sb64.so
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/lib/cacerts.pem
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/lib/EULA.txt
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/Tools/
SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015/Tools/get_refresh_token.sh
Your .so is in the lib directory. Based on the instructions, it looks like you need to extract this file (or the 32-bit one if appropriate) and rename the extracted directory, in this case from SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015 to simba/googlebigqueryodbc. The tar command is doing what it is told, but the instructions are way off.
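A minimal sketch of that extra step, assuming you want the layout the guide describes (the /opt/simba/googlebigqueryodbc path is taken from the guide; adjust the version number to match your download):
$ cd /opt/SimbaODBCDriverforGoogleBigQuery_2.4.6.1015-Linux
$ tar -zxvf SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015.tar.gz
$ mkdir -p /opt/simba
$ mv SimbaODBCDriverforGoogleBigQuery64_2.4.6.1015 /opt/simba/googlebigqueryodbc
$ ls /opt/simba/googlebigqueryodbc/lib    # libgooglebigqueryodbc_sb64.so should show up here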
I want to zip a set of directories and files on my CentOS 8 VM.
There are 3 directories and 1 file that I want to zip in such a way that only the env.conf file ends up moved to /etc/env.txt after unzipping, while the remaining directories are unzipped at the current location.
Is there any way to achieve this?
drwxr-xr-x. 9 root root 114 Feb 25 12:40 config
-rw-r--r--. 1 root root 340 Feb 25 09:01 env.conf
drwxr-xr-x. 9 root root 4096 Feb 28 05:11 platform
drwxr-xr-x. 2 root root 135 Feb 28 07:49 install
I don't think this is possible. In fact, being able to do that would be considered a vulnerability.
Imagine you download a zip file from some website and unzip it in a temp folder, and it registers itself as a service by writing a file somewhere in /etc and takes control of your PC.
Example: zip-slip
You could however create a one-liner that extracts and moves the file wherever you want like this:
unzip <filename> && mv env.conf /etc/env.txt
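For completeness, a sketch of both sides, assuming the archive is called bundle.zip (the name is just an example) and that you unzip it in the directory where config, platform and install should end up:
$ zip -r bundle.zip config platform install env.conf   # create the archive from the listing above
$ unzip bundle.zip && mv env.conf /etc/env.txt          # extract in place, then relocate env.conf (needs write access to /etc)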
I am trying to copy directories (& files) recursively from one directory to another.
I tried the following -
rsync -avz <source> <target>
cp -ruT <source> <target>
Both were successful, but when I try to compare the sizes using du -c, the empty directories seem to have a mismatch in size.
In target directory
drwxrwxr-x 2 abc devl 4096 Jun 9 01:25 .
drwxrwxr-x 4 abc devl 4096 Jul 20 07:46 ..
In source directory
drwxrwxr-x 2 prod ops 2 Jun 9 01:25 .
drwxrwxr-x 4 prod ops 36 Jul 20 07:46 ..
Is there a special way to handle this? diff -qr doesn't show any differences though.
Thanks for your help.
Are both folders on the same volume? If not, chances are that the sector sizes for those volumes are different, and in turn the sizes reported for the directories differ. diff is just looking at whether or not the directory exists and whether it contains the corresponding files. It's similar to how diff doesn't report permission differences, because those might be pretty system-specific.
A pretty comprehensive answer can be found here: Why size reporting for directories is different than other files?
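If the goal is just to confirm the copy is complete, comparing contents rather than reported sizes avoids the filesystem-dependent directory sizes. Two illustrative checks (not from the original post):
$ diff -qr <source> <target>        # recursive content comparison, as already tried above
$ rsync -avn <source>/ <target>/    # dry run: prints anything rsync would still need to copy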
I have made an RPM package that installs a program, and one of the folders it needs to copy a file into is a symbolic link, since the program the symbolic link points to may change over time; it is easier to maintain the building of the RPM package by copying the file to the symbolic link rather than to a hard-coded path. However, I get the error
cp: cannot overwrite directory with non-directory
when the RPM package tries to copy the file to the symlinked folder. Why does this happen, and is there anything I can do to work around this error other than copying the files directly to the folder the symbolic link points to? I am running RHEL 6.6, for what it's worth.
That error generally means something like you have told cp to treat the target as a normal file (the -T argument).
$ ls -lR
.:
total 16
drwxr-xr-x 2 root root 4096 Feb 6 09:46 dir
-rw-r--r-- 1 root root 0 Feb 6 09:45 file
lrwxrwxrwx 1 root root 3 Feb 6 09:45 symdir -> dir
./dir:
total 0
$ cp -T file symdir
cp: cannot overwrite non-directory `symdir' with non-directory
$ ls -lR
.:
total 16
drwxr-xr-x 2 root root 4096 Feb 6 09:46 dir
-rw-r--r-- 1 root root 0 Feb 6 09:45 file
lrwxrwxrwx 1 root root 3 Feb 6 09:45 symdir -> dir
./dir:
total 0
$ cp file symdir
$ ls -lR
.:
total 16
drwxr-xr-x 2 root root 4096 Feb 6 09:46 dir
-rw-r--r-- 1 root root 0 Feb 6 09:45 file
lrwxrwxrwx 1 root root 3 Feb 6 09:45 symdir -> dir
./dir:
total 4
-rw-r--r-- 1 root root 0 Feb 6 09:46 file
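If dropping -T isn't convenient in whatever builds the cp command, spelling out the destination path through the link also works; a sketch using the same names as the demo above:
$ cp -T file symdir/file   # explicit destination, resolves through the symlink to dir/file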
I'm trying to copy data to HDFS, but none of the commands are working for me.
I followed an online tutorial to install a single-node cluster. It seems to have installed correctly, because the jps command shows me all 6 processes. But when I try to copy a file over to HDFS it shows me an error.
the command I'm running is
hduser@naren-Vostro-3560:~$ hdfs dfs -copyFromLocal /home/nare/Desktop/data/first.txt /app/hadoop/tmp
Error
14/12/30 02:18:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
copyFromLocal: '/app/hadoop/tmp': No such file or directory
I have given all the permissions to the input file (first.txt):
naren@naren-Vostro-3560:~$ ls -al /home/naren/Desktop/data
total 3612
drwxrwxr-x 2 naren naren 4096 Dec 30 01:40 .
drwxr-xr-x 3 naren naren 4096 Dec 30 01:40 ..
-rwxrwxrwx 1 naren naren 674570 Dec 30 01:37 first.txt
-rwxrwxrwx 1 naren naren 1423803 Dec 30 01:39 second.txt
-rwxrwxrwx 1 naren naren 1573151 Dec 30 01:40 third.txt
The permissions on the hdfs folder also look right to me:
hduser@naren-Vostro-3560:~$ ls -l /app/hadoop
total 4
drwxr-x--- 5 hduser hadoop 4096 Dec 26 01:22 tmp
I'm new to Hadoop and Linux and got stuck here.
I also tried creating a new directory with
hduser@naren-Vostro-3560:~$ hadoop fs -mkdir -p /user/hduser/sample
and it didn't create any directory for me.
Please let me know where I'm going wrong.
Thanks in advance!!
Hadoop Version: Hadoop 2.5.2
OS: Ubuntu 14.04
You need to make sure /app/hadoop is created in HDFS, not on the local fs. In your check for the directory you used ls -l, which looks at the local filesystem, which is separate from the HDFS namespace. Try hadoop fs -ls /app/hadoop. If it's not there, then create it with hadoop fs -mkdir. – snkherv
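A minimal sketch of that sequence, reusing the paths from the question (hdfs dfs and hadoop fs are interchangeable here):
$ hdfs dfs -ls /app/hadoop                       # check the HDFS namespace, not the local filesystem
$ hdfs dfs -mkdir -p /app/hadoop/tmp             # create the target directory in HDFS if it is missing
$ hdfs dfs -copyFromLocal /home/naren/Desktop/data/first.txt /app/hadoop/tmp
$ hdfs dfs -ls /app/hadoop/tmp                   # first.txt should now be listed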