How can I download a folder from google drive or dropbox using command in linux? - linux

I am trying to download a folder in the Linux shell using a Dropbox or Google Drive link. The download works, but the result is not saved as a folder: when I try to enter it with the cd command, I get a message that the file is not a directory.
How can I download a folder and access it? I am also running this in a virtual machine.

I do not know which method you are using to download the directory. To download a directory, you need to either recursively download all the files in it or create a tar or zip archive of the directory first.
You can consider using gdown.
Please also read the detailed explanation from the following post: wget/curl large file from google drive
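For Google Drive specifically, gdown can fetch an entire shared folder in one step, which sidesteps the "downloaded a file, not a folder" problem. A minimal sketch, assuming the folder is publicly shared and you have network access (the folder ID and name below are placeholders):

```shell
pip install gdown

# --folder downloads every file in the shared folder and
# recreates the folder structure on disk
gdown --folder "https://drive.google.com/drive/folders/FOLDER_ID"

# the result is now a real directory, so cd works
cd FOLDER_NAME
```

If you get a permission error, check that the Drive folder's sharing setting is "Anyone with the link".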

Related

Unable to Unzip An uploaded file in Amazon Linux AMI

I'm trying to unzip a file I uploaded to an Amazon Linux AMI and an Amazon Linux 2 AMI. I keep getting this error:
End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive. unzip: cannot find zipfile
directory in one of sendy.zip or
sendy.zip.zip, and cannot find sendy.zip.ZIP, period.
I compressed the contents of the directory on my Mac by right-clicking the folder and clicking Compress. I then uploaded the zip file to GitHub and used the command:
wget https://github.com/crownofqueen/sendy2/blob/master/sendy.zip
to fetch the zip file onto the server.
I have not had any issues when using digital ocean/Linode/etc.
The contents of the files are HTML documents.
The URL you are using with wget is pointing to an HTML page.
On that page you will find a Download link. You should wget from that download link:
https://github.com/crownofqueen/sendy2/raw/master/sendy.zip
Note that it includes raw in the URL: that serves the contents of the file itself, as opposed to a web page that displays the file.
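A quick way to catch this mistake is to inspect what wget actually saved: `file` will report an HTML document rather than a Zip archive, and `unzip -t` will refuse to test it. A self-contained demo, where the hand-written HTML file stands in for a bad download:

```shell
# simulate wget saving the GitHub HTML page under the archive's name
echo '<!DOCTYPE html><html><body>not a zip</body></html>' > sendy.zip

# reports "HTML document", not "Zip archive data"
file sendy.zip

# a real zip would pass this integrity test; this one fails
unzip -t sendy.zip || echo "not a real zip"
```

Running `file` on any suspicious download before trying to extract it saves a round of confusing unzip errors.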

NodeJS archive manager

I need to get the contents of archives and then uncompress a selected one, but I don't want to uncompress the archives just to find out what's in them. I'd like to list and uncompress at least zip and rar, but (if possible) I don't want to be limited to only those two.
Can you advise good npm modules or other projects to achieve this?
Here's what I came up with:
zip
I found that node-zip can only unzip files, not list archive contents.
rar
The best solution seems node-rar, but I can't install it on Windows.
node-uncompress does what it says: it's a "command-line wrapper for uncompressing various file types." So again there is no way to list archive contents.
Currently I am trying to get node-uncompress to list files as well, and hopefully it will still run cross-platform.
Solution:
I am now using 7zip with the node module node-7z instead of trying to get every archive working on its own. The corresponding site is: https://www.npmjs.com/package/node-7z
This library uses the OS-independent archive manager 7-Zip. On Windows, 7za is used: "7za.exe (a = alone) is a standalone version of 7-Zip". I've tested it on Windows and Ubuntu and it works great.
Update:
On Windows: somehow I only got it working by adding 7za to the PATH variable, not by adding 7za.exe to "the same directory of your package.json file" as the description says.
Update 2:
On Windows, the 7za referred to in the node-7z post cannot handle .rar archives, so I'm using the regular 7-Zip instead of 7za.exe. I just renamed the command-line 7z.exe to 7za.exe and added the 7-Zip folder to the PATH variable.
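Underneath node-7z, the two operations the question asks for (list without extracting, then extract the selected archive) map directly onto two 7-Zip subcommands. A sketch of the equivalent CLI calls, assuming 7z (or 7za) is on the PATH and the archive name is illustrative:

```shell
# l = list the archive's contents without extracting anything
7z l archive.zip

# x = extract with full paths; -o sets the output directory (no space after -o)
7z x archive.zip -odest/
```

Both subcommands work the same way for .zip, .rar, .7z, and the other formats 7-Zip supports, which is what makes wrapping one tool simpler than a per-format npm module.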

Using tar -zcvf against a folder creates an empty compressed file

I am ssh'ed into an Acquia server trying to download some files. I need to backup these files for local development (to get user uploaded images mainly).
I am using the following command:
tar -zcvf ~/download/stage-files_3-19-2015_1344.tar.gz files/
I have read/write access to the download folder. I created that folder. I am in the parent folder of "files". And permissions to that folder are 777.
I was able to run this the other day with no issues. So I am very confused as to why this is happening now.
Actually, I just figured this darn thing out: I must have run out of disk space, because once I removed a prior compressed backup of the files it started running just fine. Dang disk quotas. Sorry guys.
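When tar produces an empty or truncated archive, checking free space first and verifying the result afterwards makes this failure mode obvious. A self-contained sketch (the demo payload and archive name are illustrative):

```shell
# demo payload standing in for the real files/ directory
mkdir -p files && echo "sample" > files/example.txt

# check free space on the filesystem you're writing the archive to
df -h .

# z = gzip, c = create, v = verbose, f = output filename
tar -zcvf backup.tar.gz files/

# verify: list the archive's contents; an empty listing means something went wrong
tar -tzf backup.tar.gz
```

On quota-limited hosts, `df -h` can show plenty of free disk while a per-user quota is still exhausted, so deleting old backups (as above) is often the actual fix.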

Downloading snort rules

I have downloaded Snort rules from the website, but instead of getting a zipped folder I get a single file which cannot be opened by Windows. I also tried using 7-Zip to extract the file even though it's a single file, but it just replicates itself.
Does anyone know how I can resolve this or get the Snort rules as a zipped folder?
It's a gzipped tarball (.tar.gz) (reference). You need to unzip it first; on Windows you can use 7-Zip: right-click the file, then 7-Zip > Open Archive. The archive will contain a .tar file (community-rules.tar); right-click that and hit Open. This should create a folder "community-rules" with a few files inside. The rules file is the one called "community.rules"; all of the rules are in this file. If you open it with WordPad you should be able to see all of the rules.
If you're on linux/unix/mac you can just run the command:
tar xzvf community-rules.tar.gz
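The flags break down as x = extract, z = gunzip, v = verbose, f = filename. A self-contained demo of the same round trip (the tarball here is built locally to stand in for the downloaded one):

```shell
# build a stand-in for the downloaded community rules tarball
mkdir -p community-rules
echo 'alert tcp any any -> any any (msg:"demo rule";)' > community-rules/community.rules
tar czf community-rules.tar.gz community-rules

# t = list the contents without extracting, useful to preview first
tar tzf community-rules.tar.gz

# x = extract; -C picks the target directory
tar xzf community-rules.tar.gz -C /tmp
```

After extraction, /tmp/community-rules/community.rules is a plain text file you can open in any editor.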

uploading all files in the folder using ftp

I'm trying to copy a folder from my computer to my Android phone using ftp. After I log in, I try put * or put *.mp3, but it doesn't work well.
I am using command line in ubuntu linux
You want mput rather than put.
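put takes a single file name and does not expand wildcards; mput is the multi-file variant. A typical session might look like this (the host address is a placeholder; `prompt` toggles off the per-file confirmation that mput otherwise asks for):

```
ftp 192.168.1.20          # connect to the phone's FTP server and log in
ftp> prompt               # turn off interactive prompting so mput doesn't ask per file
ftp> mput *.mp3           # upload every .mp3 in the current local directory
```

The glob in mput is expanded against your local directory, so cd to the folder you want to upload from before starting ftp (or use lcd inside the session).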
