How to download a folder to my local PC from Google Cloud console - linux

I have a folder I want to download from Google Cloud Console using the Linux Ubuntu command terminal. I have logged in to my SSH console and so far I can only list the contents of my files as follows.
cd /var/www/html/staging
Now I want to download all the files from that staging folder.

Sorry if I'm missing the point. Anyway, I came here seeking a way to download files from Google Cloud Console. I didn't have the ability to create an additional bucket as the author above suggested, but I accidentally noticed that there is a button for exactly what I needed.
Look for the kebab-style menu button. In the dropdown that appears you should find a Download button.

If you mean Cloud Shell, then I typically use the GCP storage tool suite.
In summary, I transfer from Cloud Shell to GCP storage, then from storage to my workstation.
First, have the Google Cloud SDK installed on your system.
Make a bucket to transfer it into with gsutil mb gs://MySweetBucket
From within Cloud Shell, copy the file to the bucket: gsutil cp /path/to/file gs://MySweetBucket/
On your local system, pull the file down: gsutil cp gs://MySweetBucket/filename .
Done!
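Since the original question is about a whole folder rather than a single file, a recursive copy works the same way. A minimal sketch, assuming the staging path from the question and the placeholder bucket name above:
# On the VM (via SSH or Cloud Shell): copy the whole folder to the bucket.
# -m runs the copies in parallel, -r recurses into the directory.
gsutil -m cp -r /var/www/html/staging gs://MySweetBucket/
# On your local machine (with the Cloud SDK installed and authenticated):
gsutil -m cp -r gs://MySweetBucket/staging .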

Related

Output to local file using Cloud Shell in VSCode on a Mac

I am new to VSCode on a Mac and I am using the Cloud Shell connected to Azure, where I can run all my commands without issue. The problem I have is that if I want to use the Export-Csv command to export information to a file, I don't know how to point the output file to the Desktop of my Mac.
Is this possible or am I barking up the wrong tree?
When you use Cloud Shell, everything is executed remotely in an Azure terminal, and the Cloud Shell's files are persisted to Azure storage. You can run the Export-Csv command and then download the resulting file from that storage.
For more details:
https://learn.microsoft.com/en-us/azure/cloud-shell/persisting-shell-storage
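A hedged sketch of that workflow (the storage account, file share, and file names below are placeholders, not values from the question): write the CSV into the persisted clouddrive folder from Cloud Shell, then pull it down to the Mac with the Azure CLI.
# In Cloud Shell (PowerShell): save the CSV under ~/clouddrive, which is
# backed by an Azure file share, e.g.
#   Get-Process | Export-Csv ~/clouddrive/report.csv
# On the Mac: authenticate first (az login) or pass --account-key, then
# download the file from that file share with the Azure CLI.
az storage file download \
  --account-name mystorageacct \
  --share-name myfileshare \
  --path report.csv \
  --dest ~/Desktop/report.csv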

Upload youtube-dl transcript into Google Cloud storage

I am using youtube-dl to download transcripts; it works fine on my machine (local server), where I provide __dirname in the options params for the output files. But I want to use Google Cloud Functions, so how can I substitute __dirname with Cloud Storage?
Thank you !!
Uploading directly from youtube-dl is not possible. Uploading a file to Google Cloud Storage is only possible once the file is already on disk.
You will need to download the file with whichever program you mention (as noted in the comments, you can download it to a temporary folder), upload the file to GCS, and then delete it from the temporary folder.
What you can actually do, for example, is run a script inside a Google Cloud instance with a gsutil command to upload the files into a bucket.
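A minimal sketch of that approach, assuming the transcript is fetched with youtube-dl on the command line (the bucket name and video URL are placeholders; /tmp is used as the temporary, writable folder):
# Download only the subtitles into /tmp, skipping the video itself.
youtube-dl --write-sub --skip-download -o '/tmp/%(id)s' "$VIDEO_URL"
# Copy the resulting subtitle files into a Cloud Storage bucket.
gsutil cp /tmp/*.vtt gs://MY_BUCKET/transcripts/
# Clean up the temporary folder afterwards.
rm /tmp/*.vtt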

What is the meaning of each part of this luminoth command?

I am trying to train on a dataset using Luminoth. However, as my computer has a poor GPU I am planning to use gcloud. It seems that Luminoth has gcloud integration according to the docs (https://media.readthedocs.org/pdf/luminoth/latest/luminoth.pdf).
Here is what I have done.
Create a Google Cloud project.
Install Google Cloud SDK on your machine.
gcloud auth login
Enable the following APIs:
• Compute Engine
• Cloud Machine Learning Engine
• Google Cloud Storage
I did it through the web console.
Now here is where I am stuck.
5. Upload your dataset’s TFRecord files to a Cloud Storage bucket:
The command for this is:
gsutil -o GSUtil:parallel_composite_upload_threshold=150M cp -r /path/to/dataset/tfrecords gs://your_bucket/path
I have the TFRecord files and the data I need to train on in my local drive. However, I am not sure what each part of the gsutil command means. For /path/to/dataset/, do I simply input the directory my data is in? And I have uploaded the files to a bucket; do I simply provide the path to it?
Additionally, I am currently getting the error "does not have permission to access project (or it may not exist)".
Apologies if this may be a stupid question.
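For reference, a commented sketch of what each part of that gsutil command does (the local path and bucket are placeholders you replace with your own directory and bucket):
# -o GSUtil:parallel_composite_upload_threshold=150M
#     overrides a boto config option so files larger than 150 MB are uploaded
#     as parallel composite chunks (faster for large TFRecord files)
# cp -r                         copy recursively
# /path/to/dataset/tfrecords    the local directory holding your TFRecord files
# gs://your_bucket/path         the destination bucket and prefix
gsutil -o GSUtil:parallel_composite_upload_threshold=150M \
  cp -r /path/to/dataset/tfrecords gs://your_bucket/path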

Unzip file uploaded to Azure Websites

I uploaded a zip of my WordPress site to an Azure website. When I try to FTP in with WinSCP it works, but I can't run unzip transfer.zip in the command interface.
How do I unzip my zip file that is now on the server?
This is possible using the Azure portal's console.
Navigate to portal.azure.com.
Locate your website (Browse -> Websites, click on the name of the website you uploaded your ZIP file to).
Scroll down until you see the "Console" button. By default, this is on the right-hand side about three-quarters of the way to the bottom of the icon list (or "blade" in Azure portal parlance).
Navigate to the directory you uploaded your ZIP to using the cd command, just as you would on a typical console or shell window.
Execute unzip archive.zip where archive.zip is the name of your ZIP file.
Note that as of today, the unzip command will not output any progress reports; it runs silently. So, if you have a large archive that takes some time to unzip, it may appear as if the command has hung, but in fact it is working and you just need to wait.
Update Sep 2018:
The unzip command outputs its progress to the console, e.g.:
Archive: archive.zip
creating: archive/
inflating: archive/203439A9-33EE-480B-9151-80E72F7F2148_PPM01.xml
creating: archive/bin/
inflating: archive/bin/some.dll
One way is to upload the command-line version of 7-Zip; it's a standalone .EXE file.
Next, from the Azure portal, navigate to your website, click on the Console tile and type the unzip command:
7za x thezipfile.zip
An alternative to the portal is to use the console from Kudu. Insert "SCM" between your Website name and azurewebsites.net like this to launch Kudu:
yoursitename.scm.azurewebsites.net
One advantage of using Kudu is that you can upload files directly in the browser just by drag-and-dropping them.
Pretty cool.
It seems that the earlier answers to this question are now obsolete:
the Console button is no longer available in the Azure portal, so there is no way to access unzip this way.
You have to use the Kudu site instead in order to access a console and run the unzip command.
Simply navigate to https://your-web-app-name.scm.azurewebsites.net and click CMD,
or just navigate to https://your-web-app-name.scm.azurewebsites.net/DebugConsole.
Then run unzip filename.zip.
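A minimal sketch of such a Kudu CMD session for a Windows web app (transfer.zip is the archive name from the question; the wwwroot path is the usual App Service layout, adjust it if your files live elsewhere):
cd D:\home\site\wwwroot
unzip transfer.zip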
unzip by info-zip.org is currently available on Azure in the console.
As indicated it produces a red error message: Bad Request, but appears to work correctly nevertheless.
I contacted the supplier and they said:
I know approximately nothing about Microsoft Azure, and even less about
exactly what you did, but I took a quick look at the UnZip source, and
didn't see anything like a "Bad Request" message. My first guess
would be that UnZip seems to be working correctly because it is
working correctly, and that the "Bad Request" message comes from
somewhere else.
Since all answers above explain how to unzip files in a Web App running on Windows, let me add how to unzip in a Linux environment.
Head to https://mysite.scm.azurewebsites.net and select "Bash" ("SSH" is much better/faster to use, but doesn't support unzip or 7za to date).
Now simply type unzip myfile.zip and the files are inflated. The progress is also printed to the console.
Hope this helps.
There is now unzip logic built in to the Kudu interface. Drag to the correct spot in Kudu and the file will upload and unzip automatically.

Linux (CLI): download files via a shared Dropbox (folder) link without an account

I was thinking of using Dropbox to upload the source code of my web application. For this folder I would create a shared link, and I would like to use that link to download all the latest source files on my test server (instead of using s/FTP).
Now I know you can use Dropbox on Linux by installing their client, but it requires creating an account. I don't want to use an account, and for sure don't want to use my own account.
Is there any way to use a shared (folder) link and download all the files in that folder from the command line, without an account (maybe something like wget)? There is no need for live syncing; it would be fine to trigger the download with some bash script.
Thanks.
If you're OK with your links being public (which I think is not a good idea), then you can just create a file with a list of links to your files and write a bash script to loop over each line of the file and fetch the link with wget.
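A minimal sketch of that loop, assuming links.txt holds one shared link per line and the links don't already carry a ?dl parameter (appending ?dl=1 makes Dropbox serve the raw file instead of the preview page):
# Fetch every shared link listed in links.txt with wget.
while read -r url; do
  # --content-disposition keeps the original filename sent by Dropbox.
  wget --content-disposition "${url}?dl=1"
done < links.txt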
If you want to use authentication, you'll have to register for a Dropbox API key and then create a script (in Python, Ruby, Java, etc.) to authenticate and get the files.
If you don't have a specific need for Dropbox, I'd recommend you use git (or similar). With git you'll just have to create the repository on your server and clone it on your desktop. Then you can just edit your files and push them to the server... it's so much easier.
Rogier, GitHub has become the norm for hosting code. There are other options (SourceForge, Google Code, Beanstalk), or you can set up a private git repository on your own computer.
Somewhere deep in my browser history there's an article about how to do that.
However, a little googling turned up http://news.ycombinator.com/item?id=1652414. Let me know if you can't find satisfactory instructions on your own for how to set up a git repo on your computer.
