Shiny Server and R: How can I start the download of a file through the URL? - linux

I want to use the Linux wget command on several URLs. In Shiny, right-clicking the download button or link gives the following info:
../session/../download/downloadData?w=
This can be used with the Linux wget command to download the file, as long as the page is open.
Is it possible to begin a Shiny download using the URL link without knowing the session data?
My goal is to do something like this:
wget "http://home:3838/../#apples=3" -O /home/../apples.csv
wget "http://home:3838/../#pears=3" -O /home/../pears.csv
and so on.
I already know how to add parameters, but I do not know how to trigger the download.
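Two notes on the example above. Everything after # in a URL is a fragment that the browser never sends to the server, so parameters intended for the app must go in a query string (?apples=3). Also, the /session/../download/ path is generated per session, so it cannot be reused without an open session; one workaround is to have the app write its output files into its www/ directory, which Shiny serves statically at the app's base URL. That makes a sketch like this possible (myapp and the local paths are hypothetical placeholders):
wget "http://home:3838/myapp/apples.csv" -O /home/user/apples.csv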

Related

Download file served by a page using wget

I am trying to download a file served by a particular page on Linux using wget. Note that I don't want to download the html page itself, but the .tiff file that is downloaded when the page loads. If I use
wget "http://www.dli.gov.in/scripts/FullindexDefault.htm?path1=/data7/upload/0180/365&first=35&last=479&barcode=2030020017599"
(quoted here so the shell does not treat the & characters as command separators), then it downloads the webpage instead of the tiff file served by it. How do I do this?
If you use Chrome, you can view the .tiff file's URL. Once you have it, just run this command:
wget the-url-of-the-.tiff
Bingo!
You can try an alternative, i.e. the cURL command. Its usage is: curl -O url.
Make sure you have the right permissions and so on.
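For example, once the direct address is known (the .tiff URL below is a hypothetical placeholder for whatever the browser reveals):
curl -O "http://www.dli.gov.in/data7/upload/0180/365/page035.tiff"
The -O flag makes curl save the file under its remote name, much like wget's default behaviour.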

install eclipse via terminal

I am trying to install Eclipse on a Linux box via the terminal using the command below, but it doesn't work.
wget "http://www.eclipse.org/downloads/download.php?file=/technology/epp/downloads/release/luna/SR2/eclipse-jee-luna-SR2-linux-gtk-x86_64.tar.gz&mirror_id=454"
When it downloads, I see this file name, which is wrong:
download.php?file=%2Ftechnology%2Fepp%2Fdownloads%2Frelease%2Fluna%2FSR2%2Feclipse-jee-luna-SR2-linux-gtk-x86_64.tar.gz
Instead, it should be eclipse-jee-luna-SR2-linux-gtk-x86_64.tar.gz. What's wrong?
I renamed the file to the correct name and tried untarring it, but I get the error shown below:
tar -xvzf eclipse-jee-luna-SR2-linux-gtk-x86_64.tar.gz
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
What's wrong?
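A quick way to check what was actually downloaded is the file command:
file eclipse-jee-luna-SR2-linux-gtk-x86_64.tar.gz
If download.php returned an HTML page rather than the archive, this will typically report something like "HTML document" instead of "gzip compressed data", which is exactly what the "not in gzip format" error suggests.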
Using wget for Eclipse Java, the command is:
wget http://www.mirrorservice.org/sites/download.eclipse.org/eclipseMirror/technology/epp/downloads/release/2022-03/202203101200/eclipse-java-2022-03-R-linux-gtk-aarch64.tar.gz
You can change the Eclipse release in the URI. I hope that's useful for you.
Any URL which includes download.php will have this problem with wget, even the so-called "Direct Link".
With more recent download pages, however, there is a way to find the actual URLs: look inside the xml file that download.php uses for mirror selection.
For example, if we go to the Eclipse IDE 2020-03 page, most of the URLs are download.php links, but in the "Other options for this file" sidebar, we can see an xml link.
If we navigate there with a web browser and look at the page source, we can see the actual file URL used by every mirror. Depending on the browser, the page may appear blank; in that case, view the raw page source to see the xml and thus the URLs.
For me, in the UK, the UK mirror service site would be my best option:
<mirror url="http://www.mirrorservice.org/sites/download.eclipse.org/eclipseMirror/oomph/epp/2020-03/R/eclipse-inst-win64.exe" label="[United Kingdom] UK Mirror Service (http)" />
These (actually) direct URLs do work with wget.
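If you want to script the mirror lookup, the same xml can be fetched and filtered from the shell. A sketch, assuming the xml link has the download.php?...&format=xml form used on current Eclipse download pages (if not, copy the xml link from the sidebar directly):
curl -s "https://www.eclipse.org/downloads/download.php?file=/oomph/epp/2020-03/R/eclipse-inst-win64.exe&format=xml" | grep -o 'url="[^"]*"'
Each url="..." attribute in the output is a direct mirror URL that works with wget.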
Looks like an issue with the download link from the Eclipse website itself.
Please try the URL below (I have tested it on Ubuntu 14 and it's working):
http://eclipse.stu.edu.tw/technology/epp/downloads/release/luna/SR2/eclipse-jee-luna-SR2-linux-gtk.tar.gz
Using a Debian variant?
sudo apt-get install eclipse
Otherwise, I think you just copied the link from the main Eclipse download page, which points to another page that picks the fastest mirror for the download of the file.
For example, the link it gave me is the following, which downloads fine:
http://mirror.cc.columbia.edu/pub/software/eclipse/technology/epp/downloads/release/luna/SR2/eclipse-jee-luna-SR2-linux-gtk-x86_64.tar.gz
After that, the answer is mostly contained on AskUbuntu: How to install Eclipse?
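For completeness, fetching and unpacking that direct URL:
wget http://mirror.cc.columbia.edu/pub/software/eclipse/technology/epp/downloads/release/luna/SR2/eclipse-jee-luna-SR2-linux-gtk-x86_64.tar.gz
tar -xzf eclipse-jee-luna-SR2-linux-gtk-x86_64.tar.gz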
If you're using Ubuntu 20.04.4 LTS:
$ wget file-link
The downloaded file might look like this: download.php?file=%2Foomph%2Fepp%2F2022-06%2FR%2Feclipse-inst-jre-linux64.tar.gz
Then execute the following commands:
1. Move to the download directory.
2. Rename the file if needed, then run tar -xvf eclipse-inst-jre-linux64.tar.gz
3. Change into the extracted directory: cd eclipse-installer
4. Run ./eclipse-inst. It will ask which dev mode you want to install; make a selection and click Next.
Once it installs successfully, you will see a launch icon.
Use a console version of the Oomph installer:
Download the Console Oomph Installer, choosing the appropriate archive for your target platform (zip or tar.gz); for example, on Linux:
wget -O installer.tar.gz "https://search.maven.org/remotecontent?filepath=com/github/a-langer/org.eclipse.oomph.console.product/1.0.1/org.eclipse.oomph.console.product-1.0.1-linux.gtk.x86_64.tar.gz"
Extract archive and change current directory:
tar -xvzf installer.tar.gz
cd eclipse-installer/
Install "Eclipse IDE for Java Developers":
./eclipse-inst -nosplash -application org.eclipse.oomph.console.application -vmargs \
-Doomph.installation.location="$PWD/ide" \
-Doomph.product.id="epp.package.java"
Wait for the installation to complete; the latest version of Eclipse will be installed in "$PWD/ide".
LATEST
|############################################################|100%
More examples can be found at https://github.com/a-langer/eclipse-oomph-console.

Download embedded links in a html page using wget

I am trying to download the package from https://github.com/justintime/nagios-plugins/downloads using wget, but what I am getting is the html file of the link I mentioned, not the package. I tried this command:
wget -r -l 1 https://github.com/justintime/nagios-plugins/downloads
Is there any way to download the package from the above link?
wget is giving you what you asked for. You didn't specify the package link; you specified the page link. Right-click on the Download button of the required package on the page, select Copy Link Address, and pass that address to wget.
This works:
wget -r -l 1 https://github.com/justintime/nagios-plugins/zipball/master
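Alternatively, give the archive a sensible local name and unpack it (nagios-plugins.zip is just a name chosen here):
wget -O nagios-plugins.zip https://github.com/justintime/nagios-plugins/zipball/master
unzip nagios-plugins.zip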

How do I download all these files in one go with wget?

I want to download all these RPMs from SourceForge in one go with wget:
Link
How do I do this?
Seeing how, for example, HeaNet is one of the SF mirrors hosting this project (and many others), you could find out where SF redirects you, specifically:
http://ftp.heanet.ie/mirrors/sourceforge/h/project/hp/hphp/CentOS%205%2064bit/SRPM/
... and download that entire directory with the -r option (you should probably use the "no parent" switch, too).
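Put together, a sketch of that command; -np (--no-parent) keeps wget from ascending out of the directory, -nd skips recreating the directory tree locally, and -A filters by file suffix:
wget -r -np -nd -A '*.rpm' "http://ftp.heanet.ie/mirrors/sourceforge/h/project/hp/hphp/CentOS%205%2064bit/SRPM/"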
One of two ways:
Create a script that parses the html page and extracts the links that end with .rpm, then download each of those links using wget $URL (see the sketch below).
Or copy and paste those URLs and use wget $URL from the console.
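A minimal sketch of the scripted approach, assuming the listing page uses absolute href attributes for the .rpm links ($PAGE_URL is a placeholder for the directory listing page):
curl -s "$PAGE_URL" | grep -oE 'href="[^"]*\.rpm"' | sed -e 's/^href="//' -e 's/"$//' | while read -r url; do wget "$url"; done
If the links are relative, the base URL must be prepended before passing them to wget.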

Can I use wget to download multiple files from the Linux terminal

Suppose I have a directory accessible via http, e.g.
http://www.abc.com/pdf/books
Inside the folder I have many pdf files.
Can I use something like
wget http://www.abc.com/pdf/books/*
wget -r -l1 -A.pdf http://www.abc.com/pdf/books
From the wget man page:
Wget can follow links in HTML and XHTML pages and create local versions of remote web sites, fully recreating the directory structure of the original site. This is sometimes referred to as "recursive downloading." While doing that, Wget respects the Robot Exclusion Standard (/robots.txt). Wget can be instructed to convert the links in downloaded HTML files to the local files for offline viewing.
and
Recursive Retrieval Options
-r
--recursive
Turn on recursive retrieving.
-l depth
--level=depth
Specify recursion maximum depth level depth. The default maximum depth is 5.
It depends on the web server and its configuration. Strictly speaking, the URL is not a directory path, so http://something/books/* is meaningless.
However, if the web server implements the path http://something/books as an index page listing all the books on the site, then you can play around with the recursive and spider options, and wget will happily follow any links on that index page (see the sketch below).
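Putting the pieces together for the hypothetical site in the question:
wget -r -l1 -np -nd -A '*.pdf' http://www.abc.com/pdf/books/
This only works when the server actually returns an index page for /pdf/books that links to the pdf files; -np prevents wget from wandering up to parent directories.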
