Importance of the --no-check-certificate option of wget - linux

Sometimes wget will refuse to download the specified file. By adding --no-check-certificate, I am often able to download the file anyway.
1) Briefly, what is this certificate which wget checks by default? How does it perform this check?
2) Does the need for --no-check-certificate for some particular URL vary from machine to machine? That is, if I'm able to download some file using wget www.website/file, can I be sure that my friend using some other machine can do the same, also without the --no-check-certificate option?

When hitting a website that is secure (HTTPS), wget will attempt to validate the site's certificate. In order to trust certificates, wget needs access to a certificate store, which is essentially an SSL directory holding trusted certs (see here for more info: https://wiki.openwrt.org/doc/howto/wget-ssl-certs). As you have seen, this check can be bypassed by using the --no-check-certificate option.
Using the --no-check-certificate option should work regardless of which machine you are running wget on. That option is specific to the wget program itself and is not machine dependent.
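If you would rather keep certificate checking enabled, wget can instead be pointed at a trusted CA bundle explicitly. A minimal sketch, assuming a Debian-style bundle path (the exact location varies by distribution):
# validate against an explicit CA bundle instead of disabling the check
wget --ca-certificate=/etc/ssl/certs/ca-certificates.crt https://www.website/file
# or point wget at a directory of trusted CA certificates
wget --ca-directory=/etc/ssl/certs https://www.website/file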

Related

wget --no-check-certificate mirror

I am trying to use wget to mirror a full website I have created. I am wondering why this bit in the terminal won't work:
wget --mirror-no-check-certificate [my site goes here]
I am also trying to mirror a site that doesn't have a robots.txt, so if anyone knows a workaround, that would be great.
Thanks in advance, guys.
This doesn't work because there is no option called --mirror-no-check-certificate.
You may have intended to use the two separate options --mirror and --no-check-certificate:
wget --mirror --no-check-certificate [your site goes here]
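As for the robots.txt part of the question: wget respects the Robot Exclusion Standard by default, but it can be told to ignore it. A minimal sketch (the bracketed URL is a placeholder, and you should only do this on a site you have permission to mirror):
# -e robots=off makes wget ignore robots.txt rules
wget --mirror --no-check-certificate -e robots=off [your site goes here]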

Shiny Server and R: How can I start the download of a file through the URL?

I want to use the linux wget command on several URLs. In Shiny, right clicking the download button or link gives the following info:
../session/../download/downloadData?w=
This can be used with the linux wget command to download the file if the page is open.
Is it possible to begin a Shiny download using the URL link without knowing the session data?
My goal is to do something like this:
wget "http://home:3838/../#apples=3" -O /home/../apples.csv
wget "http://home:3838/../#pears=3" -O /home/../pears.csv
and so on.
I already know how to add parameters, but I do not know how to trigger the download.

Export SVN repository over FTP to a remote server

I'm using the following command to export my repository to a local path:
svn export --force svn://localhost/repo_name /share/Web/projects/project_name
Is there any, quite easy (Linux newbie here) way to do the same over FTP protocol, to export repository to a remote server?
The last parameter of svn export, AFAIK, has to be a local path, and AFAIK this command does not support giving paths in the form of URLs, like for example:
ftp://user:pass@server/path/
So, I think some script would have to be involved here to do the job.
I have asked some people about that, and was advised that the easiest way is to export the repository to a local path, transfer it to the FTP server, and then purge the local path. Unfortunately, I failed after the first step (export to a local path! :) So, the supporting question is whether this can be done on the fly, or whether it really has to be split into two steps: export + FTP transfer?
Someone also advised me to set up a local SVN client on the remote server and do a simple checkout / update from my repository. But that is a solution only if everything else fails, as I want to extract the pure repository structure, without the .svn files I would get by going that way.
BTW: I'm using a QNAP TS-210, a simple NAS device with a very limited Linux on board. So, many command-line tools, as well as GUI ones, are not available to me.
EDIT: This is the second question in my "chain". Even if you help me to succeed here, I won't be able to automate this job (as I'm willing to) without your help on the question "SVN: Force svn daemon to run under different user". Can someone also take a look there, please? Thank you!
Well, if you're using Linux, you should be able to mount an ftpfs. I believe there was a module in the Linux kernel for this. Then I think you would also need FUSE.
Basically, if you can mount an ftpfs, you can write your svn export directly to the mounted folder.
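A minimal sketch of that approach, assuming curlftpfs (a FUSE-based FTP filesystem) can be installed on the device; the credentials and paths are placeholders:
mkdir -p /mnt/ftp
# mount the remote FTP location as a local folder via FUSE
curlftpfs ftp://user:pass@server/path/ /mnt/ftp
# export straight into the mounted folder
svn export --force svn://localhost/repo_name /mnt/ftp/project_name
# unmount when finished
fusermount -u /mnt/ftp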
Not sure about FTP, but SSH would be a lot easier and should have better compression. An example of sending your repo over SSH may look like:
svnadmin dump /path/to/repository | ssh -C username@servername 'svnadmin -q load /path/to/repository/on/server'
The URL where I found that info was Martin Ankerl's site.
[update]
Based on the comment from @trejder on the question, to do an export over SSH, my recommendation would be as follows:
Run svn export to a folder locally, then use the following command:
cd && tar czvf - src | ssh example.com 'tar xzf -'
where src is the folder you exported to, and example.com is the server.
This will take the files in the source folder, tar and gzip them, and send them over SSH; on the remote end they are extracted directly onto that machine.
I wrote this a while back - maybe it would be of some use here: exup

How do I download all these files in one go with wget?

I want to download all these RPMs from SourceForge in one go with wget:
Link
How do I do this?
Seeing how for example HeaNet is one of the SF mirrors hosting this project (and many others), you could find out where SF redirects you, specifically:
http://ftp.heanet.ie/mirrors/sourceforge/h/project/hp/hphp/CentOS%205%2064bit/SRPM/
... and download that entire directory with the -r option (you should probably use the "no parent" switch, too).
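A minimal sketch of that approach, using the mirror URL quoted above:
# -r recurse, -np don't ascend to the parent directory,
# -nd don't recreate the directory tree locally, -A keep only .rpm files
wget -r -np -nd -A '*.rpm' 'http://ftp.heanet.ie/mirrors/sourceforge/h/project/hp/hphp/CentOS%205%2064bit/SRPM/'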
One of two ways:
Create a script that parses the HTML file and extracts the links that end with *.rpm, then download those links using wget $URL.
Or start copying & pasting those URLs and run
wget $URL
from the console.
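A minimal sketch of the scripted variant, assuming the listing page has been saved as index.html (the file name is a placeholder):
# pull out every href that ends in .rpm and strip the surrounding markup
grep -oE 'href="[^"]*\.rpm"' index.html | sed -e 's/^href="//' -e 's/"$//' > rpm-urls.txt
# fetch every URL listed in the file
wget -i rpm-urls.txt
If the extracted links are relative, wget's --base=URL option can be combined with -i to resolve them against the directory URL.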

Can i use wget to download multiple files from linux terminal

Suppose I have a directory accessible via HTTP, e.g.
http://www.abc.com/pdf/books
Inside the folder I have many PDF files.
Can I use something like
wget http://www.abc.com/pdf/books/*
wget -r -l1 -A.pdf http://www.abc.com/pdf/books
from wget man page:
Wget can follow links in HTML and XHTML pages and create local versions of remote web sites, fully recreating the directory structure of the original site. This is sometimes referred to as "recursive downloading." While doing that, Wget respects the Robot Exclusion Standard (/robots.txt). Wget can be instructed to convert the links in downloaded HTML files to the local files for offline viewing.
and
Recursive Retrieval Options
-r
--recursive
Turn on recursive retrieving.
-l depth
--level=depth
Specify recursion maximum depth level depth. The default maximum depth is 5.
It depends on the web server and its configuration. Strictly speaking, the URL is not a directory path, so http://something/books/* is meaningless.
However, if the web server implements the path http://something/books as an index page listing all the books on the site, then you can play around with the recursive and spider options, and wget will happily follow any links found on the http://something/books index page.
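A minimal sketch combining the options above for the hypothetical URL from the question:
# -r recurse, -l1 stay one level deep, -np don't ascend to the parent,
# -nd don't recreate directories locally, -A keep only PDF files
wget -r -l1 -np -nd -A.pdf http://www.abc.com/pdf/books/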
