cannot wget from uniprot, what does the sslv3 error mean? - linux

I am fairly new to bioinformatics. I have a script to download sequence data from UniProt, but when I use the wget command I am getting an error message for some of the web links:
OpenSSL: error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure
Unable to establish SSL connection.
So some of the sequence data downloads (3 entries in total), and then I get this error message for each of the others.
I am using a VirtualBox VM running Ubuntu, so I am using a bash script.
I have tried installing an updated version of wget, but I get a message saying the latest version is already installed. Has anyone got any suggestions?
As requested, script is as follows:
# read the list of UniProt IDs from the file (whitespace-separated)
VAR=$(cat uniprot_id.txt)
URL="https://www.uniprot.org/uniprot/"
for i in ${VAR}
do
    echo "(o) Downloading Uniprot entry: ${i}"
    wget "${URL}${i}.fasta"
    echo "(o) Done downloading ${i}"
    # append the downloaded sequence to the combined MUSCLE input, then tidy up
    cat "${i}.fasta" >> muscle_input.fasta
    rm "${i}.fasta"
done
To confirm: the web links that this script creates are all valid, and they open the sequence data I require when I click them.
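For what it's worth, an "sslv3 alert handshake failure" from an older wget/OpenSSL build often points to a TLS version or certificate problem rather than to the links themselves. A minimal diagnostic sketch, assuming the same URL and loop variables as the script above (these are standard wget/openssl options, not something taken from the original post):
# see whether the server will complete a TLS 1.2 handshake at all
openssl s_client -connect www.uniprot.org:443 -tls1_2 </dev/null
# ask wget to negotiate TLS 1.2 explicitly instead of relying on an old default
wget --secure-protocol=TLSv1_2 "${URL}${i}.fasta"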

I also had trouble with this. Would you be able to share what changes you made to the SSL configuration please?
In the meantime, because I trust the URL domain, I was able to work around the issue with this line:
wget ${URL}${i}.fasta --no-check-certificate
I think I need to update domain certificates somewhere, but I'm working on that.
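If the cause is an outdated certificate bundle inside the VM, refreshing the system CA store is a safer fix than --no-check-certificate. A sketch, assuming an Ubuntu guest as described in the question:
sudo apt-get update
# reinstall the CA bundle and OpenSSL, then rebuild the trusted certificate store
sudo apt-get install --reinstall ca-certificates openssl
sudo update-ca-certificates --fresh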

Related

Ubuntu 18, proxy not working in terminal but works in browser

(a related and perhaps simpler problem to solve: proxy authentication by MSCHAPv2)
Summary: I am using Ubuntu 18; the proxy works with the web browser but not with terminal applications (wget, curl or apt update). Any clues? The problem seems to be interpreting the proxy's "PAC file"... Is it? How do I translate it to Linux's proxy variables? ... Or is the problem simpler: was my proxy config (see the step-by-step procedure below) wrong?
Details:
Running env | grep -i proxy in a terminal, we obtain:
https_proxy=http://user:pass@pac._ProxyDomain_/proxy.pac:8080
http_proxy=http://user:pass@pac._ProxyDomain_/proxy.pac:8080
no_proxy=localhost,127.0.0.0/8,::1
NO_PROXY=localhost,127.0.0.0/8,::1
ftp_proxy=http://user:pass@pac._ProxyDomain_/proxy.pac:8080
and browser (Firefox) is working fine for any URL, but:
wget http://google.com says Resolving pac._ProxyDomain_ (pac._ProxyDomain_)... etc.etc.0.26 Connecting to pac._ProxyDomain_ (pac._ProxyDomain_)|etc.etc.0.26|:80... connected.
Proxy request has been sent, waiting for response ... 403 Forbidden
2019-07-25 12:52:19 ERROR 403: Forbidden.
curl http://google.com says "curl: (5) Could not resolve proxy: pac._ProxyDomain_/proxy.pac"
Notes
(recent news: purging the exported proxy variables changed something, and I have not retested everything...)
The proxy configuration procedures that I used are below (is there a plug-and-play PAC file generator? Do I need a PAC file?)
Config procedures used
The machine was running with a direct, non-proxied internet connection... Then the machine was moved to the LAN with the proxy.
Added "export *_proxy" lines (http, https and ftp) to my ~/.profile. The URL definitions are in the form http_proxy="http://user:pwd@etc" (supposing that is correct, because I tested before with the user:pwd@http://pac.domain/proxy.pac syntax and Firefox prompted for a proxy login). (If the current proxy password contains a # character, does it need to change?)
Added the same "export *_proxy" lines to ~root/.profile. (Is this needed?)
(can reboot and test with echo $http_proxy)
visudo procedure described here
Rebooted and browsed with Firefox directly, without needing to log in (good, it is working!). Testing env | grep -i proxy shows all the correct values as expected.
Testing wget and curl as at the beginning of this report: proxy bug.
Testing sudo apt update: bug.
... after that, one more step: supposing that no apt-specific proxy file exists, I created one with sudo nano /etc/apt/apt.conf.d/80proxy and added 3 lines of Acquire::*::proxy "value"; with the value http://user:pass@pac._ProxyDomain_/proxy.pac:8080, where pass is etc%23etc, URL-encoded.
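For reference, a minimal sketch of what that /etc/apt/apt.conf.d/80proxy file could contain. Note that proxyhost:8080 is a placeholder: apt, like wget and curl, needs the address of the actual proxy server (the host and port the PAC file ultimately points at), not the URL of the proxy.pac file itself:
sudo tee /etc/apt/apt.conf.d/80proxy >/dev/null <<'EOF'
Acquire::http::Proxy  "http://user:pass@proxyhost:8080/";
Acquire::https::Proxy "http://user:pass@proxyhost:8080/";
Acquire::ftp::Proxy   "http://user:pass@proxyhost:8080/";
EOF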
Summary of tests performed
CONTEXT-1.1
(this was a problem, but I am ignoring it now to focus on the more relevant one)
After the (proxied) cable connection and the proxy configuration in the system (see the section "Config procedures used" above). Proxy password with a special character.
curl http://google.com says "curl: (5) Could not resolve proxy..."
When I change everything in .profile from %23 to #, the wget error changes, but the curl error does not. wget changes to "Error parsing proxy URL http://user:pass#pac._ProxyDomain_/proxy.pac:8080: Bad port number"
PS: when I used $ in the password, the system got confused (something in the export http_proxy command, or in a later use of http_proxy, treated it as a variable).
CONTEXT-1.2
Same as context-1.1 above, but with a password containing no special characters. A good, clean proxy password.
curl http://google.com says "curl: (5) Could not resolve proxy..."
CONTEXT-2
After the (proxied) cable connection and with no proxy configuration in the system (but confirmed that the connection works in the browser after the automatic popup login form).
curl -x 192.168.0.1:8080 http://google.com says "curl: (7) Failed to connect..."
curl --verbose -x "http://user:pass@pac._proxyDomain_/proxy.pac" http://google.com says "curl: (5) Could not resolve proxy..."
Other configs in use
As @Roadowl suggested, I checked:
the files ~/.netrc and ~root/.netrc do not exist
the file /etc/wgetrc exists, but everything in it is commented out except passive_ftp = on
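Regarding "how to translate to Linux's proxy variables": wget, curl and apt do not evaluate PAC files, so the *_proxy variables have to name the proxy server directly. A hedged sketch of what ~/.profile could contain, where proxyhost:8080 is a placeholder for whatever host and port the PAC file resolves to and %23 is the URL-encoded # in the password:
# in ~/.profile (placeholders: user, pass%23word, proxyhost)
export http_proxy="http://user:pass%23word@proxyhost:8080"
export https_proxy="$http_proxy"
export ftp_proxy="$http_proxy"
export no_proxy="localhost,127.0.0.0/8,::1"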

How do I export JIRA issues to xls?

As the title says: I'm trying to export JIRA issues to an xls file through my command line.
This is what I'm entering:
wget -O export.xls https://myjiraurl.com/sr/jira.issueviews:searchrequest-excel-all-fields/10300/SearchRequest-10300.xls?tempMax=1000
But I keep getting this error:
Resolving myjiraurl.com (myjiraurl.com)... 10.64.80.92
Connecting to myjiraurl.com (myjiraurl.com)|10.64.80.92|:443... connected.
ERROR: cannot verify myjiraurl.com's certificate, issued by ‘/C=US/O=MyCorporation/CN=Intel Intranet Basic Issuing CA 2B’:
Self-signed certificate encountered.
To connect to myjiraurl.com insecurely, use `--no-check-certificate'.
I've also tried adding
&os_username=<myuserid>&os_password=<mypassword>
but it still gives me the same thing. I've tried plugging the URL directly into the browser's address bar and it exports an xls file back to me just fine. Tips?
Try:
wget --user=userid --password=secret --no-check-certificate -O export.xls "https://myjiraurl.com/sr/jira.issueviews:searchrequest-excel-all-fields/10300/SearchRequest-10300.xls?tempMax=1000"
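If skipping certificate checks is a concern, a possible alternative is to export the corporate issuing CA certificate (for example from the browser) and point wget at it; the corp-ca.pem path below is hypothetical:
wget --ca-certificate=/path/to/corp-ca.pem --user=userid --password=secret \
     -O export.xls "https://myjiraurl.com/sr/jira.issueviews:searchrequest-excel-all-fields/10300/SearchRequest-10300.xls?tempMax=1000"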

Downloading file using scp from a remote server. But returning error

I tried downloading a file from my Bluehost account. I'm trying to do it remotely with bash, like this:
$ scp user@domain.com:/public_html/directory/file.php ~/Desktop/file.php
But it returns this response:
declare -x CLASSPATH=".:/usr/local/jdk/lib/classes.zip"
I don't know if this is an error, but nothing has been copied.
It seems like the ~/.bashrc file on the remote server is broken: it prints output, and any output from the remote login scripts interferes with scp. Try using a standard ~/.bashrc for your system (for testing) and try again.
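As a hedged illustration of why this happens: scp runs over a non-interactive shell, and anything the remote startup files write to stdout corrupts the scp protocol stream. A guard like this near the top of the remote ~/.bashrc confines such output to interactive logins (the echo is only a stand-in for whatever the file currently prints):
# return early for non-interactive shells (the kind scp/sftp use)
case $- in
    *i*) ;;       # interactive shell: continue with the rest of the file
    *) return ;;  # non-interactive: stop before anything prints
esac
echo "some banner"   # stands in for whatever the remote file currently prints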

NODEJS - SFTP - Handling process output

At the moment I'm trying to use node-sftp in order to provide my nodejs script with the ability to SFTP with a private key.
That module appears to be broken since Node v0.6 (tty.open is no longer a method).
So I've tried to use a child process and spawn my sftp command.
Now the connection appears to work fine (I checked the FTP server's logs at /var/log/auth.log).
I can also see some output in the Node window...
Permanently added '46.x.x.x' (RSA) to the list of known hosts.
Connected to 46.x.x.x.
Changing to: /home/deploy/somefolder
When I connect directly via the command line using the following command, it ends up at a prompt like sftp>, which waits for my SFTP commands:
sftp -o Port=22 -o PasswordAuthentication=no -o IdentityFile=private_key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o BatchMode=yes deploy@46.x.x.x:/home/deploy/somefolder
Does anyone have any suggestions on where I might be going wrong?
It's hard to say without a little more detail, but I would take a look at:
https://github.com/chjj/pty.js/
This will emulate a tty device that you can read and write to.
If you can provide some additional code you have tried, we may be able to point you in a better direction.
You could also try cloning the node-sftp module from https://github.com/ajaxorg/node-sftp.git and using the library directly instead of from npm; it looks like the latest version on GitHub has support for Node versions newer than 0.6.
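Another option worth noting, as a sketch rather than a fix from the original answers: sftp can be driven entirely non-interactively with a batch file via -b, so the spawned child process never sits waiting at the sftp> prompt (file names and paths below are illustrative):
# commands.sftp holds the operations to run, one per line, e.g.:
#   cd /home/deploy/somefolder
#   put localfile.txt
#   bye
sftp -b commands.sftp -o Port=22 -o PasswordAuthentication=no \
     -o IdentityFile=private_key -o StrictHostKeyChecking=no \
     deploy@46.x.x.x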

Download non web accessible file with wget

Is it possible to download a file in, say, /home/... using wget to my local machine? I'm pretty new on the bash shell side, so perhaps this is just a matter of using the options correctly. What I've gleaned is that something like this should work, but my tests aren't downloading the file locally; they are placing it within the folder I'm running wget in:
root@mysite [/home/username/public_html/themes/themename/images]# wget -O "tester.png"
"http://www.mysite.com/themes/themename/images/previous.png"
--2011-09-08 14:28:49-- http://www.mysite.com/themes/themename/images/previous.png
Resolving www.mysite.com... 173.193.xxx.xxx
Connecting to www.mysite.com|173.193.xxx.xxx|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 352 [image/png]
Saving to: `tester.png'
100%[==============================================================================================>] 352 --.-K/s in 0s
2011-09-08 14:28:49 (84.3 MB/s) - `tester.png' saved [352/352]
Perhaps the above is a bad example, but I can't seem to figure out how to use wget (or some other command) to get something from a non-web-accessible directory (it's a backup file). Is wget the correct command for this?
wget uses the HTTP (or FTP) protocol to transfer its files, so no, you can't use it to transfer anything which is not available through those services. What you should do is use scp. It uses SSH, and you can use it to get any file (which you have the permission to read, that is).
Say you want /home/myuser/test.file from the computer mycomp, and you want to save it as test.newext. Then you'd invoke it like this:
scp myuser@mycomp:/home/myuser/test.file test.newext
You can do a lot of other nifty stuff with scp so read the manual for more possibilities!
This belongs on superuser, but you want to use scp to copy the file to your local machine.
When a file isn't web accessible, you can't get it with wget.
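For example, run from your local machine rather than on the server (the backup path here is hypothetical):
scp username@mysite.com:/home/username/backup.tar.gz ~/Desktop/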
