curl command not executing via shell script in bash - linux

I'm learning shell scripting! To practice, I've tried downloading the Facebook page using curl in the Ubuntu terminal.
Contents of t.sh:
vi@vi-Dell-7537(Desktop) $ cat t.sh
curlCmd="curl \"https://www.facebook.com/vivekkumar27june88\""
echo $curlCmd
($curlCmd) > ~/Desktop/fb.html
I get an error when running the script:
vi@vi-Dell-7537(Desktop) $ ./t.sh
curl "https://www.facebook.com/vivekkumar27june88"
curl: (1) Protocol "https not supported or disabled in libcurl
But if I run the command directly, it works fine.
vi@vi-Dell-7537(Desktop) $ curl "https://www.facebook.com/vivekkumar27june88"
<!DOCTYPE html>
<html lang="hi" id="facebook" class="no_js">
<head><meta chars.....
I would appreciate it if anyone could point out the mistake I'm making in the script.
I've verified that the curl library has SSL enabled.

A command stored in a variable and expanded like this is not re-parsed by the shell, so the quotes embedded in $curlCmd are passed to curl literally (and the parentheses only add a sub-shell on top of that), which is why curl complains about the protocol "https.
Try eval:
curlCmd="curl 'https://www.facebook.com/vivekkumar27june88' > ~/Desktop/fb.html"
eval $curlCmd
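To see why the original script fails, note that the quotes stored in the variable are never re-parsed when you expand it bare, so curl receives "https://... as its URL; eval performs that re-parsing. A minimal sketch of the difference, reusing the question's variable:
# Bare expansion: the embedded quotes reach curl literally, hence the Protocol "https error
curlCmd="curl \"https://www.facebook.com/vivekkumar27june88\""
$curlCmd
# eval re-parses the string, so the quotes are honored before curl runs
eval "$curlCmd" > ~/Desktop/fb.html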

Create your script t.sh as this single line only:
curl -k "https://www.facebook.com/vivekkumar27june88" -o ~/Desktop/fb.html
As per man curl:
-k, --insecure
(SSL) This option explicitly allows curl to perform "insecure" SSL connections and transfers.
All SSL connections are attempted to be made secure by using the CA certificate bundle
installed by default. This makes all connections considered "insecure" fail unless -k,
--insecure is used.
-o file
Store output in the given filename.
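If you would rather confirm that your curl build really supports HTTPS instead of falling back to -k, a quick check is:
curl --version    # the "Protocols:" line should list https, and SSL should appear among the features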

As @Chepner said, go read BashFAQ #50: I'm trying to put a command in a variable, but the complex cases always fail!. To summarize, how you should do things like this depends on what your goal is.
If you don't need to store the command, don't! Storing commands is difficult to get right, so if you don't need to, just skip that mess and execute it directly:
curl "https://www.facebook.com/vivekkumar27june88" > ~/Desktop/fb.html
If you want to hide the details of the command, or are going to use it a lot and don't want to write it out each time, use a function:
curlCmd() {
curl "https://www.facebook.com/vivekkumar27june88"
}
curlCmd > ~/Desktop/fb.html
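If you want the same function to work for more than one URL, a small variation (my own sketch, not part of the original answer) is to forward the function's arguments straight to curl:
fetchPage() {
    # "$@" passes every argument (the URL plus any curl options) through unchanged
    curl "$@"
}
fetchPage "https://www.facebook.com/vivekkumar27june88" > ~/Desktop/fb.html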
If you need to build the command piece-by-piece, use an array instead of a plain string variable:
curlCmd=(curl "https://www.facebook.com/vivekkumar27june88")
for header in "${extraHeaders[@]}"; do
curlCmd+=(-H "$header") # Add header options to the command
done
if [[ "$useSilentMode" = true ]]; then
curlCmd+=(-s)
fi
"${curlCmd[#]}" > ~/Desktop/fb.html # This is the standard idiom to expand an array
If you want to print the command, the best way to do it is usually with set -x:
set -x
curl "https://www.facebook.com/vivekkumar27june88" > ~/Desktop/fb.html
set +x
...but you can also do something similar with the array approach if you need to:
printf "%q " "${curlCmd[#]}" # Print the array, quoting as needed
printf "\n"
"${curlCmd[#]}" > ~/Desktop/fb.html

Install the following packages on Ubuntu 14.04:
sudo apt-get install php5-curl
sudo apt-get install curl
Then run sudo service apache2 restart.
Check in phpinfo() that cURL is enabled ("cURL support: enabled").
Then check your command in the shell script:
RESULT=$(curl -L "http://sitename.com/dashboard/?show=api&action=queue_proc&key=$JOBID" 2>/dev/null)
echo "$RESULT"
You will get the response.
Thank you.

Related

wget recursion and file extraction

I'm trying to use wget to elegantly & politely download all the pdfs from a website. The pdfs live in various sub-directories under the starting URL. It appears that the -A pdf option is conflicting with the -r option. But I'm not a wget expert! This command:
wget -nd -np -r site/path
faithfully traverses the entire site downloading everything downstream of path (not polite!). This command:
wget -nd -np -r -A pdf site/path
finishes immediately having downloaded nothing. Running that same command in debug mode:
wget -nd -np -r -A pdf -d site/path
reveals that the sub-directories are ignored with the debug message:
Deciding whether to enqueue "https://site/path/subdir1". https://site/path/subdir1 (subdir1) does not match acc/rej rules. Decided NOT to load it.
I think this means that the subdirectories did not satisfy the "pdf" filter and were excluded. Is there a way to get wget to recurse into subdirectories (of arbitrary depth) and only download pdfs (into a single local dir)? Or does wget need to download everything, leaving me to manually filter for pdfs afterward?
UPDATE: thanks to everyone for their ideas. The solution was to use a two step approach including a modified version of this: http://mindspill.net/computing/linux-notes/generate-list-of-urls-using-wget/
Try this:
The -l switch tells wget to go one level down from the primary URL specified. You can obviously change that to however many levels down you want to follow.
wget -r -l1 -A.pdf http://www.example.com/page-with-pdfs.htm
Refer to man wget for more details.
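If you also want to keep the original command's behaviour of not ascending to the parent directory and not recreating the directory tree locally, a sketch combining those flags with the same example URL would be:
wget -r -l1 -np -nd -A pdf http://www.example.com/page-with-pdfs.htm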
If the above doesn't work, try this:
Verify that the TOS of the website permit crawling it. Then one solution is:
mech-dump --links 'http://example.com' |
grep pdf$ |
sed 's/\s\+/%20/g' |
xargs -I% wget http://example.com/%
The mech-dump command comes with Perl's WWW::Mechanize module (the libwww-mechanize-perl package on Debian and Debian-like distros).
To install mech-dump:
sudo apt-get update -y
sudo apt-get install -y libwww-mechanize-shell-perl
github repo https://github.com/libwww-perl/WWW-Mechanize
I haven't tested this, but you can still give it a try. I think you still need to find a way to get all URLs of a website and pipe them into any of the solutions I have given.
You will need to have wget and lynx installed:
sudo apt-get install wget lynx
Prepare a script; name it however you want, for this example pdflinkextractor:
#!/bin/bash
WEBSITE="$1"
echo "Getting link list..."
lynx -cache=0 -dump -listonly "$WEBSITE" | grep ".*\.pdf$" | awk '{print $2}' | tee pdflinks.txt
echo "Downloading..."
wget -P pdflinkextractor_files/ -i pdflinks.txt
To run the file:
chmod 700 pdflinkextractor
$ ./pdflinkextractor http://www.pdfscripting.com/public/Free-Sample-PDF-Files-with-scripts.cfm

How to sudo run a local script over ssh

I'm trying to run a local script with sudo over ssh:
ssh $HOST < script.sh
and I tried
ssh -t $HOST "sudo -s && bash" < script.sh
Actually, I searched a lot on Google and found some similar questions; however, I couldn't find a solution that runs a local script with sudo.
Reading the error message of
$ ssh -t $HOST "sudo -s && bash" < script.sh
Pseudo-terminal will not be allocated because stdin is not a terminal.
makes it pretty clear what's going wrong here.
You can't use the ssh parameter -t (which sudo needs to ask for a password) while redirecting your script to the stdin of bash in your remote session.
If it is acceptable for you, you could transfer the local script via scp to your remote machine and then execute the script without the need of I/O redirection:
scp script.sh $HOST:/tmp/ && ssh -t $HOST "sudo -s bash /tmp/script.sh"
Another way to fix your issue is to use sudo in non-interactive mode -n but for this you need to set NOPASSWD within the remote machine's sudoers file for the executing user. Then you can use
ssh $HOST "sudo -n -s bash" < script.sh
To make Edward Itrich's answer more scalable and geared towards frequent use, you can set up a system where you only run a one line script that can be quickly ported to any host, file or command in the following manner:
Create a script in your Scripts directory, if you have one, changing the name to whatever you want the script to be called (I use this format frequently: change one word for the script name, create the file, set permissions, and open it for editing):
newscript="runlocalscriptonremotehost.sh"
touch $newscript && chmod +x $newscript && nano $newscript
In nano, fill out the script as follows, placing the directory and name of the script you want to run remotely in the variable lines of runlocalscriptonremotehost.sh (you only need to edit lines 1-3):
HOSTtoCONTROL="sudoadmin@192.168.0.254"
PATHtoSCRIPT="/home/username/Scripts/"
SCRIPTname="scripttorunremotely.sh"
scp $PATHtoSCRIPT$SCRIPTname $HOSTtoCONTROL:/tmp/ && ssh -t $HOSTtoCONTROL "sudo -s bash /tmp/$SCRIPTname"
Then just run:
sh ./runlocalscriptonremotehost.sh
Keep runlocalscriptonremotehost.sh open in a tabbed text editor for quick updating, go ahead and create a bash alias for the script and you have yourself an app-ified version of this frequently used operation.
First of all, divide your objective into two parts: 1) ssh to the host, and 2) run the command you want as sudo. Once you are certain that you can 1) access the host and 2) have sudo privileges, you can combine the two commands with &&. What x_cmd && y_cmd does is run y_cmd only after x_cmd has exited successfully.
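As a concrete illustration of that two-part check (using the question's $HOST, and sudo -n true as a common way to test for passwordless sudo):
ssh $HOST "sudo -n true" && echo "ssh access and passwordless sudo both work on $HOST"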

Piping different values into bash command [duplicate]

How can you make SSH read the password from stdin, which it doesn't do by default?
Based on this post you can:
Create a command which opens an ssh session using SSH_ASKPASS (see SSH_ASKPASS in man ssh):
$ cat > ssh_session <<EOF
export SSH_ASKPASS="/path/to/script_returning_pass"
setsid ssh "your_user"@"your_host"
EOF
NOTE: To keep ssh from asking on the tty, we use setsid.
Create a script which returns your password (note the nested echo: the command writes echo your_ssh_password into the script):
$ echo "echo your_ssh_password" > /path/to/script_returning_pass
Make them executable
$ chmod +x ssh_session
$ chmod +x /path/to/script_returning_pass
Try it:
$ ./ssh_session
Keep in mind that ssh stands for secure shell; if you store your user, host and password in plain text files, you are misusing the tool and creating a possible security gap.
You can use sshpass, which is available, for example, in the official Debian repositories. Example:
$ apt-get install sshpass
$ sshpass -p 'password' ssh username@server
You can't with most SSH clients. You can work around it by using SSH APIs, like Paramiko for Python. Be careful not to override all security policies.
Distilling this answer leaves a simple and generic script:
#!/bin/bash
[[ $1 =~ password: ]] && cat || SSH_ASKPASS="$0" DISPLAY=nothing:0 exec setsid "$@"
Save it as pass, do a chmod +x pass and then use it like this:
$ echo mypass | pass ssh user@host ...
If its first argument contains password: then it passes its input to its output (cat); otherwise it launches whatever was presented, after setting itself as the SSH_ASKPASS program.
When ssh finds both SSH_ASKPASS and DISPLAY set, it will launch the program referred to by SSH_ASKPASS, passing it the prompt user@host's password:
An old post reviving...
I found this thread while looking for a solution to the exact same problem. I found something, and I hope someone will one day find it useful:
Install the ssh-askpass program (apt-get, yum, ...)
Set the SSH_ASKPASS variable (export SSH_ASKPASS=/usr/bin/ssh-askpass)
From a terminal, open a new ssh connection that is detached from the current terminal (setsid ssh user@host)
This looks simple enough to be secure, but I have not checked it yet (I'm just using it in a local, secure context).
Here we are.
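The same steps collected into commands (Debian-style package name shown; adjust for yum):
sudo apt-get install ssh-askpass
export SSH_ASKPASS=/usr/bin/ssh-askpass
# ssh falls back to the askpass program only when it has no tty and DISPLAY is set,
# so this assumes you are inside a running X session
setsid ssh user@host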
The FreeBSD mailing list recommends the expect library.
However, if you need a programmatic ssh login, you really ought to be using public key logins; there are obviously far fewer security holes that way than with an external library that passes a password through stdin.
A better alternative to sshpass is:
https://github.com/clarkwang/passh
I had problems with sshpass: if the ssh server is not in my known_hosts, sshpass will not show me any message; passh does not have this problem.
I'm not sure why you need this functionality, but it seems you can get this behavior with ssh-keygen.
It allows you to login to a server without using a password by having a private RSA key on your computer and a public RSA key on the server.
http://www.linuxproblem.org/art_9.html
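A minimal sketch of that key-based setup (default key paths, user@server as a placeholder):
ssh-keygen -t rsa        # accept the defaults; optionally protect the key with a passphrase
ssh-copy-id user@server  # appends your public key to the server's ~/.ssh/authorized_keys
ssh user@server          # should now log in without a password prompt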

Amazon Linux curl

bash -c "$(curl -s https://install.prediction.io/install.sh)"
I run the above script on an EC2 instance running Amazon Linux to install PredictionIO. Nothing happens, and there is no error. Does anyone know what's going on?
That command downloads a file in silent mode ($(curl -s https://install.prediction.io/install.sh)) and then runs the file with bash -c.
You should run the command in separate steps to check what is going on:
curl -O https://install.prediction.io/install.sh
chmod +x install.sh
./install.sh
First, check whether the link https://install.prediction.io/install.sh is accessible from your Amazon EC2 instance:
curl -v https://install.prediction.io/install.sh
If it's working, try this:
curl -s https://install.prediction.io/install.sh | bash
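Since -s hides error output, a hedged debugging variant is to add -f (fail on HTTP errors) and -S (show errors even in silent mode) so failures are reported instead of silently swallowed:
curl -fsS https://install.prediction.io/install.sh | bash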

Ubuntu: Using curl to download an image

I want to download an image accessible from this link: https://www.python.org/static/apple-touch-icon-144x144-precomposed.png into my local system. Now, I'm aware that the curl command can be used to download remote files through the terminal. So, I entered the following in my terminal in order to download the image into my local system:
curl https://www.python.org/static/apple-touch-icon-144x144-precomposed.png
However, this doesn't seem to work, so obviously there is some other way to download images from the Internet using curl. What is the correct way to download images using this command?
curl without any options will perform a GET request. It will simply return the data from the URI specified, not save the file to your local machine.
When you do,
$ curl https://www.python.org/static/apple-touch-icon-144x144-precomposed.png
You will receive binary data:
|�>�$! <R�HP#T*�Pm�Z��jU֖��ZP+UAUQ#�
��{X\� K���>0c�yF[i�}4�!�V̧�H_�)nO#�;I��vg^_ ��-Hm$$N0.
���%Y[�L�U3�_^9��P�T�0'u8�l�4 ...
In order to save this, you can use:
$ curl https://www.python.org/static/apple-touch-icon-144x144-precomposed.png > image.png
to store that raw image data inside of a file.
An easier way though, is just to use wget.
$ wget https://www.python.org/static/apple-touch-icon-144x144-precomposed.png
$ ls
.
..
apple-touch-icon-144x144-precomposed.png
For those who don't have wget or don't want to install it, curl -O (capital "O", not a zero) will do the same thing as wget. E.g., my old netbook doesn't have wget, and wget is a 2.68 MB install that I don't need.
curl -O https://www.python.org/static/apple-touch-icon-144x144-precomposed.png
If you want to keep the original name, use uppercase -O:
curl -O https://www.python.org/static/apple-touch-icon-144x144-precomposed.png
If you want to save the remote file under a different name, use lowercase -o:
curl -o myPic.png https://www.python.org/static/apple-touch-icon-144x144-precomposed.png
Create a new file called files.txt and paste the URLs one per line. Then run the following command.
xargs -n 1 curl -O < files.txt
source: https://www.abeautifulsite.net/downloading-a-list-of-urls-automatically
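If the list is long, xargs can also run several downloads at once; the -P 4 below (four parallel curl processes) is an arbitrary choice:
xargs -n 1 -P 4 curl -O < files.txt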
For those who got "permission denied" on the save operation, here is the command that worked for me:
$ curl https://www.python.org/static/apple-touch-icon-144x144-precomposed.png --output py.png
Try this:
$ curl https://www.python.org/static/apple-touch-icon-144x144-precomposed.png > precomposed.png
