How do I get the response of a "GET /" request sent via Popen.communicate to an underlying openssl s_client?

I'm a newbie to the subprocess module. I would like to use openssl commands to connect to a secure server URL (e.g. Wikipedia). In this case I am able to connect to the web server and the handshake succeeds.
But when I send a GET / or HEAD request, I'm unable to receive or process the output.
Obtained output:
SSL handshake has read 3708 bytes and written 405 bytes
Verification: OK
New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
Server public key is 256 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
import subprocess

cmd = "C:\\cygwin64\\bin\\openssl.exe s_client -connect wikipedia.org:443"
popen = subprocess.Popen(cmd, stdin=subprocess.PIPE, universal_newlines=True,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# the request is written directly to s_client's stdin; no shell echo is needed
input1 = "HEAD / HTTP/1.1\r\nHost: www.wikipedia.org\r\n\r\n"
(test_stdout, test_stderr) = popen.communicate(input=input1)
print(test_stdout)
print(f"return value of popen: {popen.returncode!r}")
Updated code with openssl -quiet or -ign_eof:
cmd1 = "C:\\cygwin64\\bin\\openssl.exe s_client -connect www.wikipedia.org:443 -servername www.wikipedia.org -ign_eof"
popen = subprocess.Popen(cmd1, stdin=subprocess.PIPE, universal_newlines=True,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
input1 = "HEAD / HTTP/1.1\r\nHost: www.wikipedia.org\r\n\r\n"
print(input1)
(test_stdout, test_stderr) = popen.communicate(input=input1)
print("value of popen")
print(test_stdout)
print(f"return value of popen: {popen.returncode!r}")
Please find the different results:
1/ Left: results from PyCharm
2/ Right: results of running the openssl s_client command directly

As far as I can tell, the script is OK and does what it is supposed to do:
it gives you the server ciphers and TLS config.
But judging by your text, what you want to do is something different:
you want to send some HTTP requests and display their responses.
To do that, you will have to replace your cmd with something like curl https://www.wikipedia.org or wget -O- https://www.wikipedia.org and feed that into your subprocess.Popen call, instead of the openssl command.
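A minimal sketch of that suggestion, assuming a curl binary is available on PATH (-i includes the response status line and headers in the captured output):

```python
# Sketch: let curl handle TLS and HTTP, and capture the response via subprocess.
import subprocess

result = subprocess.run(
    ["curl", "-sS", "-i", "https://www.wikipedia.org"],
    capture_output=True, text=True, timeout=30,
)
print(result.stdout)      # status line, headers, then the HTML body
print(result.returncode)  # 0 on success
```

With subprocess.run there is no need to manage communicate/wait by hand; the call blocks until curl exits.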

Related

Get/fetch a file with a bash script using /dev/tcp over https without using curl, wget, etc

I am trying to read/fetch this file:
https://blockchain-office.com/file.txt with a bash script over /dev/tcp, without using curl, wget, etc.
I found this example:
I found this example:
exec 3<>/dev/tcp/www.google.com/80
echo -e "GET / HTTP/1.1\r\nhost: http://www.google.com\r\nConnection: close\r\n\r\n" >&3
cat <&3
I changed it to my needs, like this:
exec 3<>/dev/tcp/www.blockchain-office.com/80
echo -e "GET / HTTP/1.1\r\nhost: http://www.blockchain-office.com\r\nConnection: close\r\n\r\n" >&3
cat <&3
When I try to run it, I receive:
400 Bad Request
Your browser sent a request that this server could not understand
I think this is because the server only accepts strict SSL/HTTPS connections.
So I changed it to:
exec 3<>/dev/tcp/www.blockchain-office.com/443
echo -e "GET / HTTP/1.1\r\nhost: https://www.blockchain-office.com\r\nConnection: close\r\n\r\n" >&3
cat <&3
When I try to run it, I receive:
400 Bad Request
Your browser sent a request that this server could not understand.
Reason: You're speaking plain HTTP to an SSL-enabled server port.
Instead use the HTTPS scheme to access this URL, please.
So I can't even get a normal connection, let alone fetch the file!
None of these posts fit my case; SSL/TLS seems to be the only problem, since plain http/80 works when I don't use curl, wget, lynx, openssl, etc.:
how to download a file using just bash and nothing else (no curl, wget, perl, etc.)
Using /dev/tcp instead of wget
How to get a response from any URL?
Read file over HTTP in Shell
I need a solution to get/read/fetch a normal txt file from a domain over https, using only /dev/tcp and no other tools like curl or wget, and to output it in my terminal or save it in a variable. Is that possible, and how? Or is there another solution using only the standard terminal utilities?
You can use openssl s_client to perform the equivalent operation but delegate the SSL part:
#!/bin/sh
host='blockchain-office.com'
port=443
path='/file.txt'
crlf="$(printf '\r\n_')"
crlf="${crlf%?}"
{
printf '%s\r\n' \
"GET ${path} HTTP/1.1" \
"host: ${host}" \
'Connection: close' \
''
} |
openssl s_client -quiet -connect "${host}:${port}" 2>/dev/null | {
# Skip headers by reading up until encountering a blank line
while IFS="${crlf}" read -r line && [ -n "$line" ]; do :; done
# Output the raw body content
cat
}
Instead of cat to output the raw body, you may want to check some headers like Content-Type and Content-Transfer-Encoding, maybe even navigate and handle recursive MIME chunks, and then decode the raw content into something usable.
After all the comments and research, the answer is no: we can't get/fetch files using only the standard shell tools like /dev/tcp, because we can't handle SSL/TLS without handling the complete handshake.
It is only possible over plain http/80.
I don't think bash's /dev/tcp supports SSL/TLS.
If you use /dev/tcp for an http/https connection you have to manage everything yourself: the complete handshake including SSL/TLS, HTTP headers, chunking and more. Or you use curl/wget, which manage it for you.
Then the shell is the wrong tool, because it is not capable of performing any part of the SSL handshake without external resources/commands. Take what you want and can from what I showed you here as the cleanest and most portable POSIX-shell implementation of a minimal HTTP session through SSL. And then maybe it is time to consider alternative options (not using HTTPS, or using languages with built-in or standard-library SSL support).
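As one illustration of that last option, here is a sketch in Python, whose standard library performs the TLS handshake and the HTTP exchange without any external tools (hostname and path taken from the question):

```python
# Sketch: fetch the file over HTTPS using only Python's standard library.
import http.client

conn = http.client.HTTPSConnection("blockchain-office.com", 443, timeout=30)
conn.request("GET", "/file.txt", headers={"Connection": "close"})
resp = conn.getresponse()
print(resp.status, resp.reason)                  # e.g. 200 OK
body = resp.read().decode("utf-8", errors="replace")
print(body)                                      # the file contents
conn.close()
```

http.client also takes care of chunked transfer encoding, so no manual header-skipping loop is needed.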
We will use curl, wget and openssl in separate docker containers now.
I think there are still some requirements coming in the future that will determine whether we keep only one of them or all of them.
We will use the script from @Léa Gris in a docker container too.

Concatenation of variable to end of URL in cURL for linux shell script

(I have made several edits to this in response to comments)
My goal is to have a shell script that goes to an external server, gets a batch of files from a directory, prints them on a thermal receipt printer, then deletes the same files from the remote server (so that when a cron job runs they don't get printed again). My problem is in the concatenation of a variable to the end of the URL for cURL. The rest of the script is working but I will show the entire script for clarity
I've done several searches for solutions, and many seem to involve more complex situations, e.g. one that tries to solve for a hidden carriage return when the variable is appended to the middle of the URL (cf. Bash curl and variable in the middle of the url). I tried that solution (and several others) and they didn't work. My situation is simpler (I think), so maybe those answers added unnecessary complications and that's my problem? Anyways...
Here's the full code with placeholders for sensitive info:
#!/bin/bash
# step 1 - change to the directory where we want to temporarily store tickets
cd Tickets
# step 2 - get all the tickets in the target directory on the external server and put them in the current temporary local directory
wget --secure-protocol TLSv1_2 --user=<placeholder> --password='<placeholder>' ftps://<placeholder>/public_html/tickets/*.txt
# step 3 - print each of the tickets in the current temporary local directory
for i in *.txt
do lp $i
done
# step 4 - reach out to the target directory and delete each of the files that we previously downloaded (not the entire directory; might have new files)
for i in *.txt
do curl --verbose --ftp-ssl --user <placeholder>:<placeholder> 'ftp://<placeholder>/public_html/tickets' -Q "DELE /public_html/tickets/$i"
done
# empty the current local directory where we temporarily stored files during the execution of this script
for i in *.txt
do rm $i
done
# should be done now.
I have used all of the following variations for step 4:
for i in *.txt
do curl --ftp-ssl --user (myftpid):(mypasswd) ftp://(myhostname)/public_html/tickets/ -Q 'DELE /public_html/tickets/'$i
done
for i in *.txt
do curl --ftp-ssl --user (myftpid):(mypasswd) ftp://(myhostname)/public_html/tickets/ -Q 'DELE /public_html/tickets/'${i}
done
for i in *.txt
do curl --ftp-ssl --user (myftpid):(mypasswd) ftp://(myhostname)/public_html/tickets/ -Q 'DELE /public_html/tickets/'"$i"
done
for i in *.txt
do curl --ftp-ssl --user (myftpid):(mypasswd) ftp://(myhostname)/public_html/tickets/ -Q 'DELE /public_html/tickets/'"${i}"
done
Output for all four of those is:
curl: (21) QUOT command failed with 550
I was able to confirm that the code works without a variable by testing this:
curl --ftp-ssl --user <placeholder>:<placeholder> ftp://<placeholder>/public_html/tickets/ -Q 'DELE /public_html/tickets/14.txt'
*** EDIT ***
I re-read the comments and I think I initially misunderstood some of them. I was able to put echo in front of the curl command to see the output with the variable expanded. This was very helpful, thanks @Bodo. The suggestion from @NationBoneless to use the --verbose flag was also useful and yielded the following:
< 220-You are user number 1 of 50 allowed.
< 220-Local time is now 18:34. Server port: 21.
< 220-This is a private system - No anonymous login
< 220-IPv6 connections are also welcome on this server.
< 220 You will be disconnected after 30 minutes of inactivity.
AUTH SSL
< 500 This security scheme is not implemented
AUTH TLS
< 234 AUTH TLS OK.
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / <placeholder>
* Server certificate:
* subject: CN=<placeholder>
* start date: Nov 16 00:00:00 2020 GMT
* expire date: Nov 16 23:59:59 2021 GMT
* subjectAltName: host "<placeholder>" matched cert's "<placeholder>"
* issuer: C=US; ST=TX; L=Houston; O=cPanel, Inc.; CN=cPanel, Inc. Certification Authority
* SSL certificate verify ok.
USER <placeholder>
< 331 User <placeholder> OK. Password required
PASS <placeholder>
< 230 OK. Current restricted directory is /
PBSZ 0
< 200 PBSZ=0
PROT P
< 200 Data protection level set to "private"
PWD
< 257 "/" is your current location
* Entry path is '/'
DELE /public_html/tickets/14.txt
* ftp_perform ends with SECONDARY: 0
< 550 Could not delete /public_html/tickets/14.txt: No such file or directory
* QUOT command failed with 550
* Closing connection 0
I would think something like this would work. When I tested it locally the curl command seemed to work correctly:
for i in *.txt
do curl --ftp-ssl --user (myftpid):(mypasswd) ftp://(myhostname)/public_html/tickets/ -Q "DELE /public_html/tickets/$i"
done
First, I want to thank @Bodo and @NationBoneless. I would not have figured this out without their comments. Thank you very much to both of you.
My problem had two parts - one was the problem with concatenation (which I knew from the start) but also in how I was setting up my URL path for curl (which I didn't realize until later).
The correct form for the concatenation is:
curl --ftp-ssl --user myID:mypassword 'ftp://host.mydomain.com' -Q "DELE /public_html/tickets/$i"
Some salient points that I learned along the way, and which might be useful to others with similarly limited experience, include:
- Try exactly what the commenters tell you to try, even if you think you understand their reasoning and you think they're on the wrong track - they probably aren't wrong.
- Use double quotes in shell scripts if you plan to have a variable inside the quotes. Single quotes behave very differently and don't play nice with variables.
- echo is your friend; it was extremely helpful for debugging because it showed me that my variable was being parsed as $i instead of as the value of $i because of those pesky single quotes. Thanks @Bodo
- The --verbose flag is very helpful. It let me see that curl was connecting to the server and that my script was failing because it couldn't find the file to delete. My URL path was wrong and I didn't realize it. Thanks @NationBoneless
- I originally put the full path in the ftp:// URL and then again after -Q, but that doesn't work. After much trial and error I got it to work with just ftp://hostname before the -Q and the rest of the path after the -Q.
- This next part might not apply to anyone else, but the server I'm working on is using WHM/cPanel. The domain to which I'm trying to connect is actually not the domain I am using in my code. I had to connect to host.myserverdomain.com (the domain that I use for root admin stuff in WHM) instead of ftp.thisprojectdomain.com (the domain on which I'm actually working). Apparently WHM/cPanel can be set up many ways and that's how my data center set up mine. Not sure how common this is, but if you're tearing your hair out, try switching to the WHM domain.
- For those wondering why my main script uses both wget and curl: I started with wget because I thought I was more familiar with it. wget can't delete files on the server. I switched to curl for deleting the files but never switched the first statement to curl because it worked. There's probably a way to use curl instead of wget; I leave that exercise to the comments section, as I'll stick with what works.
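For what it's worth, the whole fetch-then-delete loop could also be sketched without curl or wget at all, using Python's standard ftplib. This is only an illustration with the placeholder host and credentials from above, not the script the answer uses:

```python
# Sketch: download all .txt tickets over FTPS and delete them server-side,
# using only Python's standard library (host/user/password are placeholders).
from ftplib import FTP_TLS

with FTP_TLS("host.mydomain.com") as ftps:
    ftps.login("myID", "mypassword")
    ftps.prot_p()                          # switch the data channel to TLS
    ftps.cwd("/public_html/tickets")
    for name in ftps.nlst():
        if name.endswith(".txt"):
            with open(name, "wb") as f:
                ftps.retrbinary(f"RETR {name}", f.write)
            ftps.delete(name)              # same effect as curl -Q "DELE ..."
```

Because the filename is a Python variable rather than a shell word, the single-vs-double quoting pitfall disappears entirely.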

Send email using linux script

I have a problem with sending email from the console. When I run my script, it gets stuck on the first command, and the following commands never execute. What can I do?
#!/bin/bash
openssl s_client -starttls smtp -crlf -connect smtp.gmail.com:587
auth plain
(((((here is my login:pass)))))
mail from: <test@gmail.com>
rcpt to: <test2@gmail.com>
data
my mail
.
quit
The first command is a shell command; the rest should be input to that command, but you've written them as part of the shell script. What you're trying to do could be done with a here-document:
openssl s_client ... <<EOF
auth plain
...
EOF
However, trying to send email using openssl s_client? You're gonna have a bad time. You're probably not going to get this to work at all. Use an MTA or (maybe even better) an MUA/MSA. If you need a lightweight MTA, try esmtp-run. Then you can set up your username/password in esmtprc (or any other MTA/MDA configuration). If you feel the need to do this close to the metal:
/usr/sbin/sendmail -ti <<EOF
To: myself@gmail.com
From: myself@gmail.com
Subject: Test email
Date: Fri, 23 Mar 2018 22:26:38 +0000 (GMT)
This is a test...
--
Myself
EOF
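If a scripting language is acceptable, the same close-to-the-metal exchange can be sketched with Python's standard smtplib, which drives the STARTTLS handshake and the SMTP dialogue for you (the addresses and credentials below are the question's placeholders):

```python
# Sketch: send the question's test mail via smtp.gmail.com:587 with STARTTLS.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "test@gmail.com"        # placeholder sender
msg["To"] = "test2@gmail.com"         # placeholder recipient
msg["Subject"] = "Test email"
msg.set_content("my mail")

with smtplib.SMTP("smtp.gmail.com", 587, timeout=30) as smtp:
    smtp.starttls()                   # upgrade the connection to TLS
    smtp.login("login", "pass")       # placeholder credentials
    smtp.send_message(msg)
```

Note that Gmail in particular may additionally require an app password or OAuth rather than the plain account password.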

How can I easily check if a POP3 or SMTP connection is valid with perl / a shell script?

I have setup an SSH tunnel to my mail server as follows:
ssh -o ServerAliveInterval=60 -f me@mydomain.com -L 63110:mail.mydomain.com:110 -N
ssh -o ServerAliveInterval=60 -f me@mydomain.com -L 63325:mail.mydomain.com:25 -N
I can send/receive mail for a while, but after a period of inactivity, my mail client reports that it doesn't get a valid greeting from the mail server.
I have a perl script that checks every minute to make sure that the ssh tunnel is running (via ps) and that the port is open (using IO::Socket::PortState qw(check_ports)), but I would like to check whether or not I get a valid greeting as well.
What would be the best way to do this either in perl or a shell script (running Ubuntu 12.04)?
To answer the question, notwithstanding the setup of the OP, THE tool to use is swaks, aka the Swiss Army Knife of SMTP. You can get it from here: http://jetmore.org/john/code/swaks/
Typically, to test your SMTP server you would use a command like this: swaks --server mail.example.com --from ben.holness@example.com --to ben.holness@somewhere.example.com
It will then show you all the dialogue between the smtp client and the server, making it really easy to pinpoint the source of possible problems.
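Since the OP already has a Perl watchdog script, here is a comparable greeting check sketched in Python's standard library instead; the ports match the SSH tunnels above, and the helper names are my own invention:

```python
# Sketch: verify that the tunneled POP3/SMTP endpoints answer with a
# valid greeting, not just that the port is open.
import poplib
import smtplib

def pop3_greeting_ok(host="localhost", port=63110):
    try:
        conn = poplib.POP3(host, port, timeout=10)
        banner = conn.getwelcome()        # e.g. b"+OK POP3 ready"
        conn.quit()
        return banner.startswith(b"+OK")
    except OSError:
        return False

def smtp_greeting_ok(host="localhost", port=63325):
    try:
        conn = smtplib.SMTP(host, port, timeout=10)  # raises on a bad greeting
        code, _ = conn.noop()             # a round-trip proves the dialogue works
        conn.quit()
        return code == 250
    except (OSError, smtplib.SMTPException):
        return False
```

Either function returning False is the cue to tear down and re-establish the tunnel.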

How would you test an SSL connection?

I'm experimenting with OpenSSL in my network application and I want to test that the data sent is encrypted and can't be seen by an eavesdropper.
What tools can you use to check? Could this be done programmatically so it could be placed in a unit test?
openssl has s_client, a quick-and-dirty generic client that you can use to test a server connection. It'll show the server certificate and the negotiated encryption scheme.
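The same check can be done programmatically: as one possible approach (an assumption on my part, not something this answer used), Python's ssl module reports the negotiated protocol and cipher, which is convenient inside a test:

```python
# Sketch: connect with Python's ssl module and print what was negotiated
# (www.wikipedia.org is just an example host).
import socket
import ssl

ctx = ssl.create_default_context()   # verifies the certificate and hostname
with socket.create_connection(("www.wikipedia.org", 443), timeout=30) as sock:
    with ctx.wrap_socket(sock, server_hostname="www.wikipedia.org") as tls:
        print(tls.version())                 # e.g. TLSv1.3
        print(tls.cipher())                  # (name, protocol, secret bits)
        print(tls.getpeercert()["subject"])  # the verified server certificate
```

A unit test can assert on version() and cipher(), or simply that the handshake did not raise.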
I found this guide very helpful. These are some of the tools that he used:
$ openssl s_client -connect mail.prefetch.net:443 -state -nbio 2>&1 | grep "^SSL"
$ ssldump -a -A -H -i en0
$ ssldump -a -A -H -k rsa.key -i en0
$ ssldump -a -A -H -k rsa.key -i en0 host fred and port 443
Check out Wireshark (http://www.wireshark.org/) and tcpdump (http://en.wikipedia.org/wiki/Tcpdump).
Not sure about integrating these into unit tests; they will let you look, at a very low level, at what's going on on the network.
Perhaps for the unit test, determine what the stream looks like unencrypted and make sure the encrypted stream is not similar.
Franci Penov answered one of my questions, "Log Post Parameters sent to a website", suggesting I take a look at Fiddler: http://www.fiddler2.com/fiddler2/
I tried it and it works beautifully, if you're interested in viewing HTTP requests. :)
Yeah - Wireshark (http://www.wireshark.org/) is pretty cool (filters, reports, stats).
As for testing, you could do it as part of integration tests (Wireshark has some command-line options).
For a quick check you can use Wireshark (formerly known as Ethereal) to see if your data is transmitted in plain-text or not.
As mentioned before, http://www.wireshark.org/; you can also use Cain & Abel to redirect the traffic to a third machine and analyze the protocol from there.
