Check for errors in TLS implementation - security

I'm developing a small TLS client, which is used together with SMTP. The handshake works well until my client sends the encrypted Finished message; Client Hello, Server Hello, Certificate, Server Hello Done, Client Key Exchange and Change Cipher Spec are all working. As cipher suite I'm using TLS_RSA_WITH_AES_256_CBC_SHA256.
My problem is that I receive the "Bad Record MAC" alert from the server after sending the Finished message, but I have no idea where to start searching for the error. I've double-checked all my functions and read the RFC twice.
In my opinion, one of the following points could cause the "Bad Record MAC" alert:
The master-secret is wrong.
The client_write_MAC_key is wrong.
The client_write_encryption_key is wrong.
The P_hash function is wrong.
The PRF function is wrong.
The MAC function is wrong.
The AES encryption doesn't work correctly.
The hash of the Finished Message is wrong.
Does anyone have an idea what I can do to find the issue? Are there any tools to check whether the master secret, MAC and key calculations are correct? Or is it possible to decrypt the content with Wireshark? Note that I don't have the server's private key.

I've set up a local test server with OpenSSL as Steffen Ullrich suggested and done a detailed analysis of the debug output provided by Wireshark.
I was able to solve a problem with my padding, but I'm still getting a Bad Record MAC alert. As the debug output says, the MAC is incorrect (message ssl_decrypt_record: mac failed).
Some more information about the connection and the debug output:
The pre-master secret, master secret and all the keys (client write MAC key, server write MAC key, client write key, server write key, client write IV, server write IV) are correct. I've compared the values from my software with those in the debug output, so everything up to the Encrypted Finished Message is correct.
The server / Wireshark can decrypt the Encrypted Finished Message successfully.
The server / Wireshark detects the padding correctly.
The server / Wireshark "skips" the IV and only shows the Finished Message and the MAC in the line Plaintext[64].
In my opinion only two things can cause this error:
The MAC is calculated incorrectly.
The hash (verify_data) of the handshake messages is wrong.
My Encrypted Finished Message has a full length of 80 bytes and the following structure:
struct
{
    // TLS record
    ContentType type;
    ProtocolVersion version;
    uint16 length;
    // TLS handshake and content (encrypted)
    uint8 IV[16];
    struct
    {
        HandshakeType msg_type;
        uint24 length;
        uint8 verify_data[12];
    } content;
    uint8 MAC[32];
    uint8 padding[15];
    uint8 paddingLength;
} FinishedMessage;
The content looks for example like the following:
00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f
0000 16 03 03 00 50 XX XX XX XX XX XX XX XX XX XX XX
0010 XX XX XX XX XX 14 00 00 0C YY YY YY YY YY YY YY
0020 YY YY YY YY YY ZZ ZZ ZZ ZZ ZZ ZZ ZZ ZZ ZZ ZZ ZZ
0030 ZZ ZZ ZZ ZZ ZZ ZZ ZZ ZZ ZZ ZZ ZZ ZZ ZZ ZZ ZZ ZZ
0040 ZZ ZZ ZZ ZZ ZZ 0F 0F 0F 0F 0F 0F 0F 0F 0F 0F 0F
0050 0F 0F 0F 0F 0F
Where XX represents the randomly generated IV, YY the verify_data and ZZ the MAC of the message, as described in the RFC.
The verify_data contains the first 12 bytes of the SHA-256 hash. The hash is computed over the messages Client Hello, Server Hello, Certificate (from the server), Server Hello Done and Client Key Exchange. For hashing, the full messages without the record header (first 5 bytes) are used.
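For reference, here is a minimal Python sketch of how RFC 5246 (sections 5 and 7.4.9) defines verify_data, where the hash of the handshake messages is run through the PRF and the result is truncated to 12 bytes (illustrative only, not my actual code):

import hashlib
import hmac

def p_sha256(secret: bytes, seed: bytes, size: int) -> bytes:
    """P_SHA256 data expansion function (RFC 5246, section 5)."""
    out = b""
    a = seed                                               # A(0) = seed
    while len(out) < size:
        a = hmac.new(secret, a, hashlib.sha256).digest()   # A(i) = HMAC(secret, A(i-1))
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()
    return out[:size]

def prf(secret: bytes, label: bytes, seed: bytes, size: int) -> bytes:
    """TLS 1.2 PRF with SHA-256 (RFC 5246, section 5)."""
    return p_sha256(secret, label + seed, size)

def client_verify_data(master_secret: bytes, handshake_messages: bytes) -> bytes:
    """verify_data for the client Finished message (RFC 5246, section 7.4.9).

    handshake_messages is the concatenation of all handshake messages
    exchanged so far, each without its 5-byte record header, in order.
    """
    handshake_hash = hashlib.sha256(handshake_messages).digest()
    return prf(master_secret, b"client finished", handshake_hash, 12)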
The MAC is computed with the client write MAC key over the message. As sequence number, the number 0 is used. The fragment used for calculating the MAC only contains the msg_type, length and verify_data (see the content structure above).
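For reference, a minimal Python sketch of the record MAC construction as I read RFC 5246, section 6.2.3.1: the 8-byte sequence number and a pseudo-header (type, version, plaintext length) are prepended to the plaintext fragment before the HMAC (again illustrative only, not my actual code):

import hashlib
import hmac
import struct

def record_mac(mac_key: bytes, seq_num: int, content_type: int,
               version: bytes, fragment: bytes) -> bytes:
    """HMAC-SHA256 over seq_num || type || version || length || fragment
    (RFC 5246, section 6.2.3.1).

    For the client Finished message: seq_num = 0, content_type = 0x16
    (handshake), version is the 2-byte record-layer version (03 03 for
    TLS 1.2) and fragment is the 16-byte plaintext handshake message
    (msg_type || length || verify_data). The length in the MAC input is
    the plaintext fragment length, not the encrypted record length.
    """
    header = (struct.pack(">Q", seq_num) +        # 8-byte sequence number
              bytes([content_type]) + version +   # record type and version
              struct.pack(">H", len(fragment)))   # plaintext fragment length
    return hmac.new(mac_key, header + fragment, hashlib.sha256).digest()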
Does anyone have an idea how I can find out what is wrong with my MAC?

Related

Same Azure Function, same input, different tenant, different output

Stumped as to where to go next with this issue.
My Node.js Azure Function takes in a file represented as base64. It does some processing on the file and returns the processed file in the response:
module.exports = async function (context, req) {
    const buf = Buffer.from(req.body.file, 'base64')
    // do processing on buf, producing filebuf
    const file = filebuf.toString('base64')
    context.res.status(200).json({ file })
}
In my personal Azure tenant this works fine. In my client's Azure tenant, the file is corrupted. This is with the exact same file as input.
Not sure if it's relevant, but the first few bytes of the two tenants' processed files are as follows:
{
"file": "UEsDBAoAAAAAAGh+r1SoFh7uRAwAAEQMAAA..."
}
{
"file": "UEsDBAoAAAAAANh8r1SoFh7uRAwAAEQMAAA..."
}
For some reason, a character around the 14th position is different and I can't think why that would be. Both function apps are running on Linux and the runtime versions of the apps are the same.
Thanks
EDIT:
Following the first answer, and with my new understanding that the differing byte is to be expected, I am still no closer to understanding why one file is corrupt and the other is not.
I now notice that the content length of the body is different:
Non-corrupt:
"headers":{"Transfer-Encoding":"chunked","Vary":"Accept-Encoding","Strict-Transport-Security":"max-age=31536000; includeSubDomains","x-ms-apihub-cached-response":"true","x-ms-apihub-obo":"true","Cache-Control":"private","Date":"Mon, 16 May 2022 08:25:38 GMT","Content-Type":"application/json; charset=utf-8","Content-Length":"2926356"}
Corrupt:
"headers":{"Transfer-Encoding":"chunked","Request-Context":"appId=cid-v1:XXX","x-ms-apihub-cached-response":"true","x-ms-apihub-obo":"true","Date":"Mon, 16 May 2022 08:24:48 GMT","Content-Type":"application/json; charset=utf-8","Content-Length":"2926368"}
Where could these extra bytes have come from? I'm wondering if it might be down to different versions of Node, but I would be surprised.
The 2 bytes which are different are part of the ZIP file header and describe the modification time.
Let's take a look at the first partial base64 string
base64
UEsDBAoAAAAAAGh+r1SoFh7uRAwAAEQMAAA
hex
50 4b 03 04 0a 00 00 00 00 00 68 7e af 54 a8 16 1e ee 44 0c 00 00 44 0c 00 00
The first 4 bytes 50 4b 03 04 are the ZIP file signature, 0a 00 the version, 00 00 the flags, 00 00 the compression method and finally 68 7e the file modification time. This field is stored little-endian in MS-DOS time format and translates to 15:51:16. Needless to say, the different time values in there do not indicate a corrupted file.
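A small sketch decoding that field (the two bytes are read little-endian; MS-DOS time packs hour, minute and seconds/2 into 5, 6 and 5 bits):

def dos_time(lo: int, hi: int) -> str:
    # MS-DOS time: bits 15-11 hour, bits 10-5 minute, bits 4-0 seconds/2
    value = lo | (hi << 8)            # the field is little-endian
    hour = value >> 11
    minute = (value >> 5) & 0x3F
    second = (value & 0x1F) * 2
    return f"{hour:02d}:{minute:02d}:{second:02d}"

print(dos_time(0x68, 0x7e))  # first file:  15:51:16
print(dos_time(0xd8, 0x7c))  # second file: 15:38:48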
More information on the ZIP format: https://users.cs.jmu.edu/buchhofp/forensics/formats/pkzip.html

How does jwt implement RSA256 signature verification in nodejs

I was trying to understand the internal workings of how JWT performs RS256 signature verification. The signature algorithm works in the following basic steps:
Hash the original data
Encrypt the hash with RSA private key
And verification follows these steps:
Decrypt using the RSA public key
Match the hash obtained from decryption against the SHA-256 hash of the original message.
While trying to test this on the following JWT
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWUsImlhdCI6MTUxNjIzOTAyMn0.POstGetfAytaZS82wHcjoTyoqhMyxXiWdR7Nn7A29DNSl0EiXLdwJ6xC6AfgZWF1bOsS_TuYI3OG85AmiExREkrS6tDfTQ2B3WXlrr-wp5AokiRbz3_oB4OxG-W9KcEEbDRcZc0nH3L7LzYptiy1PtAylQGxHTWZXtGz4ht0bAecBgmpdgXMguEIcoqPJ1n3pIWk_dUZegpqx0Lka21H6XxUTxiy8OcaarA8zdnPUnV6AmNP3ecFawIFYdvJB_cm-GvpCSbr8G8y_Mllj8f4x9nBH8pQux89_6gUY618iYv7tuPWBFfEbLxtF2pZS6YC1aSfLQxeNe8djT9YjpvRZA
I found that the hash obtained from the signature contains some extra characters.
E.g. the SHA-256 of the original message for the above JWT, hex encoded, is
8041fb8cba9e4f8cc1483790b05262841f27fdcb211bc039ddf8864374db5f53
but the hash obtained from the signature of the above JWT after decryption is
3031300d0609608648016503040201050004208041fb8cba9e4f8cc1483790b05262841f27fdcb211bc039ddf8864374db5f53
which has the extra characters 3031300d060960864801650304020105000420 in front of the hash.
What do these characters represent and shouldn't the hash obtained from message and signature be identical?
rfc7518 3.3 defines the JWS algorithms RS256, RS384 and RS512:
This section defines the use of the RSASSA-PKCS1-v1_5 digital
signature algorithm as defined in Section 8.2 of RFC 3447 [RFC3447]
(commonly known as PKCS #1), using SHA-2 [SHS] hash functions.
and rfc3447 8.2 defines RSASSA-PKCS1-v1_5
RSASSA-PKCS1-v1_5 combines the RSASP1 and RSAVP1 primitives with the
EMSA-PKCS1-v1_5 encoding method. ....
where EMSA-PKCS1-v1_5 is defined in rfc3447 9.2 as:
1. Apply the hash function to the message M to produce a hash value
H:
H = Hash(M).
If the hash function outputs "message too long," output "message
too long" and stop.
2. Encode the algorithm ID for the hash function and the hash value
into an ASN.1 value of type DigestInfo (see Appendix A.2.4) with
the Distinguished Encoding Rules (DER), where the type DigestInfo
has the syntax
DigestInfo ::= SEQUENCE {
digestAlgorithm AlgorithmIdentifier,
digest OCTET STRING
}
The first field identifies the hash function and the second
contains the hash value. Let T be the DER encoding of the
DigestInfo value (see the notes below) and let tLen be the length
in octets of T.
3. If emLen < tLen + 11, output "intended encoded message length too
short" and stop.
4. Generate an octet string PS consisting of emLen - tLen - 3 octets
with hexadecimal value 0xff. The length of PS will be at least 8
octets.
5. Concatenate PS, the DER encoding T, and other padding to form the
encoded message EM as
EM = 0x00 || 0x01 || PS || 0x00 || T.
6. Output EM. [added: which is then modexp'ed with d by RSASP1 to
sign, or matched to the value modexp'ed with e by RSAVP1 to verify]
Notes.
1. For the six hash functions mentioned in Appendix B.1, the DER
encoding T of the DigestInfo value is equal to the following:
MD2: (0x)30 20 30 0c 06 08 2a 86 48 86 f7 0d 02 02 05 00 04
10 || H.
MD5: (0x)30 20 30 0c 06 08 2a 86 48 86 f7 0d 02 05 05 00 04
10 || H.
SHA-1: (0x)30 21 30 09 06 05 2b 0e 03 02 1a 05 00 04 14 || H.
SHA-256: (0x)30 31 30 0d 06 09 60 86 48 01 65 03 04 02 01 05 00
04 20 || H.
SHA-384: (0x)30 41 30 0d 06 09 60 86 48 01 65 03 04 02 02 05 00
04 30 || H.
SHA-512: (0x)30 51 30 0d 06 09 60 86 48 01 65 03 04 02 03 05 00
04 40 || H.
The prefix you discovered corresponds exactly to that specified in Note 1 for the encoding in step 2 of a DigestInfo structure for a SHA-256 hash, as expected.
Note rfc3447=PKCS1v2.1 has been superseded by rfc8017=PKCS1v2.2, but the only relevant change in this area is the addition of the SHA512/224 and SHA512/256 hashes, which JWS doesn't use.
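As a rough illustration, a minimal Python sketch of that check; it assumes you already have the RSA modulus n and public exponent e as integers, plus the raw signature bytes and the signing input (the header.payload part of the token) as bytes, none of which are taken from the token above:

import hashlib

# DER-encoded DigestInfo prefix for SHA-256 (the "extra characters" in the question)
SHA256_PREFIX = bytes.fromhex("3031300d060960864801650304020105000420")

def recover_em(signature: bytes, n: int, e: int) -> bytes:
    """RSAVP1: modexp the signature with the public key, re-encoded as octets."""
    s = int.from_bytes(signature, "big")
    m = pow(s, e, n)                       # m = s^e mod n
    k = (n.bit_length() + 7) // 8          # modulus length in octets
    return m.to_bytes(k, "big")            # EM = 00 01 FF..FF 00 || DigestInfo

def expected_digest_info(signing_input: bytes) -> bytes:
    """T = DER(DigestInfo) = fixed SHA-256 prefix || H(signing_input)."""
    return SHA256_PREFIX + hashlib.sha256(signing_input).digest()

def rs256_verify(signature: bytes, signing_input: bytes, n: int, e: int) -> bool:
    em = recover_em(signature, n, e)
    t = expected_digest_info(signing_input)
    ps = b"\xff" * (len(em) - len(t) - 3)  # padding string PS, at least 8 octets
    return em == b"\x00\x01" + ps + b"\x00" + t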
Describing signing and verifying as 'encrypting' and 'decrypting' the hash (really, the encoding aka padding of the hash) is considered obsolete. It was used originally, decades ago, and only for RSA, not other signature algorithms, because of the mathematical similarity between the modexp operations used for encrypting and decrypting and those used for signing and verifying. But it was found that thinking of these as the same or interchangeable resulted in system implementations that were vulnerable and broken. In particular, see rfc3447 5.2:
The main mathematical operation in each primitive is
exponentiation, as in the encryption and decryption primitives of
Section 5.1. RSASP1 and RSAVP1 are the same as RSADP and RSAEP
except for the names of their input and output arguments; they are
distinguished as they are intended for different purposes.
Node.js uses this obsolete terminology because it uses OpenSSL, which, via its predecessor SSLeay, dates back to the early 1990s when this mistake was still common.
However, that isn't really a programming/development issue and is more on topic for crypto.SX and security.SX; see some of the links I collected at https://security.stackexchange.com/questions/159282/can-openssl-decrypt-the-encrypted-signature-in-an-amazon-alexa-request#159289 .

Parity bit for MIFARE Classic authentication

I have learned that MIFARE Classic authentication has a weakness related to the parity bits. But I wonder: how does the reader send the parity bits to the tag?
For example, here is the trace of a failed authentication attempt:
reader:26
tag: 02 00
reader:93 20
tag: c1 08 41 6a e2
reader:93 70 c1 08 41 6a e2 e4 7c
tag: 18 37 cd
reader:60 00 f5 7b
tag: ab cd 19 49
reader:59 d5 92 0f 15 b9 d5 53
tag: a //error code: 0x5
I know that after the anti-collision, the tag sends NT (32-bit) as a challenge to the reader, and the reader responds with the challenge {NR} (32-bit) and {AR} (32-bit). But I don't know where the parity bits are in the above example. Which are the parity bits?
The example trace that you posted in your question either does not contain information about parity bits or all parity bits were valid (according to ISO/IEC 14443-3).
E.g. when the communication trace shows that the reader sends 60 00 f5 7b, the actual data sent over the RF interface would be (P is the parity bit):
b1 ... b8 P b1 ... b8 P b1 ... b8 P b1 ... b8 P
S 0000 0110 1 0000 0000 1 1010 1111 1 1101 1110 1 E
A parity bit is sent after each byte (i.e. after each octet) and makes sure that the 9 bits together contain an odd number of binary ones (odd parity). It therefore forms a 1-bit checksum over that byte. In your trace only the bytes (but not the parity bits in between them) are shown.
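A short sketch of that rule in Python, reproducing the bit stream above (ISO/IEC 14443 transmits each byte least-significant bit first):

def odd_parity_bit(byte: int) -> int:
    # the parity bit makes the total number of ones in the 9 bits odd
    return 1 - bin(byte).count("1") % 2

for b in bytes.fromhex("6000f57b"):
    lsb_first = format(b, "08b")[::-1]   # bits b1..b8, LSB first
    print(lsb_first, odd_parity_bit(b))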
The vulnerability regarding parity bits in MIFARE Classic is that parity bits are encrypted together with the actual data (cf. de Koning Gans, Hoepman,
and Garcia (2008): A Practical Attack on the MIFARE Classic, in CARDIS 2008, LNCS 5189, pp. 267-282, Springer).
Consequently, when you look at the communication trace without considering encryption, there may be parity errors according to the ISO/IEC 14443-3 parity calculation rule since the encrypted parity bit might not match the parity bit for the raw data stream. Tools like the Proxmark III would indicate such observed parity errors as exclamation marks ("!") after the corresponding bytes in the communication trace.

Multi-records Reading APDU command structure?

I have a card that supports the T=0 protocol, with an applet installed on it. The host sends a "multi-record reading" command to get record data. The records to read are specified by record identifiers in the data field of this command. These are the steps that I did:
Select DF
Send a command to read sequence of records
00 B2 00 06 16 73 0A 51 02 40 01 54 04 00 10 00 04 73 08 51 02 40 02 54 02 00 01 00
The meaning of the command is as below:
INS = 'B2': read record(s)
P1 = '00': references the current record (ISO 7814-4, 7.3.3, table 48)
P2 = '07' = '00000 110' :
'0000' indicate current short EF id (ISO 7814-4, 7.3.2, table 47)
'111' mean read all records from the last up to P1 (ISO 7814-4, 7.3.3, table 49)
Le = 16 : data length
The data field follows BER-TLV encoding, for example:
73 0A 51 02 40 01 54 04 00 10 00 04
Tag '73' indicates that the sequence of bytes above is a constructed (hierarchical) data object in the data field (length = '0A')
Tag '51' references a 2-byte EF identifier = '40 01'
Tag '54' references one or more record identifiers, in this case '00 10' and '00 04'
Le = '00'
This is the expected response from the card:
53 |length of data| record data| 53| length of data| record data|......
I tested this command with the card, and the card returns an 'Unknown Error' message.
Could you tell me what is wrong with the command? Have I misunderstood something?
Thanks.
This cannot be answered without knowing the actual implementation. 6F00 - the status word indicating an unknown error - should only be returned when the implementation has an internal error. For Java Card implementations, for instance, 6F00 is returned for uncaught exceptions in the process method handling the APDUs.
But just like the rest of ISO/IEC 7816-4, nothing is set in stone. It's not even defined when a specific error should be returned, so even the above is not guaranteed. ISO/IEC 7816-4 is thoroughly useless in that regard.
Thank you for your answer. My problem is solved.
Actually, the returned SW = 61 XY is not an error message. According to ISO 7816-3 it means:
Process completed normally (SW2 encodes Nx, i.e., the number of extra data bytes still available). In cases 1 and 3, the card should not use such a value. In cases 2 and 4, for transferring response data bytes, the card shall be ready to receive a GET RESPONSE command with P3 set to the minimum of Nx and Ne.
So I just need to send a GET RESPONSE command to get the response data:
00 C0 00 00 XY
where XY is the number of extra data bytes still available.
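As a rough illustration (assuming the pyscard library and a single attached PC/SC reader; the APDUs are the ones from the question), the flow looks like this: send the READ RECORD(S) command and, if the card answers 61 XY under T=0, fetch the data with GET RESPONSE.

from smartcard.System import readers

# connect to the first available PC/SC reader (assumption: one reader present)
connection = readers()[0].createConnection()
connection.connect()

# the multi-record READ RECORD(S) command from the question
read_records = [0x00, 0xB2, 0x00, 0x06, 0x16,
                0x73, 0x0A, 0x51, 0x02, 0x40, 0x01, 0x54, 0x04, 0x00, 0x10, 0x00, 0x04,
                0x73, 0x08, 0x51, 0x02, 0x40, 0x02, 0x54, 0x02, 0x00, 0x01,
                0x00]
data, sw1, sw2 = connection.transmit(read_records)

if sw1 == 0x61:  # process completed normally, sw2 extra data bytes are available
    data, sw1, sw2 = connection.transmit([0x00, 0xC0, 0x00, 0x00, sw2])  # GET RESPONSE

print(data, hex(sw1), hex(sw2))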

How to send a string to server using s_client

How to use s_client of openssl to send a short string to the server?
I have read the s_client manual but didn't find any usable flags.
Or are there any other ways to achieve this?
Does anyone know how to use s_client of openssl to send a short string to the server?
You can echo it in. Below, I used a GET with HTTP/1.0 and Twitter rudely refused my request:
HTTP/1.0 400 Bad Request
Content-Length: 0
The -ign_eof keeps the connection open to read the response.
Twitter uses VeriSign as the CA. You can fetch VeriSign Class 3 Primary CA - G5 from here, and then use it as an argument to -CAfile to ensure the chain verifies.
Here are the OpenSSL docs on s_client(1).
$ echo -e "GET / HTTP/1.0\r\n" | openssl s_client -connect twitter.com:443 -CAfile PCA-3G5.pem -ign_eof
CONNECTED(00000003)
depth=2 C = US, O = "VeriSign, Inc.", OU = VeriSign Trust Network, OU = "(c) 2006 VeriSign, Inc. - For authorized use only", CN = VeriSign Class 3 Public Primary Certification Authority - G5
verify return:1
depth=1 C = US, O = "VeriSign, Inc.", OU = VeriSign Trust Network, OU = Terms of use at https://www.verisign.com/rpa (c)06, CN = VeriSign Class 3 Extended Validation SSL CA
verify return:1
depth=0 1.3.6.1.4.1.311.60.2.1.3 = US, 1.3.6.1.4.1.311.60.2.1.2 = Delaware, businessCategory = Private Organization, serialNumber = 4337446, C = US, postalCode = 94103-1307, ST = California, L = San Francisco, street = 1355 Market St, O = "Twitter, Inc.", OU = Twitter Security, CN = twitter.com
verify return:1
---
Certificate chain
0 s:/1.3.6.1.4.1.311.60.2.1.3=US/1.3.6.1.4.1.311.60.2.1.2=Delaware/businessCategory=Private Organization/serialNumber=4337446/C=US/postalCode=94103-1307/ST=California/L=San Francisco/street=1355 Market St/O=Twitter, Inc./OU=Twitter Security/CN=twitter.com
i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL CA
1 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL CA
i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIGCjCCBPKgAwIBAgIQNC7E3U4iWJiTy5wznxxGDDANBgkqhkiG9w0BAQsFADCB
ujELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQL
ExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTswOQYDVQQLEzJUZXJtcyBvZiB1c2Ug
YXQgaHR0cHM6Ly93d3cudmVyaXNpZ24uY29tL3JwYSAoYykwNjE0MDIGA1UEAxMr
VmVyaVNpZ24gQ2xhc3MgMyBFeHRlbmRlZCBWYWxpZGF0aW9uIFNTTCBDQTAeFw0x
NDA0MDgwMDAwMDBaFw0xNjA1MDkyMzU5NTlaMIIBEjETMBEGCysGAQQBgjc8AgED
EwJVUzEZMBcGCysGAQQBgjc8AgECEwhEZWxhd2FyZTEdMBsGA1UEDxMUUHJpdmF0
ZSBPcmdhbml6YXRpb24xEDAOBgNVBAUTBzQzMzc0NDYxCzAJBgNVBAYTAlVTMRMw
EQYDVQQRFAo5NDEwMy0xMzA3MRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQH
FA1TYW4gRnJhbmNpc2NvMRcwFQYDVQQJFA4xMzU1IE1hcmtldCBTdDEWMBQGA1UE
ChQNVHdpdHRlciwgSW5jLjEZMBcGA1UECxQQVHdpdHRlciBTZWN1cml0eTEUMBIG
A1UEAxQLdHdpdHRlci5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
AQDGh6KKnZPGMiX5lC8Me9pev8CWscg5cIifrEV5WVW2v8sCoFlBDdnnIw26tqku
c/uL2aJoB8LAxllik0eNWRq9Q4YDRNuW05YIRQMEgRmAZvIKBpcLCcsB7YVLY7Jp
FBWGOEiUmL2c6GTc2NQWa723Jqwc6VgXZjDiL+wpHKUYc5RZz4IO3SdJ/Xlq38te
t5ZffCp/LBhhYNNUMStjEGdQwZCcnWF1DuaZ5TgxzdSGBSdFhssh5aCQO65TjQVn
cx4g8LNhXHNO7UxNIbEtuPmc0J8amPRHA0euHrgp2wBRZ0tL8Xphm0be3uAoA3DE
dO/d4wTMBNn5pN13Z9kYhzW/AgMBAAGjggGvMIIBqzAnBgNVHREEIDAeggt0d2l0
dGVyLmNvbYIPd3d3LnR3aXR0ZXIuY29tMAkGA1UdEwQCMAAwDgYDVR0PAQH/BAQD
AgWgMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjBEBgNVHSAEPTA7MDkG
C2CGSAGG+EUBBxcGMCowKAYIKwYBBQUHAgEWHGh0dHBzOi8vd3d3LnZlcmlzaWdu
LmNvbS9jcHMwHQYDVR0OBBYEFBdQCycEZZ1SEbOZAgDknEo1GgKeMB8GA1UdIwQY
MBaAFPyKULqeuSVae1WFT5UAY4/pWGtDMEIGA1UdHwQ7MDkwN6A1oDOGMWh0dHA6
Ly9FVlNlY3VyZS1jcmwudmVyaXNpZ24uY29tL0VWU2VjdXJlMjAwNi5jcmwwfAYI
KwYBBQUHAQEEcDBuMC0GCCsGAQUFBzABhiFodHRwOi8vRVZTZWN1cmUtb2NzcC52
ZXJpc2lnbi5jb20wPQYIKwYBBQUHMAKGMWh0dHA6Ly9FVlNlY3VyZS1haWEudmVy
aXNpZ24uY29tL0VWU2VjdXJlMjAwNi5jZXIwDQYJKoZIhvcNAQELBQADggEBAIfv
GC219t5efjKTRhtwn2lEtWaQpsYlSyeN8KxLnVJlHRb5z1j8oJER6JmjeqelTvZ1
RlxZ3A8cQdrntpeh9kFkU3wMK6TjaMKYDrIaFzV1M4FXsqBbJ6vAb2GX1lDu7oxS
/FrNXH9ebFclDDlXJ3FWGbABxf+xpeowr9y+Md5xgQ0Lpm82NZFei9zuJayJYh4B
ZbIJ4D9XBa73ZqIBvGXixeahHIvBA9Q0T92gQDJoMXuQ3RkVt63lL4EoJaEQNr1f
JDn2Wb0MFtKokWF1zb86glCCb7nvc/vbotiWzZt+a49XSByZ1F17I49F/N5cU9ZU
DQzsM2ToLAU2fwA6h/U=
-----END CERTIFICATE-----
subject=/1.3.6.1.4.1.311.60.2.1.3=US/1.3.6.1.4.1.311.60.2.1.2=Delaware/businessCategory=Private Organization/serialNumber=4337446/C=US/postalCode=94103-1307/ST=California/L=San Francisco/street=1355 Market St/O=Twitter, Inc./OU=Twitter Security/CN=twitter.com
issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL CA
---
No client certificate CA names sent
---
SSL handshake has read 3724 bytes and written 446 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES128-GCM-SHA256
Session-ID: 53BE6F30E6C52AAFFC01EAD8D5938C78...
Session-ID-ctx:
Master-Key: 87810BE6303E8EB831EC63E243D4C6E7...
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
TLS session ticket lifetime hint: 129600 (seconds)
TLS session ticket:
0000 - 95 93 d8 f3 27 2f 4c 11-ab 14 ee 04 46 e3 a8 e5 ....'/L.....F...
0010 - 7f 35 16 07 6d 5e 80 7c-fa 1d cd 78 39 7e 82 0b .5..m^.|...x9~..
0020 - 1d ee d6 99 2d d2 03 db-ab b8 37 5d f5 a5 28 62 ....-.....7]..(b
0030 - 3b f6 c7 c3 dc 7c 77 de-0f 60 d8 4c 8c f6 8e 8b ;....|w..`.L....
0040 - c8 8e 65 68 96 ec 27 f1-26 5d 4c 25 49 fd c0 ca ..eh..'.&]L%I...
0050 - c5 86 00 19 f1 26 5a 3e-fd df ca 12 a9 f8 17 bb .....&Z>........
0060 - 77 b8 5b 1c 58 1a 6b 16-d1 16 e0 d9 e8 b2 bf 92 w.[.X.k.........
0070 - 44 07 60 17 a0 11 23 52-3a 14 d0 79 85 a3 ae 8d D.`...#R:..y....
0080 - 17 d1 b8 44 d7 c3 3e ab-67 4c 7a c0 d6 cd 7e fe ...D..>.gLz...~.
0090 - b7 95 56 69 8f 5f 3e ee-2a c1 f9 0e 46 75 a6 79 ..Vi._>.*...Fu.y
Start Time: 1398724229
Timeout : 300 (sec)
Verify return code: 0 (ok)
---
HTTP/1.0 400 Bad Request
Content-Length: 0
closed
echo "YOUR TEXT HERE" | openssl s_client -connect host:port
The openssl s_client application can be used to send data to the web server with some creative bash scripting.
connectToHost="localhost:8443" ; \
dataToPost='{login: "Hello-world", password:"secret"}'; \
dataLen=$(expr length "${dataToPost}" ) ; \
( printf "POST / HTTP/1.1\n" ; \
printf "Host: %s\n" "${connectToHost}"; \
printf "Content-Length: %d\n" "$((dataLen+1))" ; \
printf "Content-Type: application/x-www-form-urlencoded\r\n\r\n" ; \
printf "%s\n" "${dataToPost}" ; \
sleep 1.5 ) | openssl s_client -connect ${connectToHost}
The printf and sleep statements are grouped together with parentheses to keep the pipe open until s_client has had a chance to send the request and receive a response. This delays the end-of-file signal, so the -ign_eof option is not needed. The disadvantage of this approach is that the request will always take the specified amount of time, regardless of how quickly the server responds.
There are many ways to get the length of a string in bash. Using expr is more verbose than other methods, but it is easier to understand than ${#dataToPost}. The value of dataLen is incremented to include the newline character added a few lines later. This is just to make things look pretty.
The first line of the HTTP request header is the action, POST.
The Host directive is often required. Populate this accordingly from the s_client -connect parameter.
The end of HTTP header section is always denoted by \r\n\r\n.
One final note: I may have overused line continuation characters, but it is nice to have a single command when testing manually or running inside a Dockerfile.
