How to pass cert and key using h2load - security

I've been trying to run an h2load test against an endpoint.
When I execute a curl command it works well; here is an example:
curl --insecure -X POST https://xxx.xxx.xxx.xxx:18093/endpoint --cacert myCaCrtFile.crt --cert myCertFile.crt --key myKey.key -d @./data.json
but I haven't found a way to pass my cert, cacert and key to h2load. Any suggestions?

Related

How does urllib.request differ from curl or httpx in behaviour? Getting a 401 in a request to the Google Container Registry

I am currently working on some code to interact with images on the Google Container Registry. I have working code using both plain curl and httpx, but I am trying to build a package without third-party dependencies. The puzzle is a particular endpoint from which I get a successful response with curl and httpx but a 401 Unauthorized with urllib.request.
The bash script that demonstrates what I'm trying to achieve is the following. It retrieves an access token from the registry API, then uses that token to verify that the API indeed runs version 2 and tries to access a particular Docker image configuration. I'm afraid that in order to test this, you will need access to a private GCR image and a digest for one of the tags.
#!/usr/bin/env bash
set -eu

token=$(gcloud auth print-access-token)
image=...
digest=sha256:...

get_token() {
  curl -sSL \
    -G \
    --http1.1 \
    -H "Authorization: Bearer ${token}" \
    -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
    --data-urlencode "scope=repository:$1:pull" \
    --data-urlencode "service=gcr.io" \
    "https://gcr.io/v2/token" | jq -r '.token'
}

echo "---"
echo "Retrieving access token."
access_token=$(get_token ${image})

echo
echo "---"
echo "Testing version 2 capability with access token."
curl -sSL \
  --http1.1 \
  -o /dev/null \
  -w "%{http_code}" \
  -H "Authorization: Bearer ${access_token}" \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  https://gcr.io/v2/

echo
echo "---"
echo "Retrieving image configuration with access token."
curl -vL \
  --http1.1 \
  -o /dev/null \
  -w "%{http_code}" \
  -H "Authorization: Bearer ${access_token}" \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "https://gcr.io/v2/${image}/blobs/${digest}"
I additionally created two Jupyter notebooks demonstrating my solutions in httpx and bare urllib.request. The httpx one works perfectly while somehow urllib fails on the image configuration request. I'm running out of ideas trying to spot the difference. If you run the notebook yourself, you will see that the called URL contains a token as a query parameter (is this a security issue?). When I open that link I can actually successfully download the data myself. Maybe urllib still passes along the Authorization header with the Bearer token making that last call fail with 401 Unauthorized?
Any insights are greatly appreciated.
I did some investigation and I believe the difference is that the last call to "https://gcr.io/v2/${image}/blobs/${digest}" actually contains a redirect. Inspecting the curl and httpx calls showed me that both do not include the Authorization header in the second, redirected request, whereas in the way that I set up the urllib.request in the notebook, this header is always included. It's a bit odd that this leads to a 401 but now I know how to address it.
Edit: I can now confirm that building a urllib.request.Request instance and, unlike in the linked notebook, adding the Authorization header with the request's add_unredirected_header method makes everything work as expected.
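A minimal sketch of that fix (the token and blob URL here are placeholders, not the notebook's actual values): a header added via add_unredirected_header is sent on the initial request only, so urllib's redirect handling does not copy it onto the follow-up request after a 302.

```python
import urllib.request

access_token = "TOKEN"  # placeholder for the real gcloud access token
url = "https://gcr.io/v2/example/blobs/sha256:0000"  # placeholder image/digest

req = urllib.request.Request(url, headers={
    "Accept": "application/vnd.docker.distribution.manifest.v2+json",
})
# Headers added this way are sent on the first request only and are not
# carried over to the redirected request.
req.add_unredirected_header("Authorization", f"Bearer {access_token}")

# The header is set, but stored in unredirected_hdrs rather than headers:
print(req.get_header("Authorization"))           # Bearer TOKEN
print("Authorization" in req.unredirected_hdrs)  # True

# resp = urllib.request.urlopen(req)  # would perform the actual redirected call
```

This mirrors what curl and httpx do by default: strip the Authorization header when following a cross-host redirect.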

curl POST and GET response

I'm trying to write a simple one-liner that will take a .crt and pass it to the CRT checker at SSLShopper.com. I can POST the data, but all I get back is headers and the HTTP response. The form on their site seems simple enough, just an AJAX call that returns the result. This is what I have so far:
curl -L -i -X POST -k "https://www.sslshopper.com/assets/snippets/sslshopper/ajax/ajax_decode.php" --data-binary @test.crt
Is there any way to POST and GET at the same time?
It seems you need to send the data in form-URL-encoded format with the following parameters:
cert_text={CERT_CONTENT}
decode_type=certificate
You also need the X-Requested-With: XMLHttpRequest and Connection: keep-alive headers:
cert_content=$(cat test.crt)
curl 'https://www.sslshopper.com/assets/snippets/sslshopper/ajax/ajax_decode.php' \
-H 'X-Requested-With: XMLHttpRequest' \
-H 'Connection: keep-alive' \
--data-urlencode "cert_text=$cert_content" \
--data-urlencode "decode_type=certificate"
But for this task you don't need to call an endpoint at all: as noted at https://www.sslshopper.com/certificate-decoder.html, you can use openssl directly:
openssl x509 -in test.crt -noout -subject -enddate -startdate -issuer -serial

How to encrypt password for cURL command in shell script. -u option cannot be used

I am using a cURL command in a shell script. If I use curl with the -u login:password option, the login and password are visible to anyone who can read the script.
Is there a way to keep the password out of the script in clear text (or to encrypt and decrypt it)?
An example based on Base64 (note that Base64 is an encoding, not encryption: anyone who can read the header can decode the credentials):
curl -X GET -k -H 'Authorization: Basic dGVzdDpwYXNzd29yZA==' -i 'https://yoursite.com'
Base64 decoded: test:password
Base64 encoded: dGVzdDpwYXNzd29yZA==
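A minimal sketch of producing that header value yourself, using the example credentials above (printf rather than echo, because a trailing newline would corrupt the encoding):

```shell
# Encode user:password for the Authorization header.
creds=$(printf '%s' 'test:password' | base64)
echo "$creds"   # dGVzdDpwYXNzd29yZA==
# Then use it:
#   curl -H "Authorization: Basic $creds" -i 'https://yoursite.com'
```

If the goal is keeping credentials out of the script entirely, a stronger option is curl's -n/--netrc-file, which reads login and password from a separate file that can be locked down with file permissions.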

QualysGuard API - SYNTAX issue - parameter rsa_private_key

I need to know the RSA key format to be sent to Qualys for updating an authentication record via the Qualys API.
The technique below gives me a "parameter rsa_private_key has invalid value" error:
$ test2="$(<~/.ssh/id_rsa)"
$ curl -H "X-Requested-With: Curl" -u "cenga_vg:ZZZZ" -X "POST" -d "action=update&ids=YYYY&rsa_private_key=$test2" "https://qualysapi.qualys.com/api/2.0/fo/auth/unix/" -D headers
Error:
parameter rsa_private_key has invalid value: improper RSA private key format
The fix is to URL-encode the key: the newlines in $test2 otherwise break the form body. Note that --data-urlencode encodes everything after the first =, so the other parameters have to be passed separately:
curl -H "X-Requested-With: Curl" -u "cenga_vg:ZZZZ" -X "POST" -d "action=update" -d "ids=YYYY" --data-urlencode "rsa_private_key=$test2" "https://qualysapi.qualys.com/api/2.0/fo/auth/unix/" -D headers

LDAP search user based on certificate in Linux command line

I want to search for a user using ldapsearch, but the hosting provider gave me a certificate from their CA, which I added to my ldap.conf.
Before executing the ldapsearch command I run openssl as follows:
openssl s_client -connect hostname:portno -CAfile /certificate.pem
After connecting via openssl, I execute the following command in another terminal:
ldapsearch -h hostname -p portno -D "uid=mailid@domain.con,dc=global,dc=example,dc=net"
Now I want to know: is there any way to use the certificate while executing the ldapsearch command?
This should be doable by performing:
env LDAPTLS_CACERT=/certificate.pem ldapsearch -h hostname -p portno -D "uid=mailid@domain.con,dc=global,dc=example,dc=net"
although I'd use:
env LDAPTLS_CACERT=/certificate.pem ldapsearch -H ldaps://hostname:portno/ -D "uid=mailid@domain.con,dc=global,dc=example,dc=net"
to ensure that it uses ldaps rather than relying on heuristics.
If you're still getting errors, you can add -ZZ, which will give better error messages.
An obvious gotcha is using an expired cert; the second most obvious is not using the same name in the request as in the certificate. You can read the server cert using openssl s_client -connect hostname:portno; there will be a line reading something like:
subject=/C=IE/CN=hostname.domain.local
You have to ensure that the hostname in the ldapsearch request matches the hostname listed in the CN=... item. If it doesn't match, you won't be able to connect (this is plain certificate validation; if there are subject alternative names, you can try: openssl x509 -text -noout -in /certificate.pem | grep DNS).
A final caveat: Mac OS X does not respect the LDAPTLS_CACERT environment variable; you have to import the cert into the keychain (I don't know of a workaround for OS X in this case).