SSL Cert Not Trusted By All Browsers - Azure

We have a wildcard certificate from DigiCert installed on our AKS instance. It works fine in IE and Chrome, but Firefox does not trust the site. When I run the site through an SSL checker, it indicates that
The certificate is not trusted in all web browsers. You may need to install an Intermediate/chain certificate to link it to a trusted root certificate.
These are the instructions we originally followed to install the certs:
Install the SSL Certificate into Each Namespace
Export the certificate from the pfx file
You will need openssl for this. This is the best resource I could find for installing and using it in Windows 10.
openssl pkcs12 -in filename.pfx -clcerts -nokeys -out cert.txt
Open the .txt file and remove the header (i.e. keep everything from -----BEGIN CERTIFICATE----- through to the bottom)
Export the private key from the pfx file
openssl pkcs12 -in filename.pfx -nocerts -out key.txt
Open the .txt file and remove the header (i.e. keep everything from -----BEGIN ENCRYPTED PRIVATE KEY----- through to the bottom)
Remove the passphrase from the private key
openssl rsa -in key.txt -out server.txt
Create the secrets
Connect to the cluster through the Azure CLI, then run the command:
az aks get-credentials -g aks-rg -n clustername
to merge the cluster credentials into your kubectl config.
If you need to remove previously installed certs, you should run the following commands:
kubectl delete secret clustername-tls --namespace dev
kubectl delete secret clustername-tls --namespace test
kubectl delete secret clustername-tls --namespace uat
kubectl delete secret clustername-tls --namespace prod
To create the new certs:
kubectl create secret tls clustername-tls --key server.txt --cert cert.txt --namespace dev
kubectl create secret tls clustername-tls --key server.txt --cert cert.txt --namespace test
kubectl create secret tls clustername-tls --key server.txt --cert cert.txt --namespace uat
kubectl create secret tls clustername-tls --key server.txt --cert cert.txt --namespace prod
What did we miss to correctly install the intermediate cert?

Are you using this secret as the TLS secret in your ingress?
You have to implement an ingress with an ingress controller and reference your secret in the ingress TLS section.
You can follow this guide to set up the ingress-nginx controller with cert-manager.
ingress-nginx will work as the load balancer and expose the application to the internet, while cert-manager will manage the SSL/TLS certificates.
cert-manager automatically generates the SSL certificate and manages it.
Please follow this: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
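As for what was missed with the intermediate: the export above used -clcerts, which writes only the leaf certificate, so the secret behind the ingress carries no chain for Firefox to follow. A minimal sketch of one way to fix it, assuming the DigiCert intermediate is bundled in the same PFX and the ingress already references clustername-tls (repeat per namespace):
# Export leaf + intermediate(s) by dropping -clcerts, and an unencrypted key in one step
openssl pkcs12 -in filename.pfx -nokeys -out fullchain.pem
openssl pkcs12 -in filename.pfx -nocerts -nodes -out server.key
# Recreate the secret with the full chain
kubectl delete secret clustername-tls --namespace dev
kubectl create secret tls clustername-tls --key server.key --cert fullchain.pem --namespace dev
If your tooling complains about the "Bag Attributes" text above each PEM block, strip those lines so the files contain only the -----BEGIN/END----- blocks.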

Related

Error "ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1239:" while creating the tls secret using pfx cert

I am trying to create a Kubernetes secret using a PFX certificate that is stored in Key Vault.
I have downloaded the secrets and the certificates in order to create the secrets.
I have followed the below steps, but I am getting the below error:
#To get the password of the certificate
az keyvault secret show --name $secret_name --vault-name $keyvault_name -o tsv --query value
#download the certificate
az keyvault certificate download --file $cert_pfx --name $cert_name --vault-name $keyvault_name
#convert pfx to key
Error:
0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1239:
#140546015532944:error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error:tasn_dec.c:405:Type=PKCS12
Any idea? Thanks for the help :)
I tried to reproduce the same in my environment and got the expected results, as shown below.
I have a key vault with newly created secrets, and I set the policies to access the secrets:
az keyvault set-policy -n "keyvault-name" --spn "service-principal-id" --secret-permissions get
I have created the certificate with the name "policycert"
az keyvault certificate create --vault-name newcertkv --name policycert -p "$(az keyvault certificate get-default-policy -o json)"
Added the policies to access the certificate:
az keyvault set-policy -n newcertkv --spn "spn-id" --certificate-permissions get
Generated the private key and certificate:
openssl genrsa 2048 > pkeys123.key
openssl req -new -x509 -nodes -sha256 -days 365 -key pkeys123.key -out newcert123.cert
After entering the command, it will prompt for the name and other certificate-related details.
I packaged the key and certificate into a password-protected PFX using the command below:
openssl pkcs12 -export -out certificate.pfx -inkey pkeys123.key -in newcert123.cert
This converted the key and certificate into a PFX file. I then Base64-encoded the PFX:
$fileContentBytes = Get-Content 'newcert123.pfx' -AsByteStream
[System.Convert]::ToBase64String($fileContentBytes) | Out-File 'pfx-encoded-bytes.pem'
The secret needs to be set with the PKCS#12 content type:
az keyvault secret set --vault-name newcertkv --name newcertsecret --file pfx-encoded-bytes.pem --description "application/x-pkcs12"
I then downloaded the certificate and converted the PFX to PEM and the PFX to a key:
az keyvault certificate download --file newcert123.pem --name my-certificate --vault-name newcertkv
openssl pkcs12 -in newcert123.pfx -clcerts -nokeys -out newcert123.pem -password pass:0000
openssl pkcs12 -in newcert123.pfx -nocerts -out newcert123.key -password pass:XXXX
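For the ASN1 "wrong tag" error itself: openssl pkcs12 only parses binary PKCS#12, while az keyvault certificate download returns only the public portion as PEM/DER, and the secret form of the PFX comes back as Base64 text. A sketch of one way around this, assuming the certificate in Key Vault has its private key (names taken from the steps above):
# The full PFX (certificate + private key) is exposed as a secret with the same name
# as the certificate; downloading it as a secret with --encoding base64 yields a binary PFX
az keyvault secret download --vault-name newcertkv --name policycert --file certificate.pfx --encoding base64
# Now openssl pkcs12 can parse it (a PFX exported this way typically has an empty password)
openssl pkcs12 -in certificate.pfx -clcerts -nokeys -out newcert123.pem
openssl pkcs12 -in certificate.pfx -nocerts -nodes -out newcert123.key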

Include Letsencrypt Root certificate in Azure Application Gateway

I'm trying to follow the Azure tutorial on how to get API Management inside a VNet and accessible through an application gateway (WAF). I'm stuck trying to upload the root cert into the application gateway. It says "Data for certificate is invalid"; apparently Azure Application Gateway doesn't like Let's Encrypt certs.
My certs are:
mydomain.com.br
api.mydomain.com.br
developer.mydomain.com.br
management.mydomain.com.br
I have used acme.sh to generate all certs:
./acme.sh --issue -d mydomain.com.br --dns dns_gd --server letsencrypt
./acme.sh --issue -d api.mydomain.com.br --dns dns_gd --server letsencrypt
./acme.sh --issue -d developer.mydomain.com.br --dns dns_gd --server letsencrypt
./acme.sh --issue -d management.mydomain.com.br --dns dns_gd --server letsencrypt
Vnet, Subnets, Security Groups and Api Management are all created successfully, all good except for the part I need to create the application gateway:
$appgwName = "apim-app-gw"
$appgw = New-AzApplicationGateway -Name $appgwName -ResourceGroupName $resGroupName -Location $location `
-BackendAddressPools $apimGatewayBackendPool,$apimPortalBackendPool,$apimManagementBackendPool `
-BackendHttpSettingsCollection $apimPoolGatewaySetting, $apimPoolPortalSetting, $apimPoolManagementSetting `
-FrontendIpConfigurations $fipconfig01 -GatewayIpConfigurations $gipconfig -FrontendPorts $fp01 `
-HttpListeners $gatewayListener,$portalListener,$managementListener `
-RequestRoutingRules $gatewayRule,$portalRule,$managementRule `
-Sku $sku -WebApplicationFirewallConfig $config -SslCertificates $certGateway,$certPortal,$certManagement `
-TrustedRootCertificate $trustedRootCert -Probes $apimGatewayProbe,$apimPortalProbe,$apimManagementProbe
The last line is where I need to provide the path to my .cer file. I have tried adding mydomain.com.br.cer and fullchain.cer, with no luck. I also tried using openssl to create a Base64 file, again with no luck:
sudo openssl x509 -inform PEM -in /etc/letsencrypt/live/mydomain.com/fullchain.pem -outform DER -out trustedrootDER.cer
openssl x509 -inform der -in trustedrootDER.cer -out trustedroot.cer
I even created a VM running Windows to try this approach, no joy.
The solution from the architecture point of view is pretty simple, but the certs problem makes it troublesome.
Any direction here is much appreciated!
Thanks
Why do you want to add the Let's Encrypt root CA cert to your application gateway?
From my understanding, the root CA from Let's Encrypt is ISRG Root X1, and this one should already be trusted by clients (browsers). You only want to add the root CA if you have self-signed certificates.
Here is a workflow with storing the certs in Azure Key Vault: https://techblog.buzyka.de/2021/02/make-lets-encrypt-certificates-love.html
Another Workflow here describes adding certs with ACME challenges: https://intelequia.com/blog/post/1012/automating-azure-application-gateway-ssl-certificate-renewals-with-let-s-encrypt-and-azure-automation
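If you do end up needing -TrustedRootCertificate (Application Gateway v2 wants either a trusted root cert or the "well-known CA" option on the HTTPS backend setting), it expects a single Base64-encoded .cer containing only the root certificate, not the fullchain file. A rough sketch, assuming the backend chains up to ISRG Root X1; the resulting file is what you would feed to New-AzApplicationGatewayTrustedRootCertificate to build $trustedRootCert:
# Fetch just the Let's Encrypt root and sanity-check it before uploading
curl -sSL https://letsencrypt.org/certs/isrgrootx1.pem -o isrgrootx1.cer
openssl x509 -in isrgrootx1.cer -noout -subject    # should print CN = ISRG Root X1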

GitLab runner IP SANs issue during registration

I have a VirtualBox VM with a GitLab instance, and I'm trying to register a gitlab-runner on the same machine; during registration I'm getting an issue about IP SANs.
VM: https://bitnami.com/stack/gitlab/virtual-machine
Process
I think the certificate verification is successful (please correct me if I'm wrong).
Here is what else I have done:
added "subjectAltName=IP:192.168.8.6" to /etc/ssl/openssl.cnf
Generated cert and key in /etc/gitlab-runner
Copied these 2 to: /etc/gitlab/trusted-certs/
Applying the solution from the post below also doesn't help:
Gitlab-CI runner: ignore self-signed certificate
Any ideas how I can debug this further? Any help appreciated.
From this post
Step 1: Edit the SSL configuration on the GitLab server (not the runner)
sudo nano /etc/pki/tls/openssl.cnf
# find line
[ v3_ca ]
subjectAltName=IP:192.168.1.1 <---- Add this line. 192.168.1.1 is your GitLab server IP.
Step 2: Re-generate the self-signed certificate on the GitLab server (not the runner)
cd /etc/gitlab/ssl
sudo openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout /etc/gitlab/ssl/192.168.1.1.key -out /etc/gitlab/ssl/192.168.1.1.crt
sudo openssl dhparam -out /etc/gitlab/ssl/dhparam.pem 2048
sudo gitlab-ctl restart
Step 3: Copy the new CA to the GitLab CI runner in /etc/gitlab-runner/certs/
Step 4: Register your runner
gitlab-runner register --tls-ca-file="$CERTIFICATE"
This worked for me.
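A quick sanity check (using the paths from the steps above) to confirm the regenerated certificate actually picked up the IP SAN before copying it to the runner:
openssl x509 -in /etc/gitlab/ssl/192.168.1.1.crt -noout -text | grep -A1 "Subject Alternative Name"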
For those errors like:
x509: certificate is not valid for any names, but wanted to match gitlab.example.com
x509: certificate relies on legacy Common Name field, use SANs instead
...
I am running a GitLab server 15.7.1 Docker container on my laptop (following the official install guide with docker-compose), and I installed a gitlab-runner on the same laptop host too.
In my case, the self-signed certificate had to be re-requested manually with the following steps:
Entering the running gitlab container:
docker compose exec web bash
In the container, copy openssl.cnf to /etc/gitlab/ssl so that it can be edited from the host machine:
cp /opt/gitlab/embedded/ssl/openssl.cnf /etc/gitlab/ssl/
On the host, modify openssl.cnf to add a new line to the v3_ca section:
subjectAltName=DNS:gitlab.example.com
NOTE that a DNS name is needed here instead of an IP.
In the container, copy the file back:
cp /etc/gitlab/ssl/openssl.cnf /opt/gitlab/embedded/ssl/
In the container, recreate the x509 request and restart the GitLab services to sign the GitLab server certificate again:
cd /etc/gitlab/ssl
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout /etc/gitlab/ssl/gitlab.local.key -out /etc/gitlab/ssl/gitlab.local.crt
openssl dhparam -out /etc/gitlab/ssl/dhparam.pem 2048
gitlab-ctl restart
Now, gitlab-runner register should be ok.
gitlab-runner register --tls-ca-file="$CERTIFICATE"
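To double-check from the runner's side that the served certificate now carries the expected SAN (hostname as used above), something like this helps:
echo | openssl s_client -connect gitlab.example.com:443 -servername gitlab.example.com 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"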
Good luck to anyone who needs this.

Easiest way to create and upload a self-signed certificate to Azure App Service

I'm looking for some CLI commands or a script of some sort that I can execute to do the following in one step
create a self-signed certificate
upload it to my Azure App Service
create an app setting param with the certificate thumbprint as value
Has anyone done this before?
1. Create a self-signed certificate
If you want to create a self-signed certificate, you can use OpenSSL. For more details, please refer to here and here
a. Create the private key and the self-signed certificate.
openssl req -new -x509 \
-newkey rsa:2048 \
-sha256 \
-days 3650 \
-nodes \
-out example.crt \
-keyout example.key \
-subj "/C=SI/ST=Ljubljana/L=Ljubljana/O=Security/OU=IT Department/CN=www.example.com"
The fields specified in the -subj line are listed below:
C= - Country name. The two-letter ISO abbreviation.
ST= - State or Province name.
L= - Locality Name. The name of the city where you are located.
O= - The full name of your organization.
OU= - Organizational Unit.
CN= - The fully qualified domain name
b. Export the key and certificate as a PFX
openssl pkcs12 \
-inkey example.key \
-in example.crt \
-export -out example.pfx \
-password pass:<your password>
2. Upload it to your Azure App Service and save the certificate thumbprint in the App Service app settings
Regarding this, please refer to the following commands:
az login
# Upload the SSL certificate and get the thumbprint.
thumbprint=$(az webapp config ssl upload --certificate-file $pfxPath \
--certificate-password $pfxPassword --name $webappname --resource-group $resourceGroup \
--query thumbprint --output tsv)
# save thumbprint
az webapp config appsettings set --name $webappname --resource-group $resourceGroup \
--settings "Certificate_Thumbprint=$thumbprint"

HTTPS access to Azure ubuntu Virtual Machine

I have an Ubuntu VM running on Microsoft Azure.
Currently I can access it using HTTP, but not with HTTPS.
In the network interface's inbound port rules, port 443 is already allowed.
I already added a certificate to the VM by creating a key vault and a certificate, and preparing it for deployment following this documentation:
az keyvault update -n <keyvaultname> -g <resourcegroupname> --set properties.enabledForDeployment=true
then added the certificate following this answer.
In Azure CLI:
$secret=$(az keyvault secret list-versions \
--vault-name <keyvaultname> \
--name <certname> \
--query "[?attributes.enabled].id" --output tsv)
$vm_secret=$(az vm secret format --secret "$secret")
az vm update -n <vmname> -g <keyvaultname> --set osProfile.secrets="$vm_secret"
I got the following error:
Unable to build a model: Cannot deserialize as [VaultSecretGroup] an object of type <class 'str'>, DeserializationError: Cannot deserialize as [VaultSecretGroup] an object of type <class 'str'>
However, when I run az vm show -g <resourcegroupname> -n <vmname> after that, the osProfile secrets already contain the secret I added:
"secrets": [
{
"sourceVault": {
"id": "/subscriptions/<subsID>/resourceGroups/<resourcegroupName>/providers/Microsoft.KeyVault/vaults/sit-key-vault"
},
"vaultCertificates": [
{
"certificateStore": null,
"certificateUrl": "https://<keyvaultname>.vault.azure.net/secrets/<certname>/<certhash>"
}
]
}
],
Accessing it over HTTPS fails. I can access it over HTTP, but Chrome still shows the "Not secure" mark next to the address.
What did I miss?
I also checked the answer from a similar question, but could not find "Enable Direct Server Return" anywhere in the VM control panel page.
As far as I know, you can follow these steps to configure SSL for the Nginx server.
Add the SSL cert
$secret=$(az keyvault secret list-versions --vault-name "keyvault_name" --name "cert name" --query "[?attributes.enabled].id" --output tsv)
$vm_secret=$(az vm secret format --secrets "$secret")
az vm update -n "VM name" -g "resource group name" --set osProfile.secrets="$vm_secret"
Install Nginx
sudo apt-get update
sudo apt-get install nginx
Configure SSL Cert
#get cert name
find /var/lib/waagent/ -name "*.prv" | cut -c -57
#paste cert
mkdir /etc/nginx/ssl
cp "your cert name" /etc/nginx/ssl/mycert.cer
cp "your cert name" /etc/nginx/ssl/mycert.prv
#change nginx configuration file
sudo nano /etc/nginx/sites-available/default
PS: add the following content to the file
server {
listen 443 ssl;
ssl_certificate /etc/nginx/ssl/mycert.cer;
ssl_certificate_key /etc/nginx/ssl/mycert.prv;
}
service nginx restart
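After editing the config, a couple of quick checks (a sketch; paths as used above) confirm nginx accepted the certificate before testing in a browser:
sudo nginx -t                               # validates the config and the cert/key paths
sudo service nginx restart
curl -vkI https://<your-vm-ip>/             # -k skips verification in case the cert isn't publicly trusted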
