I am trying to get my Ghost blog (https://ghost.org/) running in an Azure Docker container.
The Ghost documentation describes how to configure the SSL connection for a MySQL database. In Azure I have a MySQL flexible server and I have downloaded the PEM certificate file. I have exported it to a text file, as described in the Ghost configuration documentation:
https://ghost.org/docs/config/#configuration-options
In the documentation they tell me to use this command to export the certificate to a single-line string:
(you can get the single line string with awk '{printf "%s\\n", $0}' DigiCertGlobalRootCA.pem)
My Ghost Docker configuration looks like this now:
database__connection__ssl__ca: -----BEGIN CERTIFICATE-----\nMIIDrzCCApegAwI ... etc ... d4=\n-----END CERTIFICATE-----\n
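For context, a minimal sketch of how that value might sit in a docker-compose environment block (the host, user, password, and database values are placeholders; only the ssl__ca line is from my actual config):
environment:
  database__client: mysql
  database__connection__host: <server-name>.mysql.database.azure.com
  database__connection__user: <user>
  database__connection__password: <password>
  database__connection__database: ghost
  database__connection__ssl__ca: "-----BEGIN CERTIFICATE-----\nMIIDrzCCApegAwI ... etc ... d4=\n-----END CERTIFICATE-----\n"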
But every time I try to start the container now, I get the following error:
Ghost server started in 3.906s
2023-02-19T11:09:07.296559486Z [2023-02-19 11:09:07] ERROR self signed certificate in certificate chain
2023-02-19T11:09:07.296638786Z self signed certificate in certificate chain
2023-02-19T11:09:07.296650786Z "Unknown database error"
Is there anyone else who has this problem?
Related
I have a SQL Server container on an Ubuntu server. I followed these instructions to enable SSL connections: https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-docker-container-security?view=sql-server-linux-ver15
My certificate is from GoDaddy and I have three files (certname.crt => principal CRT, gd_bundle-g2-g1.crt, and keyfile.key). In mssql.conf I reference the principal CRT and the key file.
For SQL Server I converted the CRT to PEM by changing the extension from CRT to PEM.
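For reference, a hedged sketch of the relevant mssql.conf section (the file paths are taken from the log line below; tlsprotocols and forceencryption are assumptions and may differ in your setup):
[network]
tlscert = /etc/ssl/certs/mssql.pem
tlskey = /etc/ssl/private/mssql.key
tlsprotocols = 1.2
forceencryption = 1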
I start the container and it starts normally and loads the certificate; I can see this in the logs:
The certificate [Certificate File:'/etc/ssl/certs/mssql.pem', Private Key File:'/etc/ssl/private/mssql.key'] was successfully loaded for encryption.
I can connect from SSMS normally, but when I try to connect from NodeJS with the mssql library from NPM, I get this error:
ConnectionError: Failed to connect to domainexample.com:port - unable to verify the first certificate
I have a .NET Framework console application. Inside this application, I'm fetching secrets and certificates from Key Vault using a tenantId, clientId, and clientSecret.
The application fetches secrets and certificates properly.
Now I have containerized the application using Docker. After running the image, I'm unable to fetch secrets and certificates. I'm getting the below error:
" Retry failed after 4 tries. Retry settings can be adjusted in ClientOptions.Retry. (No such host is known.) (No such host is known.) (No such
host is known.) (No such host is known.)"
To resolve the error, please try the following workarounds:
Check whether your container was set up behind an nginx reverse proxy.
If yes, then try removing the upstream section from the nginx reverse proxy and set proxy_pass to use the docker-compose service's hostname (see the sketch after this list).
After any change make sure to restart WSL and Docker.
Check whether DNS is resolving the host names successfully; if not, try adding the below to your docker-compose.yml file:
dns:
- 8.8.8.8
If the above doesn't work, stop WSL from auto-generating /etc/resolv.conf by adding the below to /etc/wsl.conf:
[network]
generateResolvConf = false
Then set the DNS server in /etc/resolv.conf:
nameserver 8.8.8.8
Try restarting WSL by running the below command as an Admin:
Restart-NetAdapter -Name "vEthernet (WSL)"
Try installing a Docker Desktop update as a workaround.
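As referenced in the first workaround, a minimal sketch of the nginx change, assuming a docker-compose service named app listening on port 80 (both the service name and the port are placeholders):
# instead of an upstream block with a hard-coded IP, point proxy_pass at the compose service name
server {
    listen 80;
    location / {
        proxy_pass http://app:80;
    }
}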
For more detail, please refer to the below links:
Getting "Name or service not known (login.microsoftonline.com:443)" regularly, but occasionally it succeeds? · Discussion #3102 · dotnet/dotnet-docker · GitHub
ssl - How to fetch Certificate from Azure Key vault to be used in docker image - Stack Overflow
I'm running the CIS kube-bench tool on the master node and trying to resolve this error
[FAIL] 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated).
I understand that I need to update the API server manifest YAML file with the --kubelet-certificate-authority flag pointing to the right CA file; however, I'm not sure which one is the right CA certificate for the kubelet.
These are my files in the PKI directory:
apiserver-etcd-client.crt
apiserver-etcd-client.key
apiserver-kubelet-client.crt
apiserver-kubelet-client.key
apiserver.crt
apiserver.key
ca.crt
ca.key
etcd
front-proxy-ca.crt
front-proxy-ca.key
front-proxy-client.crt
front-proxy-client.key
sa.key
sa.pub
There are 3 very similar discussions on the same topic. I won't walk you through all the steps because they are well covered in the documentation and in the related questions linked below; this is only a high-level overview.
How Do I Properly Set --kubelet-certificate-authority apiserver parameter?
Kubernetes kubelet-certificate-authority on premise with kubespray causes certificate validation error for master node
Your actions:
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and the kubelets.
These connections terminate at the kubelet's HTTPS endpoint. By default, the apiserver does not verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle attacks and unsafe to run over untrusted and/or public networks.
Enable Kubelet authentication and Kubelet authorization
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the --kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
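A hedged sketch of the relevant fragment of /etc/kubernetes/manifests/kube-apiserver.yaml; the CA path shown is the kubeadm default matching the PKI listing in the question (a kubespray cluster would typically use /etc/kubernetes/ssl/ca.crt instead):
spec:
  containers:
  - command:
    - kube-apiserver
    # ... existing flags stay as they are ...
    - --kubelet-certificate-authority=/etc/kubernetes/pki/ca.crt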
From #Matt answer
Use /etc/kubernetes/ssl/ca.crt to sign a new certificate for the kubelet with valid IP SANs.
Set --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.crt (the valid CA).
In /var/lib/kubelet/config.yaml (the kubelet config file), set tlsCertFile and tlsPrivateKeyFile to point to the newly created kubelet crt and key files (see the sketch below).
And from clarifications:
Yes, you have to generate certificates for the kubelets and sign them with the provided certificate authority located on the master at /etc/kubernetes/ssl/ca.crt.
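A hedged sketch of the /var/lib/kubelet/config.yaml entries from the last step above; the file paths are placeholders for wherever you put the newly signed kubelet certificate and key:
tlsCertFile: /var/lib/kubelet/pki/kubelet-server.crt
tlsPrivateKeyFile: /var/lib/kubelet/pki/kubelet-server.key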
By default in Kubernetes there are 3 different parent CAs (kubernetes-ca, etcd-ca, kubernetes-front-proxy-ca). You are looking for kubernetes-ca, because the kubelet uses kubernetes-ca; you can check the documentation. The default path of kubernetes-ca is /etc/kubernetes/pki/ca.crt, but you can also verify it via the kubelet configmap with the below command:
kubectl get configmap -n kube-system $(kubectl get configmaps -n kube-system | grep kubelet | awk '{print $1}') -o yaml | grep -i clientca
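On a kubeadm-provisioned cluster this typically prints something like clientCAFile: /etc/kubernetes/pki/ca.crt, i.e. the kubernetes-ca path mentioned above.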
I want to update my Docker stack automatically from my CI server, but I can't figure out how to configure credentials to be able to drive Docker from an external host.
I have enabled experimental mode on my server and it works fine locally with docker-machine.
My deploy script looks like this:
echo $DOCKER_CERT > cert.pem # which other file ?
OPTS=" --tlsverify --host $DOCKER_DEPLOY_HOST --tlscert cert.pem" # which other args ???
docker $OPTS pull $REPO_IMAGE
docker $OPTS service update multiverse-prod_api
Is there a way (or is one planned in a future version) to achieve this with just an SSH key?
Thanks !!
You need to configure the Docker server with a self-signed cert, and then configure the client with a client cert signed by the same CA. The steps to create the certificates and configure the server and client are described by Docker in their documentation.
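A minimal sketch of what the CI-side script could look like once those certificates exist, assuming the daemon listens on the default TLS port 2376 and that $DOCKER_CA, $DOCKER_CERT and $DOCKER_KEY are CI secrets holding the three PEM files from Docker's guide (ca.pem, cert.pem, key.pem):
mkdir -p ./docker-certs
echo "$DOCKER_CA"   > ./docker-certs/ca.pem     # CA that signed the server certificate
echo "$DOCKER_CERT" > ./docker-certs/cert.pem   # client certificate
echo "$DOCKER_KEY"  > ./docker-certs/key.pem    # client private key

export DOCKER_TLS_VERIFY=1
export DOCKER_HOST=tcp://$DOCKER_DEPLOY_HOST:2376
export DOCKER_CERT_PATH=$PWD/docker-certs

docker pull $REPO_IMAGE
docker service update --image $REPO_IMAGE multiverse-prod_api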
I am trying to get my hands dirty with Puppet. I booted up 2 VMs, both running Linux Mint 17; I intended one as the puppetmaster and one as the puppetclient. I am following this guide: https://help.ubuntu.com/12.04/serverguide/puppet.html
in /etc/hostname
in /etc/hosts
master:
127.0.0.1 localhost // no change
127.0.1.1 puppetmaster
192.168.75.141 puppetclient // this is the client's IP address, found with nm-tool
client:
127.0.0.1 localhost
127.0.1.1 puppetclient
192.168.75.142 puppetmaster // this is the master's ip address
On both the client and the master I created a file at /etc/puppet/manifests/site.pp:
package { 'apache2':
  ensure => installed,
}

service { 'apache2':
  ensure  => true,
  enable  => true,
  require => Package['apache2'],
}
On the master I created a file at /etc/puppet/manifests/nodes.pp:
node 'meercat02.example.com' {
include apache2
}
On the client I created the file /etc/default/puppet and put START=yes in it.
Here's where I think there's a problem: in the guide, the file should already exist, but in my case I had to create it.
So then I followed everything in the guide to sign the client certificate. I typed sudo puppetca --sign puppetclient in the puppetmaster's terminal. That didn't work, and I found a workaround in another post: https://serverfault.com/questions/457349/installed-puppetmaster-but-why-do-i-get-puppetca-command-not-found. So after reading that post I typed sudo puppet cert list --sign 'puppetclient'. Then it gives me this:
Notice: Signed certificate request for ca
Error: Could not find certificates request for list
After the first five pages of Google search results I ended up here asking for help. =) Can anyone help me resolve this issue? Thanks.
You cannot sign a certificate before there is a certificate request.
You have to establish the agent/master communication first.
Find out the certificate name of your master
puppet master --configprint certname
On the agent node, make sure that name resolves to the master's IP address (you currently used puppetmaster for this, which might suffice).
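For example, if the master's certname prints as puppetmaster.example.com (a hypothetical name), the agent's /etc/hosts entry could look like this, reusing the master IP from the question:
192.168.75.142 puppetmaster.example.com puppetmaster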
Send the initial request to the master
Do this on the agent node.
puppet agent --test --server=<name you just registered>
The agent generates a CSR, and prints a message that it could not receive a certificate.
Sign the certificate
On the master:
puppet cert list
Locate the CSR of your agent, then
puppet cert sign <agent>
The next puppet agent --test call will receive the certificate.
Try this:
puppet agent --test --server="<name you just registered>"
It works for me.