Terraform custom truststore

I have the following issue when trying to use Terraform.
I am using Terraform through an enterprise proxy, so I set the HTTPS_PROXY environment variable.
However, the enterprise proxy acts as a man-in-the-middle (scanning web pages for viruses, etc.) and presents a server certificate issued by our enterprise's own certificate authority.
It seems that Terraform cannot connect to the (HTTPS) registries because this root CA certificate is not trusted.
Is there a way to configure Terraform (under Windows) to use a custom CA root truststore?
Below is the error I get when Terraform tries to connect (at the init phase):
> terraform.exe init
Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
Error installing provider "aws": Get https://releases.hashicorp.com/terraform-provider-aws/: net/http: TLS handshake timeout.
Terraform analyses the configuration and state and automatically downloads
plugins for the providers used. However, when attempting to download this
plugin an unexpected error occured.
This may be caused if for some reason Terraform is unable to reach the
plugin repository. The repository may be unreachable if access is blocked
by a firewall.
If automatic installation is not possible or desirable in your environment,
you may alternatively manually install plugins by downloading a suitable
distribution package and placing the plugin's executable file in the
following directory:
terraform.d/plugins/windows_amd64

In the end, the truststore was not the cause.
The issue was that I had set the HTTPS_PROXY environment variable to an https:// URL rather than an http:// one.
Now it works fine with my custom root certificate in the Windows trust store, and even with NTLM authentication.
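For reference, a minimal sketch of the working setup in a Windows command prompt (the proxy host and port are hypothetical placeholders):
:: The scheme must be http://, even though the proxy carries HTTPS traffic
set HTTP_PROXY=http://proxy.corp.example.com:8080
set HTTPS_PROXY=http://proxy.corp.example.com:8080
terraform.exe init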

Related

Fetch secrets and certificates from AzureKeyVault inside Docker container

I have a .net framework console application. Inside this application, I'm fetching secrets and certificates from keyvault using tenantId, client Id and Client Secret.
Application is fetching secrets and certificates properly.
Now I have containerized the application using Docker. After running the image I'm unable to fetch secrets and certificates. I'm getting below error:
" Retry failed after 4 tries. Retry settings can be adjusted in ClientOptions.Retry. (No such host is known.) (No such host is known.) (No such
host is known.) (No such host is known.)"
To resolve the error, please try the following workarounds:
Check whether your container was set up behind an nginx reverse proxy.
If yes, then try removing the upstream section from the nginx reverse proxy and set proxy_pass to use the docker-compose service's hostname.
After any change, make sure to restart WSL and Docker.
Check whether DNS is resolving host names successfully; if not, try adding the below to your docker-compose.yml file (see the compose sketch after this list):
dns:
- 8.8.8.8
If the above doesn't work, stop WSL from auto-generating /etc/resolv.conf by adding the following to /etc/wsl.conf:
[network]
generateResolvConf = false
then replace the auto-generated values in /etc/resolv.conf with a DNS entry like:
nameserver 8.8.8.8
Try restarting the WSL network adapter by running the below command in PowerShell as an Admin:
Restart-NetAdapter -Name "vEthernet (WSL)"
Try installing a Docker Desktop update as a workaround.
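As a rough sketch, the DNS override from the list above would sit in a docker-compose.yml like this (the service name and image are hypothetical):
version: "3.8"
services:
  app:
    image: myapp:latest
    dns:
      - 8.8.8.8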
For more detail, please refer to the links below:
Getting "Name or service not known (login.microsoftonline.com:443)" regularly, but occasionally it succeeds? · Discussion #3102 · dotnet/dotnet-docker · GitHub
ssl - How to fetch Certificate from Azure Key vault to be used in docker image - Stack Overflow

jenkins.plugins.publish_over.BapPublisherException: Failed to connect and initialize SSH connection Message [Auth fail]

I am learning to use Jenkins to deploy a .NET 5.0 application on an AWS EC2 server. This is the first time I am using a Linux server and Jenkins for .NET (I'm a lifelong Windows guy), and I am facing an error while trying to publish my artifacts over SSH to the web server.
My setup:
The Jenkins server is an AWS EC2 Linux AMI server.
The web server is also an AWS EC2 Linux AMI server.
My Jenkins is correctly installed and working. I am able to build and run unit test cases without any issues.
For deploy, I am using the 'Publish Over SSH' plugin, and I have followed all the steps to configure it as described here: https://plugins.jenkins.io/publish-over-ssh/.
However, when I try 'Test Configuration', I get the below error:
Failed to connect or change directory
jenkins.plugins.publish_over.BapPublisherException: Failed to connect and initialize SSH connection. Message: [Failed to connect session for config [WebServer]. Message [Auth fail]]
I did a ping test from the Jenkins server to the web server, and it was a success.
I'm using the .pem key in the 'Key' section of 'Publish over SSH'. This key is the same key I use to SSH into the web server.
The below link suggests many different solutions, but none is working in my case.
Jenkins Publish over ssh authentification failed with private key
I was looking at the below link which describes the same problem,
Jenkins publish over SSH failed to change to remote directory
However, in my case I have kept 'Remote Directory' empty. I don't know whether I have to specify any directory here. Anyway, I tried creating a new directory under the home directory of user ec2-user as '/home/ec2-user/publish' and then used this path as the Remote Directory, but it still didn't work.
Screenshot of my settings in Jenkins:
I would appreciate if anyone can point me to the right direction or highlight any mistake I'm doing with my configuration.
In my case the following steps solved the problem. The solution is based on Ubuntu 22.04, where OpenSSH disables the ssh-rsa signature algorithm by default, which is what older RSA .pem keys rely on.
Add two lines to /etc/ssh/sshd_config:
PubkeyAuthentication yes
PubkeyAcceptedKeyTypes +ssh-rsa
Then restart the sshd service:
sudo service sshd restart
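It can also help to verify that the same key works outside Jenkins; a quick manual check (the key path and host below are placeholders):
ssh -i /path/to/webserver-key.pem ec2-user@WEB_SERVER_IP
If this works but 'Test Configuration' still fails, the problem is in the plugin configuration rather than the key or the server.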
You might also consider the following:
a. From the screenshot you've provided, it seems that you have checked the 'Use password authentication, or use a different key' option, which requires you to add your key and password (inputs from these fields will be used when connecting to your server via SSH). If you use the same SSH key and passphrase/password on all of your servers, you can untick that box and just use the config you have specified above.
b. You might also check whether port 22 of your web server allows inbound traffic from the security group where your Jenkins server/EC2 instance is running. See reference here.
c. Also, make sure that the remote directory you have specified exists, otherwise the connection may fail.
Here's the sample config:
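A rough text sketch of the kind of server entry involved, in Jenkins' global configuration (all values here are hypothetical placeholders):
Name: WebServer
Hostname: 10.0.0.12
Username: ec2-user
Remote Directory: /home/ec2-user/publish
Key: (paste the contents of the .pem private key here)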

Gitlab : Peer's certificate issuer has been marked as not trusted by the user

I have an on-prem GitLab where I am trying to run some builds/pipelines but am getting the below error:
fatal: unable to access 'https://gitlab-ci-token:[MASKED]@gitlab.systems/testing/test-project-poc.git/': Peer's certificate issuer has been marked as not trusted by the user.
I have already looked into this: Gitlab: Peer's Certificate issuer is not recognized, and followed the steps of obtaining the .pem file by merging the server certificate, intermediate certificate and root certificate, but I am still getting the same error and am really struggling to find the root cause.
/etc/gitlab/gitlab.rb config
##! enable/disable 2-way SSL client authentication
#nginx['ssl_verify_client'] = "off"
##! if ssl_verify_client on, verification depth in the client certificates chain
#nginx['ssl_verify_depth'] = "1"
nginx['ssl_certificate'] = "/etc/gitlab/ssl/gitlab.systems.pem"
nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/gitlab.systems.key"
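(For reference, the merged .pem mentioned above was produced by concatenating the three certificates; the input file names here are hypothetical:)
cat server.crt intermediate.crt root.crt > /etc/gitlab/ssl/gitlab.systems.pem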
Is there any other configuration which I need to update/modify? Any guidance is really appreciated.
I am guessing you are using a self-signed certificate. If that is the case, you have two options to rectify this issue:
Recommended option: here I assume that you have already solved the issue between the gitlab-runner and GitLab itself, since you registered the runner successfully, so you already have the certificate file in /etc/gitlab-runner/certs. On the server hosting the gitlab-runner, run the below command:
git config --system http.sslCAInfo /etc/gitlab-runner/certs/CERTIFICATE_NAME.crt
Unsafe option: simply disable git's HTTPS certificate verification:
git config --system http.sslVerify false
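Either way, you can confirm what git will actually pick up on the runner host with:
git config --system --get http.sslCAInfo
git config --system --get http.sslVerify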

How to import publicly available jelastic manifests from gitlab repositories in the jelastic dashboard?

I am currently transitioning from GitHub to GitLab. Today, my code is present in both locations. I have a JPS manifest on GitHub:
https://github.com/shopozor/services/blob/master/manifest.jps
and the very same manifest on GitLab:
https://gitlab.hidora.com/softozor/services/blob/master/manifest.jps
In the Jelastic dashboard, I am able to load my GitHub manifest. However, I am not able to load the manifest versioned on GitLab:
What is the problem? Do I have to configure something special somewhere? Both manifests are publicly available. Why can't I import the GitLab manifest?
I also tried to use the raw manifest:
https://gitlab.hidora.com/softozor/services/raw/master/manifest.jps
and I've also tried to get the manifest file by means of the GitLab API, without success.
EDIT
I've tried to load this manifest. There you can see that I am running the command
wget "${baseUrl}/jelastic/postgres/execCmdScript.sh" -O /var/lib/pgsql/script.sh 2>&1
In the jelastic console, that command raises the error
[07:56:54 Shopozor.cluster:2]: ERROR: cmd [sqldb: 62900].response: {"result": 4109, "source": "JEL", "error": "The operation could not be performed.", "errOut": "", "nodeid": 62900, "exitStatus": 4, "out": "--2020-03-27 07:56:53-- https://gitlab.hidora.com/softozor/services/raw/install-postgres-in-dedicated-env/jelastic/postgres/execCmdScript.sh\nResolving gitlab.hidora.com (gitlab.hidora.com)... 10.102.1.82\nConnecting to gitlab.hidora.com (gitlab.hidora.com)|10.102.1.82|:443... failed: Connection refused."}
If I now take a computer that I have never authenticated with on GitLab through SSH, and run that very same command, it works. This is a bit strange, isn't it? What authentication does Jelastic need? It's all public and available to anyone, except Jelastic?
After some more research, I was finally able to load my manifests from GitLab into Jelastic. The problem is probably due to the GitLab configuration. Loading the JPS from the GitLab repo doesn't work over https in the settings I have (which I haven't made myself; it's a CI/CD as a service). It works, however, over http.
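Under that assumption, the workaround is to import via the plain-http raw URL (same path as the https one from the question, only the scheme differs):
http://gitlab.hidora.com/softozor/services/raw/master/manifest.jps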

Proxy configuration for OpenShift Origin

I am setting up an OpenShift Origin server. My configuration relies heavily on the walkthrough description:
https://github.com/openshift/origin/blob/master/examples/sample-app/README.md
After creating a project, I add a new app like this (successfully):
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-hello-world.git
OpenShift tries to build immediately, only to fail as follows:
F0222 15:24:58.504626 1 builder.go:204] Error: build error: fatal: unable to access 'https://github.com/openshift/ruby-hello-world.git/': Failed connect to github.com:443; Connection refused
I consulted the documentation about the proxy configuration:
https://docs.openshift.com/enterprise/3.0/admin_guide/http_proxies.html#git-repository-access
I concluded that I can simply edit the YAML descriptor for this specific app to include my corporate proxy:
...
source:
  type: Git
  git:
    uri: "git://github.com/openshift/ruby-hello-world.git"
    httpProxy: http://proxy.example.com
    httpsProxy: https://proxy.example.com
...
With that change the build proceeds.
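For reference, that kind of in-place edit can be done with the oc client; the BuildConfig name below is assumed from the app created above:
oc edit bc/ruby-hello-world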
Can the HTTP proxy be configured system wide?
Note: again, I simply downloaded the binaries (client, server) and did not install via Ansible. I also did not find relevant properties in the openshift.local.config folder inside my server binary folder.
After some time I now know enough to answer my own question.
There are two places where one needs to deal with corporate proxy settings.
Docker
This thread will tell you what to do in detail:
Cannot download Docker images behind a proxy
In my case on RHEL 7.2 I needed to edit this file: /etc/sysconfig/docker
I had to add the following entries:
HTTP_PROXY="http://proxy.company.com:4128"
HTTPS_PROXY="http://proxy.company.com:4128"
Then a restart of the docker service was necessary.
Origin Proxy
What I missed originally was the place to configure our corporate proxy settings. Currently I have a cluster (1 master, 1 node) installed via ansible.
These are the relevant files to edit on the servers:
* /etc/sysconfig/origin-master
* /etc/sysconfig/origin-node
There are already placeholders in these files:
#NO_PROXY=master.example.com
#HTTP_PROXY=http://USER:PASSWORD@IPADDR:PORT
#HTTPS_PROXY=https://USER:PASSWORD@IPADDR:PORT
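Uncommented with hypothetical values (reusing the corporate proxy from the Docker section above):
NO_PROXY=master.example.com
HTTP_PROXY=http://proxy.company.com:4128
HTTPS_PROXY=http://proxy.company.com:4128
Assuming the systemd unit names match the sysconfig file names, restart the services afterwards:
sudo systemctl restart origin-master
sudo systemctl restart origin-node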
Documentation:
https://docs.openshift.org/latest/install_config/http_proxies.html
