I have VMware Photon OS running in VMware Player. This will be used as the host OS to run Docker containers.
However, since I'm behind a Zscaler proxy, I'm having issues running commands that access external resources. For example, docker pull python gives me the following output (I added some line breaks to make it more readable):
error pulling image configuration:
Get https://dseasb33srnrn.cloudfront.net/registry-v2/docker/registry/v2/blobs/sha256/a0/a0d32d529a0a6728f808050fd2baf9c12e24c852e5b0967ad245c006c3eea2ed/data
?Expires=1493287220
&Signature=gQ60zfNavWYavBzKK12qbqwfOH2ReXMVbWlS39oKNg0xQi-DZM68zPi22xfDl-8W56tQmz5WL5j8L39tjWkLJRNmKHwvwjsxaSNOkPMYQmhppIRD0OuVwfwHr-
1jvnk6mDZM7fCrChLCrF8Ds-2j-dq1XqhiNe5Sn8DYjFTpVWM_
&Key-Pair-Id=APKAJECH5M7VWIS5YZ6Q:
x509: certificate signed by unknown authority
I have tried to extract the CA root certificates (in PEM format) for Zscaler from my Windows workstation, and have appended them to /etc/pki/tls/certs/ca-bundle.crt. But even after restarting Docker, this didn't solve the issue.
I've read through numerous posts, most referencing the command update-ca-trust which does not exist on my system (even though the ca-certificates package is installed).
I have no idea how to go forward. AFAIK, there are two options. Either:
Add the ZScaler certificates so SSL connections are trusted.
Allow insecure connections to the Docker hub (but even then it will probably still complain because the certificate isn't trusted).
The latter works by the way, e.g. executing curl with the -k option allows me to access any https resource.
The problem is that Zscaler acts as a man-in-the-middle, performing SSL inspection for your organization (see https://support.zscaler.com/hc/en-us/articles/205059995-How-does-Zscaler-protect-SSL-traffic-).
Since you've already tried putting the certificate into Docker, I assume you're familiar with the steps described in https://stackoverflow.com/a/36454369/1443505. That answer is almost correct for the Zscaler scenario. The one thing to note is that because Zscaler replaces the CA chain, you need to add all the certificates in the chain, not just the root.
At present, the certificate chain behind Zscaler includes Zscaler's own intermediate and root CAs in addition to the site certificate. You need to export them all, one by one, and follow the instructions in https://stackoverflow.com/a/36454369/1443505 for each of them.
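The export-and-split step can be sketched like this. This is a hedged example: the dummy chain.pem below stands in for a real chain exported with openssl s_client -showcerts, and the output file names are arbitrary.

```shell
# Stand-in for the real export, which (from behind the Zscaler proxy) would be
# something like:
#   openssl s_client -connect registry-1.docker.io:443 -showcerts </dev/null
printf -- '-----BEGIN CERTIFICATE-----\nAAAA\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nBBBB\n-----END CERTIFICATE-----\n' > chain.pem

# Split the bundle into one PEM file per certificate: zscaler-1.pem, zscaler-2.pem, ...
awk '/-----BEGIN CERTIFICATE-----/{n++} {print > ("zscaler-" n ".pem")}' chain.pem
```

Each resulting zscaler-N.pem can then be installed following the linked answer.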
TL;DR: Everything network-related is working perfectly except one specific binary (in this case - elm).
I am running a new Arch machine - I am connected via wifi and have network access.
However - elm does not seem to know that. Running elm make fails when it tries to download the dependencies. (This is a project imported from somewhere else).
I could not connect to https://package.elm-lang.org to get the latest list of
packages, and I was unable to verify your dependencies with the information I
have cached locally.
Are you able to connect to the internet? These dependencies may work once you
get access to the registry!
Adding the IP of package.elm-lang.org to /etc/hosts fixes that, but it then throws a similar error for github.com. I can keep doing that, but surely there is a way to convince elm to access the internet.
I'm not using a proxy or anything like that. My connection obviously works and seems stable. elm init also fails for the same reason, so I'm unable to test in a brand-new directory.
Thank you all for your help :)
Apparently a fresh Arch install uses the systemd-resolved daemon for DNS, but elm just reads resolv.conf directly (which is blank) and then defaults to 127.0.0.1 as the DNS server.
Setting a DNS server manually in resolv.conf did the trick.
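For reference, the manual fix is a one-line resolv.conf; the resolver address below is just an example (alternatively, symlinking /etc/resolv.conf to systemd-resolved's /run/systemd/resolve/stub-resolv.conf keeps resolved in charge):

```
# /etc/resolv.conf
nameserver 1.1.1.1
```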
My organization has pre-installed its own root certificates on our machines to enable it to inspect HTTPS traffic. The browsers don't complain, since they trust the OS certificates by default. This causes me all sorts of problems using NodeJS, though, which does NOT trust those certificates.
Node has the option to extend the certs it will trust by setting the NODE_EXTRA_CA_CERTS option, and I'm trying to employ this. The problem is: this option takes a file:
NODE_EXTRA_CA_CERTS=file
When set, the well known "root" CAs (like VeriSign) will be extended with the extra certificates in file. The file should consist of one or more trusted certificates in PEM format.
Where do I get this file on Windows?
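For context, here is a hedged sketch of wiring the variable up once such a file exists. It assumes the root has been exported from the Windows certificate store (certmgr.msc, under Trusted Root Certification Authorities, All Tasks > Export, choosing "Base-64 encoded X.509", which is PEM); corp-root.pem and its path are placeholders, and POSIX syntax is shown with the cmd.exe equivalent in a comment:

```shell
# Dummy stand-in for the certificate exported from the Windows store
# (certmgr.msc -> Export -> "Base-64 encoded X.509").
printf -- '-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n' > corp-root.pem

# Point Node at it (cmd.exe: set NODE_EXTRA_CA_CERTS=C:\certs\corp-root.pem);
# Node reads the variable once at startup.
export NODE_EXTRA_CA_CERTS="$PWD/corp-root.pem"
```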
I've found several pages describing how to set NGINX to use HTTPS/TLS.
However, all suggest setting a secret tls with the key & cert.
We want to be able to use TLS but have NGINX load the key/cert via an init container, which in this case is implemented by acs-keyvault-agent.
Any ideas?
If your only goal is to obtain the TLS key/cert from Azure Key Vault, then you're probably better off going with the Key Vault FlexVolume project from Azure. This has the advantage of not using init containers at all, dealing only with volumes and volume mounts.
Since you explicitly want to use Hexadite/acs-keyvault-agent in its default mode (which uses volume mounts, by the way), there is a full example of how to do this in the project's examples folder: examples/acs-keyvault-deployment.yaml#L40-L47.
Obviously you need to build, push, and configure the container correctly for your environment. Then you will need to configure Nginx to use the CertFileName.pem and KeyFilename.pem from the /secrets/certs_keys/ folder.
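The resulting NGINX TLS stanza would look roughly like this (a sketch, not a drop-in config; the server name is a placeholder and the file names/paths are taken from the example above):

```nginx
server {
    listen 443 ssl;
    server_name example.com;  # placeholder

    # Files mounted by the acs-keyvault-agent init container
    ssl_certificate     /secrets/certs_keys/CertFileName.pem;
    ssl_certificate_key /secrets/certs_keys/KeyFilename.pem;
}
```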
Hope this helps.
In order to create an outgoing WebHook, I need to trust my company's root CA certificate, which is inserted into all traffic going through our network.
I have tried everything as I described in the following issue that I logged:
https://github.com/RocketChat/Rocket.Chat/issues/11546
This mainly involved trying both the NODE_EXTRA_CA_CERTS and CAFILE (the latter being a suggestion from Meteor) environment variables, using a service config override. I can confirm the environment variables are set in the running processes for the Rocket.Chat service, but they have no effect.
(Not that I think it makes a difference, but I am running Rocket.Chat from the snap. I followed advice given here to add the environment variables to the service processes:
https://forum.snapcraft.io/t/declaratively-defining-environment-variables/175/26)
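For context, the service config override mentioned above looks roughly like this. The unit name is assumed from the snap's default naming and the PEM path is a placeholder; note that with a confined snap the file must live somewhere the snap can read, e.g. under /var/snap:

```ini
# /etc/systemd/system/snap.rocketchat-server.rocketchat-server.service.d/override.conf
[Service]
Environment="NODE_EXTRA_CA_CERTS=/var/snap/rocketchat-server/common/corp-root.pem"
```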
I have puppet master on RHEL 6 and agent on Windows.
It is showing up properly in the web console; however, it is not downloading the new catalogue, due to a CA error.
I renewed the certificate on the client, but the master does not show the Windows cert at all for acceptance.
It appears the agent has a newer certificate, and the master will only accept one certificate per machine (based on FQDN, the fully qualified domain name). What you need to do is remove the old certificate from the master so that it will accept the new request from the machine.
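As a sketch, the cleanup looks something like the transcript below (Puppet 5-era CLI shown; the FQDN is a placeholder, the agent's SSL directory is the default path, and the commands must run on the master and the Windows agent respectively):

```shell
# On the master: remove the stale certificate so a new request can be accepted.
puppet cert clean agent01.example.com

# On the Windows agent (elevated cmd prompt): delete the old SSL data and re-run.
rd /s /q C:\ProgramData\PuppetLabs\puppet\etc\ssl
puppet agent -t

# Back on the master: sign the new request.
puppet cert sign agent01.example.com
```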
Alternatively, you should also make sure you always run Puppet from an elevated process (unless you are in an advanced scenario where you are using lower privileges and know all the ins and outs of what that entails on Windows). The reason? The Puppet home for elevated processes is C:\ProgramData\PuppetLabs\Puppet; for non-elevated processes it is ~/.puppet (usually C:\Users\username\.puppet). A certificate request for each machine can only be accepted once, but a non-elevated process won't see the certificate in ProgramData and will unsuccessfully try to request another.
Also make sure that the firewall on the Windows machine is not preventing it from reaching the Puppet server, usually on port 8140; being blocked there can surface as SSL issues when contacting the master.