.NET Core build in a Docker Linux container fails due to SSL authentication to NuGet

I was given a .NET Core project to build in a Linux Docker container. Everything seems to be okay on the Docker configuration side, but when I run dotnet publish -c Release -o out, I get the SSL authentication error below.
The SSL connection could not be established, see inner exception. Authentication failed because the remote party has closed the transport stream.
I did my research, and apparently I was missing:
the Kestrel environment variables for ASP.NET (as per https://github.com/aspnet/AspNetCore.Docs/issues/6199), which I added to my docker-compose, though I don't think that is the issue;
a developer .pfx certificate, so I updated my docker-compose with the Kestrel path to the certificate file, as seen below.
version: '3'
services:
  netcore:
    container_name: test_alerting_comp
    tty: true
    stdin_open: true
    image: alerting_netcore
    environment:
      - http_proxy=http://someproxy:8080
      - https_proxy=http://someproxy:8080
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+;http://+
      - ASPNETCORE_HTTPS_PORT=443
      - ASPNETCORE_Kestrel__Certificates__Default__Password="ABC"
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/root/.dotnet/corefx/cryptography/x509stores/my
    ports:
      - "8080:80"
      - "443:443"
    build: .
    #context: .
    security_opt:
      - seccomp:unconfined
    volumes:
      - "c:/FakePath/git/my_project/src:/app"
      - "c:/TEMP/nuget:/root"
    networks:
      - net
networks:
  net:
I re-ran the docker container and executed dotnet publish -c Release -o out, with the same result.
From my host I can do this against my local NuGet:
A) wget https://nuget.local.com/api/v2 works without issues,
B) but from the container it doesn't.
C) However, from the container I can reach the official NuGet with wget https://api.nuget.org/v3/index.json, so my proxy is definitely working okay.
Debugging the SSL issue:
The given .pfx certificate is self-signed, and it works okay on Windows (at least that's what I was told).
strace shows me where the certs are being pulled from, as below:
root@9b98d5447904:/app# strace wget https://nuget.local.com/api/v2 |& grep certs
open("/etc/ssl/certs/ca-certificates.crt", O_RDONLY) = 3
I exported the .pfx as follows:
openssl pkcs12 -in ADPRootCertificate.pfx -out my_adp_dev.crt
Then I moved the file to /usr/local/share/ca-certificates/, removed the private key part so that only the public part (-----BEGIN CERTIFICATE----- ... -----END CERTIFICATE-----) was left in the file, and executed update-ca-certificates. I could see 1 certificate added, and I double-checked that the new cert was present in /etc/ssl/certs/ca-certificates.crt.
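For reference, here is the same procedure condensed into one sequence (filenames as above; a sketch assuming a Debian-based container where update-ca-certificates is available):
# extract only the certificate: -nokeys skips the private key, and piping
# through openssl x509 drops the bag attributes, leaving just the PEM block
openssl pkcs12 -in ADPRootCertificate.pfx -nokeys | openssl x509 -out my_adp_dev.crt
cp my_adp_dev.crt /usr/local/share/ca-certificates/
update-ca-certificates   # should report "1 added"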
I executed wget https://nuget.local.com/api/v2 again, and it failed.
I used OpenSSL to get more info, and as you can see below, it is not working. The cert has a weird CN because they used a wildcard for the subject, which looks wrong to me, but they state that the .pfx works on Windows.
root@ce21098e9643:/usr/local/share/ca-certificates# openssl s_client -connect nuget.local.com:443 -CApath /etc/ssl/certs
CONNECTED(00000003)
depth=0 CN = *.local.com
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 CN = *.local.com
verify error:num=21:unable to verify the first certificate
verify return:1
---
Certificate chain
0 s:/CN=\x00*\x00l\x00o\x00c\x00a\x00l\x00.\x00c\x00o\x00m
i:/C=ES/ST=SomeCity/L=SomeCity/OU=DEV/O=ASD/CN=Development CA
---
Server certificate
-----BEGIN CERTIFICATE-----
XXXXXXXXXXX
XXXXXXXXXXX
-----END CERTIFICATE-----
subject=s:/CN=\x00*\x00l\x00o\x00c\x00a\x00l\x00.\x00c\x00o\x00m
issuer=i:/C=ES/ST=SomeCity/L=SomeCity/OU=DEV/O=ASD/CN=Development CA
---
No client certificate CA names sent
Peer signing digest: SHA1
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 1284 bytes and written 358 bytes
Verification error: unable to verify the first certificate
---
New, TLSv1.2, Cipher is ECDHE-RSA-AES256-SHA384
Server public key is 1024 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-SHA384
Session-ID: 95410000753146AAE1D313E8538972244C7B79A60DAF3AA14206417490E703F3
Session-ID-ctx:
Master-Key: B09214XXXXXXX0007D126D24D306BB763673EC52XXXXXXB153D310B22C341200EF013BC991XXXXXXX888C08A954265623
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1558993408
Timeout : 7200 (sec)
Verify return code: 21 (unable to verify the first certificate)
Extended master secret: yes
---
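For what it's worth, a quick way to test whether the extracted CA file by itself fixes verification (paths taken from the steps above) is to pass it to s_client directly:
# hedged check: use the extracted CA cert instead of the system bundle
openssl s_client -connect nuget.local.com:443 -CAfile /usr/local/share/ca-certificates/my_adp_dev.crt
# if this still reports verify error 20/21, either the server is not sending the
# full chain, or the .pfx did not contain the issuing "Development CA" certificate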
I don't know exactly what issue I'm facing, but it appears that either:
A) the self-signed .pfx was wrongly configured, and now that it is being used on Linux it doesn't work as it should, or
B) the container needs some more configuration that I'm not aware of.
What else should I do?
I'm thinking of probably creating another cert to use from Linux hosts.
Is it feasible to create another self-signed cert with OpenSSL for IIS 8 and import it into IIS?
Any ideas are welcome, cheers.

ANSWERING MYSELF
It was not a Linux container issue; it is a certificate issue on the web server (IIS). Because we are using self-signed certificates, the cert will always be an invalid certificate. Self-signed certs work okay on the Windows side regardless of the invalid-certificate error; of course, self-signed certs are only for a test environment or the like.
From Linux, when you try to pull packages from NuGet you will get the error below, because:
1) the cert is indeed invalid, and
2) apparently there is no option to ignore an invalid certificate on the Linux side.
The SSL connection could not be established, see inner exception.
The remote certificate is invalid according to the validation procedure.
The solution, if you are working in a corporate environment, is to request a properly signed certificate from your system administrator: generate a CSR from your web server (IIS in my case) and pass it to them, and they will send you back a .cer file to install on that web server.
The other option, which I was trying but couldn't complete due to the limitations of my corporate environment, is to create a fake CA (with OpenSSL) and then sign the CSRs yourself, giving you valid certificates for your dev or test environment.
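A rough sketch of that fake-CA approach, in case it is feasible in your environment (every filename and subject field below is hypothetical):
# create a private CA key and a self-signed CA certificate
openssl genrsa -out dev-ca.key 4096
openssl req -x509 -new -key dev-ca.key -days 365 -subj "/O=ASD/CN=Development CA" -out dev-ca.crt
# sign the CSR exported from the web server with that CA
openssl x509 -req -in webserver.csr -CA dev-ca.crt -CAkey dev-ca.key -CAcreateserial -days 365 -out webserver.crt
# finally, install dev-ca.crt as a trusted root on every client
# (on Debian-based Linux: copy it to /usr/local/share/ca-certificates/ and run update-ca-certificates)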
Apologies for answering this myself, but I believe my findings are worth sharing.
Hope it helps.

I had a similar problem: my Docker build would not restore my NuGet packages.
Unable to load the service index for source https://api.nuget.org/v3/index.json.
The SSL connection could not be established, see inner exception.
Authentication failed because the remote party has closed the transport stream.
(On a Mac running Catalina)
I turned off Fiddler and then it all worked again.

Related

GitLab: Peer's certificate issuer has been marked as not trusted by the user

I have an on-prem GitLab instance where I am trying to run some builds/pipelines, but I am getting the error below:
fatal: unable to access 'https://gitlab-ci-token:[MASKED]@gitlab.systems/testing/test-project-poc.git/': Peer's certificate issuer has been marked as not trusted by the user.
I have already looked into this - Gitlab: Peer's Certificate issuer is not recognized - and followed the steps of obtaining the .pem file by merging the server certificate, the intermediate certificate, and the root certificate, but I am still getting the error above and really struggling to find the root cause.
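For reference, the merge itself is usually just a concatenation, server certificate first (the filenames here are hypothetical):
# build the bundle nginx will serve: server cert, then intermediate, then root
cat server.crt intermediate.crt root.crt > gitlab.systems.pem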
/etc/gitlab/gitlab.rb config
##! enable/disable 2-way SSL client authentication
#nginx['ssl_verify_client'] = "off"
##! if ssl_verify_client on, verification depth in the client certificates chain
#nginx['ssl_verify_depth'] = "1"
nginx['ssl_certificate'] = "/etc/gitlab/ssl/gitlab.systems.pem"
nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/gitlab.systems.key"
Is there any other configuration which I need to update/modify? Any guidance is really appreciated.
I am guessing you are using a self-signed certificate. If that is the case, you have two options to rectify this issue:
Recommended option: here I assume that you have already solved the trust issue between the gitlab-runner and GitLab itself, since you registered the runner successfully, so you already have the certificate file in /etc/gitlab-runner/certs. On the server hosting the gitlab-runner, run the command below:
git config --system http.sslCAInfo /etc/gitlab-runner/certs/CERTIFICATE_NAME.crt
This is unsafe: here you simply disable git's HTTPS certificate verification:
git config --system http.sslVerify false
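If a system-wide switch is too broad, narrower (still insecure, test-only) variants of the same idea are:
# disable verification for a single command only
GIT_SSL_NO_VERIFY=true git clone https://gitlab.systems/testing/test-project-poc.git
# or disable it for a single repository only (run inside the repository)
git config http.sslVerify false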

Azure Linux web app: change OpenSSL default security level?

In my Azure Linux web app, I'm trying to perform an API call to an external provider, with a certificate. That call fails, while it works fine when the same code is deployed on a Windows App Service plan. The equivalent cURL command line is:
curl --cert-type p12 --cert /var/ssl/private/THUMBPRINT.p12 -X POST https://www.example.com
The call fails with the following error:
curl: (58) could not load PKCS12 client certificate, OpenSSL error error:140AB18E:SSL routines:SSL_CTX_use_certificate:ca md too weak
The issue is caused by OpenSSL 1.1.1d, which by default requires security level 2, while my certificate is signed with SHA1 with RSA encryption:
openssl pkcs12 -in THUMBPRINT.p12 -nodes | openssl x509 -noout -text | grep 'Signature Algorithm'
Signature Algorithm: sha1WithRSAEncryption
Signature Algorithm: sha1WithRSAEncryption
On a normal Linux VM, I could edit /etc/ssl/openssl.cnf to change
CipherString = DEFAULT@SECLEVEL=2
to security level 1, but on an Azure Linux web app the changes I make to that file are not persisted.
So my question is: how do I change the OpenSSL security level on an Azure web app? Or is there a better way to allow the use of my weak certificate?
Note: I'm not the issuer of the certificate, so I can't regenerate it myself. I'll check with the issuer if they can regenerate it, but in the meantime I'd like to proceed if possible :)
A call with Microsoft support led me to a solution. It's possible to run a script whenever the web app container starts, which means it's possible to edit the openssl.cnf file before the dotnet app is launched.
To do this, navigate to the Configuration blade of your Linux web app, then General settings, then Startup command.
The Startup command is a command that's run when the container starts. You can do what you want in it, but it HAS to launch your app, because that's no longer done automatically.
You can SSH into your Linux web app and edit that custom_startup.sh file:
#!/bin/sh
# allow weak certificates (certificate signed with SHA1)
# by downgrading OpenSSL security level from 2 to 1
sed -i 's/SECLEVEL=2/SECLEVEL=1/g' /etc/ssl/openssl.cnf
# run the dotnet website
cd /home/site/wwwroot
dotnet APPLICATION_DLL_NAME.dll
The relevant doc can be found here: https://learn.microsoft.com/en-us/azure/app-service/containers/app-service-linux-faq#built-in-images
Note, however, that the Startup command is not working for Azure Functions (at the time of writing, May 19th, 2020). I've opened an issue on GitHub.
To work around this, I ended up creating custom Docker images:
Dockerfile for a webapp:
FROM mcr.microsoft.com/appsvc/dotnetcore:3.1-latest_20200502.1
# allow weak certificates (certificate signed with SHA1)
# by downgrading OpenSSL security level from 2 to 1
RUN sed -i 's/SECLEVEL=2/SECLEVEL=1/g' /etc/ssl/openssl.cnf
Dockerfile for an Azure function:
FROM mcr.microsoft.com/azure-functions/dotnet:3.0.13614-appservice
# allow weak certificates (certificate signed with SHA1)
# by downgrading OpenSSL security level from 2 to 1
RUN sed -i 's/SECLEVEL=2/SECLEVEL=1/g' /etc/ssl/openssl.cnf
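A quick way to check that the downgrade actually made it into the image (the image tag here is hypothetical):
docker build -t myapp-seclevel1 .
docker run --rm --entrypoint grep myapp-seclevel1 SECLEVEL /etc/ssl/openssl.cnf
# expected output: CipherString = DEFAULT@SECLEVEL=1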

Failed to install GitLab - curl (60) SSL certificate

I was trying to install GitLab on my Linux server following this guide and got stuck at the second step, which says:
curl: (60) SSL certificate problem: self signed certificate
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
Any idea how I can solve this?
ANSWER: be sure to have the http_proxy and https_proxy variables correctly set.
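For example (the proxy URL is hypothetical; export the variables in the same shell that runs the install script):
export http_proxy=http://someproxy:8080
export https_proxy=http://someproxy:8080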
---- UPDATE ----
After setting the variables I got the following answer from curl
Detected operating system as Ubuntu/trusty.
Checking for curl...
Detected curl...
Running apt-get update... done.
Installing apt-transport-https... done.
Installing /etc/apt/sources.list.d/gitlab_gitlab-ce.list...curl: (60) SSL certificate problem: self signed certificate
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
Unable to run:
curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/config_file.list?os=Ubuntu&dist=trusty&source=script
Double check your curl installation and try again.
Tell curl to ignore SSL warnings with -k/--insecure. Documented in man curl.
Edit: also check your proxy settings, as the host you're trying to curl to does, in fact, have a valid SSL certificate. See the --proxy option of curl.
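To illustrate both suggestions on the URL the installer failed on (note the quotes, since the URL contains &; the proxy address is hypothetical):
curl -k 'https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/config_file.list?os=Ubuntu&dist=trusty&source=script'
curl --proxy http://someproxy:8080 'https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/config_file.list?os=Ubuntu&dist=trusty&source=script'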

(60) Peer's certificate issuer has been marked as not trusted by the user: Linux/Apache

I am trying to find out why the HTTPS link for my website is not working, so I ran this command to check:
curl https://localhost/
I am using a valid signed SSL certificate, and my HTTP link is working fine. I am using a multi-domain certificate that was exported from an IIS 6 server. My instance on AWS has port 443 enabled.
Here is a picture of my CA certificates:
I have tried to change the http.conf file's Virtual Host following the instructions in here: http://ananthakrishnanravi.wordpress.com/2012/04/15/configuring-ssl-and-https-for-your-website-amazon-ec2/
Are there any suggestions on how to get my website properly working over the HTTPS protocol?
Let me know if you need anymore information.
Thanks,
If you're not sure of the certificate that your web server is serving, you can use this command to view the certificate:
openssl s_client -showcerts -connect hostname.domain.tld:443
Also, the hostname in the certificate must match the site that you are requesting. For example, if you request a page from localhost, but your certificate is for www.yourdomain.com, the certificate check will fail.
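For example, to test a local server against the hostname the certificate was actually issued for, curl can pin that name to an address (the hostname is the placeholder from above):
curl --resolve www.yourdomain.com:443:127.0.0.1 https://www.yourdomain.com/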
This means that you are using a self-signed certificate.
In order for this warning not to appear, you need to purchase a certificate from a Certificate Authority.
If you are using a self-signed SSL certificate, then you will face this issue.
In that case you can use the curl command with the -k option:
curl -k https://yourdomain.com/
And if you are testing with Postman, disable the SSL certificate verification option in its settings.
I got the same error, though in a different context; here's a summary in the hope it's useful for others:
OS: CentOS 7
Running Python's pyspider gave this error:
File "/usr/local/lib64/python3.6/site-packages/tornado/concurrent.py", line 238, in result
raise_exc_info(self._exc_info)
File "", line 4, in raise_exc_info
Exception: HTTP 599: Peer's certificate issuer has been marked as not trusted by the user.
Root cause and steps to fix:
There previously existed a soft link:
/usr/lib64/libcurl.so.4 -> /usr/lib64/libcurl.so.4.3.0_openssl
which was invalid, so I changed it to the valid target:
/usr/lib64/libcurl.so.4 -> /usr/lib64/libcurl.so.4.3.0
where the two files are:
-rwxr-xr-x 1 root root 435192 Nov 5 2018 /usr/lib64/libcurl.so.4.3.0
-rwxr-xr-x 1 root root 399304 May 10 09:20 /usr/lib64/libcurl.so.4.3.0_openssl
Then, for pyspider, reinstall pycurl:
pip3 uninstall pycurl
export PYCURL_SSL_LIBRARY=nss
export LDFLAGS=-L/usr/local/opt/openssl/lib;export CPPFLAGS=-I/usr/local/opt/openssl/include;pip install pycurl --compile --no-cache-dir
in which PYCURL_SSL_LIBRARY is nss because the current curl backend is NSS, according to:
# curl --version
curl 7.29.0 (x86_64-redhat-linux-gnu) libcurl/7.29.0 NSS/3.36 zlib/1.2.7 libidn/1.28 libssh2/1.4.3
...
This fixed my problem.

Where are the default CA certs used in Node.js?

I'm connecting to a server whose cert is signed by my own CA; the CA's cert has been installed into the system's keychain.
Connecting with openssl s_client -connect some.where says Verify return code: 0 (ok),
but I can't connect with Node.js's tls/https module, which fails with:
Error: SELF_SIGNED_CERT_IN_CHAIN
Connecting to a normal server (e.g. google.com:443) works fine, though.
It seems that Node.js's OpenSSL is not sharing the same keychain as the system's OpenSSL, but I can't find where its store is. I tried overriding it with SSL_CERT_DIR, but that didn't seem to work.
BTW: I can bypass server verification by setting NODE_TLS_REJECT_UNAUTHORIZED=0, but that's not pretty enough ;)
I'm using OS X 10.8.3 with OpenSSL 0.9.8r, node v0.9.8.
The default root certificates are static and compiled into the node binary.
https://github.com/nodejs/node/blob/v4.2.0/src/node_root_certs.h
You can make node use the system's OpenSSL certificates. This is done by starting node via:
node --use-openssl-ca
See the docs for further information.
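For example (a sketch: the CA file path and app name are hypothetical; with this flag set, node honors OpenSSL's SSL_CERT_FILE and SSL_CERT_DIR variables):
SSL_CERT_FILE=/usr/local/share/ca-certificates/my-ca.crt node --use-openssl-ca app.js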
See this answer on how system certificates are extended for Debian and Ubuntu.
If you're using the tls module (and it seems like you are) with tls.connect, you can pass a ca param in the options: an array of strings or Buffers of the certificates you want to trust.
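A minimal sketch of that option (the CA file is hypothetical; the host is the one from the question):
node -e "
const tls = require('tls');
const fs = require('fs');
const socket = tls.connect({
  host: 'some.where',
  port: 443,
  ca: [fs.readFileSync('./my-ca.crt')]  // trust our own CA for this connection
}, () => {
  console.log('authorized:', socket.authorized);
  socket.end();
});
"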
