I am currently trying to deploy a NuGet package to our local Nexus server from a Linux build machine (.NET Core project).
However, the Nexus server is running on HTTPS with a company domain certificate which is not recognised by my build machine.
When I run nuget push I get the following error:
Pushing App.1.0.0.nupkg to 'https://v-nexus/repository/nuget/'...
PUT https://v-nexus/repository/nuget/
An error was encountered when fetching 'PUT https://v-nexus/repository/nuget/'. The request will now be retried.
Error: TrustFailure (The authentication or decryption has failed.)
The authentication or decryption has failed.
Invalid certificate received from server. Error code: 0xffffffff800b010a
PUT https://v-nexus/repository/nuget/
An error was encountered when fetching 'PUT https://v-nexus/repository/nuget/'. The request will now be retried.
Error: TrustFailure (The authentication or decryption has failed.)
The authentication or decryption has failed.
Invalid certificate received from server. Error code: 0xffffffff800b010a
PUT https://v-nexus/repository/nuget/
Error: TrustFailure (The authentication or decryption has failed.)
The authentication or decryption has failed.
Invalid certificate received from server. Error code: 0xffffffff800b010a
I have tried (roughly the commands shown below):
Installing the server certificate globally (using update-ca-certificates)
Installing the certificate using certmgr
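With the certificate file name as a placeholder, those were along the lines of:

# copy the company CA into the system store and refresh it
cp company-ca.crt /usr/local/share/ca-certificates/company-ca.crt
update-ca-certificates

# import the same certificate into Mono's machine Trust store
certmgr -add -c -m Trust company-ca.crt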
Is there any other way that I have missed, or is this a known issue with NuGet on Linux? (I am using Docker containers, so I don't want the solution to be "use Windows"!) This is forming part of our automated build system, so I am limited to Linux Docker containers.
One of my colleagues, running Windows, is able to push the package without any issues, so I know it's not an issue with the server.
This is a very old question but I came across it looking for an answer to this issue.
The problem seems to be that NuGet, under Mono, is broken.
It's not really an answer but hopefully it helps somebody else.
We created a root certificate which unfortunately expired today in Azure VPN. I regenerated the certificate, uploaded it to Azure VPN, regenerated a client certificate and set up the OpenVPN configuration file (after downloading the "VPN Client" from the Azure portal).
However, I keep getting "Peer certification verification failure" and I can't seem to understand why. Everything I read suggests that it is because there is a mismatch between the server and the client; however, I must be making a mistake somewhere, as I have followed the instructions below to generate the root certs and the client certs:
https://learn.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-certificates-point-to-site#cer
I've used the following OpenSSL command to convert to a PEM file:
"C:\Program Files\OpenSSL-Win64\bin\openssl" pkcs12 -in child.pfx -out child.pem -clcerts
Then I followed this for creating the OVPN file for the iOS device. (I have downloaded the OpenVPN client to my desktop machine to make it easier to test.)
https://learn.microsoft.com/en-us/azure/vpn-gateway/point-to-site-vpn-client-cert-mac
I have done this more than once, as well as having "Reset" the VPN gateway, just to try and make sure that it isn't something weird going on.
Does anyone have any ideas as to where I am going wrong?
In case anyone comes across this, there are two things that I have done to fix this issue:
I ended up entering the name of the Root Certificate into the Azure settings (the cn=psroot2025 part).
I had been using a Windows version of OpenVPN to test that the connections were working. By the looks of it, some versions of OpenVPN return the "Peer certification verification failure" error even when this is not actually the case. You need to download version 2.5.4 from https://openvpn.net/community-downloads/ instead of the latest, and this seems not to have the same issue (I had originally installed version 2.5.7).
Hope that helps...
terraform init is giving the following error. No version has been upgraded and it was working a few days back, but suddenly it is failing.
Error: Failed to query available provider packages
Could not retrieve the list of available versions for provider hashicorp/aws:
could not connect to registry.terraform.io: Failed to request discovery
document: Get "https://registry.terraform.io/.well-known/terraform.json": read: connection reset by peer
When I run curl from the server, it is not able to connect either.
curl https://registry.terraform.io/
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to registry.terraform.io:443
Are you on a network where an admin might have installed a proxy between you and the internet? If so, you need to get the signing certificates and configure them in your provider.
If you're on a home network or a public one, this is a man in the middle attack. Do not use this network.
If you have the certificates, they can be configured in your aws provider by pointing cacert_path, cert_path and key_path at the appropriate .pem files.
If you have verified that there is a valid reason to have a proxy between you and the internet, you are not touching production, and the certificates are hard to come by, you can test your code by setting insecure = true on your provider. Obviously, don't check that in.
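Not the provider settings themselves, but as a rough sketch of the first steps on Linux (proxy-ca.pem is a placeholder for whatever your admin gives you):

# see which certificate is actually being presented on the way to the registry
openssl s_client -connect registry.terraform.io:443 -servername registry.terraform.io -showcerts </dev/null

# once you have the proxy's signing CA, Go-based tools such as Terraform
# should also pick it up via SSL_CERT_FILE on Linux
export SSL_CERT_FILE=$PWD/proxy-ca.pem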
I get this error from time to time. It's been frequently reported on the Terraform GitHub page. One particular comment always reminds me to refresh my network settings (e.g. restart the network connection):
OK, I think I have isolated and resolved the issue in my case. It's always DNS to blame in the end, right? I hardcoded CloudFlare DNS (1.1.1.1 and its IPv4 and IPv6 aliases) into my network settings on the laptop, and since then everything seems to be working like a treat.
How I fixed that New Relic provider downloading issue:
Error while installing newrelic/newrelic v3.13.0: could not query provider registry for registry.terraform.io/newrelic/newrelic: failed to retrieve authentication checksums for provider: the request failed
│ after 2 attempts, please try again later: Get "https://github.com/newrelic/terraform-provider-newrelic/releases/download/v3.13.0/terraform-provider-newrelic_3.13.0_SHA256SUMS": net/http: request canceled
│ while waiting for connection (Client.Timeout exceeded while awaiting headers)
https://learnubuntu.com/change-dns-server/
Add Google's nameservers here (example entries below):
/etc/resolv.conf
and then check with this command:
dig google.com | grep SERVER
and done.
This is a temporary change; it will disappear when you move to a new terminal.
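For reference, the entries in /etc/resolv.conf would look like this (Google's public resolvers):

# /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4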
Hello, over the weekend my self-hosted build agent lost connection to Azure DevOps Services. Not completely, but I am getting connection errors on almost all of my builds.
My builds are failing with the error:
ERR Agent] System.Net.Http.HttpRequestException: The SSL connection could not be established, see inner exception.
---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host..
From the Agent diagnostic logs.
I tried re-installing the agent, but this is not possible either, as I get the exact same error when I try to run config.cmd remove.
I tried adding --sslskipcertvalidation, but this does not seem to work with the remove command. I tried to remove the old agent from the service, but it will not remove from the agent machine as I cannot complete the "config.cmd remove" process.
No clue why I am getting these connection errors all of a sudden. (I am on the Basic plan)
It does not seem to be related to any currently reported Azure outages (2021-04-12).
I ended up installing the agent again in a separate folder from the old agent.
Likely the issue was that the old agent was installed with the URL [org.name].visualstudio.com; I guess there is now some SSL certificate issue with the old URL.
I installed the new agent with the dev.azure.com/[org.name] URL and this solved all of my issues.
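In case it helps anyone, the re-registration was roughly along these lines (pool, agent name and PAT are placeholders):

.\config.cmd --unattended --url https://dev.azure.com/[org.name] --auth pat --token <PAT> --pool Default --agent MyBuildAgent --runAsService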
The same thing happened to me, but I was able to solve it by installing this application on the server, which is used to enable the TLS 1.2 protocol with just one click, along with these SSL cipher suites:
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (0x9f) DH 2048 bits FS 256
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (0x9e) DH 2048 bits FS 128
Nartac IIS Crypto is the app.
In Windows Server 2012, I installed IIS.
I created a site with host name host1 (the name of the Windows server) and created a folder called host1 under C:\inetpub\wwwroot, containing an HTML file.
When I navigate to http://host1/, I get the desired content of the HTML file.
Now, I have to create a certificate using that thread.
I succeeded at all the steps, but got stuck on step 7:
At that step, the cmd window of wacs.exe closes by itself.
Could you please help me solve this issue? Big thanks.
To get the reason behind the issue, you need to check the Event Viewer log.
I tried to reproduce your issue. I got the below error in event viewer:
A fatal error occurred while creating an SSL client credential. The internal error state is 10013.
You will find this log in Event Viewer under Windows Logs -> System.
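If it's easier than clicking through, the same Schannel errors can be pulled out of the System log with something like this (PowerShell, just one way to filter):

Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'Schannel' } -MaxEvents 20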
The reason behind the issue is that TLS 1.2 is not enabled on your machine.
To resolve the issue, you could follow the below steps:
1) Open the Registry Editor.
2) Go to the below section:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2
3) Set the below values under the Client and Server subkeys of that section (equivalent reg commands are sketched after the note below):
Client:
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001
Server:
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001
Note: Do not forget to restart your machine after editing registry key values.
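For reference, the same values can be set from an elevated command prompt, roughly (same keys as above; reg add creates the Client and Server subkeys if they are missing):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" /v DisabledByDefault /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" /v Enabled /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server" /v DisabledByDefault /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server" /v Enabled /t REG_DWORD /d 1 /f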
reference link:
A fatal error occurred while creating a TLS client credential. The internal error state is 10013
I have CCNET 1.8.5.0 installed on two build servers, and I configured the WebDashboard on one server to monitor both of them. But this leads to the following bug: when a user logs in to one of the servers, the WebDashboard shows him as being authorized on the other server too (the Logout button is shown instead of Login). But when he tries to access a project on the second server he gets the usual error:
Request processing has failed on the remote server: Permission to execute 'ViewProject' has been denied.
How can I force the WebDashboard to keep authorization separate for each server?
This seems to be a bug :-(
There is no configuration, to my knowledge, that would bypass this problem for the moment.