etcd-wrapper | etcdmain: rejected connection from "" (tls: "" does not match any of DNSNames [""]) - coreos

I created an etcd cluster, and after an upgrade I started seeing the following error:
etcd-wrapper[11905]| etcdmain: rejected connection from "" (tls: "" does not match any of DNSNames [""])
I am not sure what it means. I provided a certificate with DNS names (which is not working currently) and an IP address. I can see the requests are coming from the right IP address, but they are being rejected. It was working previously; I'm not sure what changed.
If I remove those DNS entries, will that resolve the issue?
Is there a way to bypass this with a parameter in etcd?

Figured it out: the generated certs had wrong DNS names that were never actually used. I generated new certs without the unneeded DNS entries, and that resolved the DNSNames rejection.
The TLS issue was due to misconfiguration: I had configured some parameters with HTTP instead of HTTPS. Please make sure every URL in the configuration uses HTTPS, or etcd will throw this error and others.
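As a rough sketch of both checks (the file paths, hostnames, and flag values below are illustrative placeholders, not taken from the original cluster), you can inspect which names a certificate actually carries and confirm that every etcd URL flag uses https:

openssl x509 -in /etc/ssl/etcd/server.pem -noout -text | grep -A1 "Subject Alternative Name"

etcd --name infra0 \
  --cert-file=/etc/ssl/etcd/server.pem --key-file=/etc/ssl/etcd/server-key.pem \
  --trusted-ca-file=/etc/ssl/etcd/ca.pem --client-cert-auth \
  --listen-client-urls=https://0.0.0.0:2379 \
  --advertise-client-urls=https://etcd0.example.com:2379

The SAN list printed by openssl should cover every hostname and IP address that clients and peers actually use to reach the node.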

Related

I am trying to run terraform init but getting this error: Failed to query available provider packages

Terraform init is giving the following error. No version has been upgraded and it was working a few days back, but suddenly it started failing.
Error: Failed to query available provider packages
Could not retrieve the list of available versions for provider hashicorp/aws:
could not connect to registry.terraform.io: Failed to request discovery
document: Get "https://registry.terraform.io/.well-known/terraform.json": read: connection reset by peer
When I run curl from the server, it is not able to connect either.
curl https://registry.terraform.io/
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to registry.terraform.io:443
Are you on a network where an admin might have installed a proxy between you and the internet? If so, you need to get the signing certificates and configure them in your provider.
If you're on a home network or a public one, this is a man-in-the-middle attack. Do not use this network.
If you have the certificates, they can be configured in your aws provider by pointing cacert_path, cert_path and key_path at the appropriate .pem files.
If you have verified that there is a valid reason to have a proxy between you and the internet, you are not touching production, and the certificates are hard to come by, you can test your code by setting insecure = true on your provider. Obviously, don't check that in.
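For reference, a minimal sketch of that last-resort option (the region value is illustrative; as noted above, never commit this):

provider "aws" {
  region   = "us-east-1"
  insecure = true  # skip TLS verification - only for local testing behind a known proxy
}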
I get this error from time to time. It's been frequently reported on the Terraform GitHub page. One particular comment always reminds me to refresh my network settings (e.g. restart the network connection):
OK, I think I have isolated and resolved the issue in my case. It's always DNS to blame in the end, right? I hardcoded CloudFlare DNS (1.1.1.1 and its IPv4 and IPv6 aliases) into my network settings on the laptop, and since then everything seems to be working like a treat.
How I fixed that New Relic provider download issue:
Error while installing newrelic/newrelic v3.13.0: could not query provider registry for registry.terraform.io/newrelic/newrelic: failed to retrieve authentication checksums for provider: the request failed
│ after 2 attempts, please try again later: Get "https://github.com/newrelic/terraform-provider-newrelic/releases/download/v3.13.0/terraform-provider-newrelic_3.13.0_SHA256SUMS": net/http: request canceled
│ while waiting for connection (Client.Timeout exceeded while awaiting headers)
https://learnubuntu.com/change-dns-server/
Add the Google nameservers here:
/etc/resolv.conf
and then check with this command:
dig google.com | grep SERVER
and done.
This is a temporary change; it will disappear when you move to a new terminal session.
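A minimal sketch of that temporary change, assuming you edit /etc/resolv.conf directly as root:

# /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4

Afterwards, dig google.com | grep SERVER should report one of those addresses as the responding server.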

How would you resolve the [DEP0123] deprecation warning when using Cloud SQL from an external Node.js instance?

When connecting to the CloudSQL DB, you must provide the PostgreSQL configuration details (this makes sense).
When getting the necessary address information from Google's configuration page, you are provided with an external IP address (and nothing else, as far as external connections go) to connect to it.
This then produces a warning when using it:
(node:18101) [DEP0123] DeprecationWarning: Setting the TLS ServerName to an
IP address is not permitted by RFC 6066. This will be ignored in a future
version.
I have tried researching this warning and am struggling to come up with a proper resolution, since Google does not provide any sort of server name (or similar) for the instance.
I'm thinking one solution could be to externally add a subdomain on my company's servers that points to this IP address, but to be honest that is not very ideal for us (if it's the only solution, though, that is fine).
This is a server running Node 12.7.0 on Debian 9.9 (stretch), connecting to Cloud SQL PostgreSQL 11 (beta).
Obviously the expected approach is to remove all deprecation warnings from a production code base, so I'm looking to resolve this and would welcome any ideas on how best to attack it, since nobody seems to have posted about this elsewhere!
Edit:
I was able to resolve this by adding proper hostnames to said IPs. Not sure if there is a better solution (if you find one, please let me know!), but this will work in the interim, since keeping the external server around is only a short-term scenario for us anyway.
Related issue created for adding this to the documentation: https://github.com/brianc/node-postgres/issues/1950
I changed from IP name to DNS name and the error is gone.
You can change the ServerName (currently your IP) and just put "localhost".
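A minimal sketch of the hostname-for-IP workaround described above, using a made-up name (db.example.internal) and a placeholder Cloud SQL IP; either a real DNS record or an /etc/hosts entry on the client machine works:

# /etc/hosts on the Node.js host (203.0.113.10 stands in for the Cloud SQL external IP)
203.0.113.10   db.example.internal

# point the app at the hostname instead of the raw IP, e.g. via node-postgres' standard env vars
export PGHOST=db.example.internal

Because the TLS ServerName is then a hostname rather than an IP address, the DEP0123 warning no longer applies.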

Authenticating with Kerberos in a disjointed namespace

I have a problem with Kerberos in my network.
My Active Directory domain name is configured as "acme.com". However, the DNS suffix is "wifi.acme.com". From a computer (endpoint1) I tried to execute an SMB query against endpoint2:
dir \\\\endpoint2.wifi.acme.com\admin$
which fails with the following error:
"The request is not supported".
I have a security policy that restricts outgoing NTLM connections (Network security: Restrict NTLM: Outgoing NTLM traffic to remote servers).
In Wireshark I can see that the Kerberos TGS request returned with an error:
"err-s-principal-unknown kerberos".
I tried the following solutions with no success:
Update the msDS-AllowedDNSSuffixes attribute with the proper DNS suffix.
Define the host name to Kerberos realm mappings (as in Kerberos-SSO-Handling-Disjointed-Active-Directory-and-UNIX-DNS)
Is there a solution to this problem without modifying either the DNS suffix or the Active Directory domain so that they match?
Thanks.
Try doing it with two slashes instead of 4.
dir \\endpoint2.wifi.acme.com\admin$
The error message you get is significant. An error 5 means you got connected, and didn't have credentials. An error 53 means you didn't get connected.
Why are you trying WIFI.acme.com? When you're using an FQDN, the domain suffix usually won't make a difference (there are cases where it can).
Try just pinging the resource and see if you're getting a response (or even an IP address). I suspect you're not, and that you should be going to \\endpoint2.acme.com\admin$ instead.
If all else fails, try dir \\<IP address>\admin$, where <IP address> is the IP of the 'endpoint2' device.
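Since err-s-principal-unknown generally means the KDC could not find a service principal matching the requested name, it may also be worth checking which SPNs are actually registered. A quick sketch, assuming domain-joined Windows machines and using the names from the question:

setspn -Q cifs/endpoint2.wifi.acme.com
setspn -Q cifs/endpoint2.acme.com
rem if the wifi.acme.com form is missing, it can be added to the computer account:
setspn -S host/endpoint2.wifi.acme.com endpoint2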

Cloudflare 524 w/ nodejs + express

I'm running a Node.js web server on Azure using the Express library for HTTP handling. We've been attempting to enable Cloudflare protection on the domains pointing to this box, but when we turn Cloudflare proxying on, we see cycling periods of requests succeeding and requests failing with a 524 error. I understand this error is returned when the server fails to respond to the connection with an HTTP response in time, but I'm having a hard time figuring out why it is:
A. Only failing sometimes as opposed to all the time
B. Immediately fixed when we turn cloudflare proxying off.
I've been attempting to confirm the TCP connection using
tcpdump -i eth0 port 443 | grep cloudflare (the requests come over HTTPS), and have seen curl requests fail seemingly without any traffic hitting the box, while others do arrive. For further reference, these requests should be, and are, quite quick when they succeed, so I'm having a hard time believing the issue is due to a long-running process stalling the response.
I do not believe we have any sort of IP-based throttling or firewall (at least not intentionally?).
Any ideas greatly appreciated, thanks
It seems that the issue was caused by DNS resolution.
On Azure, you can configure a custom domain name for your web app, and to use Cloudflare you need to switch DNS resolution to the Cloudflare DNS servers. Please see more information on configuring a domain name at https://azure.microsoft.com/en-us/documentation/articles/web-sites-custom-domain-name/.
You can also refer to the Cloudflare FAQ article "How do I enter Windows Azure DNS records in CloudFlare?" to make sure the DNS settings are correct.
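As a quick sanity check (the domain and origin IP below are placeholders), you can confirm that the proxied hostname resolves to Cloudflare and compare it against a request that bypasses the proxy entirely:

dig +short www.example.com            # proxied records should return Cloudflare IPs
curl -sv https://www.example.com/ -o /dev/null --resolve www.example.com:443:ORIGIN_IP

If the direct request is consistently fast while the proxied one times out, the delay is being introduced somewhere between Cloudflare and the origin.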
Try clearing your cookies.
I had a similar issue when I changed Cloudflare settings to a new host, but the Cloudflare cookies for the domain were doing something funky to the request (I am guessing they might have been trying to contact the old host?).

Misconfigured domain causing 104 (connection reset by peer) error on Heroku website

I have a misconfigured Heroku website. It shows error 104 (Read Error: Connection reset by peer) when I type its URL and hit enter, but subsequently refreshing the URL a couple of times makes it load correctly (some kind of fallback kicks in? Not that I knowingly configured any). The URL is http://damadam.in/ (it's a naked domain).
I bought this domain from GoDaddy. In GoDaddy's control panel, where I have the DNS zone file, the host www points to damadam.herokuapp.com (under CNAME), and http://damadam.in is set to forward to http://www.damadam.in. Lastly, in my Heroku control panel both http://damadam.in and http://www.damadam.in have damadam.herokuapp.com as the DNS target (could this last configuration be the problem?).
Can someone help me properly set this thing up?
This is not an HTTP response code, but rather an error number indicating that something was wrong with the connection.
"Connection reset by peer" means that, somewhere on the route from your computer to the final destination, a node decided to forcefully stop and reset the connection. At a configuration level I don't think you will be able to do much about this. If there were some kind of DNS misconfiguration, you would not see a read error, but a DNS error instead.
Make sure that your local network is stable (e.g. connect to your modem with an ethernet cable rather than through Wi-Fi). If that connection is stable, try again at a later date; connections between nodes can break, and in some cases not all traffic may be able to reach the intended destination. If the behaviour persists over a longer period of time, contact your host, in this case GoDaddy, and ask them to look into the problem. It might just be a faulty piece of equipment.
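For completeness, a couple of quick lookups (run from your own machine) will at least show where the naked domain and the www record currently point, and what the forwarding endpoint returns:

dig +short damadam.in
dig +short www.damadam.in
curl -sI http://damadam.in/ | head -n 5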
