Getting connection refused errors when doing mvn clean install - redquerybuilder

I downloaded the Tardis branch of RedQueryBuilder and did an mvn clean install.
It runs through things for a bit, then it gets to this part:
[INFO] Running com.redspr.redquerybuilder.core.client.GwtTestDom
[INFO] logging for HtmlUnit thread
[INFO] [ERROR] I/O error on HTTP request
[INFO] org.apache.http.conn.HttpHostConnectException: Connection to http://50.19.99.237:53655 refused
Just wondering if there's a quick answer, like "oh, your GWT is out of date," or some other easy-to-solve issue.

So here was the problem; it had nothing to do with GAE.
The problem was that the name of my host, in /etc/hostname, had no corresponding entry in /etc/hosts. It was complicated by the fact that I had "search mydomain.com" in my resolv.conf, and further complicated by the fact that mydomain.com is wildcarded, so any unknown hostname resolves to a particular IP address.
So what happened was: the test suite would look for myhost; since it didn't find myhost in DNS or in /etc/hosts, it looked up myhost.mydomain.com (as a result of my resolv.conf), which returned a valid IP address because of the wildcard. Then the test suite got a connection refused, because it was connecting to a totally different host. So the solution was to add 127.0.0.1 myhost myhost.mydomain.com to my /etc/hosts, and it built and ran fine.
Long story short: if the host defined in /etc/hostname does not have a valid DNS or /etc/hosts entry, the build will fail, as it will either get a host unknown error or, in my case, a connection refused because of my DNS jiggery-pokery.
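For anyone hitting the same thing, the entire fix is the single /etc/hosts line mentioned above ("myhost" and "mydomain.com" are placeholders for whatever is actually in your /etc/hostname and resolv.conf):
/etc/hosts
...
127.0.0.1 localhost
127.0.0.1 myhost myhost.mydomain.com
A quick sanity check before re-running mvn clean install is that getent hosts "$(cat /etc/hostname)" now returns 127.0.0.1.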

Related

Rabbit MQ changing hostname while preserving rabbitMQ artifacts and messages

This question is regarding RabbitMQ config.
I hope this question is appropriate for the Stack Overflow forum; please point me to the right forum if it isn't.
My problem statement is that I need to change the hostname of a Linux server from "thishost" to "thathost".
The host "thishost" has RabbitMQ installed on it with a ton of artifacts and messages.
I need to be able to preserve all the RabbitMQ artifacts such as queues, exchanges, and also messages when the hostname changes to "thathost".
I am considering a configuration change to let RabbitMQ keep seeing the old hostname (thishost) despite the name change in Linux.
To ensure that the RabbitMQ hostname remains the same, I peg it to the original hostname by configuring the following two parameters in the RabbitMQ configuration file:
/etc/rabbitmq/rabbitmq-env.conf
...
HOSTNAME=thishost
NODENAME=rabbit@thishost
Having made this change in the RabbitMQ config, I changed the Linux hostname to "thathost" and tried to start the RabbitMQ service.
The RabbitMQ service now refuses to start, and the error messages are as follows:
service rabbitmq-server start
Job for rabbitmq-server.service failed because the control process exited with error code.
See "systemctl status rabbitmq-server.service" and "journalctl -xe" for details.
journalctl -xe
Nov 30 11:20:07 ubuntula1 systemd[1]: Failed to start RabbitMQ Messaging Server.
Nov 30 11:20:18 ubuntula1 systemd[1]: rabbitmq-server.service: Failed with result 'exit-code'.
The logfile under /var/log/rabbitmq shows the following error:
/var/log/rabbitmq
ERROR: epmd error for host thishost: nxdomain (non-existing domain)
Any thoughts on:
- how to fix the RabbitMQ config?
- any alternative way of making RabbitMQ agnostic to the hostname?
- is there a better way to preserve the RabbitMQ artifacts across hostname changes?
Please note I tried the following:
- exporting/importing artifacts using rabbitmqctl export_definitions/import_definitions
- storing and loading messages using rabbitio
However, as I mentioned, I have a ton of artifacts and messages, and the rigor involved in that approach makes it error-prone, so I am searching for a less involved approach.
Thanks much folks
Going by the error message in the logfile, "epmd error for host thishost: nxdomain (non-existing domain)",
I stumbled upon this post: How to resolve ERROR: epmd error for host nxdomain (non-existing domain)?
While this is not directly relevant, it does provide the tip that an /etc/hosts entry is needed to map the old hostname to the same IP address.
With an alias for the old hostname added in /etc/hosts, my problem was solved :-)
So to sum it up, if you want to change the hostname of your Linux host, you need to do two things to keep your artifacts from becoming unusable after the hostname change:
Change the RabbitMQ configuration as already described:
/etc/rabbitmq/rabbitmq-env.conf
...
HOSTNAME=thishost
Add an alias in /etc/hosts mapping the old hostname to the IP address, in addition to the new hostname, as follows:
/etc/hosts
...
a.b.c.d thathost thishost
That solved my problem, and RabbitMQ starts fine with all existing artifacts intact after the hostname change.
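For completeness, a quick way to confirm everything came back intact after both changes (standard rabbitmqctl commands, nothing specific to this setup):
systemctl restart rabbitmq-server
rabbitmqctl status                      # the node should still report itself as rabbit@thishost
rabbitmqctl list_queues name messages   # queue names and message counts should match the pre-change state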

How could updating ssl certs cause node forever to stop?

I just went through some very strange debugging after running
sudo systemctl stop nginx
sudo /opt/bitnami/letsencrypt/lego --tls --email="..." --domains="..." --path="/opt/bitnami/letsencrypt" renew --days 90
sudo systemctl start nginx
I was getting a 502 error, and many errors of the form
[error] 25208#25208: *1 connect() failed (111: Connection refused) while connecting to upstream, client
I had multiple domains running on this server, but I only updated one of their SSL certs. The other domains were still up, but the one that was updated began erroring out with 502. After endless Google searches, everything kept pointing to an IPv6 issue (changing localhost to 127.0.0.1 in the nginx config) or to a mismatch of ports between nginx and node. It turned out that somehow forever had just stopped, without leaving any indication; e.g., I couldn't find anything from today in ~/.forever. I just want to know if I'm missing anything obvious. This was not the first time that I updated SSL certs, and I did the exact same thing last time without this happening.
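In hindsight, the simple check after a renewal is whether forever still has the app registered, and to start it again if not (app.js is a placeholder for whatever script forever was managing):
forever list          # the node process should still show up here with its uptime
forever start app.js  # if it has dropped off the list, start it again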

One openshift-origin worker node won't resolve cluster.local records, causing ImagePullBackOff

We have set up an OKD 3.11 cluster with 100+ nodes. Everything was working fine, but then a worker node stopped resolving the registry service's internal URL. This causes new pods scheduled to that node to fail with an ImagePullBackOff error.
Failed to pull image "docker-registry.default.svc:5000/app-name/app-name:latest": rpc error: code = Unknown desc = Get https://docker-registry.default.svc:5000/v1/_ping: dial tcp: lookup docker-registry.default.svc on 10.*.*.71:53: server misbehaving
We tried running nslookup on the worker node, and the following were the results.
This doesn't work (while it works on other nodes):
[root@worker22 ~]# nslookup docker-registry.default.svc.cluster.local
Server: 10.*.*.71
Address: 10.*.*.71#53
** server can't find docker-registry.default.svc.cluster.local: SERVFAIL
This works just fine.
[root@worker22 ~]# nslookup docker-registry.default.svc.cluster.local 127.0.0.1
Server: 127.0.0.1
Address: 127.0.0.1#53
Name: docker-registry.default.svc.cluster.local
Address: 172.*.*.212
Adding server=/cluster.local/172.30.0.1 to the dnsmasq conf file /etc/dnsmasq.d/origin-upstream-dns.conf works as a workaround, but we can't find what is causing this.
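For reference, the complete workaround is just this one dnsmasq line plus a service restart (172.30.0.1 is the cluster DNS service IP in our environment; adjust for yours):
/etc/dnsmasq.d/origin-upstream-dns.conf
...
server=/cluster.local/172.30.0.1
systemctl restart dnsmasq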
I have tried adding -q to the dnsmasq service's ExecStart, and it shows that dnsmasq won't query the OpenShift DNS running locally at 127.0.0.1:53.
Dnsmasq config/resolv.conf is in order on the node.
I have tried restarting dnsmasq/NetworkManager/Docker and respawning the ovs/sdn pods, but still no help.
Found some documented evidence that dnsmasq can behave like that.
It has been suggested in some Red Hat articles that a long-running dnsmasq service may misbehave and stop resolving names. Similar cases have been reported for OpenShift environments as well.
The links below suggest that restarting the service would solve the problem for some time and then the issue may resurface. As stated earlier, in my case a service restart didn't help, but the oldest remedy in IT worked (rebooting the node solved the problem).
Reference:
https://access.redhat.com/solutions/3393141
https://bugzilla.redhat.com/show_bug.cgi?id=1560489

DNS resolve timeout/delay for domains mapped to localhost in hosts file

I'm actually facing an issue which came up when using the proxy in Angular CLI.
But it's not related directly to Angular nor to Node.js... it seems to have its roots some levels deeper (namely at the operating system level).
Short version:
When I have a domain-to-IP mapping in my hosts file /etc/hosts and proxy it using node-http-proxy, which is the underlying layer of the Angular CLI proxy feature, there's a delay of 5000ms before the request gets resolved and the response is provided.
Proxying is mandatory for backend communication to avoid cross-origin errors in development, because Angular apps are served via port 4200.
Longer version:
Operating System: OSX Catalina 10.15.4
Based on a deeper analysis, it's not caused by Angular CLI and not even by Node.js.
It seems there's something going "wrong" at the system level, as I can reproduce the behavior in my terminal as well using the arp command.
There's a mapping in the /etc/hosts file which looks like this:
127.0.0.1 service.company.local
When then running the command arp service.company.local, it won't resolve of course, as this domain isn't known to DNS servers.
It finishes with the output: arp: service.company.local: Unknown host
Also, when the computer is disconnected from the internet/network (wifi off), arp still takes 5000ms before it finishes with the Unknown host message, whereas it returns Unknown host immediately for existing domains (then without delay).
The problem is pretty frustrating, as it heavily slows down local development of an Angular app: cascading requests take so extremely long that fluent work isn't possible.
Screenshot from Chrome DevTools: (image not included)
Is there some known solution to get around this issue without moving away from the domain-to-IP mapping within the hosts file?
Addition (content of the hosts file):
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
127.0.0.1 service.company.local
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
I'm very thankful for any hints.
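A commonly suggested workaround for this kind of 5-second delay (an assumption on my side, not something confirmed here) is to map the name for the IPv6 loopback as well, so the resolver gets a local AAAA answer instead of sending that query out to DNS:
/etc/hosts
...
127.0.0.1 service.company.local
::1       service.company.local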

RabbitMQ won't cluster (nxdomain)

I want to set up 2 rabbitmq servers to work in cluster.
When trying to run
rabbitmqctl join_cluster rabbit@my_rabbit_1.my.domain.name on my_rabbit_1
I get: unable to connect to epmd (port 4369) on my_rabbit_2.my.domain.name: nxdomain (non-existing domain)
I use rabbitmq:latest (Debian), the .erlang.cookie is the same, and the hosts resolve fine: I can ping in both directions, and nmap -6 -p 4369 my_rabbit_2.my.domain.name returns 4369/tcp open epmd.
EDIT:
tcpdump shows that while resolving the hostname, rabbit or epmd does not perform both types of DNS query (AAAA for IPv6 and A for IPv4), but only the IPv4 one, which fails repeatedly with nxdomain as there is no IPv4 address available. It does not try an AAAA DNS query, except when running a command like rabbitmq -n rabbit@local.machine.domain.name: then it runs the AAAA query and completes successfully. Hence the problem. How do I solve that?
Finally found the solution that worked for me. The Erlang documentation says that -proto_dist specifies the protocol for Erlang distribution, which defaults to inet_tcp (TCP over IPv4). So in an IPv6-only environment you have to set the -proto_dist inet6_tcp flag for erl.
This can be done by adding the following lines to your rabbitmq-env.conf (see RabbitMQ configuration docs):
# For rabbitmq-server
RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="-proto_dist inet6_tcp"
# For rabbitmqctl
RABBITMQ_CTL_ERL_ARGS="-proto_dist inet6_tcp"
Note that rabbitmqctl and rabbitmq-server use different erl settings: I was unable to create a cluster without the RABBITMQ_CTL_ERL_ARGS="-proto_dist inet6_tcp" setting when using rabbitmqctl join_cluster rabbit@host.in.my.domain. It should not be necessary in production mode. Also note that the RabbitMQ configuration docs advise against using this setting except for debugging.
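With those env settings in place, the clustering steps themselves are the standard ones, run on the node that is joining (hostnames are the placeholders from this question):
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@my_rabbit_1.my.domain.name
rabbitmqctl start_app
rabbitmqctl cluster_status   # both nodes should now be listed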
unable to connect to epmd (port 4369) on my_rabbit_2.my.domain.name: nxdomain (non-existing domain)
This is an error raised when the RabbitMQ server is running on a hostname other than what you think it is running on, or when the hostname doesn't resolve to what you think it does.
Amusingly enough I had this exact same issue last night when one instance in our cluster failed, came back on a new hostname, and somehow corrupted its internal authentication store etc.
Without the exact DNS entries etc. for your setup, all I can offer is general troubleshooting steps.
See this StackOverflow question for a resolution that may help you - in particular the answer by Kishor Pawar.
Are you sure you configured RabbitMQ to listen on IPv6? Is there a reason you can't bind it to IPv4 as well, on 127.0.0.1, for management operations?
