I know this could be a message in a bottle, but here goes.
I've set up an internal DNS server using tinydns (let's call it vmDNS) that has been working fine for a while now. Yesterday I saw very strange behavior that I cannot explain: a huge number of queries to the root DNS servers coming from my vmDNS, around 50 per second.
I may have missed something here, but why would my vmDNS send resolution requests to the root servers when I've configured the resolver to forward to 8.8.8.8?
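For context: with djbdns, dnscache only forwards to the servers listed in root/servers/@ when the FORWARDONLY variable is set; otherwise it treats those entries as root servers and performs full recursion itself. A minimal sketch of the forwarding setup, assuming a standard daemontools layout under /etc/dnscache (paths may differ on your install):
echo 1 > /etc/dnscache/env/FORWARDONLY       # forward to the upstream resolver instead of recursing
echo 8.8.8.8 > /etc/dnscache/root/servers/@  # upstream resolver to forward to (here, Google DNS)
svc -t /etc/dnscache                         # restart dnscache under daemontools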
Meanwhile, looking at the running processes and filtering on "dns", it appeared I had hundreds of dnscache processes running. I killed those processes and so far things have come back to normal.
But I cannot understand why this happened.
Note: The vmDNS is not reachable from outside the network.
Here is some command output from while the strange behavior was happening.
lsof -i | grep dns
dnscache 14811 dnscache 189u IPv4 669392975 0t0 UDP vmDNS:8460->192.203.230.10:domain
dnscache 14811 dnscache 190u IPv4 669389015 0t0 UDP vmDNS:30047->192.5.5.241:domain
dnscache 14811 dnscache 191u IPv4 669392974 0t0 UDP vmDNS:34153->192.203.230.10:domain
dnscache 14811 dnscache 192u IPv4 669376088 0t0 UDP vmDNS:45196->128.9.0.107:domain
dnscache 14811 dnscache 202u IPv4 669393005 0t0 UDP vmDNS:37691->198.41.0.10:domain
dnscache 14811 dnscache 203u IPv4 669393015 0t0 UDP vmDNS:40394->192.112.36.4:domain
-------------------------------
dnscache 14811 dnscache 171u IPv4 669389086 0t0 UDP vmDNS:45861->198.32.64.12:domain
dnscache 14811 dnscache 172u IPv4 669389087 0t0 UDP vmDNS:57794->198.41.0.4:domain
dnscache 14811 dnscache 173u IPv4 669389088 0t0 UDP vmDNS:62378->128.8.10.90:domain
I can't make connections to my node cluster; my nodetool status is currently refused. I am using Cassandra 4.1 and it's not working. I tried editing cassandra.yaml to use localhost (127.0.0.1), and also edited cassandra-env.sh to replace the public name with localhost, but that didn't work either. So I decided to downgrade to 4.0.7, which works perfectly with no changes to the parameters in cassandra.yaml or cassandra-env.sh.
Tools
Cassandra 4.1
Operating system: Ubuntu 20.04
Java version: openjdk version "11.0.17" 2022-10-18
OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu220.04)
OpenJDK 64-Bit Server VM (build 11.0.17+8-post-Ubuntu-1ubuntu220.04, mixed mode)
Here is the error from my nodetool status:
root@myserver:/etc/cassandra# nodetool status
nodetool: Failed to connect to '127.0.0.1:7199' - ConnectException: 'Connection refused (Connection refused)'.
cqlsh is also not working; it only shows this:
root@myserver:/etc/cassandra# cqlsh 161.97.96.126 9042
Connection error: ('Unable to connect to any servers', {'161.97.96.126:9042': ConnectionRefusedError(111, "Tried connecting to [('161.97.96.126', 9042)]. Last error: Connection refused")})
I was desperate, but tried installing another version, i.e. downgrading from 4.1 to 4.0.7 (I purged all my Cassandra 4.1 files and installed 4.0.7 from scratch), and voilà: with nothing changed in cassandra.yaml or cassandra-env.sh, it works perfectly with my current tools above.
Is Cassandra 4.1 still not compatible with Ubuntu 20.04?
Update 23-01-2023 22:10
Here is the output when I try installing Cassandra 4.1 again without editing anything, just a fresh install:
root@myvps:~# sudo service cassandra status
● cassandra.service - LSB: distributed storage system for structured data
Loaded: loaded (/etc/init.d/cassandra; generated)
Active: active (running) since Mon 2023-01-23 14:49:41 CET; 8s ago
Docs: man:systemd-sysv-generator(8)
Process: 1644739 ExecStart=/etc/init.d/cassandra start (code=exited, status=0/SUCCESS)
Tasks: 24 (limit: 9479)
Memory: 2.2G
CGroup: /system.slice/cassandra.service
└─1644848 /usr/bin/java -ea -da:net.openhft... -XX:+UseThreadPriorities -XX:+HeapDumpOnOutOfMemoryError -X>
Jan 23 14:49:41 myvps.contaboserver.net systemd[1]: Starting LSB: distributed storage system for structured data...
Jan 23 14:49:41 myvps.contaboserver.net systemd[1]: Started LSB: distributed storage system for structured data.
root@myvps:~# sudo service cassandra status
● cassandra.service - LSB: distributed storage system for structured data
Loaded: loaded (/etc/init.d/cassandra; generated)
Active: active (exited) since Mon 2023-01-23 14:49:41 CET; 31s ago
Docs: man:systemd-sysv-generator(8)
Process: 1644739 ExecStart=/etc/init.d/cassandra start (code=exited, status=0/SUCCESS)
Jan 23 14:49:41 myvps.contaboserver.net systemd[1]: Starting LSB: distributed storage system for structured data...
Jan 23 14:49:41 myvps.contaboserver.net systemd[1]: Started LSB: distributed storage system for structured data.
root@myvps:~# nodetool version
nodetool: Failed to connect to '127.0.0.1:7199' - ConnectException: 'Connection refused (Connection refused)'.
root@myvps:~# cqlsh
Connection error: ('Unable to connect to any servers', {'127.0.0.1:9042': ConnectionRefusedError(111, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused")})
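Note that the second status output above shows active (exited): the init script returned success, but the Java process seems to have died right after startup. A way to look for why (a sketch; the log path assumes the default Debian/Ubuntu package layout):
root@myvps:~# journalctl -u cassandra --no-pager | tail -n 50
root@myvps:~# tail -n 100 /var/log/cassandra/system.log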
Update 23-01-2023 22:16
I tried checking two things. First, netstat -tnlp shows this:
root@myvps:~# netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 537/redis-server 12
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 598/nginx: master p
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 447/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 534/sshd: /usr/sbin
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 578/postgres
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 598/nginx: master p
tcp6 0 0 ::1:6379 :::* LISTEN 537/redis-server 12
tcp6 0 0 :::80 :::* LISTEN 598/nginx: master p
tcp6 0 0 :::22 :::* LISTEN 534/sshd: /usr/sbin
tcp6 0 0 ::1:5432 :::* LISTEN 578/postgres
tcp6 0 0 :::443 :::* LISTEN 598/nginx: master p
And second, typing sudo lsof -nPi -sTCP:LISTEN shows this:
root@myvps:~# sudo lsof -nPi -sTCP:LISTEN
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
systemd-r 447 systemd-resolve 13u IPv4 18529 0t0 TCP 127.0.0.53:53 (LISTEN)
sshd 534 root 3u IPv4 18387 0t0 TCP *:22 (LISTEN)
sshd 534 root 4u IPv6 18389 0t0 TCP *:22 (LISTEN)
redis-ser 537 redis 6u IPv4 20184 0t0 TCP 127.0.0.1:6379 (LISTEN)
redis-ser 537 redis 7u IPv6 20185 0t0 TCP [::1]:6379 (LISTEN)
postgres 578 postgres 5u IPv6 20704 0t0 TCP [::1]:5432 (LISTEN)
postgres 578 postgres 6u IPv4 20705 0t0 TCP 127.0.0.1:5432 (LISTEN)
nginx 598 root 6u IPv6 18878 0t0 TCP *:443 (LISTEN)
nginx 598 root 7u IPv4 18879 0t0 TCP *:443 (LISTEN)
nginx 598 root 8u IPv4 18880 0t0 TCP *:80 (LISTEN)
nginx 598 root 9u IPv6 18881 0t0 TCP *:80 (LISTEN)
nginx 601 www-data 6u IPv6 18878 0t0 TCP *:443 (LISTEN)
nginx 601 www-data 7u IPv4 18879 0t0 TCP *:443 (LISTEN)
nginx 601 www-data 8u IPv4 18880 0t0 TCP *:80 (LISTEN)
nginx 601 www-data 9u IPv6 18881 0t0 TCP *:80 (LISTEN)
nginx 602 www-data 6u IPv6 18878 0t0 TCP *:443 (LISTEN)
nginx 602 www-data 7u IPv4 18879 0t0 TCP *:443 (LISTEN)
nginx 602 www-data 8u IPv4 18880 0t0 TCP *:80 (LISTEN)
nginx 602 www-data 9u IPv6 18881 0t0 TCP *:80 (LISTEN)
nginx 603 www-data 6u IPv6 18878 0t0 TCP *:443 (LISTEN)
nginx 603 www-data 7u IPv4 18879 0t0 TCP *:443 (LISTEN)
nginx 603 www-data 8u IPv4 18880 0t0 TCP *:80 (LISTEN)
nginx 603 www-data 9u IPv6 18881 0t0 TCP *:80 (LISTEN)
nginx 604 www-data 6u IPv6 18878 0t0 TCP *:443 (LISTEN)
nginx 604 www-data 7u IPv4 18879 0t0 TCP *:443 (LISTEN)
nginx 604 www-data 8u IPv4 18880 0t0 TCP *:80 (LISTEN)
nginx 604 www-data 9u IPv6 18881 0t0 TCP *:80 (LISTEN)
I can confirm that Cassandra 4.1 works on the latest versions of Ubuntu including 20.04 LTS and 22.04 LTS.
I didn't run into any issues installing or running Cassandra 4.1 out of the box. You didn't specify the steps to replicate the problem, so I'm assuming all you've done is perform a fresh installation of Cassandra 4.1. In any case, I followed the Installing Cassandra instructions documented on the official website and it just worked.
For what it's worth, I installed the same version of Java 11 as you:
openjdk version "11.0.17" 2022-10-18
Zero config change
After installing Cassandra 4.1 with NO configuration changes, I am able to run nodetool commands as expected:
$ nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 127.0.0.1 104.37 KiB 16 100.0% 0a7969a9-0d00-42f1-a574-87dfde5e3e7d rack1
$ nodetool version
ReleaseVersion: 4.1.0
I am also able to connect to the cluster with cqlsh:
$ cqlsh
Connected to Test Cluster at 127.0.0.1:9042
[cqlsh 6.1.0 | Cassandra 4.1.0 | CQL spec 3.4.6 | Native protocol v5]
Use HELP for help.
cqlsh>
Configure IP addresses
In an attempt to replicate what you did, I updated cassandra.yaml with the IP address of my test machine:
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.1.2.3:7000"
listen_address: 10.1.2.3
rpc_address: 10.1.2.3
After starting Cassandra, I am again able to run nodetool commands as expected:
$ nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 10.1.2.3 136 KiB 16 100.0% 0a7969a9-0d00-42f1-a574-87dfde5e3e7d rack1
I am also able to connect to the cluster with cqlsh:
$ cqlsh 10.1.2.3
Connected to Test Cluster at 10.1.2.3:9042
[cqlsh 6.1.0 | Cassandra 4.1.0 | CQL spec 3.4.6 | Native protocol v5]
Use HELP for help.
cqlsh>
Conclusion
The most likely reason you're not able to connect to your cluster is that Cassandra is not running on the node. You can easily verify it with Linux utilities like lsof and netstat:
$ sudo lsof -nPi -sTCP:LISTEN
$ netstat -tnlp
You will need to check the Cassandra system.log for clues as to why it is not running.
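For example (a sketch; the path assumes the default Debian/Ubuntu package location):
$ sudo tail -n 100 /var/log/cassandra/system.log
$ sudo grep -iE 'error|exception' /var/log/cassandra/system.log | tail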
If you specified an IP address in listen_address, make sure that you also update the seeds list with the same IP address; otherwise Cassandra will shut down because it is unable to gossip with the seeds. Cheers!
Please support the Apache Cassandra community by hovering over the cassandra tag and clicking on the Watch tag button. Thanks!
I get:
Failed to load resource: net::ERR_CONTENT_LENGTH_MISMATCH chunk-vendors.js:1
in the Google Chrome console, and a blank page, when trying to load a Vue development page started via:
user@ubuntu:~$ npm run serve
DONE Compiled successfully in 11909ms
App running at:
- Local: http://(my_public_ip):5008/
- Network: http://(my_public_ip):5008/
Note that the development build is not optimized.
To create a production build, run npm run build.
What I have tried up to now:
1. Reinstalling Node and npm:
sudo npm install -g n
sudo n 7.0
# also remember to update npm
sudo npm update -g npm
sudo npm cache clean --force
npm cache verify
sudo rm -rf /usr/local/bin/npm /usr/local/share/man/man1/node* ~/.npm
sudo rm -rf /usr/local/lib/node*
sudo rm -rf /usr/local/bin/node*
sudo rm -rf /usr/local/include/node*
sudo apt-get purge nodejs npm
sudo apt autoremove
sudo apt-get install npm nodejs
2. Loading the page in incognito mode (with no cache).
Nothing worked for me.
Everything was working great a few weeks ago. No settings have been changed. Nothing global installed or removed on the server.
lsof result is:
root@ubuntu:~# sudo lsof -i -P -n | grep LISTEN
systemd-r 855 systemd-resolve 13u IPv4 22732 0t0 TCP 127.0.0.53:53 (LISTEN)
sshd 935 root 3u IPv4 24127 0t0 TCP *:22 (LISTEN)
sshd 935 root 4u IPv6 24129 0t0 TCP *:22 (LISTEN)
nginx 940 root 6u IPv4 25804 0t0 TCP *:5001 (LISTEN)
nginx 940 root 7u IPv4 25805 0t0 TCP *:80 (LISTEN)
nginx 940 root 8u IPv6 25806 0t0 TCP *:80 (LISTEN)
nginx 942 www-data 6u IPv4 25804 0t0 TCP *:5001 (LISTEN)
nginx 942 www-data 7u IPv4 25805 0t0 TCP *:80 (LISTEN)
nginx 942 www-data 8u IPv6 25806 0t0 TCP *:80 (LISTEN)
nginx 943 www-data 6u IPv4 25804 0t0 TCP *:5001 (LISTEN)
nginx 943 www-data 7u IPv4 25805 0t0 TCP *:80 (LISTEN)
nginx 943 www-data 8u IPv6 25806 0t0 TCP *:80 (LISTEN)
postgres 992 postgres 3u IPv4 24408 0t0 TCP 127.0.0.1:5432 (LISTEN)
node 25318 user 19u IPv4 153059 0t0 TCP (my_public_ip):5008 (LISTEN)
Any ideas?
It's related to a timeout error that can be solved using this setup in the file vue.config.js:
module.exports = {
  devServer: {
    proxy: {
      '/*': {
        target: 'http://localhost:8080',
        secure: false,                 // allow proxying to a non-HTTPS target
        prependPath: false,            // don't prepend the target's path to proxied requests
        proxyTimeout: 1000 * 60 * 10,  // 10-minute timeout for the proxied response
        timeout: 1000 * 60 * 10        // 10-minute timeout for the incoming socket
      }
    }
  }
}
It solved itself. I came back to it after a day and it worked like it used to.
I have no idea what the reason was.
I'm sorry if this is a bit brief. My server is currently down after I did a sudo dist-upgrade. All my system settings seem good: I can ping my IP and SSH into my server. However, I cannot reach my server over HTTPS. I have permanently redirected HTTP to HTTPS, and this is my Apache log when I request the server over HTTP:
[20/Jan/2018:16:45:55 +0530] "GET / HTTP/1.1" 301 574 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.108 Safari/537.36"
However, there's no HTTPS log and no server/Apache response thereafter.
On issuing
netstat -ntlp | grep LISTEN
tcp   0  0 0.0.0.0:22       0.0.0.0:*  LISTEN  1152/sshd
tcp   0  0 127.0.0.1:5432   0.0.0.0:*  LISTEN  1212/postgres
tcp6  0  0 :::443           :::*       LISTEN  1255/apache2
tcp6  0  0 :::80            :::*       LISTEN  1255/apache2
tcp6  0  0 :::22            :::*       LISTEN  1152/sshd
And
sudo lsof -i:443
apache2 1255 root     6u IPv6 22977 0t0 TCP *:https (LISTEN)
apache2 1262 www-data 6u IPv6 22977 0t0 TCP *:https (LISTEN)
apache2 1263 www-data 6u IPv6 22977 0t0 TCP *:https (LISTEN)
Now, I would like a suggestion on what could be the reason for the server not being accessible via a browser.
I have been downvoted for this, but I would like to know why. I have given all possible details showing that my 443 port appears open for incoming traffic and that Apache is logging HTTP access. I would like to know where I'm wrong.
Firstly, sorry for posting this question here; it should probably go to another Stack Exchange site.
Now, regarding the answer, in case this helps someone: the issue was that after the system reboot (for required security fixes), the firewall (UFW) settings had apparently changed to allow only IPv6 on the SSL port by default. What needs to be done is to allow both IPv4 and IPv6.
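A sketch of the check and fix, assuming UFW's usual defaults:
sudo ufw status verbose   # 443/tcp should show ALLOW for both the IPv4 and (v6) entries
sudo ufw allow 443/tcp    # re-adding the rule creates both IPv4 and IPv6 entries when IPV6=yes in /etc/default/ufw
sudo ufw reload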
Thanks again to the community, and I regret posting the question in the wrong forum. However, I'm keeping it here for reference and not deleting it.
P.S.: If someone feels the question doesn't belong here, it would be more helpful to have it removed than downvoted.
I'm having some trouble with my NodeJS API.
Sometimes it returns ConnectionError: getaddrinfo EMFILE, and everything is broken after that.
So I started to investigate. I found it could be caused by too many open file descriptors. We can apparently increase the number of open files allowed, but that would not definitively fix the problem.
I found in this article that we can increase the file descriptor settings and the ulimit. But what is the difference?
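From what I understand, ulimit -n is the per-process limit while fs.file-max is the system-wide cap. A quick sketch to inspect both (the limits.conf values are illustrative, not recommendations):
ulimit -n                    # soft per-process open-file limit for this shell
ulimit -Hn                   # hard per-process limit
cat /proc/sys/fs/file-max    # system-wide maximum across all processes
# to raise the per-process limit persistently, add to /etc/security/limits.conf:
#   myuser  soft  nofile  65536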
Then, to try to isolate my problem, I ran the lsof -i -n -P | grep nodejs command. Indeed, the number of established connections is increasing, so I imagine I have connections somewhere in my code that are not being closed.
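A quick way to see where the sockets pile up (a sketch; adjust the process name to match yours):
# group ESTABLISHED sockets by remote endpoint (the part after "->" in lsof's NAME column) and count them
lsof -i -n -P | grep nodejs | grep ESTABLISHED | awk -F'->' '{print $2}' | sort | uniq -c | sort -rn | head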
I have some fs.readFileSync and fs.readdirSync calls etc., but I have not set autoClose: true. Do you think it could be that?
Do you have any ideas or advice?
P.S.: the app runs on an Ubuntu machine.
EDIT, 16-02-2016
I ran this command on my production machine: lsof -i -n -P | grep nodejs
What I see is something like this:
...
nodejs 27596 root 631u IPv4 109781565 0t0 TCP 127.0.0.1:45268->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 632u IPv4 109782317 0t0 TCP 172.31.58.93:4242->172.31.55.229:61616 (ESTABLISHED)
nodejs 27596 root 633u IPv4 109779882 0t0 TCP 127.0.0.1:45174->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 634u IPv4 109779884 0t0 TCP 127.0.0.1:45175->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 635u IPv4 109781569 0t0 TCP 127.0.0.1:45269->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 636u IPv4 109781571 0t0 TCP 127.0.0.1:45270->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 637u IPv4 109782319 0t0 TCP 127.0.0.1:45293->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 642u IPv4 109781790 0t0 TCP 127.0.0.1:45283->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 643u IPv4 109781794 0t0 TCP 127.0.0.1:45284->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 644u IPv4 109781796 0t0 TCP 127.0.0.1:45285->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 645u IPv4 109781798 0t0 TCP 172.31.58.93:4242->172.31.55.229:61602 (ESTABLISHED)
nodejs 27596 root 646u IPv4 109781800 0t0 TCP 127.0.0.1:45286->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 647u IPv4 109781802 0t0 TCP 172.31.58.93:4242->172.31.0.198:1527 (ESTABLISHED)
nodejs 27596 root 648u IPv4 109781804 0t0 TCP 127.0.0.1:45287->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 649u IPv4 109781806 0t0 TCP 127.0.0.1:45288->127.0.0.1:7272 (ESTABLISHED)
But I don't know what it means. Do you have any ideas about this?
Thanks a lot.
So I think I found out what was going on!
I use Redis and the redis-sessions npm module for my cart storage, but every time I created or updated a cart, I opened a new connection to Redis before using it:
var session = new RedisSessions({ port: conf.redisPort, host: conf.redisHost });
session.get({
    app: rsapp,
    token: this.sessionToken
}, function (err, resp) {
    // here some workin'
});
Now I just create the connection once when my app starts, store it as a singleton, and use it whenever I want.
// At the start of the app: open the connection once and reuse it
var NS = {
    sessions: new RedisSessions({ port: config.redisPort, host: config.redisHost })
};

// Later, somewhere in the app
NS.sessions.get({
    app: rsapp,
    token: this.sessionToken
}, function (err, resp) {
    // here some workin'
});
It was pretty obvious, but now I've found it. If it can help someone, I'll mark this one as solved.
I am running a Node.js server and using socket.io 1.3.5 for handling WebSocket connections. When the server receives a socket disconnect event with "ping timeout" or "transport close", it disconnects the socket but does not clear all the bindings. It prints the log below:
socket.io:client client close with reason ping timeout
socket.io:socket closing socket - reason ping timeout
socket.io:client ignoring remove for WX-M8GL6SvkQtxXMAAAA
The strange thing I have noticed is that when a socket disconnects due to some network error, the TCP socket binding from server to browser is not cleared and remains in the ESTABLISHED state forever. I can see the connections below even 12 hours after receiving a disconnect due to ping timeout.
node 29881 user 14u IPv4 38563924 0t0 TCP 10.5.7.33:5100->10.5.6.50:49649 (ESTABLISHED)
node 29881 user 15u IPv4 38563929 0t0 TCP 10.5.7.33:5100->10.5.6.50:49653 (ESTABLISHED)
node 29881 user 16u IPv4 38563937 0t0 TCP 10.5.7.33:5100->10.5.6.60:49659 (ESTABLISHED)
What can I do to remove the stale connections on socket disconnect events?
I found the solution! According to this issue, there was a bug in Socket.IO that is fixed in version 1.4.x:
https://github.com/socketio/socket.io/issues/2118
I upgraded to v1.4.5 and the TCP connections are now correctly closed after the Socket.IO connection closes.
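If it helps anyone, the fix is just a dependency bump (a sketch, assuming an npm-managed project):
npm install socket.io@1.4.5 --save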