Is Cassandra 4.1 not compatible with Ubuntu 20.04?

I can't connect to my node/cluster; my nodetool status is currently refused. I am using Cassandra 4.1 and it's not working. I tried editing cassandra.yaml to use 127.0.0.1 for localhost, and also edited cassandra-env.sh to replace the public hostname with localhost, but that's not working either. So I decided to downgrade to 4.0.7, and it works perfectly with nothing changed in the parameters of cassandra.yaml or cassandra-env.sh.
Tools
Cassandra 4.1
Operating system: Ubuntu 20.04
Java version:
openjdk version "11.0.17" 2022-10-18
OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu220.04)
OpenJDK 64-Bit Server VM (build 11.0.17+8-post-Ubuntu-1ubuntu220.04, mixed mode)
Here is the error from my nodetool status:
root@myserver:/etc/cassandra# nodetool status
nodetool: Failed to connect to '127.0.0.1:7199' - ConnectException: 'Connection refused (Connection refused)'.
cqlsh is not working either; it only shows this:
root@myserver:/etc/cassandra# cqlsh 161.97.96.126 9042
Connection error: ('Unable to connect to any servers', {'161.97.96.126:9042': ConnectionRefusedError(111, "Tried connecting to [('161.97.96.126', 9042)]. Last error: Connection refused")})
I was desperate, but tried installing another version, i.e. downgrading from 4.1 to 4.0.7 (I purged and removed all my Cassandra 4.1 files and installed 4.0.7 from the beginning). Then, voilà: with nothing changed in the parameters of cassandra.yaml or cassandra-env.sh, it works perfectly with my current tools above.
Is Cassandra 4.1 still not compatible with Ubuntu 20.04?
Update 23-01-2023 22:10
Here is the output when I try installing Cassandra 4.1 again without editing anything, just a fresh install:
root@myvps:~# sudo service cassandra status
● cassandra.service - LSB: distributed storage system for structured data
Loaded: loaded (/etc/init.d/cassandra; generated)
Active: active (running) since Mon 2023-01-23 14:49:41 CET; 8s ago
Docs: man:systemd-sysv-generator(8)
Process: 1644739 ExecStart=/etc/init.d/cassandra start (code=exited, status=0/SUCCESS)
Tasks: 24 (limit: 9479)
Memory: 2.2G
CGroup: /system.slice/cassandra.service
└─1644848 /usr/bin/java -ea -da:net.openhft... -XX:+UseThreadPriorities -XX:+HeapDumpOnOutOfMemoryError -X>
Jan 23 14:49:41 myvps.contaboserver.net systemd[1]: Starting LSB: distributed storage system for structured data...
Jan 23 14:49:41 myvps.contaboserver.net systemd[1]: Started LSB: distributed storage system for structured data.
root@myvps:~# sudo service cassandra status
● cassandra.service - LSB: distributed storage system for structured data
Loaded: loaded (/etc/init.d/cassandra; generated)
Active: active (exited) since Mon 2023-01-23 14:49:41 CET; 31s ago
Docs: man:systemd-sysv-generator(8)
Process: 1644739 ExecStart=/etc/init.d/cassandra start (code=exited, status=0/SUCCESS)
Jan 23 14:49:41 myvps.contaboserver.net systemd[1]: Starting LSB: distributed storage system for structured data...
Jan 23 14:49:41 myvps.contaboserver.net systemd[1]: Started LSB: distributed storage system for structured data.
root@myvps:~# nodetool version
nodetool: Failed to connect to '127.0.0.1:7199' - ConnectException: 'Connection refused (Connection refused)'.
root@myvps:~# cqlsh
Connection error: ('Unable to connect to any servers', {'127.0.0.1:9042': ConnectionRefusedError(111, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused")})
Update 23-01-2023 22:16
I tried two things. First, netstat -tnlp shows this:
root@myvps:~# netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 537/redis-server 12
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 598/nginx: master p
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 447/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 534/sshd: /usr/sbin
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 578/postgres
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 598/nginx: master p
tcp6 0 0 ::1:6379 :::* LISTEN 537/redis-server 12
tcp6 0 0 :::80 :::* LISTEN 598/nginx: master p
tcp6 0 0 :::22 :::* LISTEN 534/sshd: /usr/sbin
tcp6 0 0 ::1:5432 :::* LISTEN 578/postgres
tcp6 0 0 :::443 :::* LISTEN 598/nginx: master p
Second, typing sudo lsof -nPi -sTCP:LISTEN shows this:
root@myvps:~# sudo lsof -nPi -sTCP:LISTEN
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
systemd-r 447 systemd-resolve 13u IPv4 18529 0t0 TCP 127.0.0.53:53 (LISTEN)
sshd 534 root 3u IPv4 18387 0t0 TCP *:22 (LISTEN)
sshd 534 root 4u IPv6 18389 0t0 TCP *:22 (LISTEN)
redis-ser 537 redis 6u IPv4 20184 0t0 TCP 127.0.0.1:6379 (LISTEN)
redis-ser 537 redis 7u IPv6 20185 0t0 TCP [::1]:6379 (LISTEN)
postgres 578 postgres 5u IPv6 20704 0t0 TCP [::1]:5432 (LISTEN)
postgres 578 postgres 6u IPv4 20705 0t0 TCP 127.0.0.1:5432 (LISTEN)
nginx 598 root 6u IPv6 18878 0t0 TCP *:443 (LISTEN)
nginx 598 root 7u IPv4 18879 0t0 TCP *:443 (LISTEN)
nginx 598 root 8u IPv4 18880 0t0 TCP *:80 (LISTEN)
nginx 598 root 9u IPv6 18881 0t0 TCP *:80 (LISTEN)
nginx 601 www-data 6u IPv6 18878 0t0 TCP *:443 (LISTEN)
nginx 601 www-data 7u IPv4 18879 0t0 TCP *:443 (LISTEN)
nginx 601 www-data 8u IPv4 18880 0t0 TCP *:80 (LISTEN)
nginx 601 www-data 9u IPv6 18881 0t0 TCP *:80 (LISTEN)
nginx 602 www-data 6u IPv6 18878 0t0 TCP *:443 (LISTEN)
nginx 602 www-data 7u IPv4 18879 0t0 TCP *:443 (LISTEN)
nginx 602 www-data 8u IPv4 18880 0t0 TCP *:80 (LISTEN)
nginx 602 www-data 9u IPv6 18881 0t0 TCP *:80 (LISTEN)
nginx 603 www-data 6u IPv6 18878 0t0 TCP *:443 (LISTEN)
nginx 603 www-data 7u IPv4 18879 0t0 TCP *:443 (LISTEN)
nginx 603 www-data 8u IPv4 18880 0t0 TCP *:80 (LISTEN)
nginx 603 www-data 9u IPv6 18881 0t0 TCP *:80 (LISTEN)
nginx 604 www-data 6u IPv6 18878 0t0 TCP *:443 (LISTEN)
nginx 604 www-data 7u IPv4 18879 0t0 TCP *:443 (LISTEN)
nginx 604 www-data 8u IPv4 18880 0t0 TCP *:80 (LISTEN)
nginx 604 www-data 9u IPv6 18881 0t0 TCP *:80 (LISTEN)

I can confirm that Cassandra 4.1 works on the latest versions of Ubuntu including 20.04 LTS and 22.04 LTS.
I didn't run into any issues installing or running Cassandra 4.1 out of the box. You didn't specify the steps to replicate the problem, so I'm assuming all you've done is perform a fresh installation of Cassandra 4.1. In any case, I followed the Installing Cassandra instructions documented on the official website and it just worked.
For what it's worth, I installed the same version of Java 11 as you:
openjdk version "11.0.17" 2022-10-18
Zero config change
After installing Cassandra 4.1 with NO configuration changes, I am able to run nodetool commands as expected:
$ nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 127.0.0.1 104.37 KiB 16 100.0% 0a7969a9-0d00-42f1-a574-87dfde5e3e7d rack1
$ nodetool version
ReleaseVersion: 4.1.0
I am also able to connect to the cluster with cqlsh:
$ cqlsh
Connected to Test Cluster at 127.0.0.1:9042
[cqlsh 6.1.0 | Cassandra 4.1.0 | CQL spec 3.4.6 | Native protocol v5]
Use HELP for help.
cqlsh>
Configure IP addresses
In an attempt to replicate what you did, I updated cassandra.yaml with the IP address of my test machine:
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.1.2.3:7000"
listen_address: 10.1.2.3
rpc_address: 10.1.2.3
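For reference, after editing cassandra.yaml the node has to be restarted before the new addresses take effect. A minimal sketch, assuming a package install managed as a service:
$ sudo service cassandra restart
$ tail -f /var/log/cassandra/system.log   # watch until the node finishes starting up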
After starting Cassandra, I am again able to run nodetool commands as expected:
$ nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 10.1.2.3 136 KiB 16 100.0% 0a7969a9-0d00-42f1-a574-87dfde5e3e7d rack1
I am also able to connect to the cluster with cqlsh:
$ cqlsh 10.1.2.3
Connected to Test Cluster at 10.1.2.3:9042
[cqlsh 6.1.0 | Cassandra 4.1.0 | CQL spec 3.4.6 | Native protocol v5]
Use HELP for help.
cqlsh>
Conclusion
The most likely reason you're not able to connect to your cluster is that Cassandra is not running on the node. You can easily verify it with Linux utilities like lsof and netstat:
$ sudo lsof -nPi -sTCP:LISTEN
$ netstat -tnlp
You will need to check the Cassandra system.log for clues as to why it is not running.
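For example, a quick way to pull recent errors out of the log; a sketch assuming the default package log location:
$ tail -n 100 /var/log/cassandra/system.log
$ grep -E 'ERROR|Exception' /var/log/cassandra/system.log | tail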
If you specified an IP address in listen_address, make sure that you also update the seeds list with the same IP address; otherwise Cassandra will shut down because it is unable to gossip with the seeds. Cheers!

Related

djbdns cache making a huge number of queries to DNS root servers

I know this could be a message in a bottle, but here goes.
I've set up an internal DNS using tinydns (let's call it vmDNS) that has been working for a while now. Yesterday there was very strange behavior that I cannot explain: a huge number of queries to the root DNS servers coming from my vmDNS, around 50 per second.
I may have missed something here, but why is my vmDNS asking the root servers for DNS resolution when I've configured the resolver to ask 8.8.8.8?
In the meantime, looking at the running processes and filtering on "dns", it appears I had hundreds of dnscache processes running. I killed those processes and so far it has come back to normal.
But I cannot understand why this happened.
Note: The vmDNS is not reachable from outside the network.
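For context, this is roughly how the forwarding is configured; a sketch assuming the standard daemontools layout under /etc/dnscache, so your paths may differ:
# tell dnscache to forward queries instead of recursing from the root servers
echo 1 > /etc/dnscache/env/FORWARDONLY
# replace the root-server list with the upstream resolver
echo 8.8.8.8 > /etc/dnscache/root/servers/@
# restart dnscache under daemontools
svc -t /etc/service/dnscache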
Here is some command output from while the strange behavior was happening.
ps -aux | grep dns
dnscache 14811 dnscache 189u IPv4 669392975 0t0 UDP vmDNS:8460->192.203.230.10:domain
dnscache 14811 dnscache 190u IPv4 669389015 0t0 UDP vmDNS:30047->192.5.5.241:domain
dnscache 14811 dnscache 191u IPv4 669392974 0t0 UDP vmDNS:34153->192.203.230.10:domain
dnscache 14811 dnscache 192u IPv4 669376088 0t0 UDP vmDNS:45196->128.9.0.107:domain
dnscache 14811 dnscache 202u IPv4 669393005 0t0 UDP vmDNS:37691->198.41.0.10:domain
dnscache 14811 dnscache 203u IPv4 669393015 0t0 UDP vmDNS:40394->192.112.36.4:domain
-------------------------------
dnscache 14811 dnscache 171u IPv4 669389086 0t0 UDP vmDNS:45861->198.32.64.12:domain
dnscache 14811 dnscache 172u IPv4 669389087 0t0 UDP vmDNS:57794->198.41.0.4:domain
dnscache 14811 dnscache 173u IPv4 669389088 0t0 UDP vmDNS:62378->128.8.10.90:domain

Vue npm run serve Failed to load resource: net::ERR_CONTENT_LENGTH_MISMATCH

I get:
Failed to load resource: net::ERR_CONTENT_LENGTH_MISMATCH chunk-vendors.js:1
in the Google Chrome console, and a blank page, when trying to load the Vue development page started via:
user@ubuntu:~# npm run serve
DONE Compiled successfully in 11909ms
App running at:
- Local: http://(my_public_ip):5008/
- Network: http://(my_public_ip):5008/
Note that the development build is not optimized.
To create a production build, run npm run build.
What I have tried up to now:
1.
sudo npm install -g n
sudo n 7.0
// also remember to update npm
sudo npm update -g npm
sudo npm cache clean --force
npm cache verify
sudo rm -rf /usr/local/bin/npm /usr/local/share/man/man1/node* ~/.npm
sudo rm -rf /usr/local/lib/node*
sudo rm -rf /usr/local/bin/node*
sudo rm -rf /usr/local/include/node*
sudo apt-get purge nodejs npm
sudo apt autoremove
sudo apt-get install npm nodejs
2.
Tried to load the page in incognito mode (with no cache).
Nothing worked for me.
Everything was working great a few weeks ago. No settings have been changed. Nothing global installed or removed on the server.
lsof result is:
root@ubuntu:~# sudo lsof -i -P -n | grep LISTEN
systemd-r 855 systemd-resolve 13u IPv4 22732 0t0 TCP 127.0.0.53:53 (LISTEN)
sshd 935 root 3u IPv4 24127 0t0 TCP *:22 (LISTEN)
sshd 935 root 4u IPv6 24129 0t0 TCP *:22 (LISTEN)
nginx 940 root 6u IPv4 25804 0t0 TCP *:5001 (LISTEN)
nginx 940 root 7u IPv4 25805 0t0 TCP *:80 (LISTEN)
nginx 940 root 8u IPv6 25806 0t0 TCP *:80 (LISTEN)
nginx 942 www-data 6u IPv4 25804 0t0 TCP *:5001 (LISTEN)
nginx 942 www-data 7u IPv4 25805 0t0 TCP *:80 (LISTEN)
nginx 942 www-data 8u IPv6 25806 0t0 TCP *:80 (LISTEN)
nginx 943 www-data 6u IPv4 25804 0t0 TCP *:5001 (LISTEN)
nginx 943 www-data 7u IPv4 25805 0t0 TCP *:80 (LISTEN)
nginx 943 www-data 8u IPv6 25806 0t0 TCP *:80 (LISTEN)
postgres 992 postgres 3u IPv4 24408 0t0 TCP 127.0.0.1:5432 (LISTEN)
node 25318 user 19u IPv4 153059 0t0 TCP (my_public_ip):5008 (LISTEN)
Any ideas?
It's related to a timeout error that can be solved using this setup in the file vue.config.js (the two timeouts below are in milliseconds, i.e. 10 minutes):
module.exports = {
  devServer: {
    proxy: {
      '/*': {
        target: 'http://localhost:8080',
        secure: false,
        prependPath: false,
        proxyTimeout: 1000 * 60 * 10,
        timeout: 1000 * 60 * 10
      }
    }
  }
}
It resolved itself. I came back to it after a day and it worked like it used to.
I have no idea what the reason was.

Connecting to a remote ScyllaDB server shows an error

I have installed ScyllaDB on Google Cloud servers.
Steps I have followed:
sudo yum install epel-release
sudo curl -o /etc/yum.repos.d/scylla.repo -L http://repositories.scylladb.com/scylla/repo/a2a0ba89d456770dfdc1cd70325e3291/centos/scylladb-2.0.repo
sudo yum install scylla
sudo scylla_setup
(Answered yes to "verify supportable version", "verify packages", "core dump", and "fstrim ssd"; answered no to the rest.)
In the file /etc/scylla.d/io.conf:
SEASTAR_IO="--max-io-requests=12 --num-io-queues=1"
(I edited this file manually.)
sudo systemctl start scylla-server
It showed an error that it cannot read the YAML file. I googled it and downgraded the yaml-cpp version from 0.5.3 to 0.5.1.
Then scylla-server started running.
[root@scylla ~]# nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 127.0.0.1 208.69 KB 256 ? 888e91da-9385-4c61-8417-dd59c1a979b8 rack1
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
[root@scylla ~]# cat /etc/scylla/scylla.yaml | grep seeds:
- seeds: "127.0.0.1"
[root@scylla ~]# cat /etc/scylla/scylla.yaml | grep rpc_address:
rpc_address: localhost
#broadcast_rpc_address:
[root@scylla ~]# cat /etc/scylla/scylla.yaml | grep listen_address:
listen_address: localhost
[root@scylla ~]# cqlsh
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.0.8 | CQL spec 3.3.1 | Native protocol v4]
Use HELP for help.
cqlsh> exit
[root@scylla ~]# netstat -tupln | grep LISTEN
tcp 0 0 127.0.0.1:10000 0.0.0.0:* LISTEN 6387/scylla
tcp 0 0 127.0.0.1:9042 0.0.0.0:* LISTEN 6387/scylla
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1105/sshd
tcp 0 0 127.0.0.1:7000 0.0.0.0:* LISTEN 6387/scylla
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1119/master
tcp 0 0 0.0.0.0:9180 0.0.0.0:* LISTEN 6387/scylla
tcp 0 0 127.0.0.1:9160 0.0.0.0:* LISTEN 6387/scylla
tcp6 0 0 :::80 :::* LISTEN 5217/httpd
tcp6 0 0 :::22 :::* LISTEN 1105/sshd
tcp6 0 0 :::35063 :::* LISTEN 6412/scylla-jmx
tcp6 0 0 ::1:25 :::* LISTEN 1119/master
tcp6 0 0 127.0.0.1:7199 :::* LISTEN 6412/scylla-jmx
scylla-server is running.
The same setup was done on another server, with server name scylla-db-1.
I need to connect to the server scylla (IP: xx.xx.xxx) from this server.
When I execute the below:
[root@scylla-db-1 ~]# cqlsh xx.xx.xxx
Connection error: ('Unable to connect to any servers', {'xx.xx.xxx': error(111, "Tried connecting to [('xx.xx.xxx', 9042)]. Last error: Connection refused")})
How to connect the remote server from this server?
Also, while checking http://xx.xx.xxx:10000 and http://xx.xx.xxx:10000/ui in the browser, I am getting a "problem loading page" error.
Note: I have edited the /etc/scylla.d/io.conf file to assign max-io-requests manually.
Port 10000 is the REST API for Scylla and is usually left bound to 127.0.0.1; that's why you cannot access it.
To gain access via CQL, you need to edit the /etc/scylla/scylla.yaml file and set the rpc_address.
Please follow the instructions for configuring Scylla for a cluster deployment: single DC, http://docs.scylladb.com/procedures/create_cluster/, or multi DC, http://docs.scylladb.com/procedures/create_cluster_multidc/.
You need to set rpc_address in scylla.yaml and, when connecting through cqlsh, use that rpc_address as well: cqlsh xx.xxx.xxx.xx, with user/pass if authentication is enabled.
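For illustration, the relevant scylla.yaml lines might look like this; a sketch assuming the node's private IP is 10.0.0.5 (a placeholder, substitute your own address):
# /etc/scylla/scylla.yaml
listen_address: 10.0.0.5
rpc_address: 10.0.0.5
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.0.0.5"
Then restart the service with sudo systemctl restart scylla-server and retry cqlsh 10.0.0.5 from the remote host.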

Cassandra on Linux: cannot connect from remote

I have installed Cassandra but am not able to connect to the Cassandra server from a remote IP.
[root@li1632-39 ~]# cassandra -v
3.0.9
I am connecting to public_ip:9042 but the connection is refused. When I try to validate with telnet, I can see the port is closed.
When I check the status of Cassandra, it's running.
[root@li1636-25 ~]# nodetool status
Datacenter: singapore
=====================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 192.168.130.104 297.48 KB 256 100.0% 85bebb4d-4ce9-4144-b33a-8e9759a87e54 rack5
UN 192.168.130.59 262.73 KB 256 100.0% f79f1c04-b567-4e15-98f0-5fd1a8345f61 rack5
My cassandra.yaml has:
listen_address: 192.168.130.59
rpc_address: 192.168.130.59
start_rpc: true
I have also tried this in cassandra.yaml:
listen_address: 0.0.0.0
rpc_address: 0.0.0.0
start_rpc: true
In this case I am getting the error below.
[root@li1636-25 ~]# nodetool status
nodetool: Failed to connect to '127.0.0.1:7199' - ConnectException: 'Connection refused (Connection refused)'.
Error from the remote host on telnet:
A-MacBook-Air:~ ads$ telnet public_ip 9042
Trying 172.104.52.39...
telnet: connect to address public_ip: Connection refused
telnet: Unable to connect to remote host
Below is the result of netstat:
netstat -tulpn | grep LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 3575/sshd
tcp 0 0 192.168.130.59:7000 0.0.0.0:* LISTEN 6777/java
tcp 0 0 127.0.0.1:7199 0.0.0.0:* LISTEN 6777/java
tcp 0 0 127.0.0.1:37857 0.0.0.0:* LISTEN 6777/java
tcp 0 0 192.168.130.59:9160 0.0.0.0:* LISTEN 6777/java
tcp6 0 0 192.168.130.59:9042 :::* LISTEN 6777/java
tcp6 0 0 :::22 :::* LISTEN 3575/sshd
I have also stopped firewalld.

Can memory leaks cause getaddrinfo EMFILE?

I'm having some trouble with my NodeJS API.
Sometimes it returns ConnectionError: getaddrinfo EMFILE and everything is broken after this.
So I started to investigate. I found it can be caused by having too many file descriptors open. We can apparently increase the number of open files that are allowed, but that would not definitively fix the problem.
I found in an article that we can increase the file descriptor settings and the ulimit. But what is the difference?
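For reference, these are the two settings I am comparing, the per-process limit versus the system-wide limit; a sketch, values will vary by machine:
# per-process soft limit on open file descriptors for the current shell
ulimit -n
# system-wide cap on open file handles across all processes
cat /proc/sys/fs/file-max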
Then, to try to isolate my problem, I ran the lsof -i -n -P | grep nodejs command. Indeed, the number of established connections keeps increasing, so I imagine I have connections somewhere in my code that are not being closed.
I have some fs.readFileSync and fs.readdirSync calls, etc… but I have not set autoClose: true. Do you think it could be that?
Do you have any ideas or advice?
PS: the app runs on an Ubuntu machine.
EDIT, 16-02-2016
I ran this command on my production machine: lsof -i -n -P | grep nodejs
What I see is something like this:
...
nodejs 27596 root 631u IPv4 109781565 0t0 TCP 127.0.0.1:45268->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 632u IPv4 109782317 0t0 TCP 172.31.58.93:4242->172.31.55.229:61616 (ESTABLISHED)
nodejs 27596 root 633u IPv4 109779882 0t0 TCP 127.0.0.1:45174->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 634u IPv4 109779884 0t0 TCP 127.0.0.1:45175->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 635u IPv4 109781569 0t0 TCP 127.0.0.1:45269->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 636u IPv4 109781571 0t0 TCP 127.0.0.1:45270->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 637u IPv4 109782319 0t0 TCP 127.0.0.1:45293->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 642u IPv4 109781790 0t0 TCP 127.0.0.1:45283->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 643u IPv4 109781794 0t0 TCP 127.0.0.1:45284->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 644u IPv4 109781796 0t0 TCP 127.0.0.1:45285->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 645u IPv4 109781798 0t0 TCP 172.31.58.93:4242->172.31.55.229:61602 (ESTABLISHED)
nodejs 27596 root 646u IPv4 109781800 0t0 TCP 127.0.0.1:45286->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 647u IPv4 109781802 0t0 TCP 172.31.58.93:4242->172.31.0.198:1527 (ESTABLISHED)
nodejs 27596 root 648u IPv4 109781804 0t0 TCP 127.0.0.1:45287->127.0.0.1:7272 (ESTABLISHED)
nodejs 27596 root 649u IPv4 109781806 0t0 TCP 127.0.0.1:45288->127.0.0.1:7272 (ESTABLISHED)
But I don't know what it means. Do you have any ideas about this?
Thanks a lot.
So I think I found out what is going on!
I use the Redis and Redis sessions npm modules for my cart storage, but each time I created or updated something, I was opening a new connection to Redis before using it.
var session = new Sessions({ port: conf.redisPort, host: conf.redisHost });
session.get({
    app:   rsapp,
    token: this.sessionToken
  },
  function(err, resp) {
    // here some workin'
  });
Now I just create the connection when my app starts, store it as a singleton, and use it whenever I want.
// At the start of the app
var NS = {
  sessions: new RedisSessions({ port: config.redisPort, host: config.redisHost })
};

// Later somewhere in the app
NS.sessions.get({
    app:   rsapp,
    token: this.sessionToken
  },
  function(err, resp) {
    // here some workin'
  });
It was pretty obvious once I found it… If it can help someone, I'll mark this one as solved.
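As a quick way to verify a fix like this, one can watch the process's open-descriptor count over time; a sketch assuming a single node process (adjust the pgrep pattern to your app):
# count open file descriptors of the oldest matching node process
ls /proc/"$(pgrep -of node)"/fd | wc -l
If the count keeps climbing under load, something is still leaking descriptors.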
