PostgreSQL server closed the connection unexpectedly - node.js

This is one of the most unusual problems I've ever encountered.
My PostgreSQL database is installed on a Windows server and listening on all IP addresses:
listen_addresses = '*'
I can connect and run queries without any issues from various client machines, both Linux and Windows based.
The issue appears only on one particular Linux client, which for some reason fails to execute queries whose responses are a little bit "heavier", if I can put it that way.
I'll try to elaborate by example.
That machine has the psql client, and the users table in my PostgreSQL database on the remote Windows server has about 20 records, so when I run this query:
select "firstName", "createdAt", "updatedAt", username from users limit 13;
I get results normally:
  firstName  |           createdAt           |           updatedAt           |  username
-------------+-------------------------------+-------------------------------+-------------
 User 1      | 2017-01-26 12:48:52.995+01    | 2017-01-26 12:48:52.995+01    | user1
 User 2      | 2019-08-24 10:29:16.16329+02  | 2019-08-24 10:29:16.16329+02  | user2
 User 3      | 2018-10-05 11:45:14.127813+02 | 2018-10-05 11:45:14.127813+02 | user3
 User 4      | 2017-09-27 18:53:56.535867+02 | 2017-09-27 18:53:56.535867+02 | user4
 User 5      | 2017-03-28 11:46:27.03684+02  | 2017-03-28 11:46:27.03684+02  | user5
 User 6      | 2017-03-28 11:46:40.840481+02 | 2017-03-28 11:46:40.840481+02 | user6
 User 7      | 2018-05-22 12:43:08.397247+02 | 2018-05-22 12:43:08.397247+02 | user7
 User 8      | 2017-03-28 11:46:36.24854+02  | 2017-03-28 11:46:36.24854+02  | user8
 User 9      | 2022-04-30 14:04:02.24541+02  | 2022-04-30 14:04:02.24541+02  | user9
 User 10     | 2022-04-30 14:04:02.24541+02  | 2022-04-30 14:04:02.24541+02  | user10
 User 11     | 2022-04-30 14:04:02.24541+02  | 2022-04-30 14:04:02.24541+02  | user11
 User 12     | 2022-04-30 14:04:02.24541+02  | 2022-04-30 14:04:02.24541+02  | user12
 User 13     | 2022-04-30 14:04:02.24541+02  | 2022-04-30 14:04:02.24541+02  | user13
(13 rows)
Any query with a limit of up to 13 returns data without issues.
But as soon as one more row is added to the result (limit 14 in the query), I get this:
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
Querying other tables gives the same result: lower limits return data successfully, but as soon as I increase the limit and the response gets heavier, the query fails.
Looking into the PostgreSQL logs on my server, I see this:
CEST FATAL: connection to client lost
CEST LOG: could not receive data from client: An existing connection was forcibly closed by the remote host.
Running the same queries in my Node.js app with npm pg#8.0.3 (or any other version), I get the same behavior: success when there is less data in the response, and this error when it fails fetching more rows:
Error: read ECONNRESET
    at TCP.onStreamRead (internal/stream_base_commons.js:209:20) {
  errno: -104,
  code: 'ECONNRESET',
  syscall: 'read'
}
I also took some Wireshark pcap dumps on the client machine while running these queries, and noticed that when the error occurs, the Wireshark log looks like this:
3301 2.220496557 25.67.20.168 25.20.186.130 TCP 68 [TCP Dup ACK 2839#1] 45208 → 5432 [ACK] Seq=27 Ack=1 Win=64542 Len=0 SLE=2729 SRE=3143
I don't know much about Wireshark or network issues, but it looks like some kind of duplicate acknowledgement ("TCP Dup ACK") problem.
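For reference, a capture like this can be taken with something along these lines (a sketch, assuming the default port 5432 and capturing on all interfaces):
# capture the PostgreSQL session on the client for later analysis in Wireshark
sudo tcpdump -i any -w pg.pcap 'tcp port 5432'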
All of this is even weirder because I get this problem on only one Linux (Ubuntu) client; the other clients, about 10 of them, a mix of Windows and Ubuntu, all work fine.
It is most likely some network issue, I guess.
I'd appreciate any clue on this.

If both client and server think that the other end hung up on them, it is clearly a network problem.
You don't tell us how long these queries take, but it is possible that you hit a timeout in some in-between network component that decides that this seemingly idle connection should be terminated (there are people who don't know that there are protocols other than HTTP). You can prevent that by setting tcp_keepalives_idle on the server.
It might well be a different problem, but it is certainly a network problem.
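For illustration, a minimal sketch of what that could look like in postgresql.conf on the server; the numbers are assumptions, so pick values well below whatever timeout the network path might enforce:
# keepalive sketch; values are examples, not a prescription
tcp_keepalives_idle = 60        # seconds of idle time before the first keepalive probe
tcp_keepalives_interval = 10    # seconds between unanswered probes
Both parameters can also be set per session (e.g. SET tcp_keepalives_idle = 60;) to test whether keepalives make the problem go away before touching the server configuration.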

Related

Why does mq_open create a queue node with different permissions than specified? [duplicate]

This question already has answers here:
mq_open() - EACCES, Permission denied
(1 answer)
Does umask affect message queues?
(1 answer)
Closed 1 year ago.
I am on an i.MX chip running a Yocto Linux distro.
In one of my apps, I am calling mq_open with flags O_CREAT | O_RDWR and mode as
S_IROTH | S_IWOTH | S_IRGRP | S_IWGRP | S_IRUSR | S_IWUSR
However, the actual device node ends up with these permissions:
root@NEW-Board:~# ls -l /dev/mqueue/my-ipc
-rw-r--r-- 1 root root 80 Nov 22 12:34 my-ipc
I would have expected the permissions to be -rw-rw-rw-. What's happening here?
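For what it's worth, the duplicates above point at the usual culprit: the mode passed to mq_open is ANDed with the complement of the process umask, and the common default umask of 022 strips the group/other write bits, which yields exactly the -rw-r--r-- seen here. A minimal sketch of the workaround, using the /my-ipc name from the question (link with -lrt on older glibc):
#include <fcntl.h>     /* O_CREAT, O_RDWR */
#include <sys/stat.h>  /* mode constants, umask() */
#include <mqueue.h>    /* mq_open(), mqd_t */

int main(void)
{
    /* Clear the umask so the mode below is applied verbatim; with the
       default umask of 022 the group/other write bits would be masked
       off, producing the observed -rw-r--r--. */
    mode_t old_mask = umask(0);

    mqd_t q = mq_open("/my-ipc", O_CREAT | O_RDWR,
                      S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP |
                      S_IROTH | S_IWOTH,   /* rw-rw-rw- */
                      NULL);               /* default queue attributes */

    umask(old_mask);   /* restore the original umask */
    return q == (mqd_t)-1;
}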

Operating Systems used by AWS RDS

What operating systems does Amazon RDS use? While I understand that when using RDS we are only exposed to an endpoint, and internally the database might be backed by multiple systems, I would like to know what OS those systems run.
To check the underlying operating system of your MySQL DB instance on AWS RDS, you can use the following command:
mysql> SHOW variables LIKE '%version%';
Result:
+-------------------------+------------------------------+
| Variable_name           | Value                        |
+-------------------------+------------------------------+
| innodb_version          | 5.6.39                       |
| protocol_version        | 10                           |
| slave_type_conversions  |                              |
| version                 | 5.6.39-log                   |
| version_comment         | MySQL Community Server (GPL) |
| version_compile_machine | x86_64                       |
| version_compile_os      | Linux                        |
+-------------------------+------------------------------+
7 rows in set (0.01 sec)
Those systems are running the Amazon Linux distribution.

Set lower metric on wlan

I've been looking for a solution to this for a while; I hope you can help me.
I have a network at home like this:
                 +----------+
                 | INTERNET |
                 +----+-----+
                      |
                 +----+-----+
                 |  CABLE   |
                 |  MODEM   |
                 +----+-----+
                      |
          +-----------+-----------+
          |                       |
    +-----v-----+           +-----v-----+
    |  D-LINK   |           |  D-LINK   |
    |  DIR-600  |           |  DI-524   |
    +-----+-----+           +-----+-----+
          |                       |
    +-----+-----------+           |
    |  Windows 7      |           |
    |  192.168.2.XXX  |           |
    |                 |           |
    |  +------------+ |           |
    |  | Ubuntu     | |           |
    |  | Virtualbox |<------------+  (public IP via WLAN1)
    |  +------------+ |
    |  192.168.2.YYY  |
    |  (ETH0)         |
    +-----------------+
One Cable Modem with a Router (Dir-600) for local IPs, and an Access Point (DI-524) for public IPs.
On the local network I have a computer with Windows 7 and VirtualBox, and in VirtualBox I have an Ubuntu 14.04 server. This server has an internet connection on ETH0 with a bridged adapter, so it has a local IP like 192.168.2.XXX.
I have also given the virtual Ubuntu server a WLAN adapter with direct access, connected to the DI-524 network with a public IP.
So, the ubuntu server has 2 interfaces:
ETH0, connected to the local network with IP 192.168.2.XXX
WLAN1, connected to the DI-524 with a public IP
What I want is:
Give WLAN1 the highest priority for internet access; only if there is no WLAN connection should the virtual machine access the internet through ETH0.
I know it can be done by changing metrics, but I don't know how; I've tried many commands but nothing seems to work.
Can anybody help me?
Thanks in advance!
Yes, you can do that with the ifmetric package. Install it on Ubuntu, then assign a metric to each interface, for example 10 to wlan1 and 20 to eth0. The metric denotes priority: the lower the number, the higher the priority, with 0 (the default) being the highest.
But please check your current metrics first via:
route -n
Then you can delete the old default route via something like this command:
sudo route del -net default gw 192.168.2.XX netmask 0.0.0.0 dev wlan0 metric 0
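Putting that together with the interface names from the question, a sketch could look like this (untested; the metric values are arbitrary examples):
# lower metric = higher priority
sudo apt-get install ifmetric
sudo ifmetric wlan1 10    # preferred default route
sudo ifmetric eth0 20     # fallback when the WLAN is down
route -n                  # verify the metrics on both default routes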

Running cudaHashcat-1.33 on AWS g2.2xlarge - Error cuModuleLoad() 209 when trying cudaExample0.sh

As the title says, I have installed cudaHashcat-1.33 on an AWS g2.2xlarge instance.
I used the .run file to install the CUDA Toolkit and then ran the deviceQuery test, as explained in the official documentation (http://docs.nvidia.com/cuda/cuda-getting-started-guide-for-linux/index.html#running-binaries).
Then I installed cudaHashcat-1.33, following these instructions:
sudo apt-get install p7zip-full
wget http://hashcat.net/files/cudaHashcat-1.33.7z
7za x cudaHashcat-1.33.7z
cd cudaHashcat-1.33
Then I tried to run ~/cudaHashcat-1.33/cudaExample0.sh, and I end up getting this output:
cudaHashcat v1.33 starting...
Device #1: GRID K520, 4095MB, 797Mhz, 8MCU
Device #1: WARNING! Kernel exec timeout is not disabled, it might cause you errors of code 702
Hashes: 6494 hashes; 6494 unique digests, 1 unique salts
Bitmaps: 16 bits, 65536 entries, 0x0000ffff mask, 262144 bytes
Applicable Optimizers:
* Zero-Byte
* Precompute-Init
* Precompute-Merkle-Demgard
* Meet-In-The-Middle
* Early-Skip
* Not-Salted
* Not-Iterated
* Single-Salt
* Scalar-Mode
* Raw-Hash
Watchdog: Temperature abort trigger set to 90c
Watchdog: Temperature retain trigger set to 80c
ERROR: cuModuleLoad() 209
A second example is this one, where I actually use the file I want to attack:
ubuntu@ip-172-31-58-154:~$ ~/maskprocessor/src/mp64.bin ?l?l?l?l?l?l?l?l | ~/cudaHashcat-1.33/cudaHashcat64.bin -m 2500 xxx.hccap
cudaHashcat v1.33 starting...
Device #1: GRID K520, 4095MB, 797Mhz, 8MCU
Device #1: WARNING! Kernel exec timeout is not disabled, it might cause you errors of code 702
Hashes: 1 hashes; 1 unique digests, 1 unique salts
Bitmaps: 8 bits, 256 entries, 0x000000ff mask, 1024 bytes
Rules: 1
Applicable Optimizers:
* Zero-Byte
* Single-Hash
* Single-Salt
Watchdog: Temperature abort trigger set to 90c
Watchdog: Temperature retain trigger set to 80c
ERROR: cuModuleLoad() 209
nvidia-smi
[root@ip-xxx-xxx-xxx-xxx cudaHashcat-1.33]$ nvidia-smi
Wed Mar  4 19:07:35 2015
+------------------------------------------------------+
| NVIDIA-SMI 340.32     Driver Version: 340.32         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GRID K520           On   | 0000:00:03.0     Off |                  N/A |
| N/A   43C    P8    17W / 125W |     10MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Compute processes:                                               GPU Memory |
|  GPU       PID  Process name                                     Usage      |
|=============================================================================|
|  No running compute processes found                                         |
+-----------------------------------------------------------------------------+
If someone knows what is going on, I'd appreciate any help.
So after a lot of searching through forums I finally found an answer. @Robert Crovella, thanks for pointing out that the driver was the wrong one. It turns out that finding the Linux drivers for NVIDIA is not that easy, but I came across a page that led me to NVIDIA's Linux drivers. Just download the driver required for your architecture (if you use wget, click 'Download' first, since there is an acceptance page). After that, run 'chmod +x nvidia-driver.run' and install it with 'sudo ./nvidia-driver.run'.
Hope that my experience helps someone else.
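In shell form, the sequence is roughly this (the .run file name is a placeholder for whichever driver version you download for your architecture):
# placeholder file name; substitute the driver you actually downloaded
chmod +x NVIDIA-Linux-x86_64-<version>.run
sudo ./NVIDIA-Linux-x86_64-<version>.run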

Large amount of http connections from self

I have a relatively high-traffic Linux/Apache web server running WordPress (oh, the headaches). I think our developer configured the memcached settings incorrectly, because when I run this command to look at all incoming httpd connections:
sudo netstat -anp |grep 'tcp\|udp' | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n
I get:
      1 68.106.x.x
      1 74.125.x.x
      1 74.125.x.x
      1 74.125.x.x
      1 74.125.x.x
     15 0.0.0.0
     70 173.0.x.x
    194 127.0.0.1
...I see that I have 194 connections from 127.0.0.1 and VERY few from actual public IPs. Looking at netstat further, I can see those are going to port 11211 (memcached). Even if I restart httpd, it only takes a few seconds for the open memcached connections from 127.0.0.1 to skyrocket again, and almost immediately we are pushing our max httpd process limit (currently MaxClients = 105).
Here are the details for those connections:
tcp 0 0 127.0.0.1:26210 127.0.0.1:11211 ESTABLISHED -
cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS=""
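To narrow down which local processes own those memcached connections, they can be grouped by PID with something like this (a sketch; netstat needs root to fill in the PID/program column):
# count established connections to 11211, grouped by owning process
sudo netstat -anp | awk '$5 ~ /:11211$/ && $6 == "ESTABLISHED" {print $7}' | sort | uniq -c | sort -n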
