Operating Systems used by AWS RDS

What operating systems does Amazon RDS use? While I understand that when using RDS we are just exposed to an endpoint, and internally the database we use might be supported by multiple systems, I would like to know what OS those systems run.

To check the underlying operating system of your MySQL DB instance on AWS RDS, you can use the following command:
mysql> SHOW variables LIKE '%version%';
Result:
+-------------------------+------------------------------+
| Variable_name           | Value                        |
+-------------------------+------------------------------+
| innodb_version          | 5.6.39                       |
| protocol_version        | 10                           |
| slave_type_conversions  |                              |
| version                 | 5.6.39-log                   |
| version_comment         | MySQL Community Server (GPL) |
| version_compile_machine | x86_64                       |
| version_compile_os      | Linux                        |
+-------------------------+------------------------------+
7 rows in set (0.01 sec)

Those systems are running the Amazon Linux distribution.
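The same check can be scripted from a shell. A minimal sketch, assuming you can reach the instance endpoint (the hostname and user below are placeholders):

# Placeholders: substitute your own RDS endpoint and account.
mysql -h mydb.xxxxxx.us-east-1.rds.amazonaws.com -u admin -p \
  -e "SHOW VARIABLES LIKE 'version_compile_os';"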

Related

Postgresql server closed the connection unexpectedly

I'm having one of the most unusual problems I've ever encountered.
My PostgreSQL database is installed on a Windows server and listening on all IP addresses:
listen_addresses = '*'
I can access it and send queries without any issues from various client devices, whether Linux- or Windows-based.
I'm having an issue with only one particular Linux client, which for some reason fails to execute queries when the query response is a little bit "heavier", if I can put it that way.
I'll try to elaborate by example.
I have a psql client on that machine, and a users table with about 20 records in my PostgreSQL database on the remote Windows server, so when I run this query:
select "firstName", "createdAt", "updatedAt", username from users limit 13;
I get results normally:
  firstName  |           createdAt           |           updatedAt           |  username
-------------+-------------------------------+-------------------------------+-------------
 User 1      | 2017-01-26 12:48:52.995+01    | 2017-01-26 12:48:52.995+01    | user1
 User 2      | 2019-08-24 10:29:16.16329+02  | 2019-08-24 10:29:16.16329+02  | user2
 User 3      | 2018-10-05 11:45:14.127813+02 | 2018-10-05 11:45:14.127813+02 | user3
 User 4      | 2017-09-27 18:53:56.535867+02 | 2017-09-27 18:53:56.535867+02 | user4
 User 5      | 2017-03-28 11:46:27.03684+02  | 2017-03-28 11:46:27.03684+02  | user5
 User 6      | 2017-03-28 11:46:40.840481+02 | 2017-03-28 11:46:40.840481+02 | user6
 User 7      | 2018-05-22 12:43:08.397247+02 | 2018-05-22 12:43:08.397247+02 | user7
 User 8      | 2017-03-28 11:46:36.24854+02  | 2017-03-28 11:46:36.24854+02  | user8
 User 9      | 2022-04-30 14:04:02.24541+02  | 2022-04-30 14:04:02.24541+02  | user9
 User 10     | 2022-04-30 14:04:02.24541+02  | 2022-04-30 14:04:02.24541+02  | user10
 User 11     | 2022-04-30 14:04:02.24541+02  | 2022-04-30 14:04:02.24541+02  | user11
 User 12     | 2022-04-30 14:04:02.24541+02  | 2022-04-30 14:04:02.24541+02  | user12
 User 13     | 2022-04-30 14:04:02.24541+02  | 2022-04-30 14:04:02.24541+02  | user13
(13 rows)
Any query with a limit up to 13 returns data without issues.
But immediately after adding one more row to the results (limit 14 in the query), I get this:
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
The same happens when I query other tables: lower limits return data successfully, but when I increase the limit in the query and the response gets bigger, it fails.
Looking into the PostgreSQL logs on my server, I see this:
CEST FATAL: connection to client lost
CEST LOG: could not receive data from client: An existing connection was forcibly closed by the remote host.
Running the same queries in my Node app using npm pg#8.0.3 (or any other version), I get the same issue: it succeeds with less data in the response, and fails with this error when fetching more rows:
Error: read ECONNRESET
    at TCP.onStreamRead (internal/stream_base_commons.js:209:20) {
  errno: -104,
  code: 'ECONNRESET',
  syscall: 'read'
}
I also made some Wireshark pcap dumps on the client machine while running these queries, and noticed that when I get an error the Wireshark log looks like this:
3301 2.220496557 25.67.20.168 25.20.186.130 TCP 68 [TCP Dup ACK 2839#1] 45208 → 5432 [ACK] Seq=27 Ack=1 Win=64542 Len=0 SLE=2729 SRE=3143
I don't know much about Wireshark and network issues, but it looks like some kind of duplicate acknowledgement ("TCP Dup ACK") issue.
All of this is even weirder because I get this problem on only one Linux (Ubuntu) client; the other clients work fine without any issues, and there are about 10 of them, a Windows/Linux (Ubuntu) mix.
I guess it is most likely some network issue.
I'd appreciate any clue on this.
If both client and server think that the other end hung up on them, it is clearly a network problem.
You don't tell us how long these queries take, but it is possible that you are hitting a timeout in some in-between network component that decides that this seemingly idle connection should be terminated (there are people who don't know that there are protocols other than HTTP). You can prevent that by setting tcp_keepalives_idle on the server. Here is more about that topic.
It might well be a different problem, but it is certainly a network problem.
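If you want to try the keepalive route, here is a minimal sketch (these are standard PostgreSQL settings; the values are arbitrary examples, and tcp_keepalives_count is not configurable on Windows):

# Values are examples; run against the server and tune to taste.
psql -U postgres -c "ALTER SYSTEM SET tcp_keepalives_idle = 60;"
psql -U postgres -c "ALTER SYSTEM SET tcp_keepalives_interval = 10;"
psql -U postgres -c "SELECT pg_reload_conf();"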

Nvidia-smi doesn't show GPU Memory Usage and full path for Process Names [closed]

I ran the command nvidia-smi on my Windows 10 PC.
Why does it display GPU Memory Usage as "N/A"?
How do I access the full path for each active process name? (Right now it only shows part of the path.)
Are there any alternative ways to access such information other than nvidia-smi?
C:\Users\ks>nvidia-smi
Sun Nov 29 09:04:35 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 445.87       Driver Version: 445.87       CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1650   WDDM  | 00000000:08:00.0  On |                  N/A |
| 50%   31C    P8     8W /  75W |    506MiB /  4096MiB |      1%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU       PID   Type   Process name                             GPU Memory |
|                                                                  Usage      |
|=============================================================================|
|    0      1164    C+G   Insufficient Permissions                 N/A        |
|    0      2140    C+G   ...8bbwe\Microsoft.Notes.exe             N/A        |
|    0      3188    C+G   C:\Windows\explorer.exe                  N/A        |
|    0      4492    C+G   ...me\Application\chrome.exe             N/A        |
|    0      6156    C+G   ...artMenuExperienceHost.exe             N/A        |
|    0      7844    C+G   ...y\ShellExperienceHost.exe             N/A        |
|    0     10156    C+G   ...b3d8bbwe\WinStore.App.exe             N/A        |
|    0     11340    C+G   ...lPanel\SystemSettings.exe             N/A        |
|    0     12932    C+G   ...es.TextInput.InputApp.exe             N/A        |
+-----------------------------------------------------------------------------+
Why does it display GPU Memory Usage as "N/A"?
As talonmies answered, on WDDM systems the NVIDIA driver doesn't manage GPU memory; the WDDM subsystem does.
You can check this by running the command nvidia-smi --help-query-compute-apps; it shows the reason under "used_gpu_memory" or "used_memory".
Mine says "Not available on Windows when running in WDDM mode because Windows KMD manages all the memory not NVIDIA driver".
How do I access the full path for each active process name? (Right now it only shows part of the path.)
You can access the full paths by running the command nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv.
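For reference, both commands from this answer can be run as-is (output shape varies by system and driver):

# Show why a field is unavailable; look under the used_gpu_memory entry.
nvidia-smi --help-query-compute-apps

# List compute apps with PID, full process path, and memory usage as CSV.
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv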

Set lower metric on wlan

I've been looking for a solution to this for a while; I hope you can help me.
I have a network at home like this.
               +----------+
               | INTERNET |
               +-----+----+
                     |
               +-----+----+
               |  CABLE   |
               |  MODEM   |
               +-----+----+
                     |
           +---------+---------+
           |                   |
     +-----v-----+       +-----v-----+
     |  D-LINK   |       |  D-LINK   |
     |  DIR-600  |       |  DI-524   |
     +-----+-----+       +-----+-----+
           |                   |
     +-----+------+            |
     | Windows7   | <-- 192.168.2.XXX
     |            |            |
     | +--------+ |            |
     | | Ubuntu | | <-- 192.168.2.YYY
     | | (VBox) <--- Public IP +
     | +--------+ |
     +------------+
One cable modem, with a router (DIR-600) for local IPs and an access point (DI-524) for public IPs.
On the local network I have a computer with Windows 7 and VirtualBox. In VirtualBox I have an Ubuntu 14.04 server. This server has an internet connection on eth0 with a bridged adapter, so it has a local IP like 192.168.2.XXX.
In VirtualBox I have also set up a WLAN adapter with direct access to the virtual Ubuntu server, connected to the DI-524 network with a public IP.
So the Ubuntu server has two interfaces:
eth0, connected to the local network with IP 192.168.2.XXX
wlan1, connected to the DI-524 with a public IP.
What I want is:
Give wlan1 the highest priority for internet access, and only if there is no WLAN connection should the virtual machine access the internet through eth0.
I know it can be done by changing metrics, but I don't know how; I've tried many commands but nothing seems to work.
Can anybody help me?
Thanks in advance!
Yes, you can do that via the ifmetric package. Install it on Ubuntu, then set, for example, metric 10 for wlan1 and metric 20 for eth0. The lowest number has the highest priority (the default is 0; the metric denotes priority), as shown in the sketch below.
But please check your current metrics first via
route -n
Then you can delete the old default route via something like this command:
sudo route del -net default gw 192.168.2.XX netmask 0.0.0.0 dev wlan1 metric 0
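Putting it together, a minimal sketch (interface names taken from the question; the metric values 10 and 20 are arbitrary examples):

# Install ifmetric on Ubuntu.
sudo apt-get install ifmetric

# Lower metric = higher priority: prefer wlan1, fall back to eth0.
sudo ifmetric wlan1 10
sudo ifmetric eth0 20

# Verify the resulting metrics in the routing table.
route -n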

Running cudaHashcat-1.33 on AWS g2.2xlarge - Error cuModuleLoad() 209 when trying cudaExample0.sh

As it says in the description, I have installed cudaHashcat-1.33 on an AWS g2.2xlarge instance.
I used the .run file to install the CUDA Toolkit and then ran the deviceQuery test, as explained in the official documentation (http://docs.nvidia.com/cuda/cuda-getting-started-guide-for-linux/index.html#running-binaries).
Then I installed cudaHashcat-1.33, following these instructions:
sudo apt-get install p7zip-full
wget http://hashcat.net/files/cudaHashcat-1.33.7z
7za x cudaHashcat-1.33.7z
cd cudaHashcat-1.33
Then I tried to run cudaExample0.sh in ~/cudaHashcat-1.33, and I end up getting this output:
cudaHashcat v1.33 starting...
Device #1: GRID K520, 4095MB, 797Mhz, 8MCU
Device #1: WARNING! Kernel exec timeout is not disabled, it might cause you errors of code 702
Hashes: 6494 hashes; 6494 unique digests, 1 unique salts
Bitmaps: 16 bits, 65536 entries, 0x0000ffff mask, 262144 bytes
Applicable Optimizers:
* Zero-Byte
* Precompute-Init
* Precompute-Merkle-Demgard
* Meet-In-The-Middle
* Early-Skip
* Not-Salted
* Not-Iterated
* Single-Salt
* Scalar-Mode
* Raw-Hash
Watchdog: Temperature abort trigger set to 90c
Watchdog: Temperature retain trigger set to 80c
ERROR: cuModuleLoad() 209
A second example is this one, where I actually use the file I want to attack.
ubuntu@ip-172-31-58-154:~$ ~/maskprocessor/src/mp64.bin ?l?l?l?l?l?l?l?l | ~/cudaHashcat-1.33/cudaHashcat64.bin -m 2500 xxx.hccap
cudaHashcat v1.33 starting...
Device #1: GRID K520, 4095MB, 797Mhz, 8MCU
Device #1: WARNING! Kernel exec timeout is not disabled, it might cause you errors of code 702
Hashes: 1 hashes; 1 unique digests, 1 unique salts
Bitmaps: 8 bits, 256 entries, 0x000000ff mask, 1024 bytes
Rules: 1
Applicable Optimizers:
* Zero-Byte
* Single-Hash
* Single-Salt
Watchdog: Temperature abort trigger set to 90c
Watchdog: Temperature retain trigger set to 80c
ERROR: cuModuleLoad() 209
nvidia-smi
[root@ip-xxx-xxx-xxx-xxx cudaHashcat-1.33]$ nvidia-smi
Wed Mar  4 19:07:35 2015
+------------------------------------------------------+
| NVIDIA-SMI 340.32     Driver Version: 340.32         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GRID K520           On   | 0000:00:03.0     Off |                  N/A |
| N/A   43C    P8    17W / 125W |     10MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Compute processes:                                               GPU Memory |
|  GPU       PID  Process name                                     Usage      |
|=============================================================================|
|  No running compute processes found                                         |
+-----------------------------------------------------------------------------+
If someone knows what is going on, I'd appreciate any help.
So after a lot of searching through forums I finally found an answer. @Robert Crovella, thanks for pointing out that the driver was the wrong one. It turns out that finding the Linux drivers for NVIDIA is not that easy, but I came across this page, which then led me to NVIDIA's Linux drivers. Just download the driver required for your architecture (if you use wget, click 'Download' first, since there is an acceptance page). After that, run chmod +x nvidia-driver.run and then install it with sudo ./nvidia-driver.run.
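Those last two steps as shell commands (nvidia-driver.run is a placeholder for whatever driver file you actually downloaded):

# "nvidia-driver.run" stands in for the downloaded driver file.
chmod +x nvidia-driver.run
sudo ./nvidia-driver.run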
Hope that my experience helps someone else.

What is at physical memory 0x8000 (32KB) to 0x100000 (1MB) on Linux

I'm compiling the kernel with a custom kernel module that prints out the kernel code's start and end (physical) addresses. It starts at 0x8000 and ends at 0xefe6d8. Looking through the generated System.map, I see that almost all functions in the kernel sit at 0x100000 (1MB) in physical memory and onwards. But the code starts at 0x8000. I cannot figure out what lives between those two addresses. Can anyone shed some light on this?
Snippet from System.map (virtual mapping starts on 0xc0000000):
c0008000 T _text
c0008000 T stext
c000804c t __create_page_tables
c000814c t __turn_mmu_on_loc
c0008158 t __vet_atags
c0100000 T __exception_text_start
The presence of the __create_page_tables function suggests that the page tables live after the __vet_atags code. But why would they be part of executable memory?
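For reference, the symbols in question can be pulled straight out of System.map with a one-liner (a sketch, assuming System.map sits in the build tree you are inspecting):

# Show the start of the kernel text and the first symbol at the 1MB mark.
grep -E ' (_text|stext|__exception_text_start)$' System.map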
From the kernel boot protocol, the kernel memory layout is as follows:
        ~                        ~
        |  Protected-mode kernel |
100000  +------------------------+
        |  I/O memory hole       |
0A0000  +------------------------+
        |  Reserved for BIOS     |  Leave as much as possible unused
        ~                        ~
        |  Command line          |  (Can also be below the X+10000 mark)
X+10000 +------------------------+
        |  Stack/heap            |  For use by the kernel real-mode code.
X+08000 +------------------------+
        |  Kernel setup          |  The kernel real-mode code.
        |  Kernel boot sector    |  The kernel legacy boot sector.
X       +------------------------+
        |  Boot loader           |  <- Boot sector entry point 0000:7C00
001000  +------------------------+
        |  Reserved for MBR/BIOS |
000800  +------------------------+
        |  Typically used by MBR |
000600  +------------------------+
        |  BIOS use only         |
000000  +------------------------+
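As a quick cross-check (my own addition, not part of the boot-protocol quote above), the live physical layout, including where the kernel code actually sits, can be read from /proc/iomem:

# Root is needed to see real addresses; unprivileged reads show zeros.
sudo grep -A2 'Kernel code' /proc/iomem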
