Linux amount of swap displayed by "free" is different from "smem" [closed] - linux

I am trying to analyze where the swap usage comes from, but looking at the smem output I get a completely different amount of swap usage.
free shows the following:
[root@server1 ~/smem-1.3]# free -k
total used free shared buffers cached
Mem: 24554040 24197360 356680 0 510200 14443128
-/+ buffers/cache: 9244032 15310008
Swap: 20980880 2473120 18507760
And smem shows:
PID User Command Swap USS PSS RSS
...
18829 oracle oracle_1 (LOCAL=NO) 0 3.9M 98.3M 10.1G
18813 oracle oracle_1 (LOCAL=NO) 0 3.9M 98.6M 10.1G
18809 oracle oracle_1 (LOCAL=NO) 0 4.1M 99.2M 10.0G
28657 oracle ora_lms0_1 56.0K 54.1M 100.3M 4.2G
29589 oracle ora_lms1_1 964.0K 69.7M 118.9M 4.5G
29886 oracle ora_dbw1_1 5.7M 20.8M 130.9M 10.2G
29857 oracle ora_dbw0_1 4.2M 22.6M 133.0M 10.3G
11075 ccm_user /usr/java/jre1.6/bin/java - 197.8M 133.9M 135.9M 140.7M
21688 bsuser /usr/local/java/bin/java -c 30.7M 145.1M 147.2M 152.1M
29930 oracle ora_lck0_1 2.3M 58.6M 169.8M 1.0G
29901 oracle ora_smon_1 0 78.0M 195.6M 4.3G
15604 oracle /var/oragrid/jdk/jre//bin/j 65.4M 253.9M 254.3M 262.2M
-------------------------------------------------------------------------------
359 10 678.8M 2.5G 13.5G 1.2T
Why does free show 2.4G of swap used while smem only shows 679M? One of them must be showing a wrong result.
I need to find out where the remaining 1.8G are, or prove that free is showing wrong results.
Last but not least, the kernel is 2.6.18.

The main issue is RSS (resident set size) versus PSS (proportional set size). As http://www.selenic.com/smem/ puts it, "PSS instead measures each application's 'fair share' of each shared area to give a realistic measure". RSS, on the other hand, overestimates by counting shared memory areas as if they belonged entirely to each application that maps them. That is why you see the difference: smem apportions shared memory among the applications that share it rather than charging the whole shared area to every application.
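If you want to cross-check where the unaccounted-for swap lives, a minimal sketch (assuming your kernel exposes a "Swap:" field in /proc/<pid>/smaps - old kernels such as 2.6.18 may not, which would itself explain why per-process tools undercount) is to sum the per-process values and compare them with the system-wide counters:
# Sum the per-process "Swap:" fields (in kB) across all processes; run as root.
awk '/^Swap:/ {kb += $2} END {print "per-process swap:", kb, "kB"}' /proc/[0-9]*/smaps
# Compare with the system-wide view; swap backing tmpfs or SysV shared memory
# may not be attributed to any single process.
grep -E '^(SwapTotal|SwapFree|SwapCached)' /proc/meminfo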

Related

My computer instantly reboots without any warning [closed]

EventID 41
Version 8
Level 1
Task 63
Opcode 0
Keywords 0x8000400000000002
- TimeCreated
[ SystemTime] 2021-09-26T18:19:37.8668359Z
EventRecordID 1614
Correlation
- Execution
[ ProcessID] 4
[ ThreadID] 8
Channel System
Computer DESKTOP-IJTG7GS
- Security
[ UserID] S-1-5-18
- EventData
BugcheckCode 0
BugcheckParameter1 0x0
BugcheckParameter2 0x0
BugcheckParameter3 0x0
BugcheckParameter4 0x0
SleepInProgress 6
PowerButtonTimestamp 0
BootAppStatus 3221226017
Checkpoint 0
ConnectedStandbyInProgress false
SystemSleepTransitionsToOn 1
CsEntryScenarioInstanceId 0
BugcheckInfoFromEFI false
CheckpointStatus 0
CsEntryScenarioInstanceIdV2 0
LongPowerButtonPressDetected false
So my computer sometimes restarts abruptly on its own. I have tested the RAM and checked for overheating but did not find any problem. I even reinstalled Windows, but the problem keeps coming back. Above are the Event Viewer details of the critical error. Please tell me what the problem is and how I should fix it. I am guessing it might be the power supply; just to be sure, what do you think it is?
My CPU is an Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz; it sits at 50 to 75 °C under load and is normally below 50 °C. I have 20 GB of RAM and a 1050 Ti. By the way, I also tested the PC after removing the 1050 Ti and putting a Quadro 4000 in it, but that did not solve the problem. At one point the computer would not even boot; it kept restarting at the boot screen. I don't know what to do... Help!?
Most probably it is the PSU.
The Windows Blue Screen of Death (BSOD) is a well-known Windows error screen that appears randomly when system drivers get corrupted, incompatible apps are installed, drivers are outdated, and so on. One such error is 0x8000400000000002, which is related to the Kernel-Power 41 critical error on Windows 10 - and that is the same error you got. You can try:
Update the drivers: open Windows Device Manager and update your drivers from there.
Turn off Fast Startup.
Use a restore point, if you have one.
Uninstall recent Windows updates:
Click on Start and open Settings.
From Settings, open the Update & Security option.
Then select "View update history".
From the new page, click on "Uninstall updates".
Now, right-click on the most recently installed update and select the Uninstall option.
Uninstall all recent updates one by one and then restart your PC.
Update your BIOS.

Copy to SD Card changes the Execute permissions (Linux) [closed]

I have a file on /tmp
-rw-r--r-- 1 root root 6782 Jun 30 11:20 DATA_00.csv
when I copy it to SD Card with
cp /tmp/DATA_00.csv /mnt/mmccard/
Its execute flag is set!
-rwxr-xr-x 1 root root 6782 Jun 30 11:21 DATA_00.csv
Is this normal?
This is on Linux 2.6.20 ;)
@koyaanisqatsi
Hi, I don't get any new information from fdisk -l.
In fact, I don't know why there isn't just one partition.
/mnt/mmccard type vfat (rw,sync,fmask=0022,dmask=0022,codepage=cp437,iocharset=iso8859-1)
Disk /dev/mmcblk0p1: 8064 MB, 8064598016 bytes
4 heads, 16 sectors/track, 246112 cylinders
Units = cylinders of 64 * 512 = 32768 bytes
Device Boot Start End Blocks Id System
/dev/mmcblk0p1p1 ? 29216898 55800336 850670010+ 7a Unknown
Partition 1 does not end on cylinder boundary.
/dev/mmcblk0p1p2 ? 25540106 55528404 959625529 72 Unknown
Partition 2 does not end on cylinder boundary.
/dev/mmcblk0p1p3 ? 1 1 0 0 Empty
Partition 3 does not end on cylinder boundary.
/dev/mmcblk0p1p4 438273 438279 221 0 Empty
Partition 4 does not end on cylinder boundary.
The card was formatted with Windows 10 as FAT32.
Hello,
Yes - it depends on the filesystem the SD card has. I guess it is something like MS FAT16/FAT32?
Check the output of the mount command without any options/parameters.
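FAT-family filesystems have no per-file Unix permission bits, so the kernel synthesizes them from the mount options (fmask/dmask), and with fmask=0022 every file comes out executable. A minimal sketch of a workaround, assuming the card really is vfat and that /dev/mmcblk0p1 is the right device node (taken from the fdisk output above): remount with an fmask that clears the execute bits so copied files show up as 644:
umount /mnt/mmccard
# fmask=0133 -> files become rw-r--r-- (644); dmask=0022 -> directories stay rwxr-xr-x (755)
mount -t vfat -o rw,sync,fmask=0133,dmask=0022 /dev/mmcblk0p1 /mnt/mmccard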

Openstack-Devstack: Can't create instance, There are not enough hosts available [closed]

I installed OpenStack via devstack on Ubuntu 14.04. I have 8 GB of RAM on my computer and I have created around 8 VMs, which I don't use simultaneously since I use each VM for a different purpose.
Now I cannot create any more VMs. I get an error message:
No Valid Host was found.
There are not enough hosts available.
Can someone advise what I should do?
Since you say that this is a devstack installation, I'm assuming that you aren't running it in a production environment. OpenStack allows users to bump up the over-subscription ratio for RAM. By default it is 1.5 times the physical RAM available on the machine, so here it should be about 12 GB of usable memory. To change the over-subscription ratio:
sudo vim /etc/nova/nova.conf
#Add these two lines
ram_allocation_ratio=2
cpu_allocation_ratio=20 # Default value here is 16
These values are just a rough estimate; change them to fit your environment, then restart devstack.
To check if the changes were made, log into mysql (or whichever DB is supporting devstack) and check:
mysql> use nova;
mysql> select * from compute_nodes \G;
*************************** 1. row ***************************
created_at: 2015-09-25 13:52:55
updated_at: 2016-02-03 18:32:49
deleted_at: NULL
id: 1
service_id: 7
vcpus: 8
memory_mb: 12007
local_gb: 446
vcpus_used: 6
memory_mb_used: 8832
local_gb_used: 80
hypervisor_type: QEMU
disk_available_least: 240
free_ram_mb: 3175
free_disk_gb: 366
current_workload: 0
running_vms: 4
pci_stats: NULL
metrics: []
.....
1 row in set (0.00 sec)
The scheduler looks at free_ram_mb. If free_ram_mb is 3175 and you want to run a new m1.medium instance with 4096M of memory, the scheduler will end up with this message in the logs:
WARNING nova.scheduler.manager Failed to schedule_run_instance: No valid host was found.
Hence, keep an eye on free_ram_mb when starting a new VM after making those changes.
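To make the arithmetic concrete (a rough sketch using the numbers from the output above; exactly how the allocation ratio is applied varies between releases):
usable RAM        memory_mb      = 12007 MB
already allocated memory_mb_used =  8832 MB
free_ram_mb       12007 - 8832   =  3175 MB
m1.medium request                =  4096 MB  >  3175 MB  =>  "No valid host was found"
Raising ram_allocation_ratio increases how much memory the scheduler is willing to hand out on the same host, which is what brings such a request back under the limit.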

Understand the rsync transfer rate in its output [closed]

I transferred a large file (>60GB) using rsync, but I got confused when calculating the actual transfer rate. The output is:
dbdump.sql
69840316437 100% 7.75MB/s 2:23:09 (xfer#1, to-check=0/1)
sent 30 bytes received 17317620159 bytes 2015199.88 bytes/sec
total size is 69840316437 speedup is 4.03
The rate displayed on the second line is 7.75MB/s, but the rate I calculate from the next-to-last line is about 2MB/s. However, if I divide the total size by the total time, 69840316437/(2*3600+23*60+9) = 8131367 bytes/sec, which is about 8MB/s.
Which one is the actual mean transfer rate?
Thanks
The 7.75MB/s is just the transfer speed reported for the last block of the transfer - the statistics are reported about once a second. It also appears that you have sparse file handling enabled, because while the file is 69GB in size, only 17GB were actually transferred. Either that, or you had partially transferred the file in the past and this run just finished it up, or maybe it had been fully transferred in the past and this run only sent the blocks that changed. The reported speedup is <full size> / <transferred size>, which is about 69 / 17 = 4.03 in this case - meaning rsync managed to fully replicate a 69GB file in the time it took to actually transfer a 17GB one.
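To reconcile the three figures with a quick back-of-the-envelope calculation (a sketch based on the output above):
elapsed time                2:23:09                     = 8589 s
bytes actually received     17317620159 / 8589          ≈ 2.0 MB/s   (the "bytes/sec" line)
whole file over elapsed     69840316437 / 8589          ≈ 8.1 MB/s   (effective rate, counting skipped data)
speedup                     69840316437 / 17317620159   ≈ 4.03
Note that 8131367 bytes/s is roughly 7.75 MiB/s, so the per-file progress figure may simply be the same whole-file rate expressed in binary megabytes.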

Linux Read Only Partition's data changes [closed]

I have a read-only partition whose data is changing.
The change occurs on the first mount only. Subsequent mounts do not change the partition data.
I tried ext3 and ext2 in case journaling was the issue ... no help.
I tried tune2fs with -c -1 -i 0 in order to disable updating timestamps or any other data that might be touched by a check being run ... no help.
Normally I wouldn't care, but I need to hashsum this partition for data integrity purposes.
Linux can write to a read-only fs in some rare cases, e.g. when it detects a filesystem in an inconsistent state (after a cold reboot) and is able to do a quick, safe-for-data fix.
I ran into such a fix when working with Ubuntu Rescue Remix: the write was to the second hard drive, before anything had even been mounted on it (while booting). Information about this was in dmesg, so check dmesg too.
For example, here is the orphan cleanup that is possible on a read-only fs; it will temporarily DISABLE the READONLY flag:
if (s_flags & MS_RDONLY) {
        ext3_msg(sb, KERN_INFO, "orphan cleanup on readonly fs");
        sb->s_flags &= ~MS_RDONLY;
}
... writes ...
sb->s_flags = s_flags; /* Restore MS_RDONLY status */
This is done in ext3_mount -> mount_bdev -> (callback) ext3_fill_super -> ext3_orphan_cleanup
If the block device itself is not read-only (that is the only thing it checks, see below), Linux
if (bdev_read_only(sb->s_bdev)) {
        ext3_msg(sb, KERN_ERR, "error: write access "
                "unavailable, skipping orphan cleanup.");
        return;
}
WILL COMMIT A WRITE ON THE READ-ONLY FS.
Update: here is a list
http://www.forensicswiki.org/wiki/Forensic_Linux_Live_CD_issues
Ext3: the file system requires journal recovery. To disable recovery: use the "noload" flag, or use the "ro,loop" flags, or mount with the "ext2" file system type (see the sketch after this list).
Ext4: the file system requires journal recovery. To disable recovery: use the "noload" flag, or use the "ro,loop" flags, or mount with the "ext2" file system type.
ReiserFS: the file system has unfinished transactions. The "nolog" flag does not work (see man mount). To disable journal updates: use the "ro,loop" flags.
XFS: always writes when unmounting. The "norecovery" flag does not help (fixed in recent 2.6 kernels). To disable data writes: use the "ro,loop" flags.
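A minimal sketch of a forensically careful mount, assuming an ext3 partition on a device named /dev/sdb1 with a mount point /mnt/evidence (both names are hypothetical); this is what the "ro" and "noload" options from the list above look like in practice:
# ro + noload: mount read-only and skip ext3 journal replay, so no blocks are modified
mount -t ext3 -o ro,noload /dev/sdb1 /mnt/evidence
# alternative: mount the ext3 filesystem as ext2 so the journal is ignored entirely
# mount -t ext2 -o ro /dev/sdb1 /mnt/evidence
# hash the raw partition before and after mounting to verify that nothing changed
sha1sum /dev/sdb1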
