Linux Read Only Partition's data changes [closed]

I have a read-only partition whose data is changing.
The change occurs on the first mount only. Subsequent mounts do not change the partition data.
I tried ext3 and ext2 in case journaling was the issue ... no help.
I tried tune2fs with -c -1 -i 0 to disable the periodic checks, in case a check was touching timestamps or other metadata ... no help.
Normally I wouldn't care, but I need to hashsum this partition for data integrity purposes.

Linux can write to a read-only filesystem in some rare cases, e.g. when it detects a filesystem in an inconsistent state (after a cold reboot) and is able to do a quick, data-safe fix.
I ran into one of these fixes while working with Ubuntu Rescue Remix: the write went to the second hard drive, before anything had even been mounted on it (while booting). Information about this showed up in dmesg, so check dmesg too.
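A quick way to look for this, as a sketch (the exact message wording varies between kernel versions), is to search the kernel log for recovery-related lines right after the first mount:
dmesg | grep -iE 'orphan|recover|ext[23]'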
For example, here is the orphan cleanup that can run on a read-only fs; it temporarily disables the read-only flag:
if (s_flags & MS_RDONLY) {
        ext3_msg(sb, KERN_INFO, "orphan cleanup on readonly fs");
        sb->s_flags &= ~MS_RDONLY;
}
... writes ...
sb->s_flags = s_flags; /* Restore MS_RDONLY status */
This is done in ext3_mount -> mount_bdev -> (callback) ext3_fill_super -> ext3_orphan_cleanup.
If the block device itself is not write-protected, Linux (yes, really)
if (bdev_read_only(sb->s_bdev)) {
        ext3_msg(sb, KERN_ERR, "error: write access "
                "unavailable, skipping orphan cleanup.");
        return;
}
WILL COMMIT A WRITE ON A READ-ONLY FS (the check above only bails out when the block device itself is read-only).
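So if you want to be certain the kernel cannot touch the partition at all, one option (a sketch; /dev/sdb1 is just a placeholder for your partition) is to mark the block device itself read-only before mounting, which should make the kernel skip the orphan cleanup as shown above:
blockdev --setro /dev/sdb1   # write-protect the device at the block layer
mount -o ro /dev/sdb1 /mnt   # the kernel should now log "write access unavailable, skipping orphan cleanup"
blockdev --setrw /dev/sdb1   # undo later if you ever need to write again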
Update: here is a list
http://www.forensicswiki.org/wiki/Forensic_Linux_Live_CD_issues
Ext3: the file system requires journal recovery. To disable recovery: use the "noload" flag, or the "ro,loop" flags, or mount as the "ext2" file system type.
Ext4: the file system requires journal recovery. To disable recovery: use the "noload" flag, or the "ro,loop" flags, or mount as the "ext2" file system type.
ReiserFS: the file system has unfinished transactions. The "nolog" flag does not work (see man mount). To disable journal updates: use the "ro,loop" flags.
XFS: writes always happen (when unmounting). The "norecovery" flag does not help (fixed in recent 2.6 kernels). To disable data writes: use the "ro,loop" flags.
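For the hashing goal in the question, a minimal sketch (device and mount-point names are assumptions) is to either hash the raw partition without mounting it at all, or mount with journal replay disabled so nothing can change underneath the hash:
sha256sum /dev/sdb1                          # hash the block device directly, no mount needed
mount -o ro,noload /dev/sdb1 /mnt/evidence   # ext3/ext4: read-only and skip journal replay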

Why dd can't handle sparse files in shell scripts? [closed]

I have the following sparse file that I want to flash to an SD card:
647M -rw------- 1 root root 4.2G Sep 21 16:53 make_sd_card.sh.xNws4e
As you can see, it takes ~647M on disk for an apparent size of 4.2G.
If I flash it directly with dd, in my shell, it's really fast, ~6s:
$ time (sudo /bin/dd if=make_sd_card.sh.xNws4e of=/dev/mmcblkp0 conv=sparse; sync)
8601600+0 records in
8601600+0 records out
4404019200 bytes (4.4 GB, 4.1 GiB) copied, 6.20815 s, 709 MB/s
real 0m6.284s
user 0m1.920s
sys 0m4.336s
But when I run the very same command inside a shell script, it behaves as if it were copying all the zeroes and takes a long time (~2m10s):
$ time sudo ./plop.sh ./make_sd_card.sh.xNws4e
+ dd if=./make_sd_card.sh.xNws4e of=/dev/mmcblk0 conv=sparse
8601600+0 records in
8601600+0 records out
4404019200 bytes (4.4 GB, 4.1 GiB) copied, 127.984 s, 34.4 MB/s
+ sync
real 2m9.885s
user 0m3.520s
sys 0m15.560s
If I watch the Dirty field of /proc/meminfo, I can see that this counter is much higher when dd-ing from the shell script than directly from the shell.
My shell is bash and, for the record, the script is:
#!/bin/bash
set -xeu
dd if=$1 of=/dev/mmcblk0 conv=sparse bs=512
sync
[EDIT] I'm resurrecting this topic because a developer I work with has found these commands: bmap_create and bmap_copy, which seem to do exactly what I was clumsily trying to achieve with dd.
In debian, they are part of the bmap-tools package.
With it, it takes 1m2s to flash a 4.1 GB sparse SD image with a real size of 674 MB, versus 6m26s with dd or cp.
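For reference, the bmap-tools workflow looks roughly like this (a sketch; the image and device names are placeholders, and the Debian package exposes the functionality as bmaptool create / bmaptool copy):
bmaptool create sdcard.img > sdcard.bmap                        # record which blocks are actually mapped
sudo bmaptool copy --bmap sdcard.bmap sdcard.img /dev/mmcblk0   # copy only the mapped blocks to the card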
This difference is caused by a typo in the non-scripted invocation, which did not actually write to your memory card. There is no difference in dd behavior between scripted and interactive invocation.
Keep in mind what a sparse file is: It's a file on a filesystem that's able to store metadata tracking which blocks have values at all, and thus for which zero blocks have never been allocated any storage on disk whatsoever.
This concept -- of a sparse file -- is specific to files. You can't have a sparse block device.
The distinction between your two lines of code is that one of them (the fast one) has a typo (mmcblkp0 instead of mmcblk0), so it's referring to a block device name that doesn't exist. Thus, it creates a file. Files can be sparse. Thus, it creates a sparse file. Creating a sparse file is fast.
The other one, without the typo, writes to the block device. Block devices can't be sparse. Thus, it always takes the full execution time to run.
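You can confirm this yourself: after the "fast" run, the typo'd name exists as an ordinary (and sparse) file rather than a block device. As a quick check:
ls -ls /dev/mmcblkp0                              # regular file created by the typo, few blocks allocated
test -b /dev/mmcblk0 && echo "real block device"  # the correctly spelled name is the actual device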

Openstack-Devstack: Can't create instance, There are not enough hosts available [closed]

I installed OpenStack via devstack on Ubuntu 14.04. I have 8 GB of RAM on my computer and I have created around 8 VMs, which I don't use simultaneously because each VM serves a different purpose.
Now I cannot create any more VMs. I get this error message:
No Valid Host was found.
there are not enough hosts available.
Can someone advise me on what I should do?
Since you say that this is a devstack installation, I'm assuming that you aren't running this in a production environment. OpenStack allows users to bump up the over-subscription ratio for RAM. By default it is 1.5 times the physical RAM available on the machine, so with 8 GB you should have about 12 GB of schedulable memory. To change the subscription ratio:
sudo vim /etc/nova/nova.conf
#Add these two lines
ram_allocation_ratio=2
cpu_allocation_ratio=20 # Default value here is 16
These values are just a rough estimate; tune them to suit your environment, then restart Devstack.
To check if the changes were made, log into mysql (or whichever DB is supporting devstack) and check:
mysql> use nova;
mysql> select * from compute_nodes \G;
*************************** 1. row ***************************
created_at: 2015-09-25 13:52:55
updated_at: 2016-02-03 18:32:49
deleted_at: NULL
id: 1
service_id: 7
vcpus: 8
memory_mb: 12007
local_gb: 446
vcpus_used: 6
memory_mb_used: 8832
local_gb_used: 80
hypervisor_type: QEMU
disk_available_least: 240
free_ram_mb: 3175
free_disk_gb: 366
current_workload: 0
running_vms: 4
pci_stats: NULL
metrics: []
.....
1 row in set (0.00 sec)
The scheduler looks at free_ram_mb. If free_ram_mb is 3175 and you try to boot a new m1.medium instance that needs 4096 MB of memory, the scheduler will end up with this message in the logs:
WARNING nova.scheduler.manager Failed to schedule_run_instance: No valid host was found.
Hence, make sure to keep an eye out for those when starting a new VM after making those changes.
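If you prefer not to poke at the database directly, the same counters should also be visible through the legacy nova CLI of that era (a sketch; output columns may differ between releases):
nova hypervisor-stats    # aggregate vcpus, memory_mb, free_ram_mb across all compute nodes
nova hypervisor-show 1   # per-hypervisor details, including free_ram_mb and running_vms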

Comprehensive list of rsync error codes [closed]

I'm writing a script that does daily snapshots of users' home directories. First I do a dry run using:
rsync -azvrn --out-format="%M %f" source/dir dest/dir
and then the actual rsync operation (by removing the -n option).
I'm trying to parse the output of the dry run. Specifically, I'm interested in learning the exact cause of the rsync error (if one occurred). Does anyone know of
The most common rsync errors and their codes?
A link to a comprehensive rsync error code page?
Most importantly, rsync (at least on CentOS 5) does not return an error code; rather, it displays the errors internally and returns 0, like this:
sending incremental file list
rsync: link_stat "/data/users/gary/testdi" failed: No such file or directory (2)
sent 18 bytes received 12 bytes 60.00 bytes/sec
total size is 0 speedup is 0.00 (DRY RUN)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1039) [sender=3.0.6]
Has anyone had to parse rsync errors, and do you have a suggestion on how to store the rsync return state(s)? I believe that, when transferring multiple files, errors may be raised on a per-file basis and are collected at the end, as shown on the last line of output above.
Per the rsync man page, here are the error codes it could return and what they mean. If you're scripting it in bash, you can check $? (see the sketch after the list).
0 Success
1 Syntax or usage error
2 Protocol incompatibility
3 Errors selecting input/output files, dirs
4 Requested action not supported: an attempt was made to manipulate 64-bit
files on a platform that cannot support them; or an option was specified
that is supported by the client and not by the server.
5 Error starting client-server protocol
6 Daemon unable to append to log-file
10 Error in socket I/O
11 Error in file I/O
12 Error in rsync protocol data stream
13 Errors with program diagnostics
14 Error in IPC code
20 Received SIGUSR1 or SIGINT
21 Some error returned by waitpid()
22 Error allocating core memory buffers
23 Partial transfer due to error
24 Partial transfer due to vanished source files
25 The --max-delete limit stopped deletions
30 Timeout in data send/receive
35 Timeout waiting for daemon connection
I've never seen a comprehensive "most common errors" list but I'm betting error code 1 would be at the top.
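As promised above, a minimal bash sketch for acting on the exit status (the source and destination paths are placeholders):
rsync -az --out-format="%M %f" source/dir dest/dir
rc=$?
case "$rc" in
    0)  echo "rsync finished cleanly" ;;
    23) echo "partial transfer: some files/attrs were not transferred" >&2 ;;
    24) echo "partial transfer: some source files vanished during transfer" >&2 ;;
    *)  echo "rsync failed with exit code $rc" >&2; exit "$rc" ;;
esac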

Change default console loglevel during boot up [closed]

I set up a CentOS 6.3 system on which the console loglevel is set to 4 and the default log level is set to 4. I know I can change the console log level using the following steps:
cat /proc/sys/kernel/printk
4 4 1 7
echo 5 > /proc/sys/kernel/printk
cat /proc/sys/kernel/printk
5 4 1 7
However, upon reboot, the console log level reverts to the original value. Do I need to recompile the kernel, or is there a way to make the changed value persistent across reboots?
Do I need to recompile the kernel?
No.
or is there a way to make the changed value persistent across reboots?
Yes.
Use the kernel command line parameter loglevel:
loglevel= All Kernel Messages with a loglevel smaller than the
console loglevel will be printed to the console. It can
also be changed with klogd or other programs. The
loglevels are defined as follows:
0 (KERN_EMERG) system is unusable
1 (KERN_ALERT) action must be taken immediately
2 (KERN_CRIT) critical conditions
3 (KERN_ERR) error conditions
4 (KERN_WARNING) warning conditions
5 (KERN_NOTICE) normal but significant condition
6 (KERN_INFO) informational
7 (KERN_DEBUG) debug-level messages
The entire list of parameters possible on the kernel command line is in the Linux/Documentation/kernel-parameters.txt file in the source tree.
Depending on your bootloader (e.g. GRUB or U-Boot), you will have to edit its configuration to add this new parameter to the kernel command line. Use cat /proc/cmdline to view the kernel command line used for the current boot.
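On CentOS 6 that normally means legacy GRUB, i.e. appending the parameter to the kernel line in /boot/grub/grub.conf, roughly like this (a sketch; your kernel version, root device, and existing options will differ):
# /boot/grub/grub.conf -- append loglevel=5 to the kernel line
kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root rhgb quiet loglevel=5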
Addendum
To display everything, the number supplied for the loglevel parameter would have to be greater than KERN_DEBUG.
That is, you would have to specify loglevel=8.
Or simply use the ignore_loglevel parameter to display all kernel messages.

Linux amount of swap displayed by "free" is different from "smem" [closed]

I am trying to analyze where the swap usage is coming from, and looking at smem's output I get a completely different amount of swap usage than free reports.
free shows the following:
[root@server1 ~/smem-1.3]# free -k
total used free shared buffers cached
Mem: 24554040 24197360 356680 0 510200 14443128
-/+ buffers/cache: 9244032 15310008
Swap: 20980880 2473120 18507760
And smem shows:
PID User Command Swap USS PSS RSS
...
18829 oracle oracle_1 (LOCAL=NO) 0 3.9M 98.3M 10.1G
18813 oracle oracle_1 (LOCAL=NO) 0 3.9M 98.6M 10.1G
18809 oracle oracle_1 (LOCAL=NO) 0 4.1M 99.2M 10.0G
28657 oracle ora_lms0_1 56.0K 54.1M 100.3M 4.2G
29589 oracle ora_lms1_1 964.0K 69.7M 118.9M 4.5G
29886 oracle ora_dbw1_1 5.7M 20.8M 130.9M 10.2G
29857 oracle ora_dbw0_1 4.2M 22.6M 133.0M 10.3G
11075 ccm_user /usr/java/jre1.6/bin/java - 197.8M 133.9M 135.9M 140.7M
21688 bsuser /usr/local/java/bin/java -c 30.7M 145.1M 147.2M 152.1M
29930 oracle ora_lck0_1 2.3M 58.6M 169.8M 1.0G
29901 oracle ora_smon_1 0 78.0M 195.6M 4.3G
15604 oracle /var/oragrid/jdk/jre//bin/j 65.4M 253.9M 254.3M 262.2M
-------------------------------------------------------------------------------
359 10 678.8M 2.5G 13.5G 1.2T
Why does free show ~2.4G of swap used while smem shows only 679M? One of them must be showing a wrong result.
I need to find out where the remaining ~1.8G is, or prove that free is showing wrong results.
Last but not least, the kernel is 2.6.18.
Well, the main issue is RSS (resident set size) versus PSS (proportional set size). As http://www.selenic.com/smem/ puts it, "PSS instead measures each application's 'fair share' of each shared area to give a realistic measure". RSS, on the other hand, overestimates by counting shared memory regions against every application that maps them. That is why you see the difference: smem attributes shared memory fairly across applications rather than treating a shared area as if each application owned it outright.
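To see the RSS versus PSS distinction for a single process yourself, you can sum the fields straight out of /proc/PID/smaps (a sketch; it assumes your kernel exposes the Pss field there, which smem needs anyway, and PID 18829 is just one of the processes from the table above):
pid=18829
awk '/^Rss:/ {rss += $2} /^Pss:/ {pss += $2} END {printf "RSS: %d kB  PSS: %d kB  double-counted share: %d kB\n", rss, pss, rss - pss}' /proc/$pid/smaps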
