I'm writing a script that does daily snapshots of users' home directories. First I do a dry run using:
rsync -azvrn --out-format="%M %f" source/dir dest/dir
and then the actual rsync operation (by removing the -n option).
I'm trying to parse the output of the dry run. Specifically, I'm interested in learning the exact cause of the rsync error (if one occurred). Does anyone know of
The most common rsync errors and their codes?
A link to a comprehensive rsync error code page?
Most importantly, rsync (at least on CentOS 5) does not return an error code; rather, it displays the errors internally and returns 0. Like this:
sending incremental file list
rsync: link_stat "/data/users/gary/testdi" failed: No such file or directory (2)
sent 18 bytes received 12 bytes 60.00 bytes/sec
total size is 0 speedup is 0.00 (DRY RUN)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1039) [sender=3.0.6]
Has anyone had to parse rsync errors, and do you have a suggestion on how to store the rsync return state(s)? I believe that, when transferring multiple files, errors may be raised on a per-file basis and are collected at the end, as shown on the last line of the output above.
Per the rsync man page, here are the error codes it can return and what they mean. If you're scripting it in bash, you can check $? right after the call; see the sketch after the list.
0 Success
1 Syntax or usage error
2 Protocol incompatibility
3 Errors selecting input/output files, dirs
4 Requested action not supported: an attempt was made to manipulate 64-bit files on a platform that cannot support them; or an option was specified that is supported by the client and not by the server.
5 Error starting client-server protocol
6 Daemon unable to append to log-file
10 Error in socket I/O
11 Error in file I/O
12 Error in rsync protocol data stream
13 Errors with program diagnostics
14 Error in IPC code
20 Received SIGUSR1 or SIGINT
21 Some error returned by waitpid()
22 Error allocating core memory buffers
23 Partial transfer due to error
24 Partial transfer due to vanished source files
25 The --max-delete limit stopped deletions
30 Timeout in data send/receive
35 Timeout waiting for daemon connection
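As a minimal sketch of that (assuming bash, and using the placeholder paths from the question), you could capture the exit status right after the call and branch on the common cases:

#!/bin/bash
# Dry run; paths are placeholders from the question - adapt to your setup.
rsync -azvrn --out-format="%M %f" source/dir dest/dir
status=$?   # capture rsync's exit code immediately, before anything overwrites it

case "$status" in
    0)  echo "dry run clean" ;;
    23) echo "partial transfer: some files/attrs were not transferred" >&2 ;;
    24) echo "partial transfer: some source files vanished" >&2 ;;
    30) echo "timeout in data send/receive" >&2 ;;
    *)  echo "rsync failed with exit code $status" >&2 ;;
esac
exit "$status"

Note that the per-file messages (like the link_stat error in your sample) go to stderr, while the single exit code (23 in your run) only summarizes them; for per-file detail you would still have to capture and parse stderr.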
I've never seen a comprehensive "most common errors" list but I'm betting error code 1 would be at the top.
I am currently controlling this through uptime.
The computer restarts if the uptime is greater than 1 hour.
But I do not know how to detect whether the computer has been on for a day or more, because currently I only check the hours.
Is it possible to check days, hours and minutes with uptime?
I need to restart the computer when the power-on time is greater than 1 hour.
If the uptime is 1 day and 0 hours, my check fails.
Sorry for my explanation; it is a script that does a series of things, and at the end there is this function that is responsible for checking this parameter.
Thanks for reading.
Not sure I quite understand your issue.
If you want your computer to ALWAYS reboot after a specific amount of time, which is very unusual, then use cron. Add this to /etc/crontab (alternatively, if there is an /etc/cron.d directory on your machine, you can also create a file /etc/cron.d/reboot with this content):
@reboot root sleep 1800; /sbin/reboot
(adapt reboot's path to match your system; 1800 is the number of seconds for 30 minutes, change it to whatever delay you need)
On the other hand, you may be writing a script that will reboot your server, and you may want to keep it from working if it is run before 30 minutes of uptime (which makes more sense).
Then, I understand you have difficulties parsing the result of uptime; you should use /proc/uptime instead, which gives the uptime in seconds:
#!/bin/sh
not_before=1800 # Number of seconds - adapt to your needs
uptime=$(cut -d . -f 1 /proc/uptime) # whole seconds of uptime (everything before the first dot)
[ "$uptime" -ge "$not_before" ] && exec reboot
echo "Sorry, only $uptime s of uptime; you must wait $((not_before - uptime)) seconds" >&2
exit 1
If you want to do it in a script, use the result of uptime | grep " day" to determine whether to execute things (in an if condition), then do anything you want inside the if.
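A minimal sketch of that grep-based check (assuming the script runs as root, so reboot is permitted):

#!/bin/sh
# uptime prints e.g. "... up 1 day, 2:34, ..." once the machine
# has been up for a day or more, so matching " day" is enough.
if uptime | grep -q " day"; then
    /sbin/reboot
fi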
Make that script executable and put it in crontab to run every 5min or so.
More information on Cron: https://wiki.archlinux.org/index.php/Cron
I installed OpenStack via DevStack on Ubuntu 14.04. I have 8 GB of RAM on my computer and I have created around 8 VMs, which I don't use simultaneously, as I use each VM differently.
Now I cannot create any more VMs. I get an error message:
No valid host was found.
There are not enough hosts available.
Can someone advise what I should do?
Since you say that this is a DevStack installation, I'm assuming that you aren't running this in a production environment. OpenStack allows users to bump up the over-subscription ratio for RAM. By default, it is kept at 1.5 times the physical RAM available on the machine; hence, you should have 12 GB of usable memory. To change the subscription ratio:
sudo vim /etc/nova/nova.conf
#Add these two lines
ram_allocation_ratio=2
cpu_allocation_ratio=20 # Default value here is 16
These values are just a rough estimate. Change them around to make them work for your environment, then restart DevStack.
To check if the changes were made, log into mysql (or whichever DB is supporting devstack) and check:
mysql> use nova;
mysql> select * from compute_nodes \G
*************************** 1. row ***************************
created_at: 2015-09-25 13:52:55
updated_at: 2016-02-03 18:32:49
deleted_at: NULL
id: 1
service_id: 7
vcpus: 8
memory_mb: 12007
local_gb: 446
vcpus_used: 6
memory_mb_used: 8832
local_gb_used: 80
hypervisor_type: QEMU
disk_available_least: 240
free_ram_mb: 3175
free_disk_gb: 366
current_workload: 0
running_vms: 4
pci_stats: NULL
metrics: []
.....
1 row in set (0.00 sec)
The Scheduler looks at the free_ram_mb. If you have a free_ram_mb of 3175 and if you want to run a new m1.medium instance with 4096M of memory, the Scheduler will end up with this message in the logs:
WARNING nova.scheduler.manager Failed to schedule_run_instance: No valid host was found.
Hence, make sure to keep an eye out for those when starting a new VM after making those changes.
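As a quick way to keep an eye on the value the scheduler looks at, a one-liner like this may help (assuming your DevStack's MySQL credentials; the columns are the ones shown in the output above):

# Prompt for the MySQL password and print the scheduler-relevant columns.
mysql -u root -p -e 'SELECT free_ram_mb, memory_mb_used, running_vms FROM nova.compute_nodes;'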
I set up a CentOS 6.3 system on which the console loglevel is set to 4 and the default log level is set to 4. I know I can change the console log level at runtime using the following steps:
cat /proc/sys/kernel/printk
4 4 1 7
echo 5 > /proc/sys/kernel/printk
cat /proc/sys/kernel/printk
5 4 1 7
However, upon reboot, the console log level reverts to the original value. Do I need to recompile the kernel, or is there a way I can make the changed value persistent across reboots?
Do I need to recompile the kernel?
No.
or is there a way I can make the changed value persistent across reboots?
Yes.
Use the kernel command line parameter loglevel:
loglevel= All Kernel Messages with a loglevel smaller than the
console loglevel will be printed to the console. It can
also be changed with klogd or other programs. The
loglevels are defined as follows:
0 (KERN_EMERG) system is unusable
1 (KERN_ALERT) action must be taken immediately
2 (KERN_CRIT) critical conditions
3 (KERN_ERR) error conditions
4 (KERN_WARNING) warning conditions
5 (KERN_NOTICE) normal but significant condition
6 (KERN_INFO) informational
7 (KERN_DEBUG) debug-level messages
The entire list of parameters possible on the kernel command line are in the Linux/Documentation/kernel-parameters.txt file in the source tree.
Depending on your bootloader (e.g. Grub or U-Boot), you will have to edit text to add this new parameter to the command line. Use cat /proc/cmdline to view the kernel command line used for the previous boot.
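For instance, with the legacy GRUB that CentOS 6 uses (a sketch, not your exact configuration; the kernel version shown is illustrative), you would append the parameter to the kernel line in /boot/grub/grub.conf:

kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/mapper/vg-root loglevel=5

Alternatively, since printk is exposed through sysctl, a line like the following in /etc/sysctl.conf should also restore your values at boot:

kernel.printk = 5 4 1 7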
Addendum
To display everything, the number supplied for the loglevel parameter would have to be greater than KERN_DEBUG.
That is, you would have to specify loglevel=8.
Or simply use the ignore_loglevel parameter to display all kernel messages.
I have a read-only partition whose data is changing.
The change occurs on the first mount only; subsequent mounts do not change the partition data.
I tried with ext3 and ext2, in case journalling was the issue... no help.
I tried tune2fs with -c -1 -i 0, in order to disable updating timestamps or any other data that might be touched by a check being executed... no help.
Normally I wouldn't care, but I need to hash this partition for data-integrity purposes.
Linux can write to a read-only fs in some rare cases, e.g. when it detects an fs in an inconsistent state (after a cold reboot) and is able to do a quick, safe-for-data fix.
I ran into one such fix when working with Ubuntu Rescue Remix; the write was on the second hard drive, before anything was even mounted on it (while booting). Information about this appears in dmesg, so check dmesg too.
For example, here is the orphan cleanup, which is possible on a read-only fs; it will temporarily DISABLE the READ-ONLY flag:
if (s_flags & MS_RDONLY) {
        ext3_msg(sb, KERN_INFO, "orphan cleanup on readonly fs");
        sb->s_flags &= ~MS_RDONLY;
}
... writes ...
sb->s_flags = s_flags; /* Restore MS_RDONLY status */
This is done in ext3_mount -> mount_bdev -> (callback) ext3_fill_super -> ext3_orphan_cleanup.
The only guard is a check on whether the block device itself is write-protected:
if (bdev_read_only(sb->s_bdev)) {
        ext3_msg(sb, KERN_ERR, "error: write access "
                "unavailable, skipping orphan cleanup.");
        return;
}
If it is not, Linux WILL COMMIT A WRITE ON THE READ-ONLY FS, without asking.
Update: here is a list
http://www.forensicswiki.org/wiki/Forensic_Linux_Live_CD_issues
Ext3: the file system requires journal recovery. To disable recovery: use the "noload" flag, or use "ro,loop" flags, or mount as the "ext2" file system type.
Ext4: the file system requires journal recovery. To disable recovery: use the "noload" flag, or use "ro,loop" flags, or mount as the "ext2" file system type.
ReiserFS: the file system has unfinished transactions. The "nolog" flag does not work (see man mount). To disable journal updates: use "ro,loop" flags.
XFS: always writes (when unmounting). The "norecovery" flag does not help (fixed in recent 2.6 kernels). To disable data writes: use "ro,loop" flags.
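Putting that together, a minimal sketch for getting a stable hash (assuming an ext3/ext4 partition on /dev/sdb1; adapt the device name to your setup) is to write-protect the block device so the guard shown above kicks in, skip journal recovery when mounting, and hash the raw device:

# Write-protect the block device itself, so even "safe" fixes are refused.
blockdev --setro /dev/sdb1

# Mount read-only without journal replay.
mount -o ro,noload /dev/sdb1 /mnt

# Hash the raw partition; the result should now be stable across mounts.
sha256sum /dev/sdb1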
I typically end up in the following situation: I have, say, a 650 MB MPEG-2 .avi video file from a camera. Then, I use ffmpeg2theora to convert it into Theora .ogv video file, say some 150 MB in size. Finally, I want to upload this .ogv file to an ssh server.
Let's say, the ffmpeg2theora encoding process takes some 15 minutes on my PC. On the other hand, the upload goes on with a speed of about 60 KB/s, which takes some 45 minutes (for the 150MB .ogv). So: if I first encode, and wait for the encoding process to finish - and then upload, it would take approximately
15 min + 45 min = 1 hr
to complete the operation.
So, I thought it would be better if I could somehow start the upload, in parallel with the encoding operation; then, in principle - as the uploading process is slower (in terms of transferred bytes/sec) than the encoding one (in terms of generated bytes/sec) - the uploading process would always "trail behind" the encoding one, and so the whole operation (enc+upl) would complete in just 45 minutes (that is, just the time of the upload process +/- some minutes depending on actual upload speed situation on wire).
My first idea was to pipe the output of ffmpeg2theora to tee (so as to keep a local copy of the .ogv) and then, pipe the output further to ssh - as in:
./ffmpeg2theora-0.27.linux32.bin -v 8 -a 3 -o /dev/stdout MVI.AVI | tee MVI.ogv | ssh user@ssh.server.com "cat > ~/myvids/MVI.ogv"
While this command does indeed function, one can easily see from ffmpeg2theora's running log in the terminal that, in this case, ffmpeg2theora calculates a predicted completion time of 1 hour; that is, there seems to be no benefit in terms of a smaller completion time for enc+upl. (While it is possible that this was due to network congestion and my getting less network speed at the time, it seems to me that ffmpeg2theora has to wait for an acknowledgment for each little chunk of data it sends through the pipe, and that ACK finally has to come from ssh... Otherwise, ffmpeg2theora would not have been able to provide a completion-time estimate. Then again, maybe the estimate is wrong and the operation would indeed complete in 45 minutes - dunno, I never had the patience to wait and time the process; I just get annoyed at 1 hr as the estimate, and hit Ctrl-C ;) ...)
My second attempt was to run the encoding process in one terminal window, i.e.:
./ffmpeg2theora-0.27.linux32.bin -v 8 -a 3 MVI.AVI # MVI.ogv is auto name for output
..., and the uploading process, using scp, in another terminal window (thereby 'forcing' 'parallelization'):
scp MVI.ogv user@ssh.server.com:~/myvids/
The problem here is: let's say, at the time when scp starts, ffmpeg2theora has already encoded 5 MB of the output .ogv file. At this time, scp sees this 5 MB as the entire file size, and starts uploading - and it exits when it encounters the 5 MB mark; while in the meantime, ffmpeg2theora may have produced additional 15 MB, making the .ogv file 20 MB in total size at the time scp has exited (finishing the transfer of the first 5 MB).
Then I learned (joen.dk » Tip: scp Resume) that rsync supports 'resume' of partially completed uploads, as in:
rsync --partial --progress myFile remoteMachine:dirToPutIn/
..., so I tried using rsync instead of scp - but it seems to behave exactly the same as scp in terms of file size, that is: it will only transfer up to the file size read at the beginning of the process, and then it will exit.
So, my question to the community is: Is there a way to parallelize the encoding and uploading process, so as to gain the decrease in total processing time?
I'm guessing there could be several ways, as in:
A command line option (that I haven't seen) that forces scp/rsync to continuously check the file size - if the file is open for writing by another process (then I could simply run the upload in another terminal window)
A bash script, say running rsync --partial in a while loop that keeps going as long as the .ogv file is open for writing by another process (see the sketch after this list). I don't actually like this solution, since I can hear the hard disk scanning for the resume point every time I run rsync --partial, which, I guess, cannot be good given that the same file is being written to at the same time.
A different tool (other than scp/rsync) that does support upload of a "currently generated"/"unfinished" file (the assumption being it can handle only growing files; it would exit if it encounters that the local file is suddenly less in size than the bytes already transferred)
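For what it's worth, a minimal sketch of option 2 (assuming lsof is available, and using the file names from above):

#!/bin/bash
file="MVI.ogv"
dest="user@ssh.server.com:~/myvids/"

# Keep re-running rsync while some process (the encoder) still has the file open.
while lsof -- "$file" >/dev/null 2>&1; do
    rsync --partial --progress "$file" "$dest"
    sleep 5
done

# One final pass to pick up whatever was written after the last iteration.
rsync --partial --progress "$file" "$dest"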
... but it could also be, that I'm overlooking something - and 1hr is as good as it gets (in other words, it is maybe logically impossible to achieve 45 min total time - even if trying to parallelize) :)
Well, I look forward to comments that would, hopefully, clarify this for me ;)
Thanks in advance,
Cheers!
Maybe you can try sshfs (http://fuse.sourceforge.net/sshfs.html). Being a file system, it should have some optimizations, though I am not very sure.
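A minimal sketch of that idea (assuming sshfs is installed, and using the host and paths from the question; whether the encode and the upload really overlap in practice is untested):

# Mount the remote directory locally over SSH.
mkdir -p ~/remote-myvids
sshfs user@ssh.server.com:myvids ~/remote-myvids

# Encode once; tee keeps the local copy and writes the second copy
# straight into the sshfs mount, so bytes are uploaded as they are produced.
./ffmpeg2theora-0.27.linux32.bin -v 8 -a 3 -o /dev/stdout MVI.AVI \
    | tee MVI.ogv > ~/remote-myvids/MVI.ogv

# When done, unmount.
fusermount -u ~/remote-myvids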