Linux dd command: save file instead of upload file to server [closed] - linux

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 3 years ago.
I have this command run on Finnix OS:
dd if=/dev/sda | pv | gzip -9 | ssh root@LinodeIP "gzip -d | dd of=/dev/sda"
I got it from this article: https://github.com/ClickSimply/docs/blob/windows-on-linode/docs/tools-reference/windows-on-linode/installing-windows-on-linode-vps.md
And I understand that this command reads the disk, compresses the stream with gzip, uploads it to a server, and runs gzip on that server to decompress it. My question is: what is the right command to save the gzip file on the local computer instead of sending it to a server?
Thank you so much.

dd if=/dev/sda | gzip -9 > /path/to/output/file.gz
should do it.
if you would still like to see the progress with pv then
dd if=/dev/sda | pv | gzip -9 > /path/to/output/file.gz
should be the way to go
EDIT: worth noting, cat is in my opinion the best way to do this nowadays, as it uses the full potential of the hardware. dd was fine where you were limited by the drive speed (like tapes, which are still used for backups in some places; dd is fine there).
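For example, the dd and cat variants can be compared end to end. This is a minimal sketch that uses an ordinary file in place of /dev/sda, so it runs without root:

```shell
# A small test image stands in for /dev/sda (assumption: a plain file,
# so the pipeline can be exercised anywhere without root).
dd if=/dev/urandom of=disk.img bs=1M count=4 2>/dev/null

# dd-based pipeline from the question, writing locally instead of to ssh:
dd if=disk.img bs=1M 2>/dev/null | gzip -9 > disk-dd.img.gz

# cat-based equivalent; cat lets the kernel choose efficient read sizes:
cat disk.img | gzip -9 > disk-cat.img.gz

# Both compress the same data; verify by decompressing and comparing:
gzip -dc disk-dd.img.gz > out-dd.bin
gzip -dc disk-cat.img.gz > out-cat.bin
cmp -s disk.img out-dd.bin && cmp -s out-dd.bin out-cat.bin && echo "identical"
```

To keep the progress display, insert pv between dd (or cat) and gzip exactly as in the commands above.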

Related

How do I run `forever` from Crontab? [closed]

Closed 6 years ago.
I am trying to schedule node server restart on OS reboot (Ubuntu 16.04 LTS). I wrote:
crontab -u username -e
then I added the following line:
@reboot /usr/local/bin/forever start -c /usr/bin/node /home/username/node/bin/www
I get a success message after saving or updating this file, but it seems to have no effect on server reboot.
I'd wrap that into a bash script in the user's home directory's bin.
/home/username/bin/start_my_node_app.sh
Then in your crontab...
@reboot /home/username/bin/start_my_node_app.sh >/dev/null 2>&1
Though according to this answer, @reboot may not work for non-root users:
https://unix.stackexchange.com/questions/109804/crontabs-reboot-only-works-for-root
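A sketch of such a wrapper, assuming the paths from the question (forever-boot.log is a hypothetical log file name). Cron starts jobs with a minimal environment, so the script sets PATH explicitly and logs output somewhere you can inspect after boot; the heredoc below just makes the script's content visible and checkable:

```shell
# Write the hypothetical wrapper (would live at
# /home/username/bin/start_my_node_app.sh):
cat > start_my_node_app.sh <<'EOF'
#!/bin/sh
# cron's default PATH is minimal; set it so forever/node are found.
PATH=/usr/local/bin:/usr/bin:/bin
export PATH
forever start -c /usr/bin/node /home/username/node/bin/www \
  >> /home/username/forever-boot.log 2>&1
EOF
chmod +x start_my_node_app.sh

# Sanity-check the script without running it:
sh -n start_my_node_app.sh && echo "syntax OK"
```

Logging to a file is the easiest way to find out why a silent @reboot job did nothing.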

scp or pscp transfer files to ssh machine but not showing any copied files in machine [closed]

Closed 7 years ago.
I am using the PuTTY ssh client on Windows to log in to a remote machine.
I transfer files using scp and pscp from my local machine to the server.
scp:
scp -r script-1/ root@104.130.169.111:/mounarajan/script-1
Response in command line:
artist_dedup_urls1 100% 414KB 413.8KB/s 00:00
reverbnation_crawler.py 100% 21KB 21.0KB/s 00:00
pscp:
pscp -r script-1/ root@104.130.169.111:/mounarajan/script-1
Response in command line:
artist_dedup_urls1 | 413 kB | 413.8 kB/s | ETA: 00:00:00 | 100%
reverbnation_crawler.py | 21 kB | 21.0 kB/s | ETA: 00:00:00 | 100%
But after this, the script-1 folder on the server machine is empty.
Where is the problem, actually?
Did you delete "script-1" on 104.130.169.111 beforehand? Your situation seems to match this scenario:
On command window 1: cd to certain directory, say "a/b/c"
On another window: delete "a/b/c", recreate it and copy in some files
On command window 1: no change can be seen.
Solution is: on command window 1, cd to another directory and then cd back to "a/b/c" again to access the newly created directory
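The effect is easy to reproduce locally, without ssh; this sketch simulates the "other window" with a subshell:

```shell
top=$(pwd)
mkdir -p a/b/c
cd a/b/c

# Simulate the other window: delete and recreate the directory behind
# this shell's back, dropping a new file into the recreated one.
( cd "$top" && rm -rf a/b/c && mkdir -p a/b/c && touch a/b/c/newfile )

ls                  # this shell still sits in the deleted inode: empty
cd "$top/a/b/c"     # re-enter by path to pick up the new directory
ls                  # now shows newfile
```

The shell keeps its working directory by inode, not by path, so it must cd again to reach the recreated directory.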

kill remote process by ssh [closed]

Closed 9 years ago.
I can't access the target host directly; I have to go through another host, using ssh as a proxy.
How can I kill a remote process from my local machine over ssh? I tried this:
ssh root@$center "ssh root@$ip \"ps -ef|grep -v grep|grep $target_dir/$main|awk '{print \$2}'|xargs kill\""
It fails with this error:
kill: can't find process "root"
And how can I avoid the error when the process does not exist?
Suppose your process is named name; then you can try something like this:
ssh hostname "pid=\$(ps aux | grep 'name' | awk '{print \$2}' | head -1); echo \$pid |xargs kill"
Use pkill -f to easily kill a process by matching its command-line.
ssh root@$center ssh root@$ip pkill -f "$target_dir/$main"
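A local sketch of the pkill -f behavior, using a sleep process as a hypothetical stand-in for the remote $target_dir/$main. Appending `|| true` handles the "process does not exist" case, since pkill exits with status 1 when nothing matches:

```shell
# Stand-in for the remote process (hypothetical):
sleep 12345 &
pid=$!

# pkill -f matches against the full command line; '|| true' swallows
# the exit status 1 that pkill returns when no process matches, so an
# ssh chain wrapping this won't report failure.
pkill -f 'sleep 12345' || true

# Reap the child and show how it ended:
wait "$pid"
echo "exit status: $?"   # 143 = terminated by SIGTERM
```

Over the two-hop ssh from the question this becomes: ssh root@$center "ssh root@$ip 'pkill -f \"$target_dir/$main\" || true'"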

combining multiple commands when using ssh and scp [closed]

Closed 9 years ago.
I have multiple ssh commands that do some tasks for me. For example:
ssh a-vm "rm -f /home/dir/file1.xlsx"
ssh a-vm "rm -f /home/dir/file2.xml"
scp me@b-vm:/somedir/file1.xlsx .
scp me@b-vm:/somedir/file2.xml .
1) Is there a way to combine the two ssh commands into one, and the two scp commands into one?
2) Is there a cost to running ssh and scp multiple times instead of once?
Any help is appreciated.
You can just do:
ssh a-vm "rm -f /home/dir/file1.xlsx ; rm -f /home/dir/file2.xml"
scp "me@b-vm:/somedir/{file1.xlsx,file2.xml}" .
Each ssh/scp call costs you the connection setup time and some CPU time (this could be significant if you do it to hundreds of machines at once; otherwise it is unlikely to matter).
Alternatively, you can use a persistent master connection for ssh and tunnel the other sessions over it. That saves a couple of network round trips - see http://en.wikibooks.org/wiki/OpenSSH/Cookbook/Multiplexing
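A minimal multiplexing setup might look like this in ~/.ssh/config (host names taken from the question's examples):

```
# ~/.ssh/config -- the first connection to a-vm or b-vm becomes the
# master; subsequent ssh/scp calls to the same host reuse its TCP/ssh
# session instead of performing a fresh handshake.
Host a-vm b-vm
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```

ControlPersist keeps the master alive for ten minutes after the last session closes, so a burst of ssh/scp commands pays the connection cost only once.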

real estate linux back up solution [closed]

Closed 5 years ago.
I did a lot of research, but I couldn't find exactly what I want. Does anyone have any knowledge of what a real estate company's backup strategy should look like? I mean, there are different backup types, such as full, incremental, and differential backups.
Which solution(s) should a real estate company use to back up its resources, and how frequently (daily, weekly, etc.)?
Assume that they have Linux servers...
Many thanks.
This belongs on Server Fault; however, you would need to provide more details there.
You should run incremental daily backups and a weekly full backup.
For MySQL databases, check: http://dev.mysql.com/doc/refman/5.1/en/backup-methods.html
For other files you can use rsync with hard links.
Check this TLDP howto and LJ article.
Consider using encryption on the backup drive: either full disk using dm-crypt, or, if you use tar/cpio, pipe the archive through openssl (e.g. tar -cf - path1 path2 | openssl enc -aes-128-cbc -salt > backup.$(date --iso).tgz.aes).
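A round-trip sketch of the tar-to-openssl pipeline on a throwaway directory. The -pbkdf2 and -pass flags are additions here to make the demo non-interactive (and require OpenSSL 1.1.1+); in practice, use a properly kept secret:

```shell
mkdir -p demo && echo "hello" > demo/file.txt

# Create (-c) the archive on stdout and encrypt it:
tar -cf - demo | openssl enc -aes-128-cbc -salt -pbkdf2 -pass pass:secret \
    > backup.tar.aes

# Verify the round trip: decrypt and unpack into a separate directory.
mkdir -p restore
openssl enc -d -aes-128-cbc -pbkdf2 -pass pass:secret < backup.tar.aes \
    | tar -xf - -C restore
cat restore/demo/file.txt   # hello
```

Always test that an encrypted backup actually decrypts before relying on it; a wrong passphrase or cipher option produces an unrecoverable blob.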
Example daily rsync backup script:
#!/bin/bash
# Keep six rotating snapshots: backup.0 (newest) .. backup.5 (oldest).
BACKUP_DIR=/mnt/backups/
BACKUP_PATHS="/var /home"
cd "${BACKUP_DIR}" || exit 1
rm -rf backup.5 backup.5.log.bz2 &>/dev/null

# Shift snapshot $1 (and its log) up by one.
recycle() {
    i=$1; y=$(($i+1))
    b=${2-backup}
    mv "${b}.$i" "${b}.$y" &>/dev/null
    mv "${b}.$i.log.bz2" "${b}.$y.log.bz2" &>/dev/null
}
recycle 4
recycle 3
recycle 2
recycle 1
recycle 0

# New snapshot: files unchanged since backup.1 are hard-linked, not copied.
OPTS="--numeric-ids --delete --delete-after --delete-excluded"
nice -n19 ionice -c2 -n2 rsync -axlHh -v --link-dest=../backup.1 ${OPTS} \
    ${BACKUP_PATHS} backup.0/ --exclude-from=/root/.rsync-exclude 2>&1 \
    | bzip2 -9 > backup.0.log.bz2
cd /root &>/dev/null
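The --link-dest mechanism the script relies on can be seen in miniature (src and the snapshot names here are hypothetical):

```shell
# Minimal --link-dest demo: a file unchanged between runs is hard-linked
# into the new snapshot rather than copied again.
mkdir -p src
echo "unchanged" > src/a.txt
rsync -a src/ backup.1/                          # first snapshot
rsync -a --link-dest=../backup.1 src/ backup.0/  # second snapshot
# (--link-dest is resolved relative to the destination, backup.0/)

# Same inode number twice: both snapshots share one copy on disk.
stat -c %i backup.1/a.txt backup.0/a.txt
```

Each snapshot directory looks like a full backup, but only changed files consume new space, which is what makes keeping six generations cheap.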
