Real estate Linux backup solution [closed]

I did a lot of research, but I couldn't find exactly what I want. Does anyone have any knowledge of what a real estate company's backup strategy should look like? I mean, there are different backup types, such as full, incremental, and differential backups.
Which solution(s) should a real estate company use to back up its resources, and how frequently (daily, weekly, etc.)?
Assume that they have Linux servers.
Many thanks.

This belongs on Server Fault. However, you need to provide more details.
You should run incremental daily backups and a weekly full backup.
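As a sketch of that schedule (the script path and times are placeholders, not from the original answer), the crontab entries could look like:
# /etc/crontab sketch: incremental backup Mon-Sat, full backup on Sunday
30 2 * * 1-6  root  /usr/local/sbin/backup.sh incremental
30 2 * * 0    root  /usr/local/sbin/backup.sh full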
For MySQL databases, check http://dev.mysql.com/doc/refman/5.1/en/backup-methods.html
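A minimal sketch of a nightly logical dump (assuming credentials live in ~/.my.cnf; the output path is illustrative):
# Dump all databases without locking InnoDB tables, compressed and dated
mysqldump --all-databases --single-transaction | gzip -9 > /mnt/backups/mysql-$(date --iso).sql.gz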
For other files you can use rsync with hard-linked snapshots.
Check this TLDP howto and LJ article.
Consider using encryption on the backup drive, either full disk using dm-crypt, or, if you use tar/cpio, pipe it to openssl (e.g. tar -czf - path1 path2 | openssl enc -aes-128-cbc -salt > backup.$(date --iso).tgz.aes).
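For completeness (my addition, not part of the original answer; the dated filename is illustrative), restoring such an archive reverses the pipe with the same cipher:
# Decrypt with the passphrase used above, then unpack
openssl enc -d -aes-128-cbc -in backup.2015-06-01.tgz.aes | tar -xzf -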
Example daily rsync backup script:
#!/bin/bash
# Rotating snapshot backup: keeps backup.0 (newest) through backup.5 (oldest).
# rsync --link-dest hard-links unchanged files against the previous snapshot,
# so each snapshot looks complete but only changed files consume space.
BACKUP_DIR=/mnt/backups/
BACKUP_PATHS="/var /home"
cd ${BACKUP_DIR} || exit 1
# Drop the oldest snapshot and its log
rm -rf backup.5 backup.5.log.bz2 &>/dev/null
# Shift a snapshot (and its log) one generation older
recycle() {
    i=$1; y=$(($i+1))
    b=${2-backup}
    mv "${b}.$i" "${b}.$y" &>/dev/null
    mv "${b}.$i.log.bz2" "${b}.$y.log.bz2" &>/dev/null
}
recycle 4
recycle 3
recycle 2
recycle 1
recycle 0
OPTS="--numeric-ids --delete --delete-after --delete-excluded"
# Run at low CPU/IO priority; hard-link against yesterday's snapshot; keep a compressed log
nice -n19 ionice -c2 -n2 rsync -axlHh -v --link-dest=../backup.1 ${OPTS} ${BACKUP_PATHS} backup.0/ --exclude-from=/root/.rsync-exclude 2>&1 | bzip2 -9 > backup.0.log.bz2
cd /root &>/dev/null
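A usage note (my addition): because of --link-dest, each backup.N directory looks like a complete tree, so restoring a file is a plain copy:
# Restore one file from the newest snapshot (paths illustrative)
cp -a /mnt/backups/backup.0/home/user/file.txt /home/user/file.txt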

Related

Process gets killed when ssh disconnects [closed]

I'm running the script below on a GCP Debian instance. When I shut down my computer, SSH disconnects and the script stops. Here is my script:
wget -P/root -N --no-check-certificate "https://raw.githubusercontent.com/reeceyng/v2ray-agent/master/shell/install_en.sh" && mv /root/install_en.sh /root/install.sh && chmod 700 /root/install.sh && /root/install.sh
I have tried tmux and screen to prevent this, based on suggestions in other posts. Neither of them helped; all processes stop after some time.
Use nohup to detach the process from your shell, wrapping the whole chain so that every step (not just wget) survives the disconnect. For example:
nohup sh -c 'wget -P/root -N --no-check-certificate "https://raw.githubusercontent.com/reeceyng/v2ray-agent/master/shell/install_en.sh" && mv /root/install_en.sh /root/install.sh && chmod 700 /root/install.sh && /root/install.sh' &
should do the trick.
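To confirm the job survived a disconnect (a generic sketch, not from the original answer):
# nohup appends the chain's output to ./nohup.out by default
tail -f nohup.out
# Check that the script is still running after logging back in
pgrep -af install.sh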

Linux dd command: save file instead of upload file to server [closed]

I have this command run on Finnix OS:
dd if=/dev/sda | pv | gzip -9 | ssh root@LinodeIP "gzip -d | dd of=/dev/sda"
I got it from this article: https://github.com/ClickSimply/docs/blob/windows-on-linode/docs/tools-reference/windows-on-linode/installing-windows-on-linode-vps.md
I understand that this command streams the disk through gzip, uploads it to a server, and runs gzip on that server to extract it. My question is: what is the right command to save the gzip file on the local computer instead of sending it to a server?
Thank you so much.
dd if=/dev/sda | gzip -9 > /path/to/output/file.gz
should do it.
if you would still like to see the progress with pv then
dd if=/dev/sda | pv | gzip -9 > /path/to/output/file.gz
should be the way to go
EDIT: It's worth noting that, in my opinion, cat is the best way to do this nowadays, as it uses the full potential of the hardware. dd was fine where you were limited by drive speed (like tapes, which are still used for backups in some places; dd is fine there).
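For illustration (my addition, reusing the same placeholder paths as above), the cat equivalent of the pipeline is:
# cat reads the block device with sensible buffering; pv still shows progress
cat /dev/sda | pv | gzip -9 > /path/to/output/file.gz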

Error applying chroot to group (groupmod: group 'www' does not exist) [closed]

So I am trying to chroot all the users who are in group www to the directory /var/www. But every time I try to do that, it comes back saying the group doesn't exist (even though the group does exist).
[root@server var]# cat /etc/fedora-release
Fedora release 26 (Twenty Six)
[root@server var]# groupadd -r www
[root@server var]# groupmod -R /var/www www
groupmod: group 'www' does not exist
[root@server var]# ls -la
drwxrwxrwx. 5 root www 46 Jul 12 06:44 www
As you can see, the error message is less than helpful. I have looked around on Stack Overflow but haven't come across an answer to this specific question yet.
Can anyone shed some light on what I am doing wrong?
That is not what groupmod -R does. What it means is that the groupmod program will chroot into the directory, and then do everything. It’s intended for when you have one system mounted inside another, such as if you booted from a live USB drive to make changes to a broken system.
Once groupmod has run chroot, it looks in the /var/www/etc/group file to figure out what group ID www corresponds to, which of course fails because, if your system is at all sanely set up, you don't have a /var/www/etc/group file.
I do not know how to make sure all processes by a specific user run in a chroot, and I don’t think that’s the right way to achieve your goal. If a program is chrooted into /var/www, it doesn’t have access to any of the utilities it might expect, like the web server executable. Instead, I would look at the documentation of your web server and see if it supports this directly, or see if you can get a custom mount namespace using systemd.
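To make the intended use of -R concrete (a hypothetical rescue-environment sketch; the device and mount point are illustrative):
# The broken system's root is mounted at /mnt/sysroot, so groupmod
# edits /mnt/sysroot/etc/group rather than the live /etc/group
mount /dev/sda2 /mnt/sysroot
groupmod -R /mnt/sysroot -g 1005 www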

Error mounting in fstab Ubuntu [closed]

I'm working on a bash class project which requires me to create 2 partitions in Ubuntu and have them automatically mounted, via fstab, each time the system boots.
I wrote the following script, which creates (I think correctly) the 2 partitions needed and adds them to the fstab file.
#!/bin/bash
#SVN Partition
(echo n; echo p; echo ; echo ; echo +20G; echo w;) | sudo fdisk /dev/sdb
#WEB Partition
(echo n; echo p; echo ; echo ; echo +5G; echo w;) | sudo fdisk /dev/sdb
sudo su -c "echo '/dev/sdb1 /svn ext4 rw,user,auto,utf8 0 0' >> /etc/fstab"
sudo su -c "echo '/dev/sdb2 /web ext4 rw,user,auto,exec,utf8 0 0' >> /etc/fstab"
When I reboot the system an error appears telling me the automatic mounting for /web and /svn failed.
Does anyone have a clue on what is happening? Thanks in advance.
You haven't formatted the filesystems...
Execute these before reboot.
sudo mkfs.ext4 /dev/sdb1
sudo mkfs.ext4 /dev/sdb2
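One more thing worth checking (my addition, not part of the original answer): the mount points themselves must exist before fstab can mount onto them:
# Create the mount points if the script has not already done so
sudo mkdir -p /svn /web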

Combining multiple commands when using ssh and scp [closed]

I have multiple ssh commands to do some tasks for me. For example:
ssh a-vm "rm -f /home/dir/file1.xlsx"
ssh a-vm "rm -f /home/dir/file2.xml"
scp me@b-vm:/somedir/file1.xlsx .
scp me@b-vm:/somedir/file2.xml .
1) Is there a way to combine 2 ssh commands into 1 and two scp commands into 1?
2) Is there a cost if I do ssh and scp multiple times instead of 1 time?
Any help is appreciated.
You can just do:
ssh a-vm "rm -f /home/dir/file1.xlsx ; rm -f /home/dir/file2.xml"
scp "me#b-vm:/somedir/{file1.xlsx,file2.xml}" .
Each ssh/scp call will cost you the connection time and some CPU time (which could be significant if you do this to hundreds of machines at the same time, but is otherwise unlikely to matter).
Alternatively, you can use a persistent master connection for ssh and tunnel the other sessions over it. That saves a couple of network round trips; see http://en.wikibooks.org/wiki/OpenSSH/Cookbook/Multiplexing
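As a sketch of that setup (the host aliases and timeout are illustrative, not from the answer), multiplexing is enabled in ~/.ssh/config:
# Reuse one TCP connection for subsequent ssh/scp calls to these hosts
Host a-vm b-vm
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m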
