Physical host: Ubuntu 18.04.
Virtual guest (VirtualBox): Windows 7 with Cygwin installed; access to the host via VirtualBox shared folders.
The above environment is working.
I use a shell script in Cygwin to save some files to the physical host. That works also.
Strange thing: when I start the same script via cron under Cygwin, the mounted directories (shared folders) are not found / known to cron; only /cygdrive/c is known.
Where is the issue? VirtualBox? Cygwin? cron?
Thanks for any advice.
@not2qubit:
crontab:
*/15 * * * * /home/sepp/my_backup_dubai
my_backup_dubai:
#!/usr/bin/csh
if (! -d /tmp/Backup) mkdir /tmp/Backup
rsync -avi --delete --delete-excluded --exclude-from=/home/sepp/list.dubai / /tmp/Backup
crontab -l >/tmp/Backup/crontab
tar czvf /cygdrive/z/Cloud/ownCloud/tmp/vmdubai.tgz /tmp/Backup
The issue is /cygdrive/z/...: this line is not executed!?
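One way to narrow this down (my suggestion, not part of the original post) would be to let cron dump its own view of the Cygwin mount table and compare it with what an interactive shell shows:
* * * * * mount > /tmp/cron_mounts.txt 2>&1
If /cygdrive/z is missing from /tmp/cron_mounts.txt, the shared folder simply is not visible in the session cron runs in.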
I'm using fstab to mount a Samba share at boot:
//ip/share /mnt/share cifs credentials=/home/user/.smbcredentials,uid=user 0 0
and scheduled rsync via cron job to copy the contents to a local drive once a week
0 2 * * 7 /usr/bin/rsync -av --delete /mnt/share/ /mnt/backup/ --log-file=/var/log/rsyncbackup.log
It occurred to me that if the host were unavailable, /mnt/share would be empty; if the cron job then ran, it would wipe all the data on my local backup mount because of the difference and the --delete flag. I want to keep that flag, as I want a clone of my share.
I'm relatively new to Linux and curious what approach might add a safeguard here. Could I run ls to check for content and continue only if something is present? Otherwise, what would ensure I don't inadvertently delete everything on my backup mount?
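Something like this wrapper script is what I have in mind as a guard, just a rough, untested sketch (mountpoint comes with util-linux):
#!/bin/sh
# Abort unless the share is actually mounted...
if ! mountpoint -q /mnt/share; then
    echo "/mnt/share is not mounted, skipping backup" >&2
    exit 1
fi
# ...and actually contains files.
if [ -z "$(ls -A /mnt/share)" ]; then
    echo "/mnt/share is empty, skipping backup" >&2
    exit 1
fi
/usr/bin/rsync -av --delete /mnt/share/ /mnt/backup/ --log-file=/var/log/rsyncbackup.log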
Solved my problem by reading the rsync and ssh manuals a little more.
Generated an ssh key on the client: ssh-keygen
Copied it to the host: ssh-copy-id user@host
Modified the cron job: 0 2 * * 7 /usr/bin/rsync -av --delete user@ip:/mnt/driveuid/share/ /mnt/backup/ --log-file=/var/log/rsyncbackup.log
Now if my computer can't connect to the host the job doesn't run.
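A quick way to confirm that the key-based login really works non-interactively (so the cron job cannot hang on a password prompt) is something like:
ssh -o BatchMode=yes user@ip true && echo "key auth works"
BatchMode=yes makes ssh fail instead of prompting for a password, which is exactly the behaviour the cron job relies on.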
Parrot is based on Debian. Everything I do on Ubuntu 18.04 LTS and 20.04 LTS works fine; in Parrot it does not (at least not in my environment). This is a fresh, default installation with a static IP, fully patched and after a few reboots.
Windows is 8.1 Pro in a domain (2012 R2 forest level), fully patched, with no antivirus, and the firewall allows the traffic. The user is a domain admin with no special characters in the name or password, just to make it work.
So, to make it easier, I do everything on the command line as root (sudo -i).
nano /scripts/creds
username=user1
password=Password1
domain=test.local
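Since this file contains a plaintext password, it is worth restricting its permissions first (my addition, not part of the original steps):
chmod 600 /scripts/creds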
The command:
mount -t cifs //192.168.1.10/d$ /mnt/disk_d -o credentials=/scripts/creds
In new Linux installations the highest SMB version is negotiated by default, like other things (yay), so forcing it doesn't change much (it works).
It works from the command line (sudo): no errors, and the Windows files and folders show up in /mnt/disk_d.
It works from a bash script ("./mount_windows.sh") with this line inside.
It doesn't work in /etc/fstab. The command
mount -a -v
generates "parse error at line 19 -- ignored", this line is for mount. Physical disks are "already mounted".
So I tried adding one or more of these options:
"file_mode=0777,dir_mode=0777", "serverino" or "noserverino", "sec=ntlmv2", "perm", "auto", "vers=3.0", " 0 0"
or mixed them all in different positions, with no success. Please remember it works from the command line with no additional options.
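For reference, fstab expects exactly six whitespace-separated fields per line, so the entry should look something like this (a sketch built from the paths above):
//192.168.1.10/d$  /mnt/disk_d  cifs  credentials=/scripts/creds  0  0
A parse error on that line usually means a field is missing or stray characters break the field count.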
It doesn't work from /etc/crontab either.
mount.cifs sits in /sbin, so the PATH covers it:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
added:
* * * * * root mount -t cifs //192.168.1.10/d$ /mnt/disk_d -o credentials=/scripts/creds
53 * * * * root mount -t cifs //192.168.1.10/d$ /mnt/disk_d -o credentials=/scripts/creds
@reboot mount -t cifs //192.168.1.10/d$ /mnt/disk_d -o credentials=/scripts/creds
@reboot root mount -t cifs //192.168.1.10/d$ /mnt/disk_d -o credentials=/scripts/creds
@reboot sudo bash -x /scripts/mount_windows.sh
Restarting cron shows no errors:
"systemctl restart cron"
None of these mounted the disk after a full reboot.
So I added
echo "1" >> /scripts/log.txt
to check whether anything is processed at all. The file is created and "1" is appended, so cron does run the line.
After each reboot there is nothing in /var/log/messages.
I don't know why this is so hard to get working. It works from the command line and from a shell script.
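One more debugging idea (my suggestion, reusing the log file from the echo test): cron discards the command's output unless you capture it, so redirecting stderr into the log should at least show why mount fails:
* * * * * root mount -t cifs //192.168.1.10/d$ /mnt/disk_d -o credentials=/scripts/creds >> /scripts/log.txt 2>&1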
I am a pretty new Linux user and started LFS because I needed it for school. My system is now running perfectly, with a working internet connection, but I still don't have a package manager or anything similar. The first binary I would like to have is wget, but I really don't know how to go about it.
Could someone please explain?
I personally used (and would highly recommend) using the existing Linux system (the host) to download the wget package and its dependencies before booting your LFS system for the first time. However, since you're already using your LFS system, if you still have the ability to log in to the host, use it to download wget as if it were one of the sources you fetched when building the LFS system.
In my case, I used a Linux Mint host running in VirtualBox to build my LFS. To get wget I just had to re-attach the Linux Mint host storage, download wget, and add it to the LFS sources. I then detached the Linux Mint host storage, logged in to my LFS machine, and followed the steps in BLFS.
Note: this is mainly just from parts of LFS and the wget page of BLFS.
1. Boot into your host OS.
2. Enter the following commands to get into the chroot (adjust depending on your partitions and where you mount LFS):
sudo su -
export LFS=/mnt/lfs
mount -vt ext4 /dev/sda4 $LFS
mount -v --bind /dev $LFS/dev
mount -vt devpts devpts $LFS/dev/pts -o gid=5,mode=620
mount -vt proc proc $LFS/proc
mount -vt sysfs sysfs $LFS/sys
mount -vt tmpfs tmpfs $LFS/run
if [ -h $LFS/dev/shm ]; then
mkdir -pv $LFS/$(readlink $LFS/dev/shm)
fi
chroot "$LFS" /usr/bin/env -i \
HOME=/root TERM="$TERM" PS1='\u:\w\$ ' \
PATH=/bin:/usr/bin:/sbin:/usr/sbin \
/bin/bash --login
3. Download wget from http://ftp.gnu.org/gnu/wget/wget-1.19.1.tar.xz and copy it into /mnt/lfs/sources from your host OS.
4. Unpack and cd into it with:
tar -xf wget-1.19.1.tar.xz
cd wget-1.19.1
5. Configure and install wget with:
./configure --prefix=/usr \
--sysconfdir=/etc \
--with-ssl=openssl &&
make
make install
6. Delete the wget-1.19.1 folder if you want, and you're done!
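To verify the build before cleaning up, a quick version check and a test download should do (the URL is just an example):
wget --version
wget http://www.gnu.org/ -O /tmp/wget-test.html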
Somehow the /root directory is missing (not mounted) on my RHEL box.
Can anyone suggest how to re-mount /root?
bash-3.1# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.1 (Tikanga)
Often /root will simply be a subdirectory of /.
Have a look at the last column of df /root. The output on my machine indicates that /root is a subdirectory of /.
If /root is missing, the solution may be as simple as logging in as root for the first time, or running mkdir /root.
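For example, on a healthy machine (illustrative output, not taken from the box in question):
df /root
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1             20511312   8123440  11339184  42% /
If you do have to recreate the directory, note that /root is conventionally mode 700, so mkdir -m 700 /root keeps it private to root.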
I am trying to archive my localhost's root folder with tar and want to automate its execution on a daily basis with crontab. For this purpose, I created a 'backupfolder' in my home folder. I am running Ubuntu 12.04.
The execution of tar in the command line works fine without problems:
sudo tar -cvpzf backupfolder/localhost.tar.gz /var/www
However, when I schedule the command for a daily backup (say, at 17:00) in sudo crontab -e, it does not execute, i.e. the backup is not updated, with the following entry:
0 17 * * * sudo tar -cpzf backupfolder/localhost.tar.gz /var/www
I already tried the full path /home/user/backupfolder/localhost.tar.gz without success.
/var/log/syslog gives me the following output for the scheduled execution:
Feb 2 17:00:01 DESKTOP-PC CRON[12052]: (root) CMD (sudo tar -cpzf backupfolder/localhost.tar.gz /var/www)
Feb 2 17:00:01 DESKTOP-PC CRON[12051]: (CRON) info (No MTA installed, discarding output)
/etc/crontab specifies the following path:
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
I assume that cron is not executing this because it is a sudo command.
Is there a way to get this running? What is the recommended, safe way if I don't want to hardcode my root password?
Well, the command that works for you is
sudo tar -cvpzf backupfolder/localhost.tar.gz /var/www
which means the command needs root privileges, and it will not work from within your user crontab.
I would suggest adding the cron job to the root user's crontab.
Basically, do
sudo crontab -e
And add an entry there
0 17 * * * cd /home/user/backupfolder && tar -cpzf localhost.tar.gz /var/www
If that doesn't work, use the full path to tar (e.g. /bin/tar).
Also, while debugging, set the cron job to run every minute (* * * * *) so you don't have to wait until 17:00 to see the result.
Basically the problem is the sudo command, so we will allow sudo to run tar for the "user" without prompting for the password.
Add the following line in /etc/sudoers file.
user ALL=(ALL) NOPASSWD:/bin/tar
where user is the user installing the crontab.
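With that sudoers entry in place, the original entry in the user's own crontab should then work, e.g. (assuming the backup folder from the question):
0 17 * * * sudo /bin/tar -cpzf /home/user/backupfolder/localhost.tar.gz /var/www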
I suspect a PATH problem; try setting some variables at the top of sudo crontab -e:
MAILTO=your_email@domain.tld # to get the output if there are errors
PATH=/usr/bin:/bin:/usr/local/bin:/usr/local/sbin:/sbin
You can put your command in a script, e.g. run.sh:
#!/bin/sh -l
tar -cvpzf backupfolder/localhost.tar.gz /var/www
then use the crontab to run the script.
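For example, assuming the script lives in your home directory and has been made executable (chmod +x /home/user/run.sh):
0 17 * * * /home/user/run.sh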
IMPORTANT NOTE: the script's first line uses the "-l" option, so it runs as a login shell and picks up your usual environment, including PATH.
Try it.