Can't mount CIFS shares in Parrot 4.10 after boot - Linux

Parrot is based on Debian. Everything I do on Ubuntu 18.04 LTS and 20.04 LTS works fine; in Parrot it doesn't (at least not in my environment). This is a fresh, default installation with a static IP, fully patched and rebooted a few times.
Windows is 8.1 Pro in a domain (2012R2 forest level), fully patched, no antivirus, and the firewall allows the traffic. The user is a domain admin with no special characters in the name or password, just to make it work.
So, to keep it simple, I do everything on the command line as root (sudo -i).
nano /scripts/creds
username=user1
password=Password1
domain=test.local
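Since this file contains a plain-text password, it is usually restricted to root; an extra hardening step (not part of the steps above, but harmless) would be:
chmod 600 /scripts/creds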
The command:
mount -t cifs //192.168.1.10/d$ /mnt/disk_d -o credentials=/scripts/creds
In new Linux installations the highest supported SMB version is negotiated by default, like everything else (yay), so forcing it doesn't change much (it still works).
It works from the command line (sudo): no errors, and the Windows files and folders appear in /mnt/disk_d.
It works from a bash script, "./mount_windows.sh", with this line inside.
It doesn't work in /etc/fstab. The command
mount -a -v
reports "parse error at line 19 -- ignored" for the mount line; the physical disks are reported as "already mounted".
So I tried adding one or more of these options:
"file_mode=0777,dir_mode=0777", "serverino" or "noserverino", "sec=ntlmv2", "perm", "auto", "vers=3.0", " 0 0"
or just mixing everything in different positions, with no success. Please remember it works from the command line with no additional options.
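Since the fstab line itself isn't quoted above, here is a minimal sketch of what a syntactically valid single-line entry for this share would look like (the _netdev option is an assumption, added so the mount waits for the network; fields are separated by whitespace):
//192.168.1.10/d$ /mnt/disk_d cifs credentials=/scripts/creds,_netdev 0 0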
It doesn't work from /etc/crontab either.
mount.cifs sits in /sbin, which is covered by cron's PATH, so everything is OK there:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
added:
* * * * * root mount -t cifs //192.168.1.10/d$ /mnt/disk_d -o credentials=/scripts/creds
53 * * * * * root mount -t cifs //192.168.1.10/d$ /mnt/disk_d -o credentials=/scripts/creds
@reboot mount -t cifs //192.168.1.10/d$ /mnt/disk_d -o credentials=/scripts/creds
@reboot root mount -t cifs //192.168.1.10/d$ /mnt/disk_d -o credentials=/scripts/creds
@reboot sudo bash -x /scripts/mount_windows.sh
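For comparison, a reboot entry in a system-wide crontab would normally take the shape below; the sleep is an assumption, added only to give the network time to come up before mounting:
@reboot root sleep 30 && mount -t cifs //192.168.1.10/d$ /mnt/disk_d -o credentials=/scripts/creds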
Restarting cron shows no errors:
systemctl restart cron
None of these mounted the disk after a full reboot.
So I added
echo "1" >> /scripts/log.txt
to check whether anything is processed at all. The file is created and "1" is appended.
After each reboot there is nothing in /var/log/messages.
I don't know why this is so hard to get working. It works from the command line and from a shell script.
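One way to see the actual error cron hits would be to redirect the job's output into the existing log file (the redirection is an addition, not something tried above):
* * * * * root mount -t cifs //192.168.1.10/d$ /mnt/disk_d -o credentials=/scripts/creds >> /scripts/log.txt 2>&1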

Related

mount_smbfs fails in crontab with "mount_smbfs: unable to open connection: syserr = Authentication error"

I want a FreeBSD machine to mount an SMB share from a Linux server automatically after boot, so I wrote a script to run from root's crontab to mount it. I have set the required credentials and IP in /root/.nsmbrc and the script runs fine on the command line. However, it fails when called from crontab with the following error.
mount_smbfs: unable to open connection: syserr = Authentication error
The content of the file /root/.nsmbrc
[default]
workgroup=WORKGROUP
[UBUNTU]
addr=192.168.1.20
charsets=UTF-8:UTF-8
[UBUNTU:FREEBSD]
password=f(Xc4CVfx4HU7;9
The mounting line
/usr/sbin/mount_smbfs -N -f 666 -d 777 //freebsd@ubuntu/share /net/ubuntu/share
How do I fix it?
Many thanks!
Try /etc/fstab, for example with something like:
//u123@u123/foo /mnt/foo smbfs rw,late,-N 0 0
If the option "late" is specified, the file system will be automatically mounted at a stage of system startup after remote mount points are mounted. (man fstab)
Then in /etc/nsmb.conf you could have something like:
[U123]
addr=192.168.1.20
retry_count=100
timeout=30
[U123:U123]
password=secret
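To try the entry without a reboot, mounting by mount point alone should pick the remaining options up from /etc/fstab (assuming the line above is already in place):
mount /mnt/foo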

Crontab executes shell script: Mount error(13): Permission denied

I have a RasPi and I'm trying to execute a shell script to automount a folder at every reboot.
Script Command is:
sudo mount -t cifs 'folderpath' 'pointtomount' -o username=xxx,password=xxx,sec=ntlm
It works perfectly if I run it manually, but via cronjob it responds with "Mount error(13): Permission denied" and the mount can't be executed.
That means the cronjob at least executes the file.
My idea was to mount it manually and check whether automount is disabled in /etc/fstab or /etc/mtab. As it's just a folder, I only found it in mtab.
I can't write to it, but "noauto" appears nowhere in the options, so everything is probably correct.
Not certain whether it has something to do with crontab execute rights, but ls -lha /usr/bin/crontab shows -rwxr-sr-x 1.
If anyone has any clues on how to solve this problem, I'd appreciate the help.
Thanks
EDIT1:
Okay, after hours and hours, it seems to be working via /home/pi/.config/lxsession/LXDE-pi/autostart.sh (edit it with "sudo nano /home/pi/.config/lxsession/LXDE-pi/autostart.sh"). In that file I wrote "@/home/pi/scripttoexecute.sh". In my script I wrote "sudo mount -t cifs 'foldertomount' 'directorypath' -o credentials=/root/.smbcredentials,iocharset=utf8,file_mode=0777,dir_mode=0777,sec=ntlm". To use the smbcredentials file, run "sudo nano /root/.smbcredentials" and put "username=xxx" on one line, "password=xxx" on the next, and optionally a domain.
Thanks to all, and I hope this might save someone else's time.
Not sure whether running apt-get update and apt-get upgrade beforehand had something to do with it.
A couple of things here. First off, every user can have their own crontab. For example:
crontab -e # Edit crontab of current user
crontab -u root -e # Edit crontab of root user (might need sudo for this)
crontab -u www-data -e # Edit crontab of www-data user
Another thing: if you don't use crontab -e and instead edit the /etc/crontab file directly (something like vim /etc/crontab), you can specify the user you'd like the cron job to run as:
* * * * * root mount -t cifs /path/to/folder /point/to/mount -o username=xxx,password=xxx,sec=ntlm
To run via root's crontab at reboot, type:
sudo crontab -e
And add this line:
@reboot mount -t cifs 'folderpath' 'pointtomount' -o username=xxx,password=xxx,sec=ntlm
But really, shouldn't you be adding your auto-mounts to /etc/fstab?
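For example, a sketch of such an entry, reusing the credentials-file approach from the edit above (server name, paths, and mount point are placeholders):
//server/folderpath /point/to/mount cifs credentials=/root/.smbcredentials,iocharset=utf8,file_mode=0777,dir_mode=0777,sec=ntlm,_netdev 0 0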

Vagrant unable to mount in Linux guest with VirtualBox Guest Additions on Windows 7

I'm trying to get a Linux VM running with VirtualBox, VirtualBox Guest Additions, and Vagrant, and to mount a folder on my Windows 7 machine. I've tried the suggestions in this question, but still get the same error.
I'm running the following versions:
VirtualBox: 4.3.18 r96516
VirtualBox Guest Additions: 4.3.18
Vagrant: 1.6.5
Vagrant Plug-ins:
vagrant-login: 1.0.1
vagrant-share: 1.1.2
vagrant-vbguest: 0.10.0
When I run vagrant reload I get the following error:
Failed to mount folders in Linux guest. This is usually because
the "vboxsf" file system is not available. Please verify that
the guest additions are properly installed in the guest and
can work properly. The command attempted was:
mount -t vboxsf -o uid=`id -u vagrant`,gid=`getent group vagrant | cut -d: -f3`,nolock,vers=3,udp,noatime core /tbm
mount -t vboxsf -o uid=`id -u vagrant`,gid=`id -g vagrant`,nolock,vers=3,udp,noatime core /tbm
The error output from the last command was:
stdin: is not a tty
unknown mount option `noatime'
valid options:
rw               mount read write (default)
ro               mount read only
uid=<arg>        default file owner user id
gid=<arg>        default file owner group id
ttl=<arg>        time to live for dentry
iocharset=<arg>  i/o charset (default utf8)
convertcp=<arg>  convert share name from given charset to utf8
dmode=<arg>      mode of all directories
fmode=<arg>      mode of all regular files
umask=<arg>      umask of directories and regular files
dmask=<arg>      umask of directories
fmask=<arg>      umask of regular files
I've tried uninstalling, reinstalling, and updating the vagrant-vbguest plugin:
vagrant plugin install vagrant-vbguest
I've tried running the following command after running vagrant ssh, but still get the same error message:
sudo ln -s /opt/VBoxGuestAdditions-4.3.18/lib/VBoxGuestAdditions /usr/lib/VBoxGuestAdditions
I'm not super familiar with mount options, but I tried executing your command in a similar VM I'm running and got the same error regarding the noatime option.
I read through the documentation (man 8 mount), which states somewhere after line 300 or so, in the FILESYSTEM INDEPENDENT MOUNT OPTIONS section, that: Some of these options are only useful when they appear in the /etc/fstab file.
I suspect this is your problem. I edited my /etc/fstab file to add this option to one of my mounts, /dev/mapper/precise64-root / ext4 noatime,errors=remount-ro 0 1, and then ran the following:
sudo mount -oremount /
vagrant@precise64:~$ mount
/dev/mapper/precise64-root on / type ext4 (rw,noatime,errors=remount-ro)
...
I edited the file again to remove the option and:
vagrant@precise64:~$ sudo mount -oremount /
vagrant@precise64:~$ mount
/dev/mapper/precise64-root on / type ext4 (rw,errors=remount-ro)
...
I don't know whether you're providing these mount commands yourself or whether they come from a plugin, but it seems that (at least in your environment) the option works fine but can't be specified on the command line.
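If that is the case, retrying the second command by hand without noatime would confirm it (a sketch only; the remaining options are kept exactly as in the error output and may themselves need trimming):
mount -t vboxsf -o uid=`id -u vagrant`,gid=`id -g vagrant`,nolock,vers=3,udp core /tbm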

Beaglebone inittab issue

I am developing an application on a BeagleBone.
I want to add startup scripts to my BeagleBone, but I cannot find /etc/inittab.
I am using the image : Angstrom-Cloud9-IDE-GNOME-eglibc-ipk-v2012.05-beaglebone-2012.06.18.img.xz
I think previous versions of the image had /etc/inittab, but in the new distributions I cannot find it :/
I want to apply this: Automatic login on Angstrom Linux
but I cannot, because there is no /etc/inittab.
Where is the inittab in the new distributions?
When I write uname -r it gives:
3.2.23
Regards
inittab has been replaced by systemd
This is how I did it for the serial console. You can probably adapt it easily for tty1 by replacing "serial-getty@..." by "getty@...", but I haven't tested it.
cp /lib/systemd/system/serial-getty@.service /etc/systemd/system/autologin@.service
rm /etc/systemd/system/getty.target.wants/serial-getty@ttyO0.service
ln -s /etc/systemd/system/autologin@.service /etc/systemd/system/getty.target.wants/serial-getty@ttyO0.service
Create the following script file in any location (/home/root/autologin.sh in my case)
#!/bin/sh
exec /bin/login -f root
Make it executable
chmod a+x autologin.sh
Edit /etc/systemd/system/autologin@.service and update the ExecStart command by adding the -n (Do not prompt the user for a login name) and -l (Invoke the specified login_program instead of /bin/login) options.
ExecStart=-/sbin/agetty -n -l /home/root/autologin.sh -s %I 115200
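After these edits, systemd has to re-read its unit files before the change takes effect; a minimal way to apply it (assuming a reboot is acceptable for testing) is:
systemctl daemon-reload
reboot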

How do you force a CIFS connection to unmount

I have a CIFS share mounted on a Linux machine. The CIFS server is down, or the internet connection is down, and anything that touches the CIFS mount now takes several minutes to timeout, and is unkillable while you wait. I can't even run ls in my home directory because there is a symlink pointing inside the CIFS mount and ls tries to follow it to decide what color it should be. If I try to umount it (even with -fl), the umount process hangs just like ls does. Not even sudo kill -9 can kill it. How can I force the kernel to unmount?
I use lazy unmount: umount -l (that's a lowercase L)
Lazy unmount. Detach the filesystem from the filesystem hierarchy now, and cleanup all references to the filesystem as soon as it is not busy anymore. (Requires kernel 2.4.11 or later.)
umount -a -t cifs -l
worked like a charm for me on CentOS 6.3. It saved me a server reboot.
On RHEL 6 this worked:
umount -f -a -t cifs -l
This works for me (Ubuntu 13.10 Desktop to an Ubuntu 14.04 Server):
sudo umount -f /mnt/my_share
Mounted with
sudo mount -t cifs -o username=me,password=mine //192.168.0.111/serv_share /mnt/my_share
where serv_share is the share set up and pointed to in the smb.conf file.
I had this issue for a day until I found the real resolution. Instead of trying to force-unmount an SMB share that is hung, mount the share with the "soft" option. If a process attempts to access the share while it is unavailable, it will stop trying after a certain amount of time.
soft Make the mount soft. Fail file system calls after a number of seconds.
mount -t smbfs -o soft //username@server/share /users/username/smb/share
stat /users/username/smb/share/file
stat: /users/username/smb/share/file: stat: Operation timed out
This may not be a real answer to your question, but it is a solution to the problem.
There's a -f option to umount that you can try:
umount -f /mnt/fileshare
Are you specifying the '-t cifs' option to mount? Also make sure you're not specifying the 'hard' option to mount.
You may also want to consider fusesmb; since the filesystem runs in userspace, you can kill it just like any other process.
Try umount -f /mnt/share. Works OK with NFS, never tried with cifs.
Also, take a look at autofs: it will mount the share only when accessed and unmount it afterwards.
There is a good tutorial at www.howtoforge.net
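A minimal autofs sketch (names and paths are placeholders, and the credentials file follows the same idea as above): add a map line to /etc/auto.master,
/mnt/auto /etc/auto.cifs --timeout=60
and describe the share in /etc/auto.cifs:
serv_share -fstype=cifs,credentials=/root/.smbcredentials ://192.168.0.111/serv_share
After restarting autofs, the share appears under /mnt/auto/serv_share the first time it is accessed.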
I had a very similar problem with davfs. In the man page of umount.davfs, I found that the -f -l -n -r -v options are ignored by umount.davfs. To force-unmount my davfs mount, I had to use umount -i -f -l /media/davmount.
umount -f -t cifs -l /mnt &
Note the &, which lets umount run in the background.
umount will detach the filesystem first, so you will see nothing under /mnt. If you then run the df command, it will unmount /mnt forcibly.
Approaching this problem sideways:
If you can't unmount because the filesystem is busy, is your ssh/terminal session cd'd into the mount directory, thereby making the filesystem busy?
For me, the solution was to cd into my home, then sudo umount worked flawlessly.
cd ~
umount /path/to/my/share
I would post this as a comment, but I have insufficient reputation. Hoping to spare someone else the forehead slap.
I experienced very different results regarding unmounting a dead cifs mount and found several tricks to bypass the problem temporarily.
Let's start with the mountpoint command. It can be useful to analyze the status of a mount:
mountpoint /mnt/smb_share
Usually it returns "is a mountpoint" or "is not a mountpoint".
But it can even return:
No such device
Transport endpoint is not connected
<nothing / stale>
For every result except "is not a mountpoint" there is a chance of unmounting.
You could try the usual way:
umount /mnt/smb_share
or force mode:
umount /mnt/smb_share -f
But often the force does not help. It simply returns the same nasty "device is busy" message.
Then the only option is to use the lazy mode:
umount /mnt/smb_share -l
BUT: This does not unmount anything. It only "moves" the mount to the root of the system, which can be seen as follows:
# lsof | grep mount | grep cwd
mount.cif 3125 root cwd unknown / (stat: No such device)
mount.cif 3150 root cwd unknown / (stat: No such device)
It is even noted in the documentation:
Lazy unmount. Detach the filesystem from the file hierarchy now, and clean up all references to this filesystem as soon as it is not busy anymore.
Now if you are unlucky, it will stay there forever. Even killing the process probably does not help:
kill -9 $pid
But why is this a problem? Because mount /mnt/smb_share does not work until the lazily unmounted path has really been cleaned up by the Linux kernel. This is even mentioned in the documentation of umount: "lazy" should only be used to avoid long shutdown/reboot times:
A system reboot would be expected in near future if you're going to use this option for network filesystem or local filesystem with submounts. The recommended use-case for umount -l is to prevent hangs on shutdown due to an unreachable network share where a normal umount will hang due to a downed server or a network partition. Remounts of the share will not be possible.
Workarounds
Use a different SMB version
If you still hope that the lazily unmounted path will eventually stop being busy and get cleaned up by the Linux kernel, or you can't reboot at the moment, then you may be lucky: if your SMB server supports different protocol versions, you can use the following trick.
Let's say you mounted your share as follows:
mount.cifs //smb.server/share /mnt/smb_share -o username=smb_user,password=smb_pw
With that, Linux automatically negotiates the maximum supported SMB protocol version, maybe 3.1. If you now force that same version, it won't mount, as expected:
mount.cifs //smb.server/share /mnt/smb_share -o username=smb_user,password=smb_pw,vers=3.1
But then simply try a different version:
mount.cifs //smb.server/share /mnt/smb_share -o username=smb_user,password=smb_pw,vers=3.0
or maybe 2.1:
mount.cifs //smb.server/share /mnt/smb_share -o username=smb_user,password=smb_pw,vers=2.1
Change the IP of the SMB server
If you are able to change the IP address or add a second IP to your SMB server, you can use this to mount the same server.
Dirty: Forward the traffic
Let's say the SMB server has the IP address 10.0.0.1 and the mount is really dead. Then create this iptables rule:
iptables -t nat -A OUTPUT -d 10.0.0.250 -j DNAT --to-destination 10.0.0.1
Now change your mount command accordingly, so it mounts the Samba server through 10.0.0.250 instead of 10.0.0.1, and voilà, it's mounted without a server reboot. Dirty, but it works. PS: This rule does not survive a reboot, so you should mount the SMB server manually and leave /etc/fstab as usual.
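Once the share is reachable again and remounted normally, the NAT rule can be removed with the matching delete (identical rule, -D instead of -A):
iptables -t nat -D OUTPUT -d 10.0.0.250 -j DNAT --to-destination 10.0.0.1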
More debugging
If you want to check whether the SMB connection itself is theoretically working, you could try to list all shares of the server over SMB3 as follows:
smbclient -L //smb.server -U "smb_user" -m SMB3
or view the contents of a share with SMB1:
smbclient //smb.server/share -U "smb_user" -m NT1 -c ls
On RHEL 6 this worked for me also:
umount -f -a -t cifs -l FOLDER_NAME
A lazy unmount will do the job for you.
umount -l <mount path>
