Cannot log in to the system after starting the server:
openpam_check_desc_owner_perms() : /etc/pam.d/login insecure perms
I started the system in single-user mode. All the system directories (/boot, /lib, ...) have
nobody:nogroup
ownership.
Of course, I cannot simply chown the system directories one by one. So, how can I restore ownership to root?
First, you should boot from a live image downloaded from the official repository. I don't think rescue mode can save you in this case.
In the live environment, just mount your affected filesystems somewhere:
mount /dev/${your_partition} /mnt
Now you can use mtree to set the correct ownership and permissions on all the default files and directories. The command walks your tree and applies the correct rights. Run it first without -u to check the state of your filesystem, then with -u to apply the changes.
# check before act
mtree -f /etc/mtree/BSD.root.dist -p /mnt
# you can now apply change
mtree -u -f /etc/mtree/BSD.root.dist -p /mnt
You can find more mtree files in /etc/mtree:
BSD.debug.dist
BSD.include.dist
BSD.sendmail.dist
BSD.usr.dist
BSD.var.dist
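Each of these spec files covers a different subtree; for example, a sketch of the same check-then-apply pattern for /usr on the mounted system:
# check and then apply the /usr spec against the mounted filesystem
mtree -f /etc/mtree/BSD.usr.dist -p /mnt/usr
mtree -u -f /etc/mtree/BSD.usr.dist -p /mnt/usr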
After doing that, you can run mergemaster. mergemaster will warn you when some files do not have the correct ownership or permissions:
mergemaster -iD /mnt
If you still have issues, you can fetch the FreeBSD source from SVN, extract it, and reinstall your configuration files manually (back up your configuration before doing that, and please read the official FreeBSD documentation about working with source).
cd /usr/src
svnlite checkout https://svn.freebsd.org/base/releng/${your_freebsd_release} .
cd /usr/src/etc
make install DESTDIR=/mnt
You can now reboot your computer or server without the live CD.
Related
Hi, I'm very new to Linux.
Once I changed the ownership of /usr, my sudo command stopped working.
Once I changed the ownership of /var, some other things broke.
1: I just want to know which folders should never have their default ownership changed.
2: What if someone gets permission denied on /var while installing some packages? Should chmod or chown be used?
I would never change the ownership of folders other than /home/* and /opt/. Sometimes you have to change the owner if you put your own files into /etc/, but you should know what you are doing.
To install software, even in Ubuntu, use the provided tools, 'apt' and 'dpkg' for example. The installation often needs root rights; grant them by prefixing the command with an additional 'sudo'.
# e.g. installation of a command-line browser
sudo apt install w3m
I am trying to install and run the DataStax Cassandra Community Edition on Red Hat Linux, but I don't have root privileges. I extracted the tar in my home directory, but I'm unable to run ./cassandra.
I am doing this on an HPC cluster and thought I'd install Cassandra in my home directory and save the data in a scratch space we've been provided (the home directory doesn't have enough space to hold all the data).
I would appreciate any help! Thanks!
From the installation docs for DataStax community edition, the only other step you need is to create the data and log directories:
$ sudo mkdir /var/lib/cassandra
$ sudo mkdir /var/log/cassandra
$ sudo chown -R $USER:$GROUP /var/lib/cassandra
$ sudo chown -R $USER:$GROUP /var/log/cassandra
If you are using a different location, that's fine. Just make sure to create the dirs and assign owners (as above) and also set the appropriate values in cassandra.yaml (data_file_directories, commitlog_directory, saved_caches_directory) and log4j-server.properties.
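As a minimal sketch for a non-default location (the scratch path here is hypothetical; substitute whatever your cluster provides):
# create the directories Cassandra will write to (hypothetical scratch path),
# then point the cassandra.yaml keys at them:
#   data_file_directories:   [/scratch/$USER/cassandra/data]
#   commitlog_directory:     /scratch/$USER/cassandra/commitlog
#   saved_caches_directory:  /scratch/$USER/cassandra/saved_caches
mkdir -p /scratch/$USER/cassandra/{data,commitlog,saved_caches}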
A more detailed log of the results you're seeing would confirm whether this is the problem.
Yes, you can run Cassandra without root or sudo privileges. Extract the Cassandra tar file into your local user directory and configure cassandra.yaml as a single node. Then run Cassandra from the bin directory, either in the foreground or in the background, and log in using the CQL shell.
bin/cassandra -f
OR
bin/cassandra
AND
cqlsh
This is for Cassandra version 2.1.x.
You can run Cassandra without root or sudo privileges. Besides extracting the tar file, you need to modify conf/logback.xml to redirect the log to your home directory or somewhere else you can write to:
<file>/home/xxxx/system.log</file>
<fileNamePattern>/home/xxxx/system.log.%i.zip</fileNamePattern>
The only minor issue of not running as root is that the ulimit -l value (the RLIMIT on max locked memory) should be increased, and I cannot increase it with my account.
But this does not prevent Cassandra from running.
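To see the limit in effect for your account (checking does not require root, but raising it needs root or an /etc/security/limits.conf entry):
# show the current max locked memory limit, in kbytes, for this shell
ulimit -l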
In my opinion, almost all of the Java-based Apache projects need no root privileges, and Cassandra is no exception.
First, download apache-cassandra-bin.tar.gz from http://cassandra.apache.org/download/. Remember not to use the .deb or .rpm packages or anything similar.
Second, run tar -xzf cassandra-bin.tar.gz to unpack it into any folder; suppose the folder is $cassandra_home.
Third, just go to $cassandra_home/bin and run ./cassandra. Done! The data is stored in $cassandra_home/data and the logs are in $cassandra_home/logs.
If you want to set the location of the data and logs:
1st, go to $cassandra_home/conf and modify the cassandra.yaml file.
Set these directories to a folder you have read and write access to:
data_file_directories:
commitlog_directory:
cdc_raw_directory:
hints_directory:
saved_caches_directory:
(Different Cassandra versions may have different parameters. You can just search for "director" in the yaml file.)
2nd, if you want to adjust the logging, change the log file position: edit $cassandra_home/conf/logback.xml (or log4j or whatever your version uses) and set the log folder to another location.
Enjoy it.
I'm sure this has been asked before, but I have no clue what to search for.
I am trying to create a custom Linux image (for the Raspberry Pi). I am currently manipulating the filesystem of the .img, but I've discovered it's not as simple as dropping in the binary :( if only...
What is the accepted way to "pre-install" a package on a disk image where you can only manipulate the filesystem and ideally not run it first? Am I best to boot it up, install, and then create the image from that, or is there a way of doing it beforehand, in the same way you can change configuration settings etc.?
Usually, when I have to change something in a disk image, I do the following:
sudo mount --bind /proc /mnt/disk_image/proc
sudo mount --bind /sys /mnt/disk_image/sys
sudo mount --bind /dev /mnt/disk_image/dev
These actions are needed because these folders are created during the boot process; mounting them into your system image will emulate a full boot. Then you can chroot into it safely:
sudo chroot /mnt/disk_image
You're now able to issue commands in the chroot environment:
sudo apt-get install chromium
Of course, change /mnt/disk_image to the path where you have mounted your filesystem. apt-get will only work on Debian-based systems; change it according to your distribution.
You may find problems connecting to the internet, which can be caused by the DNS configuration. The best thing you can do is copy your /etc/resolv.conf file into the mounted filesystem, as this file is usually written by DHCP and is empty in the chroot environment.
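For example, assuming the image is mounted at /mnt/disk_image as above:
# copy the host's DNS configuration into the chroot so name resolution works inside it
sudo cp /etc/resolv.conf /mnt/disk_image/etc/resolv.conf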
This is the only solution that gives you full access to the command line of the system you're trying to modify.
This is an untested idea:
The dpkg tool, which can install .deb packages, has a --root option which can point it at a filesystem other than the local /.
From the man page:
--instdir=dir
Change default installation directory which refers to the
directory where packages are to be installed. instdir is
also the directory passed to chroot(2) before running
package’s installation scripts, which means that the
scripts see instdir as a root directory. (Defaults to /)
--root=dir
Changing root changes instdir to dir and admindir to
dir/var/lib/dpkg.
If you mount your image and pass its mountpoint as --root, it should work.
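Something like this, untested as noted above, with a placeholder .deb filename standing in for a package you have already downloaded:
# install a downloaded .deb into the mounted image instead of the running system
sudo dpkg --root=/mnt/disk_image -i some-package.deb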
There are things like the Ubuntu Customization Kit which allow you to create your own version of the distro with your own packages.
Crunchbang even has a utility like this, which is the distro I have personally selected for experimenting with my Pi.
I'm a newbie in Linux. I installed Fedora 16 as a guest in VirtualBox on Windows 7. Now I want to access a shared folder from Fedora. Here is what I did:
Install guest additions [OK]
Add the shared folder in VirtualBox [OK]. Shared folder path in Windows 7: D:\share_folder_vm
In the terminal in Fedora, I ran these commands:
[hoangphi@localhost ~]$ su
Password:
[root@localhost hoangphi]# cd Desktop/
[root@localhost Desktop]# mkdir share_folder
[root@localhost Desktop]# sudo mount -t vboxsf D:\share_folder_vm \share_folder
/sbin/mount.vboxsf: mounting failed with the error: Protocol error
[root@localhost Desktop]#
I got this message: /sbin/mount.vboxsf: mounting failed with the error: Protocol error
share_folder_vm is the folder in Win7 Host and share_folder is the folder in Fedora Guest.
My question: How can I fix this problem?
Install Oracle Guest Additions:
Press [host key (usually Right Ctrl)] + [D], then:
sudo /media/VBOXADDITIONS_4.*/VBoxLinuxAdditions.run
You can now enjoy:
A guest that can run at native screen resolution
Ability to share files between host and guest
Share the clipboard (allowing you to copy and paste between host and guest).
To share folders, set them up to be shared and consider the permissions. Note that the host file permissions carry over; in other words, if you can't write to a file on the host, the guest can't either.
After setting up the folder to be shared, create a mount point if you don't have one:
mkdir -p ~/destination
Now mount it under the name you configured it with:
sudo mount -t vboxsf myFileName ~/destination
As an extra tip, you can really exploit this feature to do things like:
- Use the guest's Subversion client to create a repository in a mounted directory (you won't have a full svn client on the host, but the repo can be used in an IDE on the host).
- I personally use my guest to download and unpack binaries like Tomcat into a targeted mount. Yes, you can use Linux to install things on Windows!
To unmount all shares:
sudo umount -f -a -t vboxsf
This thread has some great tips. However...
@GirishB's answer isn't correct - sorry. Jartender's is best.
Also, every post in here seems to assume you're logging in to the Linux guest as root, except for @tomoguisuru's. Yuck! Don't use root; use a separate user account and sudo when you need root privileges. Then this user (or any other user who needs the shared folder) should have membership in the vboxsf group, and @tomoguisuru's command is perfect, even terser than what I use.
Forget running mount yourself. Set up the shared folder to auto-mount and you'll find it under /media on my OEL (RHEL and CentOS are probably the same). If it's not there, just run mount with no arguments and look for a mounted directory of type vboxsf.
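For example, to list just the VirtualBox shares that are currently mounted:
# show mounted filesystems of type vboxsf
mount -t vboxsf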
For accessing a shared folder, you have to have the "Oracle VM extension pack" installed.
Look at the bottom of this link; you can download it from there.
http://www.oracle.com/technetwork/server-storage/virtualbox/downloads/index.html
I just figured it out. You need to add a shared folder using VirtualBox before you access it from the guest.
Click "Devices" in the menu bar -> Shared Folders -> add a directory and give it a name,
then in the guest terminal, use:
sudo mount -t vboxsf myFileName ~/destination
Don't refer to the host directory directly.
There's a simpler way I found when running Linux Mint.
Ensure you install the Guest Additions from the command line and that you have the folder(s) shared with the "Auto-mount" and "Make Permanent" options selected in the "Shared Folders" tab of the Machine Settings.
Launch the User management application from the Applications/Settings/System Settings menu (requires sudo) within the Mint menu.
In the "Privileges and Groups" tab, check the box next to the "vboxsf" group, then apply and OK your way back out.
Any user in the vboxsf group then has full access to any shared folders on each boot, with no manual mounting or unmounting.
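If you prefer the terminal, the group membership step can also be done with a single command (log out and back in for it to take effect):
# add the current user to the vboxsf group so auto-mounted shares are accessible
sudo usermod -aG vboxsf $USER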
I usually do the following in addition to the above, just to have quick access:
Open the Dolphin file manager and navigate to /media/
Right-Click on the shared folder and click "Add to Places"
You probably need to change your mount command from:
[root@localhost Desktop]# sudo mount -t vboxsf D:\share_folder_vm \share_folder
to:
[root@localhost Desktop]# sudo mount -t vboxsf share_name \share_folder
where share_name is the "Name" of the share in the VirtualBox -> Shared Folders -> Folder List list box. The argument you have ("D:\share_folder_vm") is the "Path" of the share on the host, not the "Name".
Maybe this can help others:
I had the same problem, and after searching with Google I found that it can be caused by the permissions of the folder. So you first need to add permissions:
$ chmod 777 share_folder
Then run again
$ sudo mount -t vboxsf D:\share_folder_vm \share_folder
Check the answers here: Error mounting VirtualBox shared folders in an Ubuntu guest...
The VirtualBox version often has incompatibilities with the Linux version, so it can be hard to install the Guest Additions by using the "Guest Additions CD image". Linux distributions frequently have a good companion guest additions package (equivalent in function to the CD image) which can be installed with:
sudo apt-get install virtualbox-guest-dkms
After that, in the window menu of the guest, go to Devices -> Shared Folders -> Shared Folders Settings and add a host Windows folder under Machine Folders (mark the Auto-mount option); then you can see the shared folder in the file manager of the guest Linux.
There is a really simple tutorial here: http://my-wd-local.wikidot.com/otherapp:configure-virtualbox-shared-folders-in-a-windows-ho
telling you to do:
sudo mkdir /mnt/vbox_share
sudo mount.vboxsf nameAddedAsShared /mnt/vbox_share
These are the steps to share a folder from Windows to a Linux VirtualBox guest:
Step 1: Install the VirtualBox Extension Pack from this link.
Step 2: Install the Oracle Guest Additions
by pressing Right Ctrl and D together, then running the command:
sudo /media/VBOXADDITIONS_4.*/VBoxLinuxAdditions.run
Step 3: Create a shared folder by clicking Settings in VBox,
then Shared Folders -> + and give the folder a name (e.g. VB_Share).
Select the shared folder path on Windows (e.g. D:\VBox_Share).
Step 4: Create a folder named VB_Share in /home/user-name (e.g. /home/satish/VB_Share) and open up its permissions:
mkdir VB_Share
chmod 777 VB_Share
Step 5: Run the following command:
sudo mount -t vboxsf VB_Share ~/VB_Share
We're trying to use oprofile to track down performance problems on a server cluster. However, the servers in question have a read-only file system, where /var/tmp is the only writeable directory.
OProfile wants to create two directories whenever it runs: /root/.oprofile and /var/lib/oprofile, but it can't, because the filesystem is read-only. I can use the --session-dir command-line option to make it write its logs somewhere other than /var/lib, but I can't find any such option to make it use a directory other than /root/.oprofile.
The filesystem is read-only because it is on non-writeable media, not because of permissions -- i.e., not even the superuser can write to those directories. We can cook a new ROM image of the filesystem (which is how we installed oprofile, obviously), but there is no way for a program to write to /root at runtime, whether it is running as superuser or not.
I tried creating a symlink in the ROM that points /root/.oprofile -> /var/tmp/oprofile, but apparently oprofile doesn't see this symlink as a directory and fails when run:
redacted@redacted:~$ sudo opcontrol --no-vmlinux --start --session-dir=/var/tmp/oprofile/foo
mkdir: cannot create directory `/root/.oprofile': File exists
Couldn't mkdir -p /root/.oprofile
We must run our profilers on this particular system, because the performance issues we're trying to investigate don't manifest if we build and run the app on a development server. We can't just run our tests on a programmer's workstation and profile the app there, because the problem doesn't happen there.
Is there some way to configure oprofile so that it doesn't use /root?
I guess it should be as simple as overriding the HOME environment variable:
HOME=/var/tmp/fakehome sudo -E opcontrol --no-vmlinux --start --session-dir=/var/tmp/oprofile/foo
If that doesn't work out, you could have a look at
unionfs
aufs
to create a writable overlay. You might even just mount a tmpfs on /root, or something simple like that.
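For instance, the tmpfs route is a one-liner (a sketch; it assumes you can run mount at runtime on this system):
# put a small writable tmpfs over the read-only /root so oprofile can create /root/.oprofile
sudo mount -t tmpfs -o size=16m tmpfs /root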
It turns out that this directory is hardcoded into the opcontrol bash script:
# location for daemon setup information
SETUP_DIR="/root/.oprofile"
SETUP_FILE="$SETUP_DIR/daemonrc"
Editing those lines seemed to get it working, more or less.
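For example, the edit could simply point the setup directory at the writable location (the exact path here is hypothetical; match it to the --session-dir you use):
# hypothetical writable location for daemon setup information (was /root/.oprofile)
SETUP_DIR="/var/tmp/oprofile-setup"
SETUP_FILE="$SETUP_DIR/daemonrc"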