How to know who is deleting files/directories - Linux

We have a directory /home/test/abc
Sometimes we find that the directory is not present; most probably someone has deleted it. We have lots of users who log in and out of the system.
I have checked the .bash_history of every user, but nobody seems to have executed an rm command.
I would like to know if there is a way to monitor this directory and be notified when a user or a script tries to modify it.
I am using CentOS.

You can do two things:
You can install the acct (psacct) utility, which keeps process accounting records of user activity on your machine.
You can install inotify-tools and then run sudo inotifywait -m <your_file_path_here>; it will monitor activity on that path live.
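For example, a rough sketch of both suggestions (package and service names assume CentOS, and the watched path follows the question):
# Option 1: process accounting - records which user ran which command
sudo yum install psacct
sudo service psacct start        # or: systemctl start psacct
sudo lastcomm rm                 # later: list every invocation of rm, with user and tty
# Option 2: inotify - live notification of changes; watch the parent
# directory so the removal of "abc" itself is reported
sudo inotifywait -m -e delete,delete_self,move /home/test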

Related

Debian Package Creation postinst as non-root

I have created quite a few deb files; I have no problems doing that and they all run beautifully. However, if I want to replace a file in the user's home directory, I am unsure how to do that.
I have tried a postinst script that rsyncs the files from a predefined location to the home directory, but since the postinst file is run as root (because the Debian installer runs as root), the files end up in root's home directory and not the user's.
Here's an example of the deb file contents :
Debian directory ---> control file ---> postinst file
usr/share/desktop (directory with files inside)
The postinst file contains the rsync command to send those files to the user's home:
#!/bin/sh
rsync -av /usr/share/desktop/ ~/.config/desktop/
The problem is that it sends the files to root's home (/root), not the default user's home :(
I don't have the username of the user, since this will be used on many computers with different users, so I can't use sudo -u username.
So what do I do? How do I replace files in a user's home directory from a deb install? Any help is much appreciated.
In a Bash script, ~ refers to the current user's home directory. The package installation scripts are always run as root, so that's what "current user" means in this context.
(You could argue that the package installation is probably initiated by a user running su or sudo, but in the general case, you cannot assume this to be the case.)
Modifying user files from a system package appears extremely suspicious in any event. If the need is genuine, this should probably not be approached as a system package installation question in the first place. What are you actually trying to accomplish?
Not only are you violating the basic principle that package management should not meddle with user files; a consequence of this arrangement is that the operation can only be performed once: If the user has installed the package, attempting to install it again does nothing (at least until you uninstall).
A more manageable and predictable approach would seem to be making the package provide this functionality, but leave it to the user to invoke the actual sync (overwriting) script as needed. Perhaps you want to hook it into the desktop startup scripts somehow.
Having said that, sudo exposes the invoking user's identity in $SUDO_USER so you could look for that, and simply fail if it is not set.
As an aside, package scripts should work with dash so you need to avoid bashisms - prefer $HOME over ~, for example.
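A minimal sketch of that $SUDO_USER approach (hedged: it assumes the package is installed via sudo, and the target directory under the user's home is only illustrative):
#!/bin/sh
# postinst sketch: copy packaged files into the invoking user's home, if known
set -e
if [ -z "$SUDO_USER" ] || [ "$SUDO_USER" = "root" ]; then
    echo "postinst: cannot determine the invoking user; skipping per-user sync" >&2
    exit 0
fi
USER_HOME=$(getent passwd "$SUDO_USER" | cut -d: -f6)
rsync -av /usr/share/desktop/ "$USER_HOME/.config/desktop/"
chown -R "$SUDO_USER": "$USER_HOME/.config/desktop/"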
I managed to find a workaround; although it is not exactly what I was looking for, here is my solution, at least for now.
#!/bin/sh
#This will move the desktop settings to required folder.
szAnswer=$(zenity --entry --text "Enter your login username\nThis must be entered correctly\n" --entry-text "Enter name of profile to use:")
xfce4-terminal -e "sudo rsync -av /usr/share/Desktop/ /home/$szAnswer/.config/xfce4/"
exit 0
In other words, the user is asked to enter his username, and the files get copied to that user's home directory. The advantage is that if the machine has multiple users, it will use the correct one. The disadvantage is that if the username is entered incorrectly, even with a spelling mistake, the install will fail.
But it does work; I have tested it. If anyone has a better solution, I eagerly await your suggestions.

node.js read protected files without running as root

I'd like to use Node.js to read/write files that live in a protected directory (/etc/apache2/sites-available). I understand that I can run the script with sudo, but the idea of that worries me. Is there some way I can have Node elevate for certain functions/calls without the whole script running with root access?
If you do not give your script elevated rights, it will not mysteriously obtain those rights out of thin air.
Given that you still need to modify the files, consider giving your app write permissions instead.
If you are running the app as user joe, and the owner of the sites-available files is root, then do: chown -R joe:joe sites-available.
But if some other user already uses those files, you might run into a permissions conflict. In that case you can work around it with a shared group, or by logging in (e.g. via SSH) as that user.
In short, there are several ways of achieving your goal, but none of them are specific to Node.js; it is all about Linux, chown and chmod.
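As a rough sketch of the shared-group idea (the group name webadmin is only a placeholder, and joe stands for whatever user runs the Node.js process):
# create a group, add the node user to it, and give the group write access
sudo groupadd webadmin
sudo usermod -aG webadmin joe
sudo chgrp -R webadmin /etc/apache2/sites-available
sudo chmod -R g+rw /etc/apache2/sites-available
# joe must log out and back in for the new group to take effect;
# a Node.js process run as joe can then read/write these files without root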

Installing software on Ubuntu as non-root

I've been stuck on a problem for two days now: the software I'm trying to install will not proceed unless I create a separate, non-root user.
Keep in mind I'm a big Linux noob and not very experienced with the OS.
I made a user called "smrtanalysis" in a group called "smrtanalysis".
I put him in the sudoers file.
I made a folder called smrtanalysis in my /home/nick/ directory.
I downloaded the software from the PacBio website and put the .run files into the directory noted above.
I used chmod 777 and chown (to user smrtanalysis) on the directory noted above and on the .run file.
I logged in as the smrtanalysis user with su smrtanalysis, entered the password, and typed:
./smrtanalyis-2.2.0.133377.run
The file runs, but then aborts with the following error message:
We recommend running this script as a designated SMRT Analysis user
(e.g. smrtanalysis) who will own all smrtpipe jobs and daemon
processes.
Current user is 'root' (primary group: root)
Installing as 'root' is currently not supported Switch to the desired
user and restart the install. Aborting installation...
Here is the install documentation:
https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-v2.2.0
It seems pretty straightforward but I can't seem to get it working. If you guys look at the install docs, you'll probably be able to tell me what I'm doing wrong.
Thanks for any help!
Regards,
Nick
Change
SMRT_ROOT=/opt/smrtanalysis
to
SMRT_ROOT=/home/nick/smrtanalysis
and the rest should be easy.
Be very careful installing binaries from the internet; make sure you're confident in the source.
Just don't use root for that - you ran the script as root by accident.
(update: pacbio team can help from the github page at https://github.com/PacificBiosciences/SMRT-Analysis/issues as well.)
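A quick sanity check before retrying (hedged: the SMRT_ROOT value follows the answer above, and how it is actually passed to the installer is described in the linked install docs):
su - smrtanalysis                      # the "-" gives a full login shell for that user
whoami                                 # should print smrtanalysis, not root
id -gn                                 # primary group should not be root either
SMRT_ROOT=/home/nick/smrtanalysis      # per the answer above
./smrtanalyis-2.2.0.133377.run         # the .run file from the question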

Transferring CouchDB .couch files from Windows to Linux

I am currently working on a CouchDB project and have recently decided to switch to a Linux environment for development, as I plan to deploy on a Linux server.
I was hoping to copy my .couch files straight from Program Files/Apache Software Foundation/CouchDB-1-1-1/var/lib/couchdb and paste them directly into what I guess should be /var/lib/couchdb, but I keep running into file/folder permission errors each time I try to access /var/lib/couchdb.
Is it even possible to transfer .couch files in the way I envisage?
...
Update - Following up on Dominic's comments, I managed to apply the fix found in the answer below.
After some investigative work, I found it to be a permissions error, exactly as Dominic Barnes had suggested in the comments...
The issue is also discussed here - Staging setup with couchdb
To fix it, I first ran;
sudo chmod -R 755 /var/lib/couchdb
I may also have changed the permissions on the relevant parent folders. I was then able to copy my .couch files into /var/lib/couchdb/COUCH-VERSION-NUMBER. After doing that, I had to use chmod to set favourable write permissions on the newly copied files, but also had to run:
sudo chown couchdb /var/lib/couchdb/COUCH-VERSION-NUMBER/
to hand those files over to the couchdb user (and its group) that the CouchDB installation sets up for internal use (I think...). After that, I restarted CouchDB, forcing it to stop with:
ps -U couchdb -o pid= | xargs kill -9
and restarting with:
/etc/init.d/couchdb start
After that, everything seemed to work as expected.
Hope that helps anyone else running into the same problem.
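For reference, the steps above strung together as a rough sketch (the source path and COUCH-VERSION-NUMBER are placeholders; chown to couchdb:couchdb assumes the stock user and group created by the CouchDB package):
sudo chmod -R 755 /var/lib/couchdb                                    # make the target tree accessible
sudo cp /path/to/exported/*.couch /var/lib/couchdb/COUCH-VERSION-NUMBER/
sudo chown -R couchdb:couchdb /var/lib/couchdb/COUCH-VERSION-NUMBER/  # hand the files to the couchdb user
ps -U couchdb -o pid= | xargs sudo kill -9                            # force-stop any running couchdb processes
sudo /etc/init.d/couchdb start                                        # restart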

Can oprofile be made to use a directory other than /root/.oprofile?

We're trying to use oprofile to track down performance problems on a server cluster. However, the servers in question have a read-only file system, where /var/tmp is the only writeable directory.
OProfile wants to create two directories whenever it runs: /root/.oprofile and /var/lib/oprofile, but it can't, because the filesystem is read-only. I can use the --session-dir command line option to make it write its logs somewhere other than /var/lib, but I can't find any such option to make it use a directory other than /root/.oprofile.
The filesystem is read-only because it is on nonwriteable media, not because of permissions -- ie, not even superuser can write to those directories. We can cook a new ROM image of the filesystem (which is how we installed oprofile, obviously), but there is no way for a runtime program to write to /root, whether it is superuser or not.
I tried creating a symlink in the ROM that points /root/.oprofile -> /var/tmp/oprofile, but apparently oprofile doesn't see this symlink as a directory, and fails when run:
redacted#redacted:~$ sudo opcontrol --no-vmlinux --start --session-dir=/var/tmp/oprofile/foo
mkdir: cannot create directory `/root/.oprofile': File exists
Couldn't mkdir -p /root/.oprofile
We must run our profilers on this particular system, because the performance issues we're trying to investigate don't manifest if we build and run the app on a development server. We can't just run our tests on a programmer's workstation and profile the app there, because the problem doesn't happen there.
Is there some way to configure oprofile so that it doesn't use /root ?
I guess it should be as simple as overriding the HOME environment variable:
HOME=/tmp/fakehome sudo -E opcontrol --no-vmlinux --start --session-dir=/var/tmp/oprofile/foo
If that doesn't work out, you could have a look at unionfs or aufs to create a writable overlay. You might even just mount a tmpfs on /root, or something simple like that.
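A sketch of the tmpfs variant (hedged: the mount would have to be baked into the ROM's boot sequence, e.g. via /etc/fstab, and the size is arbitrary):
# one-off, from a shell:
sudo mount -t tmpfs -o size=16m,mode=0700 tmpfs /root
# or as a persistent fstab entry in the ROM image:
# tmpfs   /root   tmpfs   size=16m,mode=0700   0   0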
It turns out that this directory is hardcoded into the opcontrol bash script:
# location for daemon setup information
SETUP_DIR="/root/.oprofile"
SETUP_FILE="$SETUP_DIR/daemonrc"
Editing those lines seemed to get it working, more or less.
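For example (hedged: the script location and the replacement path are illustrative; it just needs to point at the writable area, such as /var/tmp):
# in the opcontrol script, repoint the hardcoded setup location:
SETUP_DIR="/var/tmp/oprofile/.oprofile"
SETUP_FILE="$SETUP_DIR/daemonrc"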
