linux delete user account without userdel

I'd like to delete a user from a tarball that contains the files for a Linux OS (it's a tarball of the root [/] filesystem). Is there a way to do this completely and properly, such that it mimics the steps taken by the userdel command? I suppose I have two choices:
1. Work within the OS on an actual target, use userdel, and then re-tar the files. Not a problem, but I was curious about acting directly on the tarball, hence...
2. Mimic the steps taken by userdel: un-tar and delete all entries related to the user. According to the man page of userdel, I would delete entries in /etc/group, /etc/login.defs, /etc/passwd, and /etc/shadow. Then, re-tar.
Approach (2) is attractive because I could programmatically add or delete users directly on the tarball. I'll try (2), but I'm wondering whether there are any unintended consequences or leftover bookkeeping that I should handle. Or is there another way to do this?

/etc/login.defs is only consulted when a new user is created, so that file does not need to be modified. However, a mail spool will have been created for the user in the location listed in login.defs.
Deleting the user's entries from /etc/shadow and /etc/passwd will work. Editing /etc/group is not strictly required, but it can't hurt. Those three files will take care of it. You may also delete the user's mail spool (and home directory) if desired.
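For approach (2), here is a minimal sketch of editing the account files directly. It assumes GNU tar and GNU sed; the tarball name rootfs.tar and the account name olduser are made up, and the fixture section exists only so the sketch is self-contained:

```shell
#!/bin/sh
# Sketch: remove a user's entries directly from a root-filesystem tarball.
# Assumes GNU tar and GNU sed; "olduser" and "rootfs.tar" are hypothetical.
set -e
USER_TO_DEL=olduser

# --- fixture: build a tiny fake root-filesystem tarball for the demo ---
mkdir -p rootfs/etc
printf 'root:x:0:0:root:/root:/bin/sh\nolduser:x:1000:1000::/home/olduser:/bin/sh\n' > rootfs/etc/passwd
printf 'root:*:19000:0:99999:7:::\nolduser:*:19000:0:99999:7:::\n' > rootfs/etc/shadow
printf 'root:x:0:\nolduser:x:1000:\nusers:x:100:olduser\n' > rootfs/etc/group
tar -cf rootfs.tar -C rootfs . && rm -rf rootfs

# --- the actual work: un-tar, delete the user's entries, re-tar ---
mkdir rootfs
tar -xf rootfs.tar -C rootfs
sed -i "/^${USER_TO_DEL}:/d" rootfs/etc/passwd rootfs/etc/shadow
# Drop the user's own group and remove them from other groups' member lists.
sed -i "/^${USER_TO_DEL}:/d; s/\([:,]\)${USER_TO_DEL},/\1/; s/,${USER_TO_DEL}$//; s/:${USER_TO_DEL}$/:/" rootfs/etc/group
tar -cf rootfs.tar -C rootfs .
```

Removing the mail spool and home directory is just an extra rm -rf inside rootfs/ before the final tar.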

Related

How to automatically change the ownership of every file in a directory

There's an external server that deposits some files inside a folder of my computer.
I need to automatically change the user ownership of every file created inside that folder.
I've seen a lot of answers saying that I could just change the GROUP ownership through setfacl (from my research it is not possible to change the USER ownership through setfacl).
In this case, I can't, because there's a script owned by the user A (not root) that is going to chmod this deposited file owned by the user B, and you can only chmod a file that you own.
Instead of this (inside the script):
chmod 777 /folder/file.txt
I tried this:
sudo chmod 777 /folder/file.txt
But a password is asked and BAM!
Do you have any ideas on how to deal with this?
Am I missing something?
I am not sure what you want to achieve, so here are a couple of suggestions:
if the goal is simply for the file to have specific permission settings, and ownership change is secondary, you could set the permissions correctly on the source computer and then use a transfer method that preserves permissions (scp -p or some such)
note that a script owned by A can still be run by B if A has set the permissions correctly (group and other executable bit). When B runs A's script, it runs with B's permissions, so changing permission bits of B's file will work
if ownership change is imperative, you could transfer the file to B, make sure A has permission to read (see above), and then have A make a copy of the file to A's own directory using the cp command. The copy will be owned by A and thus A can change permissions of the copy. After that, run some regular process to clean up B's directory in order to save space if that's an issue
alternatively, you could have B on the source computer log into A's account on the receiving computer, so that the file ends up under A's ownership. If you do not want to give B the password of A, you could do this using an SSH key. But of course this has other security risks, so the cp method above is safer
finally, on unix it is generally not a good idea to make files writable for "other", and often not even for "group", so permission settings of 775 or 755 are much safer and preferred. Otherwise anyone on the same computer could modify the file or make it an empty file. Even on a private computer this is good practice, because sooner or later one of these files will get copied to a multi-user system and no-one remembers to check the permissions
Hopefully some of these hints are useful.
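The copy-then-chmod idea from the third suggestion can be sketched as follows; the paths and file names are made up, and the "deposited" file is created locally just to keep the sketch self-contained:

```shell
#!/bin/sh
# Sketch of the copy-to-own approach: user A copies B's readable file into
# A's own directory; the copy belongs to A, so A can chmod it without sudo.
set -e
mkdir -p incoming a_data              # stand-ins for B's drop folder and A's directory
echo "payload" > incoming/file.txt    # pretend this was deposited by user B

cp incoming/file.txt a_data/file.txt  # the copy is owned by whoever runs cp
chmod 644 a_data/file.txt             # chmod now succeeds: A owns the copy
```

A cron job owned by A could run the copy step periodically, with a separate cleanup pass pruning the drop folder if space matters.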
One way may be to use the setgid bit on the parent directory. This ensures that newly created files are given the same group ownership as the directory they are in.
In most cases this is enough, as you can ensure that the group is one that the file creator and the users are all members of, so they all have access to the files.
(You can also set the setuid bit on a directory, but on Linux it is ignored, and it is almost always the wrong thing to do in an organisation anyway: it only takes that one person to be away for the directory to fill up with files nobody else can manage.)
First fix the ownership recursively, then set the setgid bit:
chown -R owner:group directory
chmod g+s directory
Example:
chown -R root:root sysadmin
chmod g+s sysadmin
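A minimal sketch of the setgid behaviour; the directory name is made up, and the demo uses the caller's own primary group so it can run anywhere:

```shell
#!/bin/sh
# Demonstrate setgid group inheritance on a shared directory.
set -e
GRP=$(id -gn)            # the caller's primary group, just for the demo
mkdir shared
chgrp "$GRP" shared
chmod 2775 shared        # rwxrwsr-x: the "s" in the group triad is the setgid bit
touch shared/report.txt  # new files inherit the directory's group, not the creator's
```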

CHMOD vs UMASK - Linux file permissions

In a script, should I create a file first and then use chmod to assign permissions (for example, first using the touch command to create the file and then using chmod to edit its permissions), or should I mask permissions using umask as I create the file? Also, please explain the pros and cons of choosing one over the other.
Note: This file doesn't need to be executed.
As with most things, it depends on far more information than you've given :-)
However, it's usually a good idea to do things in a closed manner then open them up, rather than the other way around. This is basic "Security 101".
For example, let's say you're creating a file for the user, and the user has foolishly selected a umask of zero (so every file they create starts out readable and writable by everyone).
In that case, the file is fully open for anyone to change between the creation and chmod stage and, while you can minimise this time, you cannot really remove it totally.
For the truly paranoid among us, it would be better to actually create the file in as closed a manner as possible (probably just rw for owner), do whatever you have to do to create the content of that file, then use chmod to open it up to whatever state it needs to be, something like:
( umask 177 ; create_file myfile.txt ; chmod 644 myfile.txt )
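The difference is visible in the mode each file is born with; the file names below are made up:

```shell
#!/bin/sh
# Compare create-under-restrictive-umask with create-then-chmod.
set -e
( umask 177; touch secret.txt )  # born 600: never world-readable, not even briefly
touch open.txt                   # born with whatever the current umask allows
chmod 600 open.txt               # ...tightened only after the fact
stat -c '%a %n' secret.txt open.txt
```

Both end up at mode 600, but only the first was never exposed between creation and tightening.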
Briefly put: it doesn't matter much, and in most cases the approach depends on your needs.
If you need the same file permissions throughout your script's logic, I would prefer to set the umask at the beginning of the script and then simply create files, rather than create each file and run a chmod command on it. Alternatively, you can set the permissions all at once at the end of the script by running chmod -R 0XXX /path/to/folder.
You should always have a umask set for a specific user, as you don't want to be setting permissions every time you or an application creates a file. You can further protect or open up specific files using chmod if needed (these cases will be rare). Unless the file you are creating needs to be protected or accessed in a special way, you should let the umask take care of its permissions.
Create a separate user and a dedicated directory for the application that is running the script.
Set its appropriate umask.
Specify extra permissions if you need them.

How to lock file without checking out in perforce

So I have some resource files I use for unit testing that I don't want changed (otherwise the unit tests will break).
Is there a way to lock these files using p4v without checking out the file?
I do not have admin rights btw.
If you are willing to check the files out, it's pretty easy to solve. Just open the files for edit, and then lock them. As long as you keep them open and locked, they should be safe - though I believe an admin could forcibly unlock them.
You can always create a second workspace for locking them if you don't want to clutter up your main one.
There are two solutions that could work without needing to check the file out / locking it.
Using P4 permissions, you can assign read-only access to the files or directory in the depot. Everyone will still be able to read the files, which is essential for their work, but no one without the correct permissions will be allowed to submit changes to them. Read more about the p4 protect command in the manual.
Write a P4 trigger that checks for the files on pre-submit and rejects the changelist if they are found. Read more about the p4 triggers command in the manual.
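For the first option, the protect-table entries might look something like this (the group name and depot path are hypothetical); since the bottommost matching line takes precedence, the read line demotes the test-resource path to read-only:

```
write group dev-team * //depot/...
read  group dev-team * //depot/tests/resources/...
```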
For both of these you will need help from your friendly superuser/administrator. Option 1 is by far the better solution, as triggers can slow your server down if you have too many of them or they do too much. It will be up to your administrator whether they want to add the permission to the protect table.
Note: Permissions/Protect are synonymous, like Workspace/Client.

linux script, standard directory locations

I am trying to write a bash script to do a task, I have done pretty well so far, and have it working to an extent, but I want to set it up so it's distributable to other people, and will be opening it up as open source, so I want to start doing things the "conventional" way. Unfortunately I'm not all that sure what the conventional way is.
Ideally I want a link to an in depth online resource that discusses this and surrounding topics in depth, but I'm having difficulty finding keywords that will locate this on google.
At the start of my script I set a bunch of global variables that store the names of the dirs it will be accessing. This means I can modify the dirs quickly, but these are programming shortcuts, not user shortcuts; I can't tell the users that they have to fiddle with this stuff. Also, I need individual users' settings not to get wiped out on every upgrade.
Questions:
Name of settings folder: ~/.foo/ -- this is well and good, but how do I keep my working copy and my development copy separate? Tweak the reference in the source of the dev version?
If my program needs to maintain and update a library of data (GPS tracklog data in this case), where should this directory be? The user will need to access some of this data, but it's mostly for internal use. I personally work in Cygwin, and I like to keep this data on a separate drive, so the path is weird; I suspect many users could be in a similar situation. For a default, however, I'm thinking ~/gpsdata/ -- would this be normal, or should I hard-code a system that asks the user at first run where to put it, and stores the answer in the settings folder? Whatever happens, I'm going to have to store the directory reference in a file in the settings folder.
The program needs a data "inbox": a folder into which the user can dump files, then run the script to process them. I was thinking ~/gpsdata/in/? There will always be an option to pass a file or folder on the command line as well (it processes files at all locations listed, including the "inbox").
Where should the script itself go? It's already smart enough that it can create all of its ancillary/settings files (once I figure out the "correct" directory) if run with "./foo --setup". I could shove it in /usr/bin/ or /bin or ~/.foo/bin (and add that to the path). What's normal?
I need to store login details for a web service that it will connect to (using curl -u, if it matters). I plan on including a setting whereby it asks for a username and password on every execution, but it currently stores them in plain text in a file in ~/.foo/ -- I know, this is not good. The web service (osm.org) does support OAuth, but I have no idea how to get curl to use it -- getting curl to speak to the service in the first place was a hack. Is there a simple way to do really basic encryption on a file like this to deter idiots armed with Notepad?
Sorry for the list of questions; I believe they are closely related enough for a single post. This is all stuff I've been stabbing at, but I would like clarification/confirmation.
Name of settings folder: ~/.foo/ -- this is well and good, but how do I keep my working copy and my development copy separate?
Have a default of ~/.foo, and an option (for example --config-directory) that you can use to override the default while developing.
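A sketch of that override in shell; the flag name --config-directory and the dev path ./devcfg are invented for the example, and the set -- line just simulates command-line arguments:

```shell
#!/bin/sh
# Default config directory with a --config-directory override.
set -- --config-directory ./devcfg   # simulate: foo --config-directory ./devcfg

CONFIG_DIR="${HOME}/.foo"            # production default
while [ $# -gt 0 ]; do
  case "$1" in
    --config-directory) CONFIG_DIR=$2; shift 2 ;;
    *) shift ;;
  esac
done
echo "using config in $CONFIG_DIR"
```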
If my program needs to maintain and update a library of data (GPS tracklog data in this case), where should this directory be?
If your script is running under a normal user account, this will have to be somewhere in the user's home directory; elsewhere, you'll have no write permissions. Perhaps ~/.foo/tracklog or something? Again, add a command line option, and also an option in the configuration file, to override this.
I'm not a fan of your ~/gpsdata default; I don't want my home directory cluttered with all sorts of directories that programs created without my consent. You see this happen on Windows a lot, and it's really annoying. (Saved games in My Documents? Get out of here!)
The program needs a data "inbox" that is a folder that the user can dump files, then run the script to process these files. I was thinking ~/gpsdata/in/ ?
As stated above, I'd prefer ~/.foo/inbox. Also with command-line option and configuration file option to change this.
But do you really need an inbox? If the user needs to run the script manually over some files, it might be better just to accept those file names on the command line. They could just be processed wherever, without having to move them to a "magic" location.
Where should the script itself go?
This is usually up to the packaging system of the particular OS you're running on. When installing from source, /usr/local/bin is a sensible default that won't interfere with package managers.
Is there a simple way to do a really basic encryption on a file like this to deter idiots armed with notepad?
Yes, there is. But it's better not to, because it creates a false sense of security. Without a master password or something, secure storage is not possible! Pidgin, for example, explicitly stores passwords in plain text, so that users won't make any false assumptions about their passwords being stored "securely". So it's best just to store them in plain text, complain if the file is world-readable, and add a clear note to the manual to warn the user what's going on.
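The "complain if the file is world-readable" part can be a few lines of shell; the file name credentials is made up, and the chmod 644 only simulates a sloppily created file:

```shell
#!/bin/sh
# Detect and tighten an over-permissive credentials file.
set -e
touch credentials
chmod 644 credentials            # simulate a file created too openly
mode=$(stat -c %a credentials)
case "$mode" in
  *[1-7])                        # last octal digit nonzero: "other" has some access
    echo "warning: credentials had mode $mode; tightening to 600" >&2
    chmod 600 credentials ;;
esac
```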
Bottom line: don't try to reinvent the wheel. Thousands of scripts and programs have faced the same issues; most of them ended up adopting the same conventions, and for good reasons. Look at what they do and mimic them.
You can start with the Filesystem Hierarchy Standard. I'm not sure how well followed it is, but it does provide some guidance. In general, I try to use the following:
$HOME/.foo/ is used for user-specific settings - it is hidden
$PREFIX/etc/foo/ is for system-wide configuration
$PREFIX/foo/bin/ is for system-wide binaries
sym-links from $PREFIX/foo/bin are added to $PREFIX/bin/ for ease of use
$PREFIX/foo/var/ is where variable data would live - this is where your input spools and log files would live
$PREFIX should default to /opt/foo even though almost everyone seems to plop stuff in /usr/local by default (thanks GNU!). If someone wants to install the package in their home directory, then substitute $HOME for $PREFIX. At least that is my take on how this should all work.
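Under a scratch prefix, the layout above can be sketched like this (the package name foo, the prefix ./opt, and the echo payload are all invented):

```shell
#!/bin/sh
# Lay out $PREFIX/etc/foo, $PREFIX/foo/bin, $PREFIX/foo/var, and the
# ease-of-use symlink in $PREFIX/bin, under a scratch prefix.
set -e
PREFIX=./opt                                  # stand-in for a real prefix
mkdir -p "$PREFIX/etc/foo" "$PREFIX/foo/bin" "$PREFIX/foo/var" "$PREFIX/bin"
printf '#!/bin/sh\necho foo\n' > "$PREFIX/foo/bin/foo"
chmod 755 "$PREFIX/foo/bin/foo"
ln -sf ../foo/bin/foo "$PREFIX/bin/foo"       # relative link survives moving the prefix
"$PREFIX/bin/foo"                             # runs the tool via the symlink
```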

Linux directory permissions read write but not delete

Is it possible to setup directory permissions such that a group is able to read and write files and subdirectories but not delete anything?
It might be enough to set the sticky bit on the directories. Users will be able to delete any files they own, but not those of other users. This may be enough for your use case. On most systems, /tmp is set up this way (/tmp's mode is 1777).
chmod 1775 /controlled
However, if you want more control, you'll have to enable ACLs on the filesystem in question.
In /etc/fstab, append acl to the mount options:
/dev/root / ext3 defaults,acl 1 1
You can then use setfacl/getfacl to control and view acl level permissions.
Example (files, once written, are read-only, but CAN be deleted by the owner, though not by others):
setfacl --set u::rwx,g::rwx,o::--- /controlled
setfacl -d --set u::r-x,g::r-x,o::--- /controlled
(Note that setfacl cannot set the setgid or sticky bits; use chmod for those.)
You can set a default acl list on a directory that will be used by all files created there.
As others have noted, be careful to specify exactly what you want. You say "write" - but can users overwrite their own files? Can they change existing content, or just append? Once written, it's read only? Perhaps you can specify more detail in the comments.
Lastly, selinux and grsecurity provide even more control, but that's a whole other can of worms. It can be quite involved to setup.
Well, it would be r-x on the directory for that group.
And the files in it would have rw-.
This is because a file can be written to if its own permissions allow write, but it can only be deleted (or created, or renamed) if its parent directory's permissions allow write.
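A quick sketch of that combination; the directory name is made up, and the rm that would fail is left as a comment because the demo may run as root, where permission checks don't apply:

```shell
#!/bin/sh
# r-x on the directory, rw- on the file: contents can be changed,
# but the file cannot be deleted (unlinking needs write on the directory).
set -e
mkdir noremove
echo "v1" > noremove/data.txt
chmod 555 noremove             # directory: read + traverse only
echo "v2" > noremove/data.txt  # rewriting the existing file still works
# rm noremove/data.txt         # ...but this would fail for any non-root user
cat noremove/data.txt
chmod 755 noremove             # restore write so the directory can be cleaned up
```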
Possible or not, make sure that overwriting with a 0-byte file isn't equivalent to deleting the file in your particular context.
