Unshare and newuidmap: update - linux

I'm trying to create a process with its own namespace and then make a uid (and possibly gid) mapping. I'm following this question and its answer but, as indicated in this recent comment, it no longer works.
Here's the skinny. First, you create a process in a new namespace with unshare:
unshare -U bash
And obtain the PID of the shell it runs, with echo $$ or some such. You grab that PID and then, from another shell, you go:
newuidmap 12394 0 0 1
The result, as indicated in the comment above, is an error:
newuidmap: uid range [0-1) -> [0-1) not allowed
In an update to the answer, Arks mentions:
it is something with settings in the /etc/subuid and /etc/subgid files
I can't figure out, however, what they mean. Any idea?

Still don't understand why newuidmap does not work. But this article shows that writing to /proc/$PID/uid_map does:
echo '5 1000 1' > /proc/14671/uid_map
This is a one-time operation that can't be repeated, and a single write establishes the whole UID mapping. GIDs have their own separate file, /proc/$PID/gid_map.
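For completeness, here is the whole dance scripted end to end, as a minimal sketch; it assumes a kernel of 3.19 or later, where /proc/$PID/setgroups must be set to deny before an unprivileged write to gid_map, and the PID is just the example value from above. As far as I can tell, newuidmap fails above because it only permits host-side ranges delegated to the caller in /etc/subuid (lines of the form user:100000:65536), and host uid 0 is never delegated to a normal user.

# Shell 1: create the user namespace and find its PID.
unshare -U bash
echo $$                            # say it prints 14671

# Shell 2, same user: map our own UID/GID 1000 to 5 inside the namespace.
echo '5 1000 1' > /proc/14671/uid_map
echo deny > /proc/14671/setgroups  # required before gid_map on kernels >= 3.19
echo '5 1000 1' > /proc/14671/gid_map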

Related

How to get Effective User Name and Scheduling Class for a process using bash - linux

I'm trying to write a bash script that takes a process ID as the user input and prints out information about that process, such as the nice value, priority, etc.
I almost got all what I need with help from this site: http://linux.die.net/man/5/proc
However, I couldn't find where to get the effective user name and the scheduling class of the process.
Any help would be appreciated.
The negated scheduling priority can be found as the 18th field of /proc/[pid]/stat as detailed in your link.
cat /proc/207/stat | cut -d' ' -f18
As for the owner of the process, it's the owner of the /proc/[pid] directory, or of any file in it.
stat -c "%U" /proc/207/
Edit: removed my scheduling priority calculation because I don't know the first thing about it and might misinterpret the documentation. I'll just leave where the negated one can be found.
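Putting the two pieces together, here is a minimal bash sketch of the kind of script the asker describes (PID as the first argument; the file name is mine). It reads field 18 without interpreting it, and strips everything up to the ')' that closes the comm field first, since a process name containing spaces would otherwise throw the field count off:

#!/bin/bash
# usage: ./procinfo.sh <pid>  -- a sketch, not hardened
pid=$1
# The owner of /proc/<pid> is the effective user of the process.
owner=$(stat -c '%U' "/proc/$pid")
# comm (field 2) is parenthesized and may contain spaces; cutting through
# the last ')' means field 18 of stat becomes field 16 of what remains.
prio=$(sed 's/^.*) //' "/proc/$pid/stat" | awk '{print $16}')
echo "owner=$owner priority=$prio"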

How can you hide passwords in command line arguments for a process in linux

There is quite a common issue in the unix world: when you start a process with parameters, one of them being sensitive, other users can read it just by executing ps -ef (for example, mysql -u root -p secret_pw).
The most frequent recommendation I found was simply not to do that: never run processes with sensitive parameters, and instead pass that information some other way.
However, I found that some processes have the ability to change the parameter line after they have processed the parameters, so that they look, for example, like this in the process list:
xfreerdp -decorations /w:1903 /h:1119 /kbd:0x00000409 /d:HCG /u:petr.bena /parent-window:54526138 /bpp:24 /audio-mode: /drive:media /media /network:lan /rfx /cert-ignore /clipboard /port:3389 /v:cz-bw47.hcg.homecredit.net /p:********
Note the /p:******** parameter, where the password was removed somehow.
How can I do that? Is it possible for a process in linux to alter the argument list it received? I assume that simply overwriting the char **argv I get in the main() function wouldn't do the trick. I suppose that maybe changing some files in the /proc pseudo-fs might work?
"Hiding" like this does not really work: there is always a window, before the process rewrites its arguments, during which the password is perfectly visible. So it cannot be relied on, even if it is not completely useless.
The way to go is to pass the password in an environment variable.
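For the mysql example specifically, a sketch: the stock client honors the MYSQL_PWD environment variable, and /proc/<pid>/environ is readable only by the process owner and root, unlike the argument list (MySQL's own documentation still calls MYSQL_PWD insecure, so weigh it accordingly):

export MYSQL_PWD='secret_pw'   # not visible in ps output for other users
mysql -u root -e 'SELECT 1'
unset MYSQL_PWD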

Find out ID of 'at' job from within it

When I schedule a job with 'at' it is assigned an id, viz:
job 44 at 2014-01-28 17:30
When that job runs I would like to get at that id from within it. This is on Centos, FWIW. I have established that no environment variable contains the ID. When the Perl code in that job runs I would like it to be able to print the job ID (44 in this example).
Yes, I know that atq shows an = next to jobs that are executing, but there might be more than one of those at a time.
I could do something like pass a unique argument to the job when scheduling it, capture the ID, save that and the argument to a file somewhere, and read that back from within the job. That's a lot of work I'd rather not go to if I don't have to, and it seems like this should be simple, but I'm drawing a blank.
What follows was figured out by reading the sources of at-3.14. The way at puts the job id and the run time into the file name should be similar in any version, but I haven't checked this.
To begin with, at encodes the job id and the time when a particular job should be run into the name of the file describing the job. The file name has the format aJJJJJTTTTTTTT, where JJJJJ is a 5-character hexadecimal string (the job id) and TTTTTTTT is an 8-character hexadecimal string (the time when the job should be run, stored as seconds since the epoch).
At jobs are run by feeding a job description file as the standard input to sh -c. Fortunately the Linux kernel provides a symbolic link, /proc/self/fd/0, which will point to the standard input of the process currently being executed (play with ls -l /proc/self/fd/0 in case you need to assure yourself that this indeed is so).
A file describing a job has been deleted by the time the job is run. However, the file is still available to the kernel, because it was duplicated with dup(2) before being used as the job's standard input. So we are actually resolving a symbolic link to a file name that is no longer visible. The perl script at the end needs to take this into account: readlink will return something like /foo/bar/baz (deleted) instead of /foo/bar/baz, and we are interested only in the file name, which has all the information we need.
The reason the symbolic link points to a deleted file is that the at daemon unlinks the original before executing the job. Unlinking is done only after creating a copy, a hard link, whose name begins with = instead of a. With this the at daemon tries to ensure there will be only one copy of a job running: the daemon will not execle(2), i.e. it will bail out, should the link(2) fail. Because the original file has been subject to open(2) and dup(2), the inode is still there for the kernel to use, since it still has hard links pointing to it.
After a fairly long and possibly confusing introduction, here is how to put it all together:
#!/usr/bin/perl
use strict;
use warnings;
my $job_file = readlink("/proc/self/fd/0");
if (index($job_file, " ") > 0) {
    # strip the " (deleted)" suffix that readlink appends for unlinked files
    $job_file = substr($job_file, 0, index($job_file, " "));
}
my $tmp = substr($job_file, rindex($job_file, "/") + 1);
$tmp =~ s/^a([0-9a-f]{5})[0-9a-f]+/$1/;
my $job_id = hex($tmp);
if ($job_id > 0) {
    printf("My AT job id is %d.\n", $job_id);
}
# end of file.
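To try it out, save the script somewhere (the path below is just an example, and the job id printed should match what at reported when you queued it):

chmod +x /usr/local/bin/at-job-id.pl
echo '/usr/local/bin/at-job-id.pl' | at now + 1 minute
# the "My AT job id is NN." line arrives in the job's mail output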

Handle "race-condition" between 2 cron tasks. What is the best approach?

I have a cron task that runs periodically. This task depends on a condition to be valid in order to complete its processing. In case it matters this condition is just a SELECT for specific records in the database. If the condition is not satisfied (i.e the SELECT does not return the result set expected) then the script exits immediately.
This is bad, as the condition will become valid soon enough (I don't know how soon, but it will, due to the run of another script).
So I would like somehow to make the script more robust. I thought of 2 solutions:
Put in a while loop and sleep repeatedly until the condition is valid. This should work, but it has the downside that once the script is in the loop, it is out of control. So I thought to additionally check, after waking up, whether a specific file exists. If it does, the script "understands" that the user wants to "force" stop it.
Once the script figures out that the condition is not valid yet, it appends a second script to the crontab and stops. That second script continually polls for the condition and, once the condition is valid, restarts the first script to resume its processing. This seems to work, but I am not sure it is a good solution. E.g. perhaps programmatically modifying the crontab is a bad idea?
Anyway, I thought that perhaps this problem is common and has a standard solution, much better than the two I came up with. Does anyone have a better proposal? Which of my ideas would be best? I am not very experienced with cron tasks, so there could be things/problems I am overlooking.
Instead of programmatically appending to the crontab, you might want to consider using at to schedule the job to run again at some time in the future. If the script determines that it cannot do its job now, it can simply schedule itself to run again a few minutes (or a few hours, as the case may be) later by way of an at command, as sketched below.
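A minimal sketch of that pattern (check_condition and do_work stand in for your own logic):

#!/bin/sh
# Cron (or at) runs this; if the precondition isn't met yet,
# re-queue this same script a few minutes out and exit quietly.
if ! check_condition; then
    echo "$0" | at now + 5 minutes
    exit 0
fi
do_work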
Following up from our conversation in comments, you can take advantage of conditional execution in a cron entry. Supposing you want to branch based on time of day, you might use the output from date.
For example: this would always invoke the first command, then invoke the second command only if the clock hour is currently 11:
echo 'ScriptA running' ; [ $(date +%H) == 11 ] && echo 'ScriptB running'
More examples!
To check the return value from the first command:
echo 'ScriptA' ; [ $? == 0 ] && echo 'ScriptB'
To instead check the STDOUT, you can use a colon as a no-op and branch by capturing output with the same $() construct we used with date:
: ; [ $(echo 'ScriptA') == 'ScriptA' ] && echo 'ScriptB'
One downside of the last example: STDOUT from the first command won't be printed to the console. You could capture it to a variable which you echo out, or write it to a file with tee, if that's important. For example:
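This sketch keeps ScriptA's output visible (passed through to stderr by tee) while still branching on its captured stdout:

out=$(echo 'ScriptA' | tee /dev/stderr) ; [ "$out" == 'ScriptA' ] && echo 'ScriptB'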

Automated ACL Check in shell script

I'd like to get some ideas from you on how to implement that. Let me explain a little bit my problem:
Scenario:
We have a system that must have some specific ACLs set in order to run. So, before starting it, it would be great if I could run a sort of pre-check to verify that everything is set correctly.
Goal:
Create a script that checks those ACLs before the system starts, alerting if one of them is wrong, based on a list of files/folders and their expected ACLs.
Problems:
Since the getfacl result is not a simple return value, the only way I found to do such a check was to parse the result and analyse each piece of it, which is not as elegant as I'd like it to be.
I doubt many of you have had to do this kind of ACL check, but I'm sure you can contribute to my cause :)
Thanks everybody in advance.
How about using the Python module pylibacl?
>>> import posix1e
>>> acl1 = posix1e.ACL(file="file1.txt")
>>> print acl1
user::rw-
group::r--
other::r--
Since the getfacl result is not a simple return value, the only way I found to do such a check was to parse the result and analyse each piece of it, which is not as elegant as I'd like it to be.
What exactly are you trying to do? If you're just comparing the result of calling getfacl to a desired ACL, it should be easy. For example, assuming that you have stored your desired ACL in a file named acl-i-want, you could do something like this:
getfacl /path > acl-i-have
if ! diff -q acl-i-have acl-i-want; then
    echo "ACLs are different."
fi
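To scale that up to the asker's list of files/folders, here is a sketch along the same lines (acl-baselines/ and paths-to-check.txt are names I made up; getfacl -p keeps absolute path names so the saved output stays stable):

#!/bin/bash
# Compare each path's current ACL against a saved baseline file.
while read -r path; do
    baseline="acl-baselines/$(echo "$path" | tr '/' '_')"
    if ! getfacl -p "$path" 2>/dev/null | diff -q - "$baseline" >/dev/null; then
        echo "ACL mismatch (or missing baseline) for $path"
    fi
done < paths-to-check.txt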
