SELinux: Creating a customized environment for specific user accounts

We can map standard Linux users to SELinux users. Suppose I have a standard Linux user named "Steve".
Now I have two questions.
a.) If I map "Steve" to user_u (an SELinux user), he gets execute permission in $HOME and /tmp. Can I restrict "Steve" from executing applications in $HOME or /tmp? I tried using a "neverallow" statement in the policy file (*.te) and ended up with the following error message.
Error Message:
"libsepol.check_assertion_helper: neverallow violated by allow user_t bin_t:file { read getattr open };"
How can I override default permissions in SELinux, such as user_u having execute permission in $HOME?
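For reference, a neverallow of roughly the following shape conflicts with the allow rule quoted in the error, because the base policy already grants those permissions; the exact permission set here is an assumption reconstructed from the error message.
# myPolicy.te: a neverallow covering permissions the base policy already grants fails at link time
neverallow user_t bin_t:file { read getattr open execute };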
b.) I have created a file and changed its type to "mytype_t" using the chcon command. Then I added "allow user_t mytype_t:file { read write execute };" to my policy. I have added mytype_t to /etc/selinux/default/contexts/files/file_contexts and /etc/selinux/default/modules/active/file_context. "seinfo -t" doesn't list mytype_t.
I could successfully create the *.pp file, but when I tried to install the policy using "semodule -i myPolicy.pp", I ended up with the following error message. It seems mytype_t is not recognized by the SELinux policy.
Error Message:
libsepol.print_missing_requirements: user_execution_permission's global requirements were not met: type/attribute mytype_t (No such file or directory).
libsemanage.semanage_link_sandbox: Link packages failed (No such file or directory).
semodule: Failed!
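The "global requirements were not met" message usually means the module references a type that is never declared anywhere, so the module itself has to declare mytype_t rather than it only appearing in file_contexts. A minimal sketch in raw module syntax, with the module and type names taken from the question and the permission set from the allow rule above:
module myPolicy 1.0;

require {
    type user_t;
    class file { read write execute };
}

# declare the new type inside the module instead of only listing it in file_contexts
type mytype_t;

allow user_t mytype_t:file { read write execute };
A typical build/install sequence for such a raw (non-refpolicy) module would then be:
checkmodule -M -m -o myPolicy.mod myPolicy.te
semodule_package -o myPolicy.pp -m myPolicy.mod
semodule -i myPolicy.pp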
Simply put, I want user "Steve" to be able to execute all files with type "mytype_t" anywhere in the system, but not applications with any other type. (To restate the setup: standard Linux users are mapped to SELinux users, so "Steve" is a standard Linux account that could be mapped to any of user_u, staff_u, system_u, or unconfined_u.)
I am working on Debian 6 with policy version 24. Thanks in advance for the help!

Related

Opensips-cli -x command not working in opensips 3.3

Recently I have been working on manually upgrading my OpenSIPS installation from version 2.2 to 3.3.
The upgrade itself is done, but in the old OpenSIPS (2.2) I was able to show registered (SIP) users with the opensipsctl ul show command, while in the new version 3.3 opensipsctl is deprecated (I guess, not sure).
So I am trying to get the details using opensips-cli, but I couldn't find the correct commands to show registrations and dump the list. I tried to follow the link below but did not find the right command.
https://www.opensips.org/Documentation/Interface-CoreMI-3-0
Also, my opensips-cli -x command is not working, giving the error below (the mi_fifo module is loaded correctly).
# opensips-cli -o output_type=yaml -x mi uptime
ERROR: cannot access fifo file /tmp/opensips_fifo: [Errno 13] Permission denied: '/tmp/opensips_fifo'
ERROR: starting with Linux kernel 4.19, processes can no longer read from FIFO files
ERROR: that are saved in directories with sticky bits (such as /tmp)
ERROR: and are not owned by the same user the process runs with.
ERROR: To fix this, either store the file in a non-sticky bit directory (such as /var/run/opensips),
ERROR: or disable fifo file protection using 'sysctl fs.protected_fifos=0' (NOT RECOMMENDED)
The /tmp/opensips_fifo file itself is created correctly.
# ls -l /tmp/opensips_fifo
prw-rw-rw- 1 opensips opensips 0 Dec 29 06:52 /tmp/opensips_fifo
Using the opensips-cli command I am able to create the database and add tables, but not to run -x commands.
Can anyone help me find the commands for showing registrations and dumping the list, and suggest why the -x command is not working in opensips-cli?
I had a similar error and I found the following:
if you state in the opensips-cli.cfg file that the fifo_file is located at /tmp/opensips_fifo, it will produce this error; try changing this setting to /var/run/opensips/opensips_fifo.
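A minimal sketch of that change, assuming the stock file locations (the section and option names below are the usual ones, so double-check them against your installation; the fifo path must match on both sides):
# /etc/opensips-cli.cfg
[default]
fifo_file: /var/run/opensips/opensips_fifo

# opensips.cfg: point the mi_fifo module at the same path
modparam("mi_fifo", "fifo_name", "/var/run/opensips/opensips_fifo")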

Missing permissions to create folder from Java application

I am setting up a Spring Boot application, and when running it, it should generate a folder in the source directory (see step 3: https://www.baeldung.com/spring-boot-h2-database).
But when running the application I receive the following error:
org.h2.message.DbException: Log file error: "/data/sample.trace.db", cause: "org.h2.message.DbException: Error while creating file ""/data"" [90062-200]" [90034-200]
at org.h2.message.DbException.get(DbException.java:194)
at org.h2.message.TraceSystem.logWritingError(TraceSystem.java:294)
at org.h2.message.TraceSystem.openWriter(TraceSystem.java:315)
at org.h2.message.TraceSystem.writeFile(TraceSystem.java:263)
at org.h2.message.TraceSystem.write(TraceSystem.java:247)
at org.h2.message.Trace.error(Trace.java:194)
It seems to be a permission problem, but I do not understand why. My current user has admin permissions. What am I missing here?
When I encounter this problem on my machine I proceed through the following steps:
1. If I don't know what user and group I am right now: $ whoami && groups
2. Check which user the program is executed as (I'm not into Java, so e.g. in PHP: echo exec('whoami');)
3. Check who has access to the directory: $ ls -la
3.1 If only the owner has access and you are not the owner: $ chown user:group file
3.2 If group and owner should have access, consider: $ chmod 770 file
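In this specific case, the error shows H2 trying to create /data at the filesystem root, which an ordinary user usually cannot do. One hedged fix, assuming the standard Spring Boot property names (the sample database name is taken from the error message): point the JDBC URL at a path relative to the working directory.
# src/main/resources/application.properties
# './data/sample' resolves relative to the directory the application is started from
spring.datasource.url=jdbc:h2:file:./data/sample
spring.datasource.driverClassName=org.h2.Driver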

Oracle Unable to Read Globally Accessible (777) Dump File

I'm trying to import an Oracle dump file, and despite granting global rwx permissions on the files, I'm still getting permission errors when running the import.
Here's the whole process I've run through:
# Create the dump directory with the dump file, and grant 777 permissions
mkidr -p /home/vagrant/dump
mv /home/vagrant/data.dmp /home/vagrant/dump
chmod -R 777 /home/vagrant/dump
# Check the file permissions
# drwsrwsrwx. vagrant vagrant dump
# -rwxrwxrwx. vagrant vagrant dump/data.dmp
# Add the directory to Oracle
sqlplus system/vagrant
CREATE DIRECTORY DUMP_DIR AS '/home/vagrant/dump';
exit
# Try importing the data
impdp system/vagrant dumpfile=data.dmp directory=DUMP_DIR nologfile=y
And let the keyboard smashing begin...
Connected to: Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "/home/vagrant/dump/data.dmp" for read
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 13: Permission denied
Additional information: 3
Note: I'm entirely aware that these permissions and passwords are terrible for security, but since I'm just trying to run some experimental analysis on a publicly available data set, I don't really care.
I think the problem is that your script says mkidr instead of mkdir.
Because of that, you never create the directory; when you move the file to the supposed dir, it only renames the file, making /home/vagrant/dump appear (as a file, not a directory) with the right permissions (except the d char at the beginning). Of course, you cannot search it for files, as it's not a directory but a file. This will also prevent Oracle from successfully executing CREATE DIRECTORY DUMP_DIR AS '/home/vagrant/dump'; as there's a file there with that name.
By the way, to access a file you don't only need read access on the file's inode; you also need execute permission (x) on every directory along the path (in this case /home, /home/vagrant, and /home/vagrant/dump --- this last one being a file, not a directory). And it's ora (the user Oracle runs as) whose access must be checked, not yours.
I suggest you impersonate the user ora and try to read the file; if that doesn't work, try from the same directory where the database runs, using the same path it uses to open the file.
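Two quick checks along those lines, assuming the database's OS user is called ora as above (on many installs it is oracle; ps -ef | grep pmon shows the real name):
# show owner/permissions of every component along the path (namei ships with util-linux)
namei -l /home/vagrant/dump/data.dmp
# try to read the file as the database's OS user
sudo -u ora head -c 16 /home/vagrant/dump/data.dmp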

Problems with Exec PeopleCode from PeopleSoft Application Engine

On a Unix server, I am running an application engine via the process scheduler.
In it, I am attempting to use a "zip" Unix command from within an "Exec" PeopleCode function.
However, I only get the error
PS_Exec(P): Error executing batch command with reason: No such file or directory (2)
I have tried it several ways. The most logical approach I thought was to change directory back to the root, then change to the specified directory so that I could easily use the zip command, such as the following...
Exec("cd / && cd /opt/psfin/pt850/dat/PSFIN1/PYMNT && zip INVREND INVREND.XML");
1643 12.20.34 0.000048 72: Exec("cd /opt/psfin/pt850/dat/PSFIN1/PYMNT");
1644 12.20.34 0.001343 PS_Exec(P): Error executing batch command with reason: No such file or directory (2)
I've even tried the following....just to see if anything works from within an Exec...
Exec("ls");
Sure enough, it gave the same error.
Now, some of you may be wondering: does the account associated with the process scheduler actually have authority on this particular directory path on the server? Well, I was able to create the XML file given in the previous command with no problems.
I just cannot seem to be able to modify it with Unix commands issued through Exec.
I'm wondering if this is a rights-and-permissions issue on the Unix server with regard to the operator ID that the process scheduler runs under. However, given that it can create and write to a file there, I cannot understand why the Exec command would meet any resistance... just my gut shot in the dark...
Any help would be GREATLY appreciated!!!
Thanks,
Flynn
Not sure if you're still having an issue, but in your Exec code, adding the optional %FilePath_Absolute constant should help. When that constant is left off, PS automatically prefixes all commands with <PS_HOME>. You'll have to specify absolute paths with this flag on, though. I've changed the command to something that should work.
Exec("zip /opt/psfin/pt850/dat/PSFIN1/PYMNT/INVREND /opt/psfin/pt850/dat/PSFIN1/PYMNT/INVREND.XML", %FilePath_Absolute);
The documentation at PeopleBooks is a little confusing sometimes, but it explains it fairly well in this case.
You can always store the absolute location in a variable and prefix that to your commands so you don't have to keep typing out /opt/psfin/pt850/dat/PSFIN1/PYMNT/.
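A hedged sketch of that variable approach in PeopleCode (the &sDir name is made up; | is PeopleCode's string-concatenation operator):
Local string &sDir = "/opt/psfin/pt850/dat/PSFIN1/PYMNT/";
Exec("zip " | &sDir | "INVREND " | &sDir | "INVREND.XML", %FilePath_Absolute);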

Cleaner way to restart daemontools services

In our product, we created services using daemontools. One of my services looks like this:
/service/test/run
/service/test/log/run (has multilog command to log into ./main dir)
/service/test/log/main/..
All the processes and their directories are owned by the root user. Now there is a security requirement to change this as follows:
The service should run as a non-root user.
The log main directory should be readable only to the user and group.
For this, I have to change the 'run' file under the 'log' directory. I also need to change the permissions of the 'main' directory under it.
Note that all these files under '/service' are owned by test-1.0-0.rpm. When I update my rpm, it overrides the existing run file, and I get an error like this:
multilog: fatal: unable to lock directory ./main: access denied
I know we shouldn't override the 'run' file at run time. I have planned to follow these steps in my rpm script's %post section (the updated run file itself is sketched after the steps):
# stop the service
svc -d /service/test/log
# move the main directory aside
mv /service/test/log/main /service/test/log/main_old
# the updated run file has code to recreate main with limited permissions
# start the service
svc -u /service/test/log
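For illustration, the updated log/run file mentioned in the comment above might look something like this (testuser/testgroup are assumed account names; setuidgid ships with daemontools; the run script itself still starts as root, so it can fix ownership before dropping privileges):
#!/bin/sh
# recreate ./main with restricted permissions if it is missing
mkdir -p ./main
chown testuser:testgroup ./main
chmod 0750 ./main
# drop privileges, then log with a timestamp into ./main
exec setuidgid testuser multilog t ./main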
In some articles, they suggested recreating the 'lock' file under 'log/main'. Is there any other, cleaner way of doing this without moving the 'main' directory? If not, is it safe to go with the above steps?
