Oracle Unable to Read Globally Accessible (777) Dump File - Linux

I'm trying to import an Oracle dump file, and despite granting global rwx permissions on the files, I'm still getting permission errors when running the import.
Here's the whole process I've run through:
# Create the dump directory with the dump file, and grant 777 permissions
mkidr -p /home/vagrant/dump
mv /home/vagrant/data.dmp /home/vagrant/dump
chmod -R 777 /home/vagrant/dump
# Check the file permissions
# drwsrwsrwx. vagrant vagrant dump
# -rwxrwxrwx. vagrant vagrant dump/data.dmp
# Add the directory to Oracle
sqlplus system/vagrant
CREATE DIRECTORY DUMP_DIR AS '/home/vagrant/dump';
exit
# Try importing the data
impdp system/vagrant dumpfile=data.dmp directory=DUMP_DIR nologfile=y
And let the keyboard smashing begin...
Connected to: Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "/home/vagrant/dump/data.dmp" for read
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 13: Permission denied
Additional information: 3
Note: I'm entirely aware that these permissions and passwords are terrible for security, but since I'm just trying to run some experimental analysis on a publicly available data set, I don't really care.

I think the problem is that your script says mkidr instead of mkdir.
This way, you never create the directory; when you move the file to the supposed directory, the mv just renames the file, so /home/vagrant/dump ends up as a file (not a directory) with the right permissions, except the d at the beginning of the mode. Of course you cannot search it for files, since it's a file rather than a directory, and it also keeps the CREATE DIRECTORY DUMP_DIR AS '/home/vagrant/dump'; object from ever pointing at a real directory, because a plain file is sitting at that path.
By the way, to access a file you don't only need read access on the file's inode; you also need execute (x) permission on every directory along the path (in this case /home, /home/vagrant and /home/vagrant/dump, though this last one is a file, not a directory). The user whose permissions must be checked here is ora (the user Oracle runs as).
I suggest you impersonate the user ora and try to read the file; if that doesn't work, try again from the same directory the database runs in, using the same path it uses to open the file.
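For reference, a corrected version of the setup plus a quick way to verify it as the database user might look like this (just a sketch; it assumes the Oracle software runs as an OS user called oracle, as it usually does for XE, and that namei and sudo are available):
# Recreate the directory properly (mkdir, not mkidr) and move the dump into it
mkdir -p /home/vagrant/dump
mv /home/vagrant/data.dmp /home/vagrant/dump/
chmod -R 777 /home/vagrant/dump
# Show the permissions of every component along the path in one shot
namei -l /home/vagrant/dump/data.dmp
# Check that the database OS user can traverse the path and read the file
sudo -u oracle ls -l /home/vagrant/dump/data.dmp
sudo -u oracle head -c 16 /home/vagrant/dump/data.dmp > /dev/null && echo readable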

Related

Strange Behavior with clamd scan function

I have a simple Python 3 script running on Ubuntu Server 20.04 that uses the clamd library (which talks to the clamav-daemon process) to scan a file. The ping() and version() functions both work correctly. However, when I actually do a test write and scan, I get the following error:
{'/filedrop/test.doc': ('ERROR', "Can't open file or directory")}
This is the code I used for the test write and scan; it is all standard sample code from the clamd website:
import clamd
cd = clamd.ClamdUnixSocket()  # connection to the running clamav-daemon
open('/filedrop/test.doc','wb').write(clamd.EICAR)
print(cd.scan('/filedrop/test.doc'))
After the code is run, I get the following string in the test file, which indicates that the Python 3 script was able to write to the file successfully, yet I keep getting the error that the file can't be opened when I use the clamd scan function.
This is the string that was written to the file:
X5O!P%#AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*
I am also able to run clamscan from the command line on the folder, and it successfully scans the files.
I'm running as the root user, while the service runs as clamav:clamav.
I did give read/write permission on the folder and the files to "other users", which is also indicated by the fact that the Python script could write the file.
I believe the problem here is that AppArmor is blocking clamd for that particular directory. I would look at the AppArmor profile for clamd; it should be called something like /etc/apparmor.d/clamav or similar. You can adjust that profile or alternatively disable it (according to Ubuntu):
sudo ln -s /etc/apparmor.d/profile.name /etc/apparmor.d/disable/
sudo apparmor_parser -R /etc/apparmor.d/profile.name
More complete instructions available here:
https://help.ubuntu.com/community/AppArmor
You can also disable AppArmor entirely for the purposes of testing (I don't like to advise anyone to remove security features permanently) with:
sudo systemctl stop apparmor
sudo systemctl disable apparmor
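Before disabling anything, it is worth confirming that AppArmor really is what blocks clamd. Something along these lines should show it (a sketch; profile names and log wording differ between releases, so adjust the grep patterns):
# Is a clamd profile loaded and in enforce mode?
sudo aa-status | grep -i clam
# AppArmor denials normally land in the kernel log / journal
sudo dmesg | grep -i "apparmor.*denied"
sudo journalctl -k | grep -i "apparmor.*denied"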

Missing permissions to create folder from Java application

I am setting up a Spring Boot application, and when running it, it should generate a folder in the source directory (see step 3: https://www.baeldung.com/spring-boot-h2-database).
But when running the application I receive the following error:
org.h2.message.DbException: Log file error: "/data/sample.trace.db", cause: "org.h2.message.DbException: Error while creating file ""/data"" [90062-200]" [90034-200]
at org.h2.message.DbException.get(DbException.java:194)
at org.h2.message.TraceSystem.logWritingError(TraceSystem.java:294)
at org.h2.message.TraceSystem.openWriter(TraceSystem.java:315)
at org.h2.message.TraceSystem.writeFile(TraceSystem.java:263)
at org.h2.message.TraceSystem.write(TraceSystem.java:247)
at org.h2.message.Trace.error(Trace.java:194)
It seems to be a permission problem, but I do not understand why. My current user has admin permissions. What am I missing here?
When I encounter this problem on my machine, I proceed through the following steps (a concrete sketch for the /data case above follows the list):
1. If I don't know which user and group I am right now: $ whoami && groups
2. Check which user the program is executed as (I'm not into Java, so e.g. in PHP: echo exec('whoami');)
3. Check who has access to the directory: $ ls -la
3.1 If only the owner has access and you are not the owner: $ chown user:group file
3.2 If both group and owner should have access, consider: $ chmod 770 file
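Applied to the H2 case above, where the error complains about creating /data at the filesystem root, the steps could look roughly like this (a sketch; appuser:appgroup is a placeholder for whatever account actually runs the Spring Boot application):
# 1. Who am I, and which groups am I in?
whoami && groups
# 2. Does /data exist, and who owns it?
ls -ld /data
# 3. Create it and hand it over to the account that runs the application
sudo mkdir -p /data
sudo chown appuser:appgroup /data
sudo chmod 770 /data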

Node-RED docker problem reading directory contents

I have a Node-RED app running in a Docker container, with the aim of periodically reading the contents of a directory where .csv files are constantly updated and new .csv files are sometimes added. The point is to read new entries periodically, parse the data, and send it onward.
I have not used the numerous 'contrib' nodes; instead I have enabled the NodeJS 'fs' module and played with it. Additionally, the built-in 'file' and 'file in' Node-RED nodes are useful for reading the .csv files' contents, so that is not an issue.
The problem comes with new .csv files being added to the directory where all the .csv files are. I want to be able to read all the file names and subsequently read all the .csv files.
I have mounted the .csv file directory into the Docker container, and when testing whether I'm able to read the file names, weird things happen. Even though the files are visible in the container (viewed using docker exec -it CONTAINER /bin/bash), a piece of code containing fs.readdir does not list the files. When I try fs.readdir to see the contents of the /data directory, which is mounted into the container, it lists the contents maybe 10 % of the time (I inject a timestamp into the node to run it).
As you can see from the image, the contents of the directory in question are not listed on every execution of the node. The contents of the mounted directory containing the .csv files are never listed when running this node with the correct path as the parameter.
The operating system is CentOS 7, where I am not a sudoer. I have managed to make it so that none of the mounted files or directories are owned by root; they are owned by the user node-red within the container. I managed to get this directory listing to work on my Ubuntu machine, where I am a sudoer, but since none of the files are root-owned there either, I am not sure that is the problem. I have a feeling this might be an operating-system-related thing.
Notes:
All relevant files and directories have permissions rwxr-xr-x
I have tried mounting the directory containing the .csv files under the /data directory, and also as its own directory directly under root as /files
I am able to read the file contents with the Node-RED file nodes, just not the directories. Reading static file names is not enough as the directory contents keep changing
I have enabled NodeJS 'fs' module from the settings.js file which is mounted into the container
The Node-RED node (in image) does not output any errors (I tried this by adding an error return to the function in the image)
I have tried to run the Node-RED container as root user and without defining the user
I am running the Node-RED container using docker-compose
I hope this was not too much text or too unclear; I just wanted to make sure that at least most of the things I have tried are written down here. If someone has insight into the workings of Node-RED under Docker and the NodeJS fs module, it would be most appreciated :)
The core Watch node should do all of this for you; there is no need to write function nodes.
If you want to walk subdirectories, make sure you tick the right box in the config.
From the Sidebar docs for the watch node:
The full filename of the file that actually changed is put into msg.payload and msg.filename, while a stringified version of the watch list is returned in msg.topic.
msg.file contains just the short filename of the file that changed.
msg.type has the type of thing changed, usually file or directory, while msg.size holds the file size in bytes.
To answer my own question of why Node-RED was unable to read directory contents most of the time: it was because I was using the asynchronous fs.readdir function. When I switched to the synchronous version, fs.readdirSync, Node-RED was able to read directory contents without problems.

SELinux: Creating a customized environment for specified user accounts

We can map standard Linux users to SELinux user accounts. Consider that I have a standard Linux user named "Steve".
Now, I have two questions.
a.) If I map "Steve" to user_u (an SELinux account), then he will get execute permission in $HOME and the /tmp directory. Can I restrict "Steve" from executing applications in $HOME or /tmp? I tried using a "neverallow" statement in the policy file (*.te) and ended up with the following error message.
Error Message:
"libsepol.check_assertion_helper: neverallow violated by allow user_t bin_t:file { read getattr open };"
How can I override default permissions in SELinux, such as user_u having execute permission in $HOME?
b.) I have created a file and changed its type to "mytype_t" using the chcon command, then added "allow user_t mytype_t: file { read write execute };" to my policy. I have added mytype_t to /etc/selinux/default/contexts/files/file_contexts and /etc/selinux/default/modules/active/file_context, but "seinfo -t" doesn't list mytype_t.
I could successfully create the *.pp file, but when I tried to install this policy using "semodule -i myPolicy.pp", I ended up with the following error message. It seems mytype_t is not recognized by the SELinux policy.
Error Message:
libsepol.print_missing_requirements: user_execution_permission's global requirements were not met: type/attribute mytype_t (No such file or directory).
libsemanage.semanage_link_sandbox: Link packages failed (No such file or directory).
semodule: Failed!
We have standard Linux users that are mapped to SELinux user accounts; for example, "Steve" is a standard Linux user account that can be mapped to any of the following: user_u, staff_u, system_u, or unconfined_u. Simply put, I just want to create a user "Steve" who can execute all files with type "mytype_t" anywhere in the system, but who should not be able to execute applications with other types.
I am working on Debian 6 with policy.24. Thanks in advance for the help!
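For part (b), the "global requirements were not met" message usually means the module uses mytype_t without declaring it anywhere, so semodule cannot resolve the type. A minimal sketch of how the module source and build could look (the module name is taken from the error above; the exact syntax may need adjusting for Debian 6 / policy.24):
# user_execution_permission.te -- declare the type inside the module itself
cat > user_execution_permission.te <<'EOF'
module user_execution_permission 1.0;

require {
    type user_t;
    class file { read write execute };
}

type mytype_t;

allow user_t mytype_t:file { read write execute };
EOF

# Build and install the module
checkmodule -M -m -o user_execution_permission.mod user_execution_permission.te
semodule_package -o user_execution_permission.pp -m user_execution_permission.mod
semodule -i user_execution_permission.pp
# Once the module is installed, "seinfo -t" should list mytype_t,
# and the chcon relabelling done earlier should be accepted.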

Cleaner way to restart daemontools services

In our product, we have created services using daemontools. One of my services looks like this:
/service/test/run
/service/test/log/run (has multilog command to log into ./main dir)
/service/test/log/main/..
All the processes and their directories are owned by the root user. Now there is a security requirement to change this as follows:
The service should run as a non-root user.
The log main directory should be readable only by the user and group.
For this, I have to change the 'run' file under the 'log' directory. I also need to change the permissions of the 'main' directory under it.
Note that all these files under '/service' are owned by test-1.0-0.rpm. When I update my RPM, it overwrites the existing run file, and I get an error like this:
multilog: fatal: unable to lock directory ./main: access denied
I know we shouldn't overwrite the 'run' file at run time. I have planned to follow these steps in the %post section of my RPM script:
# Stop the service
svc -d /service/test/log
# Move the main directory
mv /service/test/log/main /service/test/log/main_old
# The updated run file has code to create main with limited permissions.
# Start the service
svc -u /service/test/log
Some articles suggest recreating the 'lock' file under 'log/main'. Is there any cleaner way of doing this without moving the 'main' directory? If not, is it safe to go with the above steps?
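If moving 'main' is the only concern, one in-place variant of the %post steps could be (a rough sketch; testuser:testgroup is a placeholder for the account the service will run as):
# Stop the logger so multilog releases ./main and its lock file
svc -d /service/test/log
# Fix ownership and permissions of the existing directory in place
chown -R testuser:testgroup /service/test/log/main
chmod 750 /service/test/log/main
# Start the logger again; multilog should take the lock on main/lock when it starts
svc -u /service/test/log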
