I am setting up a Spring Boot application; when it runs, it should generate a folder in the source directory (see step 3: https://www.baeldung.com/spring-boot-h2-database).
But when running the application I receive the following error:
org.h2.message.DbException: Log file error: "/data/sample.trace.db", cause: "org.h2.message.DbException: Error while creating file ""/data"" [90062-200]" [90034-200]
at org.h2.message.DbException.get(DbException.java:194)
at org.h2.message.TraceSystem.logWritingError(TraceSystem.java:294)
at org.h2.message.TraceSystem.openWriter(TraceSystem.java:315)
at org.h2.message.TraceSystem.writeFile(TraceSystem.java:263)
at org.h2.message.TraceSystem.write(TraceSystem.java:247)
at org.h2.message.Trace.error(Trace.java:194)
It seems to be a permission problem, but I do not understand why. My current user has admin permissions. What am I missing here?
When I encounter this problem on my machine, I work through the following steps (a sketch applying them to the /data case above follows the list):
1. If I don't know what user & group I am right now: $ whoami && groups
2. What user is the program executed with? (I'm not into Java, so e.g. in PHP: echo exec('whoami');)
3. Who has access to the directory: $ ls -la
3.1 If only the owner has access and you are not the owner: $ chown user:group file
3.2 If group and owner should have access, consider: $ chmod 770 file
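Applied to the H2 error above, a minimal sketch of those checks, assuming the JDBC URL points at the absolute path /data (as the trace file path suggests) and that sudo is available:
whoami && groups                                         # which user & group the application runs as
ls -ld /data 2>/dev/null || echo "/data does not exist"  # does the directory exist, and who owns it
sudo mkdir -p /data                                      # create it if it is missing
sudo chown "$(whoami)":"$(id -gn)" /data                 # give your user ownership so the JVM can create the database and trace files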
I'm trying to import an Oracle dump file, and despite granting global rwx permissions on the files, I'm still getting permission errors when running the import.
Here's the whole process I've run through:
# Create the dump directory with the dump file, and grant 777 permissions
mkidr -p /home/vagrant/dump
mv /home/vagrant/data.dmp /home/vagrant/dump
chmod -R 777 /home/vagrant/dump
# Check the file permissions
# drwsrwsrwx. vagrant vagrant dump
# -rwxrwxrwx. vagrant vagrant dump/data.dmp
# Add the directory to Oracle
sqlplus system/vagrant
CREATE DIRECTORY DUMP_DIR AS '/home/vagrant/dump';
exit
# Try importing the data
impdp system/vagrant dumpfile=data.dmp directory=DUMP_DIR nologfile=y
And let the keyboard smashing begin...
Connected to: Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "/home/vagrant/dump/data.dmp" for read
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 13: Permission denied
Additional information: 3
Note: I'm entirely aware that these permissions and passwords are terrible for security, but since I'm just trying to run some experimental analysis on a publicly available data set, I don't really care.
I think the problem is that your script says mkidr instead of mkdir.
Because of that typo, the directory is never created; when you then move the file to the supposed directory, mv simply renames it, so /home/vagrant/dump ends up being a file (not a directory) with the right permission bits, except the d character at the beginning. Of course you cannot look up files inside it, since it's a file, not a directory, and it also keeps the CREATE DIRECTORY DUMP_DIR AS '/home/vagrant/dump'; from working as intended, because what sits at that path is a file with that name.
By the way, to access a file you need not only read permission on the file's inode, but also execute permission (x) on every directory along the path (in this case /home, /home/vagrant and /home/vagrant/dump, though that last one is currently a file, not a directory). Here it is ora (the user Oracle runs as) that must be checked.
I suggest you switch to the ora user and try to read the file yourself; if that doesn't work, try from the same directory the database runs in, using the same path it uses to open the file.
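A hedged sketch of the corrected sequence, using the same paths as above. The OS account the database runs as is an assumption (the answer calls it ora; on many installs it is oracle), so adjust the sudo -u user accordingly:
mv /home/vagrant/dump /home/vagrant/data.dmp   # if the mkidr typo left a file called dump, put it back first
mkdir -p /home/vagrant/dump                    # mkdir, not mkidr
mv /home/vagrant/data.dmp /home/vagrant/dump/
chmod -R 777 /home/vagrant/dump
# verify the database's OS account can traverse the path and read the dump
sudo -u oracle ls -l /home/vagrant/dump/data.dmp
sudo -u oracle head -c 16 /home/vagrant/dump/data.dmp > /dev/null && echo readable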
When configuring JRebel on my remote server (JBoss on Linux), I have configured the JVM arg as:
-javaagent:/home/user/jrebel.jar -Drebel.remoting_plugin=true
The jrebel.jar is absolutely definitely in that location, yet the server fails to start with the error:
Error opening zip file or JAR manifest missing : /home/user/jrebel.jar
Error occurred during initialization of VM
agent library failed to init: instrument
So the arg is obviously being passed to the JVM correctly, but for the life of me I can't work out why it can't find the jar. I've been through every Zero Turnaround article I can find and looked at the solutions that resolved it for other people, but no luck. Any ideas?
Turned out to be a permissions problem - the JBoss user didn't have permission to access the directory I had placed jrebel.jar into.
It would have been nice to have a more meaningful error - e.g. 'permission denied'. Shows my lack of Linux knowledge, though, I guess.
After the jar was moved to a directory within the JBoss installation, its owner was changed to the JBoss user, and read/write/execute permissions were added, all is well.
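For reference, a sketch of the fix described above (the jboss user/group name and the target path are assumptions; adjust them to your installation):
sudo mv /home/user/jrebel.jar /opt/jboss/jrebel.jar   # place the jar where the JBoss user can reach it
sudo chown jboss:jboss /opt/jboss/jrebel.jar
sudo chmod 750 /opt/jboss/jrebel.jar
# and point the JVM arg at the new location:
# -javaagent:/opt/jboss/jrebel.jar -Drebel.remoting_plugin=true
Remember that the JBoss user also needs execute (x) permission on every parent directory of the new location, as noted in the Oracle answer above.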
Yes, permissions were the reason this error happened to me when I tried to open PHPSTORM; the error was:
Error opening zip file or JAR manifest missing : ${JetbrainsIdesCrackPath}
Error occurred during initialization of VM
agent library failed to init: instrument
So before running PHPSTORM I had to run sudo -i to get root permission to run the program.
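In other words (the launcher path is an assumption; yours may differ):
sudo -i                            # open a root shell
/opt/phpstorm/bin/phpstorm.sh &    # then launch the IDE from it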
We can map standard Linux users onto SELinux user accounts. Consider that I have a standard Linux user named "Steve".
Now, I have two questions.
a.) If I map "Steve" to user_u (an SELinux account), then he gets execute permission in $HOME and /tmp. Can I restrict "Steve" from executing applications in $HOME or /tmp? I tried using a "neverallow" statement in the policy file (*.te) and ended up with the following error message.
Error Message:
"libsepol.check_assertion_helper: neverallow violated by allow user_t bin_t:file { read getattr open };"
How can I override default permissions in SELinux, such as user_u having execute permission in $HOME?
b.) I created a file and changed its type to "mytype_t" using the chcon command. Then I added "allow user_t mytype_t:file { read write execute };" to my policy. I added mytype_t to /etc/selinux/default/contexts/files/file_contexts and /etc/selinux/default/modules/active/file_context. "seinfo -t" doesn't list mytype_t.
I could successfully create the *.pp file. When I tried to install this policy using "semodule -i myPolicy.pp", I ended up with the following error message. It seems mytype_t is not recognized by the SELinux policy.
Error Message:
libsepol.print_missing_requirements: user_execution_permission's global requirements were not met: type/attribute mytype_t (No such file or directory).
libsemanage.semanage_link_sandbox: Link packages failed (No such file or directory).
semodule: Failed!
Simply put, I just want to create a user "Steve" who can execute all files with type "mytype_t" anywhere in the system, but who should not be able to execute applications with other types. We have standard Linux users that are mapped onto SELinux user accounts; for example, "Steve" is a standard Linux user account that can be mapped to any of user_u, staff_u, system_u or unconfined_u.
I am working on Debian 6 with policy.24. Thanks in advance for your help!
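Regarding question b, the "global requirements were not met" message usually means mytype_t is required somewhere but never declared in any loaded module. Below is a sketch of a module that declares the type itself before using it; the module name and the single allow rule are illustrative only, not a tested policy for this setup:
cat > mytype.te <<'EOF'
module mytype 1.0;

require {
    type user_t;
    class file { read write execute };
}

# declare the new type inside the module instead of only referencing it
type mytype_t;

allow user_t mytype_t:file { read write execute };
EOF
checkmodule -M -m -o mytype.mod mytype.te     # compile for the modular policy
semodule_package -o mytype.pp -m mytype.mod   # package the module
semodule -i mytype.pp                         # install; seinfo -t should now list mytype_t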
In our product, we have created services using daemontools. One of my services looks like this:
/service/test/run
/service/test/log/run (has multilog command to log into ./main dir)
/service/test/log/main/..
All the processes and their directories are owned by the root user. Now there is a security requirement to change this as follows:
The service should run as a non-root user.
The log main directory should be readable only by the user and group.
For this, I have to change the 'run' file under the 'log' directory. I also need to change the permissions of the 'main' directory under it.
Note that all these files under '/service' are owned by test-1.0-0.rpm. When I update my rpm, it overwrites the existing run file and I get an error like this:
multilog: fatal: unable to lock directory ./main: access denied
I know we shouldn't overwrite the 'run' file at run time. I have planned to follow these steps in my rpm %post scriptlet:
# Stop the service
svc -d /service/test/log
# Move the main directory aside
mv /service/test/log/main /service/test/log/main_old
# The updated run file has code to create main with limited permissions
# Start the service
svc -u /service/test/log
Some articles suggest recreating the 'lock' file under 'log/main'. Is there any cleaner way of doing this without moving the 'main' directory? If not, is it safe to go with the above steps?
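If you do go with the move approach, here is a hedged sketch of how those steps could sit in the spec file's %post scriptlet; the paths are the ones above, and the existence check is my addition to keep a fresh install from failing:
%post
if [ -d /service/test/log/main ]; then
    svc -d /service/test/log                               # stop the log service
    mv /service/test/log/main /service/test/log/main_old   # set the old logs aside
fi
# the updated run file recreates ./main with the restricted permissions
svc -u /service/test/log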
I'm setting up the Puppet Dashboard for the first time. I have it running with the passenger module in Apache.
sudo rake RAILS_ENV=production reports:import
When I run this command, the tasks appear in the dashboard as failed.
630 new failed tasks
The details for each failure look something like this:
Importing report 201212270754.yaml at 2012-12-27 09:21 UTC
Permission denied - /var/lib/puppet/reports/rb-db1/201212270754.yaml
Backtrace
/usr/share/puppet-dashboard/app/models/report.rb:86:in `read'
/usr/share/puppet-dashboard/app/models/report.rb:86:in `create_from_yaml_file'
The report files were owned by puppet:puppet with a 640 permission by default.
I ran chmod a+rw on the reports directory, but I still get the same errors.
Any ideas on what I might be doing wrong here?
If you are running the puppet-dashboard server as the puppet-dashboard user instead of as root, you will see this error. My system runs /usr/share/puppet-dashboard/script/server on CentOS 6.4, using the puppet-dashboard-1.2.23-1.el6.noarch rpm from Puppet Labs.
[root@hadoop01 puppet-dashboard]# cat /etc/sysconfig/puppet-dashboard
#
# path to where you installed puppet dashboard
#
DASHBOARD_HOME=/usr/share/puppet-dashboard
#DASHBOARD_USER=puppet-dashboard
DASHBOARD_USER=root
DASHBOARD_RUBY=/usr/bin/ruby
DASHBOARD_ENVIRONMENT=production
DASHBOARD_IFACE=0.0.0.0
DASHBOARD_PORT=3000
Edit the file as above, then run:
/etc/init.d/puppet-dashboard restart && /etc/init.d/puppet-dashboard-workers restart
my puppet-dashboard version is 1.2.23
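For completeness, a quick way to confirm whether the account the dashboard runs as can actually read a report (the path is taken from the error above; the puppet-dashboard account name is the package default):
sudo -u puppet-dashboard cat /var/lib/puppet/reports/rb-db1/201212270754.yaml > /dev/null \
  && echo readable || echo "permission denied"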