WAL-enabled SQLite database blocking user on read - linux

Here is the background:
I am currently running an ETL process on a Linux server (CentOS 8) which also hosts applications that read from a local SQLite database.
At certain times when the ETL is running and writing to the SQLite database, applications are also reading from the database.
To avoid database locking while the SQLite database is in use by the applications, I have enabled WAL on the database so that the ETL can write to it while the applications are reading.
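(For reference, WAL mode is typically enabled once per database with a pragma, e.g. from the shell; the path here is a placeholder:)
sqlite3 /path/to/database.db 'PRAGMA journal_mode=WAL;'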
However, there is now an issue where the ETL process is unable to query the database after the connection has been established. I have logged the following information when this occurs:
The 'shinysuite' user runs the ETL process.
The 'shiny' user runs the applications.
According to the admin, these users belong to the same group.
Output from /etc/group:
First, I do not understand why the 'shiny' user owns the -wal file even though it only reads.
Second, I do not understand why the ETL process ('shinysuite') would be unable to read from the -wal file even if it does not own the file.
What could be the problem here?

First, I do not understand why the 'shiny' user owns the -wal file even though it only reads.
When reading from a WAL-mode sqlite3 database, the helper -wal and -shm files are created if they don't already exist.
They're owned by the shiny user and belong to the shiny group, and since shinysuite is not a member of that group, it doesn't have permission to use the files. If the application run by shiny instead creates those files in the shinysuite group, it should work: if the application is a binary executable, one way is to change the file's group with chgrp(1) and then make it set-gid with chmod g+s shinyapp. Alternatively, just add shinysuite to the shiny group.
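A minimal sketch of both options, assuming the application binary lives at /opt/shiny/shinyapp (a hypothetical path):
# Option 1: make the binary run with the shinysuite group (set-gid)
chgrp shinysuite /opt/shiny/shinyapp
chmod g+s /opt/shiny/shinyapp
# Option 2: add the shinysuite user to the shiny group instead
usermod -aG shiny shinysuite
Note that group membership changes only take effect once the user starts a new login session.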

Related

NodeJS accessing folders above

I have Node.js running on CentOS 7. What bothers me is that any Node app could walk through the whole server, changing existing files and writing new ones. How can I block a Node app from accessing folders above its own level?
This is not specifically a node.js issue, but more a question about how to secure a server from any form of potentially misbehaving program. A massive subject about which whole books have been written.
But to answer your question: any software, including node.js programs, runs in the context of a process that has a user (uid) and group (gid), and standard operating system facilities: file permissions, Access Control Lists (ACLs), etc., determine what a process with a specific uid and gid can access.
If you believe that the node.js program can access the whole server's file system, that suggests the process is running as the root user, or at least with superuser privileges, which would be considered bad security practice in almost all circumstances.
So run the node.js program as a user that has no access outside an area of the file system to which it has been explicitly granted the minimal possible access.
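As a rough sketch, with a hypothetical user name and application path:
# create a dedicated unprivileged user for the app
useradd --system --home-dir /srv/nodeapp --shell /sbin/nologin nodeapp
# give it ownership of only its own application directory
mkdir -p /srv/nodeapp
chown -R nodeapp:nodeapp /srv/nodeapp
# run the app as that user
sudo -u nodeapp node /srv/nodeapp/app.js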
This CentOS documentation (https://www.centos.org/docs/5/html/5.1/Deployment_Guide/sec-sel-context-checking-processanduser.html) might help a little if you are new to the subject.

DB2 z/OS Mainframe - Archive Logs Disable

I'm working with DB2 for z/OS Version 10 on a data masking project. For this project I have been executing over 100k DML statements (DELETE, UPDATE, INSERT).
So I need to disable the transaction logs before the whole scramble process starts.
In DB2 for iSeries (AS/400), I already handled the same issue by calling a procedure that disables transaction logging.
Likewise, I need to do this in DB2 for z/OS.
You can use the NOT LOGGED attribute for all affected tablespaces; it specifies that changes made to data in the specified tablespace are not recorded in the DB2 log.
Take the following steps for your data masking process:
Take an imagecopy so you can recover
ALTER TABLESPACE database-name.table-space-name NOT LOGGED
Execute data masking process
ALTER TABLESPACE database-name.table-space-name LOGGED
Take an imagecopy to establish a recovery point
You will also probably want to lock all affected tables in exclusive mode (LOCK TABLE ... IN EXCLUSIVE MODE) so that, if you have to recover, no one else is affected by your changes
N.B. Make sure you're aware of the recovery implications for objects that are not logged!!!

List of Postgres Instances running on a Server

I am working as a Postgres DBA at my organization. We are currently running Postgres 9.5 on SUSE Linux servers. I wanted a specific solution. We have multiple SUSE Linux servers, and each server can host one or more Postgres databases. My requirement is to find the list of all databases available on a particular server, irrespective of whether a database is up and running or shut down.
Is there any file or location where Postgres keeps a record of all the databases created on a server? Is there a way I can get the required details without connecting to the Postgres instance and without running any psql commands?
If not, what would be the best way to get the list? Any hints, solutions, and thoughts would help me resolve this issue.
Thanks a lot for the help in advance.
In PostgreSQL, a server process serves a database cluster, which physically resides in a data directory.
A PostgreSQL cluster contains several databases, all of which are shut down when the server process is shut down.
So your question can be split into two parts:
How to find out which database clusters are on a machine, regardless if they are started or not?
How to find out all databases inside a cluster even if the cluster is shut down?
There is no generic answer to the first question, since a data directory could in principle be everywhere and could belong to any user.
You could get an approximation with
find / -name PG_VERSION
but it is probably better to find a special solution that fits your setup (about which you didn't tell us).
The second question is even harder.
To even get a list of the numeric database OIDs, you would have to know all tablespace directories. The databases in the default tablespace appear as the names of the subdirectories of the base subdirectory of the data directory.
In order to get the names of the databases you have to have the PostgreSQL server started, because it would be very hard to find the file that belongs to pg_database and parse it.
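For illustration, assuming a default data directory of /var/lib/pgsql/data (the actual path will vary with your setup):
ls /var/lib/pgsql/data/base
# each numeric subdirectory name (e.g. 1, 13297, 16384) is one database's OID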

In node.js how would I follow the Principle of Least Privilege?

Imagine a web application that performs two main functions:
Serves data from a file that requires higher privileges to read from
Serves data from a file that requires lower privileges to read from
My Assumption: To allow both files to be read from, I would need to run node using an account that could read both files.
If node is running under an account that can access both files, then a user who should not be able to read any file that requires higher privileges could potentially read those files due to a security flaw in the web application's code. This would lead to disastrous consequences in my imaginary web application world.
Ideally the node process could run using a minimal set of rights and then temporarily escalate those rights before accessing a system resource.
Questions: Can node temporarily escalate privileges? Or is there a better way?
If not, I'm considering running two different servers (one with higher privileges and one with lower) and then putting them both behind a proxy server that authenticates/authorizes before forwarding the request.
Thanks.
This is a tricky case indeed. In the end, file permissions are a sort of metadata. Instead of having the application access the files directly, my recommendation would be to put a layer between the application and the files, in the form of a database table or anything else that can map the type of user to the file, and stream the file to the user if it exists.
That would mean that the web application couldn't circumvent the file system permissions as easily. You could even set it up so that said files are not readable by the web server at all, and instead are only readable by the in-between layer. All the web application could do is make a call and see whether the user with the given permissions can access the files. This also lets you share between multiple web applications, should you choose. And because of the very specific nature of what the in-between layer does, you can enforce a very restricted set of calls.
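A sketch of that permission setup, assuming a dedicated broker account for the in-between layer (user name and path are hypothetical):
# restricted files are owned by the broker user and unreadable by the web app's user
chown broker:broker /srv/data/restricted.txt
chmod 600 /srv/data/restricted.txt
# the web app cannot open the file itself; it must ask the broker process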
Now, if a lower-privileged user somehow gains access to a higher-privileged user's account, they'll be able to see the file, and there's no real way around that short of locking the user's account. However, that's part of the development process.
No, I doubt node.js, out of the box, could guarantee least privilege.
It is conceivable that, should node.js be run as root, it could twiddle its operating system privileges via system calls to permit or limit access to certain resources, but then again running as root would defeat the original goal.
One possible solution might be running three instances of node: a proxy (with no special permissions) directing calls to one or the other of two servers run at different privilege levels, as sketched below. (Heh, as you already mention. I really need to read to the end of posts before leaping into the fray!)
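A rough sketch of that process layout; all user names, paths, and ports are hypothetical, and each process would normally run as its own service:
# two backends, each run as a different unprivileged user
sudo -u lowpriv node /srv/app/server-low.js    # listens on 127.0.0.1:3001
sudo -u highpriv node /srv/app/server-high.js  # listens on 127.0.0.1:3002
# an unprivileged proxy listens publicly, authenticates, then forwards
sudo -u proxyuser node /srv/app/proxy.js       # forwards to 3001 or 3002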

Getting rails to execute root level file edits on system files without compromising security

I'm writing a Rails 3 application that needs to be able to trigger modifications to Unix system config files. I'd like to insulate the file modifications from the consumer side by running them in a background process. I've considered writing out a temp file in Rails and then copying the file with a bash script, but that doesn't really insulate the system. I've also considered pulling from the database manually with a cron-based script and updating the configs.
But what I would really like is a component that can hook into the rails environment, read out what is needed from the database, and update the config files. This process needs to be run as root because the config files mostly live in /etc/whatever.
Any suggestions?
Thanks!
My brother, the network admin, says you should write a Ruby/Perl/whatever script that validates the input and actually makes the modification. You could call this script from Rails with something like system("/usr/bin/sudo", "/path/to/script", *parameters). You will still want to sanitize parameters before passing them along.
As far as getting the data from the database, you could make a new SQL connection and take the connection settings from database.yml in Rails. Note: data from the database should be validated, since the script runs as root.
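To let the unprivileged Rails user run that one script as root, and nothing else, a sudoers entry along these lines could be used (user name and script path are hypothetical; always edit with visudo):
# /etc/sudoers.d/railsapp
railsuser ALL=(root) NOPASSWD: /usr/local/bin/update_config.rb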