Detect whether the app has sudo rights - node.js

Is it possible to detect whether I have sudo rights when I run
node app.js
I would like that when I run the command
sudo node app.js
the app tells me that app.js is running with sudo rights.

I can't guarantee this is correct since I currently have no way of testing it, but if I remember correctly you should be able to use the getuid() method of the global process object (Documentation).
(Note: this only works on POSIX platforms, which means no Windows.)
It should return 0 when you run the command with super user permissions.
IMPORTANT: You should never run a web server like node with super user permissions. If you need the permissions for some setup work, you should revert the granted root permissions by doing something like this at the end of your initialization work:
var uid = parseInt(process.env.SUDO_UID, 10); // the id of the user who invoked sudo
if (uid) process.setuid(uid);                 // drop back to that user
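Putting the check and the privilege drop together, a minimal sketch (untested, per the caveat above):
// POSIX only: process.getuid is undefined on Windows.
if (process.getuid && process.getuid() === 0) {
  console.log('app.js is running with sudo/root rights');

  // ... privileged setup work goes here ...

  // Drop back to the user who invoked sudo:
  const uid = parseInt(process.env.SUDO_UID, 10);
  if (uid) process.setuid(uid);
} else {
  console.log('app.js is running without root rights');
}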

I found the accepted answer confusing. Coming from that, and based on some personal testing, the following code works:
function isRoot() {
  return !!process.env.SUDO_UID; // SUDO_UID is set only when started via sudo
}
as well as this:
function isRoot() {
  return !process.getuid(); // getuid() returns 0 for root (POSIX only)
}
Note that the two checks are not equivalent: SUDO_UID is only set when the process was started through sudo, while getuid() returns 0 for any root process, however it was started.

Related

How to build docker image without having to use the sudo keyword

I'm building a node.js app which allows people to run code on my server and I'm using Docker to containerise the user's code so that it can't steal data or in general do something they shouldn't. I have a Docker image template that is copied into the user's personal app directory and I want to build the image using this function I've written:
const util = require("util");
const exec = util.promisify(require("child_process").exec);
async function buildContainer(path, dockerUser) {
  return await exec(`sudo docker build -t user_app_${dockerUser} ${path}`);
}
However, when I go to use it, it requires me to enter my sudo password, as if I were executing it manually in a terminal window.
Is there any way I can run this function without having to include the sudo keyword?
Thanks in advance.
You can use podman instead of docker; with podman you don't need sudo.
Most of the commands are the same as docker's.
example:
podman build
podman run
and so on...
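So the function from the question could become something like this (a sketch, assuming podman is installed and on the PATH):
const util = require("util");
const exec = util.promisify(require("child_process").exec);

// Same as the original buildContainer, with podman swapped in for
// `sudo docker` -- podman builds can run rootless, so no password prompt.
async function buildContainer(path, dockerUser) {
  return await exec(`podman build -t user_app_${dockerUser} ${path}`);
}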
hope that helps :)
Regards

ENOENT, no such file or directory on fs.mkdirSync

I'm currently starting up my NodeJS application and I get the following error:
Error: ENOENT, no such file or directory './realworks/objects/'
at Object.fs.mkdirSync (fs.js:654:18)
at Object.module.exports.StartScript (/home/nodeusr/huizenier.nl/realworks.js:294:7)
The weird thing, however, is that the folder exists already, but the check fails on the following snippet:
if (fs.existsSync(objectPath)) {
  var existingObjects = fs.readdirSync(objectPath);
  existingObjects.forEach(function (objectFile) {
    var object = JSON.parse(fs.readFileSync(objectPath + objectFile));
    actualObjects[object.ObjectCode] = object;
  });
} else {
  fs.mkdirSync(objectPath); // << this is line 294
}
I fail to understand how a no such file or directory can occur on CREATING a directory.
When any folder along the given path is missing, mkdir will throw an ENOENT.
There are 2 possible solutions (without using 3rd party packages):
Recursively call fs.mkdir for every non-existent directory along the path (see the sketch below).
Use the recursive option, introduced in v10.12:
fs.mkdir('./path/to/dir', { recursive: true }, err => {})
See also: How to create full path with node's fs.mkdirSync?
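For completeness, a minimal sketch of the first option, assuming POSIX-style paths (the function name is made up for illustration):
const fs = require('fs');
const path = require('path');

function mkdirRecursiveSync(targetDir) {
  const initDir = path.isAbsolute(targetDir) ? path.sep : '';
  targetDir.split(path.sep).reduce((parentDir, childDir) => {
    const curDir = path.resolve(parentDir, childDir);
    if (!fs.existsSync(curDir)) {
      fs.mkdirSync(curDir); // create only the segments that are missing
    }
    return curDir;
  }, initDir);
}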
NodeJS version 10.12.0 added native support to both mkdir and mkdirSync for creating a directory recursively with the recursive: true option, like this:
fs.mkdirSync(targetDir, { recursive: true });
And if you prefer the fs Promises API, you can write:
fs.promises.mkdir(targetDir, { recursive: true });
When you are using fs.mkdir or fs.mkdirSync and passing a path like folder1/folder2/folder3, folder1 and folder2 must already exist; otherwise you will get the above error.
The following worked for me:
fs.mkdir(__dirname + '/realworks/', err => {})
The problem was caused by forever running the application relative to the working directory the forever start command was called in, not the location of the application's entry point.
Try:
fs.mkdir('./realworks/', err => {})
The reason for the error is that if any of the folders along the path given to fs.mkdir or fs.mkdirSync does not exist, these methods will throw/callback with an ENOENT error.
ENOENT is described in the Linux documentation as follows:
No such file or directory (POSIX.1-2001).
Typically, this error results when a specified pathname does not exist, or one of the components in the directory prefix of a pathname does not exist, or the specified pathname is a dangling symbolic link.
Another possible reason for ENOENT is that you lack sufficient privileges to create the directory.
This happened to me while building a docker image where I didn't have sufficient privilege to create a subfolder in the current WORKDIR. Changing the owner of the folder using --chown=user:usergroup, or switching USER to the root user for that directive, were both valid solutions to the problem.
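For example, either fix in the Dockerfile (paths and user names are placeholders):
# give the unprivileged user ownership of the copied files
COPY --chown=node:node . /app
# or run the directive as root
USER root
RUN mkdir -p /app/subfolder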
What worked for me was:
deleting my yarn.lock, package-lock.json, and node_modules
reinstalling and running yarn build
restarting my local server
So... you might be using the Ubuntu terminal to create your React app.
In my case I was testing the Ubuntu terminal that Windows recently launched, which installs on a Windows computer like a virtual machine, but is actually not so messy.
I had the same error as you. After testing all the options the community has given above, none of them worked. It kept giving me the ENOENT error, i.e. file or directory not found, even though the file was indeed there. I was using npm start, and came up with the idea of using sudo npm start instead... and it worked.

Buildstops not creating file with exec in Node.js

I'm running two commands
indexer idx_name --rotate
indexer idx_name --buildstops dict_file 10
Everything is fine when I run these commands from the command line. However, when I run these two commands from my node application using exec, the first command works but the second never generates dict_file.
I tried some combinations with sudo, but it didn't help. I checked the stdout from both these ways (node and shell) and it looked the same.
Here is my node js code:
var exec = require('child_process').exec;
var cmd = 'indexer idx_name --rotate && indexer idx_name --buildstops dict_file 10';
exec(cmd, function (err, stdout, stderr) {
  console.log(stdout);
});
Is there something I'm missing?
Whichever user is running node will need permission to write dict_file.
You might even find it easier to delete the file and let it be recreated by the right user via node (assuming that user can write to the folder).
Sudo could also work, but you will need to make sure the user running node has sudo permissions; sorting that out is definitely outside the remit of Stack Overflow.
... do also check you are looking in the right place. In your example you don't show a path, so dict_file will just be created in the current working directory (and it isn't obvious what node sets that to).
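For example, a sketch of the code from the question with an explicit working directory and error logging (the path is a placeholder):
var exec = require('child_process').exec;
var cmd = 'indexer idx_name --rotate && indexer idx_name --buildstops dict_file 10';

// cwd controls where dict_file ends up; logging err and stderr ensures a
// permission failure is not silently swallowed.
exec(cmd, { cwd: '/path/where/dict_file/should/go' }, function (err, stdout, stderr) {
  if (err) console.error(err);
  console.log(stdout);
  console.error(stderr);
});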

clone: operation not permitted

I am using isolate, a sandbox for isolating the execution of another program using Linux containers. It's very handy and works very well locally on my computer (I can run fork bombs and infinite loops and it protects everything).
Now I'm trying to get this to work on an Ubuntu 12.04 server I have, but I'm having some difficulties with it. It's a fresh server too.
When I run:
sudo isolate --run -- mycommand
(for mycommand I usually try python3 or something), I get:
clone: Operation not permitted
So, I dug into the clone function (called like this in isolate.c):
box_pid = clone(
    box_inside,  // Function to execute as the body of the new process
    argv,        // Pass our stack
    SIGCHLD | CLONE_NEWIPC | CLONE_NEWNET | CLONE_NEWNS | CLONE_NEWPID,
    argv);       // Pass the arguments
if (box_pid < 0)
  die("clone: %m");
if (!box_pid)
  die("clone returned 0");
box_keeper();
Here's the return value section of the clone man page:
On success, the thread ID of the child process is returned in the caller's thread of execution. On failure, -1 is returned in the caller's context, no child process will be created, and errno will be set appropriately.
And this is the error I'm getting:
EPERM Operation not permitted (POSIX.1)
And then I also found this:
EPERM CLONE_NEWNS was specified by a non-root process (process without CAP_SYS_ADMIN).
The clone function is indeed passing CLONE_NEWNS to run the program in a new namespace. I actually tried removing it, but I keep getting clone: Operation not permitted.
So it all seems to point to not having root privileges, but I actually ran the command as root (with and without sudo, just to be sure), and also as a normal user in the sudoers group. None of that worked, even though it works very well locally. Root privileges work for everything else, but for some reason this isolate program doesn't.
I tried both with isolate in /usr/bin and running ./isolate in a local folder too.
I had this issue because I was trying to use isolate within a docker container.
Rerunning the container with the --privileged flag fixed it for me.
You can also pass --cap-add=SYS_ADMIN to docker run.
Or --security-opt seccomp=unconfined.
Or provide your own whitelist of allowed system calls; see https://docs.docker.com/engine/security/seccomp/#pass-a-profile-for-a-container
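For example (the image name is a placeholder):
docker run --privileged my-isolate-image
docker run --cap-add=SYS_ADMIN my-isolate-image
docker run --security-opt seccomp=unconfined my-isolate-image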

Authentication error from server: SASL(-13): user not found: unable to canonify

Ok, so I'm trying to install and configure svnserve on my Ubuntu server. So far so good, up to the point where I try to configure SASL (to prevent plain-text passwords).
So: I installed svnserve and made it run as a daemon (also installed it as a startup script with the command svnserve -d -r /var/svn).
My repository is in /var/svn and has following configuration (to be found in /var/svn/myrepo/conf/svnserve.conf) (I left comments out):
[general]
anon-access = none
auth-access = write
realm = my_repo
[sasl]
use-sasl = true
min-encryption = 128
max-encryption = 256
Over to SASL: I created a svn.conf file in /usr/lib/sasl2/:
pwcheck_method: auxprop
auxprop_plugin: sasldb
sasldb_path: /etc/my_sasldb
mech_list: DIGEST-MD5
I created it in that folder as the article at this link suggested: http://svnbook.red-bean.com/nightly/en/svn.serverconfig.svnserve.html#svn.serverconfig.svnserve.sasl (and also because the folder existed and was listed as a result when I executed locate sasl).
Right after that I executed this command:
saslpasswd2 -c -f /etc/my_sasldb -u my_repo USERNAME
Which also asked me for a password twice, which I supplied. All going great.
When issuing the following command:
sasldblistusers2 -f /etc/my_sasldb
I get the - correct, as far as I can see - result:
USERNAME#my_repo: userPassword
Restarted svnserve, also restarted the whole server, and tried to connect.
This was the result from my TortoiseSVN client:
Authentication error from server: SASL(-13): user not found: unable to canonify
user and get auxprops
I have no clue at all what I'm doing wrong. I've been scouring the web for the past few hours, but haven't found anything except that I might need to move the svn.conf file to another location, for example the install location of subversion itself. which svn returns /usr/bin/svn, so I moved svn.conf to /usr/bin (although that doesn't feel right to me).
Still doesn't work, even after a new reboot.
I'm running out of ideas. Anyone else?
EDIT
I tried changing this (according to what some other forums on the internet told me to do): in the file /etc/default/saslauthd, I changed
START=no
MECHANISMS="pam"
to
START=yes
MECHANISMS="sasldb"
(Actually I had already changed START=no to START=yes before, but I forgot to mention it). But still no luck (I did reboot the whole server).
It looks like svnserve uses default values for SASL...
Check that /etc/sasl2/svn.conf is readable by the svnserve process owner.
If /etc/sasl2/svn.conf is owned by user root, group root, with mode -rw-------, svnserve silently falls back to the default values.
You will not be warned by any log file entry.
see section 4 of https://svn.apache.org/repos/asf/subversion/trunk/notes/sasl.txt:
This file must be named svn.conf, and must be readable by the svnserve process.
(It took me more than 3 days to understand both svnserve-sasl-ldap and this pitfall at the same time...)
I recommend installing the package cyrus-sasl2-doc and reading the section Cyrus SASL for System Administrators carefully.
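For example, to make the file readable (a sketch, assuming svnserve runs as user svn; adjust to your setup):
chown svn /etc/sasl2/svn.conf
chmod 0644 /etc/sasl2/svn.conf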
I expect this is caused by the SASL API call:
result = sasl_server_new(SVN_RA_SVN_SASL_NAME,
                         hostname, b->realm,
                         localaddrport, remoteaddrport,
                         NULL, SASL_SUCCESS_DATA,
                         &sasl_ctx);
if (result != SASL_OK)
  {
    svn_error_t *err = svn_error_create(SVN_ERR_RA_NOT_AUTHORIZED, NULL,
                                        sasl_errstring(result, NULL, NULL));
    SVN_ERR(write_failure(conn, pool, &err));
    return svn_ra_svn__flush(conn, pool);
  }
As you can see, svnserve has no specific handling for the file-access failure; only SASL_OK or a generic error is expected...
I looked in /var/log/messages and found
localhost svnserve: unable to open Berkeley db /etc/sasldb2: No such file or directory
When I created the sasldb at the path above and got the permissions right, it worked. It looks like svnserve ignores, or does not use, the sasldb_path setting.
There was another suggestion that rebooting solved the problem but that option was not available to me.
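In other words, recreating the user in the default location worked; a sketch mirroring the earlier saslpasswd2 command:
saslpasswd2 -c -f /etc/sasldb2 -u my_repo USERNAME
# make sure the svnserve process owner can read it (user "svn" is an assumption):
chown svn /etc/sasldb2
chmod 0640 /etc/sasldb2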
