How to use Sudo with a Jenkins bash script over SSH?

I have a physical machine (Win7) and a virtual machine (Red Hat) sharing a network. I am executing a bash script as a Jenkins job using the SSH plugin for Jenkins to deploy an application from the physical machine to the virtual machine.
I cannot use the root user due to security policy on the machine I want the script executed on, and am instead limited to standard users with sudo access.
However, I want the script to run without interruption (I don't think Jenkins even allows you to enter user passwords while a bash script is running, and prompting for one wouldn't be good practice anyway).
Is there any way to bypass the sudo password prompt, or to configure the script, so that this process can run the way I wish?

You can use a NOPASSWD directive in /etc/sudoers; see the sudoers manual page.
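For example, a minimal sketch (edited with visudo), assuming a hypothetical jenkins account and a deploy script at /opt/deploy/deploy.sh:

# /etc/sudoers.d/jenkins-deploy (hypothetical file; edit with visudo -f)
# Allow the jenkins user to run exactly this one command as root
# without a password; avoid granting NOPASSWD: ALL.
jenkins ALL=(root) NOPASSWD: /opt/deploy/deploy.sh

The Jenkins job then invokes sudo /opt/deploy/deploy.sh and no password prompt appears.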
If that is not possible, you can always use expect to feed the password (stored somewhere in the account) to the sudo process. This is probably against your security policy as well. You need to talk to the people who make the policy for advice on how you can automate these jobs (“digital transformation” and all that).
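If you do go the expect route, a minimal sketch might look like this (the password file and deploy script paths are hypothetical; the file should be readable only by the Jenkins account):

#!/usr/bin/env bash
# Feed a stored password to sudo via expect. This is weaker than
# NOPASSWD and likely violates the same policy; shown for completeness.
PASS=$(cat "$HOME/.deploy_pass")   # file with mode 600
expect <<EOF
spawn sudo /opt/deploy/deploy.sh
expect -re "password.*:"
send "$PASS\r"
expect eof
EOF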

Related

Docker Command Restrictions on Ubuntu

I am currently prototyping Docker hosted on WSL and Ubuntu that will be located on a compliant workstation. Being an early prototype, we want it set up heavily restricted to sidestep compliance concerns.
Now a piece of the puzzle is being able to restrict users to only the few commands that will allow them to accomplish their job. For example, can I use Unix permissions to restrict Docker commands such as docker network create, and flags such as --privileged, --mount, etc.? The goal here is to deploy a specific configuration and ensure that it cannot be changed by non-admin users. Thank you.

Disable certain Docker run options

I'm currently working on a setup to make Docker available on a high performance cluster (HPC). The idea is that every user in our group should be able to reserve a machine for a certain amount of time and use Docker in a "normal" way, meaning accessing the Docker daemon via the Docker CLI.
To do that, the user would be added to the docker group. But this poses a big security problem for us, since it basically means that the user has root privileges on that machine.
The new idea is to make use of the user namespace mapping option (as described in https://docs.docker.com/engine/reference/commandline/dockerd/#/daemon-user-namespace-options). As I see it, this would tackle our biggest security concern: that root in a container is the same as root on the host machine.
But as long as users are able to bypass this via --userns=host, it doesn't increase security in any way.
Is there a way to disable this and other Docker run options?
As mentioned in issue 22223
There are a whole lot of ways in which users can elevate privileges through docker run, e.g. by using --privileged.
You can stop this either by not directly providing access to the daemon in production and using scripts instead (which is not what you want here), or by using an authorization plugin to disallow some options.
That is:
dockerd --authorization-plugin=plugin1
which can lead to an "authorization denied by plugin" error when a disallowed request is rejected.
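As a minimal sketch, assuming a hypothetical plugin named authz-plugin is installed (the plugin sees every API request and can deny options such as --privileged or --userns=host before the daemon acts on them):

# Start the daemon with the (hypothetical) plugin enabled:
dockerd --authorization-plugin=authz-plugin

# Or equivalently in /etc/docker/daemon.json:
#   { "authorization-plugins": ["authz-plugin"] }

# With a policy that denies privileged containers, a user would then
# see something like:
#   $ docker run --privileged alpine true
#   docker: Error response from daemon: authorization denied by plugin authz-plugin: ...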

Allowing custom scp/sftp/rsync in sandbox shell

I am currently building a sandbox environment using Docker where all users are guided directly into a Docker container upon login, with their home directory linked in as a data volume. This is achieved through the use of a custom user shell instead of /bin/bash, csh, etc.
However, scp/rsync/sftp unfortunately fail with this custom shell. My current solution is to make a separate no-login account with the same home directory as the sandbox user and only allow scp/rsync/sftp through rssh on that account, so users can then upload their data.
Just wondering if I can streamline this process somehow and use only one account to redirect users into Docker containers directly AND allow sftp/scp processes as well?
Thanks!
Edit:
Upon more poking around, I have discovered that the newer sshd internal-sftp subsystem does not invoke the user's login shell, and I can enable sftp this way. However, I don't think rsync and scp work this way, unfortunately.
The ultimate goal I want is basically to be able to sandbox users directly into a container upon login and to allow sftp/scp/rsync uploads, ideally using the SAME credentials instead of setting up a separate account for them.
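For the sftp half, a minimal sketch of the sshd_config change, assuming stock OpenSSH on the host:

# /etc/ssh/sshd_config
# Use the in-process SFTP server instead of the external sftp-server
# binary. External subsystems are started via the user's login shell,
# which the custom container shell breaks; internal-sftp runs inside
# sshd itself, so SFTP keeps working.
Subsystem sftp internal-sftp

Interactive logins still go through the custom shell into the container, while sftp is handled inside sshd; scp and rsync, which run a remote command through the shell, remain a problem as noted above.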

How to write a shell script to run scripts on several remote machines without ssh?

Can anyone please tell me how I can write a bash shell script that executes another script on several remote machines without ssh?
The scenario is that I have a couple of scripts that I should run on 100 Amazon EC2 instances. The naive approach is to write a script to scp both source scripts to all the instances and then run them by doing an ssh on each instance. Is there a better way of doing this?
Thanks in advance.
If you just want to do stuff in parallel, you can use Parallel SSH or Cluster SSH. If you really don't want to use SSH, you can install a task queue system like Celery. You could even go old school and have a cron job that periodically checks a location in S3 and, if a key exists there, downloads the file and runs it, though you have to be careful to run each upload only once. You can also use tools like Puppet and Chef if you're generally trying to manage a bunch of machines.
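A minimal sketch of that cron-plus-S3 approach, assuming the AWS CLI is configured on each instance; the bucket, key, and paths are hypothetical:

#!/usr/bin/env bash
# Poll S3 for an uploaded script and run each version exactly once.
# Run from cron, e.g.:  */5 * * * * /usr/local/bin/poll-and-run.sh
set -euo pipefail

BUCKET="my-deploy-bucket"            # hypothetical bucket name
KEY="jobs/run-me.sh"                 # hypothetical object key
STAMP="/var/tmp/poll-and-run.last"   # records the last version run

# Do nothing unless the object exists.
aws s3api head-object --bucket "$BUCKET" --key "$KEY" >/dev/null 2>&1 || exit 0

# Use the object's ETag as a version marker so reruns are skipped.
ETAG=$(aws s3api head-object --bucket "$BUCKET" --key "$KEY" \
       --query ETag --output text)
if [ ! -f "$STAMP" ] || [ "$(cat "$STAMP")" != "$ETAG" ]; then
    aws s3 cp "s3://$BUCKET/$KEY" /tmp/run-me.sh
    chmod +x /tmp/run-me.sh
    /tmp/run-me.sh
    echo "$ETAG" > "$STAMP"
fi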
Another option is rsh, but be careful, it has security implications. See rsh vs. ssh.

Best approach (security) to do some admin work through a web page in Linux?

I want to build a web-based admin tool that allows the system admin to run pre-configured commands and scripts through a web page (a simple and limited Webmin). What is the best approach?
I already started with Ubuntu, installing LAMP and giving the user www-data root privileges!
As I learned (please check the link), this is a really bad move! So how do I build such a web-based system without the security risk?
cheers
I did something like this a couple of years ago. It was (I like to think) fairly secure and only accessible to a limited number of pre-vetted, authenticated users, but it still left me with an uneasy feeling! If you can avoid doing it, I'd recommend you do :)
I had a database sitting between the frontend web-tier and the script which was actually executing actions. The relevant table contained a symbolic command name and an optional numeric argument, which was sufficient for my needs. This allows you to audit what's been executed, provides a quick and dirty way to have a non-www user do things, and means if the website is compromised they're constrained by the DB structure (somewhat) and the script which pulls data from it.
The data from the DB can be read by a daemon running in a separate, unprivileged account. The daemon pulls and sanitises data from the DB and maps the 'command' to an actual executable (with a hard-coded map, so commandA executes A, commandB executes foo, and anything else would get flagged as an error). The account can be locked down using AppArmor (or SELinux, I imagine) to prevent it from executing, reading or writing anything you don't expect it to. Have a system in place to alert you of any errors from either the daemon or AppArmor/SELinux.
The executables which the daemon runs can be setuid'd if appropriate, or you can use the sudoers mechanism to allow the unprivileged account to execute them without a password.
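A minimal sketch of that dispatcher loop, assuming a hypothetical pending_jobs table holding the symbolic command name and numeric argument described above:

#!/usr/bin/env bash
# Hypothetical dispatcher, run from an unprivileged account. The case
# statement is the hard-coded command map; anything else is rejected.
while true; do
    mysql -N -B -e 'SELECT id, command, arg FROM admin.pending_jobs' |
    while IFS=$'\t' read -r id cmd arg; do
        # Only a numeric argument is allowed, per the table design.
        [[ "$arg" =~ ^[0-9]*$ ]] || { logger -t dispatcher "bad arg for $cmd"; continue; }
        case "$cmd" in
            commandA) sudo /usr/local/sbin/A ;;      # password-less via sudoers
            commandB) /usr/local/bin/foo "$arg" ;;
            *)        logger -t dispatcher "rejected: $cmd" ;;
        esac
        mysql -e "DELETE FROM admin.pending_jobs WHERE id = $id"
    done
    sleep 10
done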
I already started with Ubuntu, installing LAMP and giving the user www-data root privileges
Don't do this.
If you really want to execute some very specific scripts with root privileges, create predefined, very limited scripts, allow their password-less execution with sudo for a specific user, and then run them from your script. And don't forget authentication.
Generally, this is a bad idea.
SSH is your best friend.
