I am currently building a sandbox environment using Docker, where every user is dropped directly into a Docker container on login, with their home directory mounted as a data volume. This is achieved by setting a custom user shell instead of /bin/bash, csh, etc.
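Roughly, the custom shell is a small wrapper along these lines (the image name and mount paths are just placeholders):

#!/bin/sh
# Used as the login shell in /etc/passwd: drops the user straight into a container,
# with their home directory bind-mounted as the data volume.
exec docker run --rm -it -v "$HOME:$HOME" -w "$HOME" sandbox-image /bin/bash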
Unfortunately, scp/rsync/sftp fail with this custom shell. My current workaround is a separate no-login account that shares the sandbox user's home directory and only allows scp/rsync/sftp through rssh, so users can upload their data through that account.
Just wondering if I can streamline this somehow and use only one account that both redirects users into Docker containers on login AND allows sftp/scp transfers as well?
Thanks!
Edit:
Upon more poking around, I have discovered that sshd's internal-sftp subsystem does not invoke the user's login shell, so I can enable sftp that way. However, I don't think rsync and scp work this way, unfortunately.
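(Concretely, I believe this just means pointing the sftp subsystem at the built-in server in sshd_config:

Subsystem sftp internal-sftp

and sftp then works even though the login shell is the Docker wrapper.)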
The ultimate goal is to sandbox users directly into a container upon login while also allowing sftp/scp/rsync uploads, ideally using the SAME credentials rather than setting up a separate account for them.
I need to create tools so that non-technical users can use (meaning connect to and start/stop) a virtual machine on Azure. For connecting, RDP does a good enough job and is easy to get the hang of. On the other hand, to start/stop a virtual machine you normally need access to the Azure portal, which (on top of not being straightforward for a non-technical user) causes some access-policy problems. One option would be to just leave the virtual machine always on, but then we are billed for 100% of the time even though the user only needs it for a couple of hours a week.
That's why I investigated the possibility of creating a script, packaged as an executable file, that would automatically start the virtual machine when the user simply clicks it. I have already seen this Stack Overflow question:
Start azure virtual machine without azure portal
which suggests creating an Azure PowerShell script that starts the virtual machine. The only problem is that launching a PowerShell script is beyond the technical level of the person who would use it. On top of that, the Azure module for PowerShell needs to be installed (if I understand correctly), which may not be possible depending on the machine and the rights the user has on it.
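(From what I understand, the script itself boils down to something like the following, assuming the Azure PowerShell module is installed; the resource names here are placeholders:

# Sign in and start the VM.
Connect-AzAccount
Start-AzVM -ResourceGroupName "my-resource-group" -Name "my-vm"

So the hard part is not the script but getting it to run with no setup on the user's machine.)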
So my question: do you have any idea how I could make a simple program (for example, an executable that would run on any machine without any dependencies) that starts an Azure virtual machine?
One solution I thought about, though it seems very complicated: create a "super low cost" virtual machine that is on 100% of the time, and create an executable that instructs this VM to start the other virtual machine on demand?
Thanks for your help
I have a problem with the idea that a PowerShell script is beyond a user who can run an exe file. If built properly, a .ps1 should just be a double-click, exactly like an exe.
Aside from that, you have a couple of hurdles to look at.
Your user can't be given direct access to the resources they need to interact with.
This can be addressed by passing custom PSCredential objects through the script and pulling the credentials from a file. You would build the credential file with ConvertFrom-SecureString and then import it with ConvertTo-SecureString. The biggest problem with this is that if the user can see where that file is stored, they could potentially write a script to access it and gain privileged access.
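A minimal sketch of that credential-file approach (the path and account name are placeholders; the encrypted file is produced once by the privileged account):

# One-time setup, run as the privileged account: store the encrypted password.
Read-Host "Password" -AsSecureString | ConvertFrom-SecureString | Out-File C:\Scripts\vmadmin.cred

# In the script the end user runs: rebuild the credential object from the file.
$pass = Get-Content C:\Scripts\vmadmin.cred | ConvertTo-SecureString
$cred = New-Object System.Management.Automation.PSCredential ("vmadmin@contoso.com", $pass)
# $cred can then be passed to whatever cmdlet performs the privileged action.

Without a -Key, ConvertFrom-SecureString uses DPAPI, so the file only decrypts for the same Windows user on the same machine, which limits (but does not remove) the exposure described above.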
Your user doesn't have permission to run the PowerShell resources needed to execute the script. For this, you'd need to build run-as permissions into the script, and I think creating an exe might be the best avenue for that, although you could have the initial script call another shell with elevated permissions and work through that.
There are tools out there, like PowerGUI, that will compile a .ps1 file into an exe. A properly compiled and secured exe would hide the scripts that call out to the secure-string files and also allow custom run-as permissions to be built into the program.
I have a physical machine (Win7) and a virtual machine (Red Hat) sharing a network. I am executing a bash script as a Jenkins job using the SSH plugin for Jenkins to deploy an application from the physical machine to the virtual machine.
I cannot use the root user due to the security policy on the machine where the script is executed, and am instead limited to standard users with sudo access.
However, I want the script to run without interruption (I don't think Jenkins even allows you to enter user passwords when running bash scripts? Also, this doesn't seem like best practice anyway).
Is there any way to bypass the sudo password prompt, or some configuration of the script, that would allow this process to run the way I want?
You can use a NOPASSWD directive in /etc/sudoers; see the sudoers manual page.
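For example, a drop-in like the following (edited with visudo; the account and script path are placeholders for whatever Jenkins actually runs):

# /etc/sudoers.d/jenkins-deploy
deployuser ALL=(root) NOPASSWD: /opt/deploy/deploy.sh

This limits password-less sudo to that one command rather than everything.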
If that is not possible, you can always use expect to feed the password (stored somewhere in the account) to the sudo process. This is probably against your security policy as well. You need to talk to the people who make the policy for advice on how you can automate these jobs (“digital transformation” and all that).
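For completeness, the expect variant would look roughly like this (the script path and the environment variable holding the password are placeholders, and storing a password this way is exactly what the policy people will object to):

#!/usr/bin/expect -f
# Feed a stored password to sudo; discouraged for the reasons above.
spawn sudo -k /opt/deploy/deploy.sh
expect "assword"
send -- "$env(DEPLOY_PASSWORD)\r"
expect eof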
I'm currently working on a setup to make Docker available on a high-performance cluster (HPC). The idea is that every user in our group should be able to reserve a machine for a certain amount of time and use Docker in the "normal way", meaning accessing the Docker daemon via the Docker CLI.
To do that, the user would be added to the docker group. But this poses a big security problem for us, since it basically means the user has root privileges on that machine.
The new idea is to make use of the user namespace mapping option (as described in https://docs.docker.com/engine/reference/commandline/dockerd/#/daemon-user-namespace-options). As I see it, this would tackle our biggest security concern, namely that root in a container is the same as root on the host machine.
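For reference, remapping can be turned on with a daemon flag (or the equivalent userns-remap key in /etc/docker/daemon.json); "default" makes Docker create and use a dedicated dockremap user:

dockerd --userns-remap=default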
But as long as users are able to bypass this via --userns=host, this doesn't increase security in any way.
Is there a way to disable this and other Docker run options?
As mentioned in issue 22223:
There are a whole lot of ways in which users can elevate privileges through docker run, e.g. by using --privileged.
You can stop this by:
either not directly providing access to the daemon in production, and using scripts,
(which is not what you want here)
or by using an auth plugin to disallow some options.
That is:
dockerd --authorization-plugin=plugin1
which can then lead to the daemon rejecting runs that use disallowed options with an authorization error from the plugin.
I have this requirement: I have to install Ubuntu on a machine for a specific purpose, and I have to create a particular locked-down user account.
On startup it needs to display the login box with username and password fields (so that to administer the machine I only have to reboot and log in as root).
After this user logs in, Google Chrome has to open automatically on a specific page.
That's it; this specific user shouldn't be able to do anything else. The machine is connected to a display that shows ads at my client's expo.
How can I do this? I have no idea. Can anyone tell me ALL the correct steps to achieve this?
Thanks in advance, Francesco
You have to set up a kiosk mode. You can find a good tutorial with all the needed steps at http://www.alandmoore.com/blog/2011/11/05/creating-a-kiosk-with-linux-and-x11-2011-edition/
This may be an "old hat" answer...but yes, it's pretty common in practice to simply create a login shell that does a specific task (kind of similar to FTP or backup user accounts).
This means, simply put, that in /etc/passwd, where you would normally put the user's shell (/bin/bash or whatever), you instead put a script that does whatever you want. When the script ends, the user is booted off.
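A minimal sketch of that (the account name, UID/GID, and script paths are made up for illustration):

# /etc/passwd entry: the last field is the custom "shell"
restricteduser:x:1001:1001::/home/restricteduser:/usr/local/bin/task-shell

# /usr/local/bin/task-shell (executable)
#!/bin/sh
# Run the one allowed task and nothing else; when it exits, the session ends.
exec /usr/local/bin/the-one-allowed-task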
If this is combined with a properly configured SELinux, it's pretty safe as long as the script is not hackable, i.e. it does not request input that can have commands appended (e.g. answering "input name:" with Mike; rm -rf /) and cannot lead to a buffer overrun.
For this reason, it's good practice to put the script in an isolated directory, chroot the user, put the user in its own group, and give the user/group permissions only to that directory.
I want to build a web-based admin tool that allows the system admin to run pre-configured commands and scripts through a web page (a simple and limited Webmin). What is the best approach?
I already started with Ubuntu, installing LAMP and giving the www-data user root privileges!!!
As I learned (please check the link), this is a really bad move!!! So how do I build such a web-based system without the security risk?
cheers
I did something like this a couple of years ago. It was (I like to think) fairly secure and only accessible to a limited number of pre-vetted, authenticated users, but it still left me with an uneasy feeling! If you can avoid doing it, I'd recommend you do :)
I had a database sitting between the frontend web-tier and the script which was actually executing actions. The relevant table contained a symbolic command name and an optional numeric argument, which was sufficient for my needs. This allows you to audit what's been executed, provides a quick and dirty way to have a non-www user do things, and means if the website is compromised they're constrained by the DB structure (somewhat) and the script which pulls data from it.
The data from the DB can be read by a daemon running in a separate, unprivileged account. The daemon pulls and sanitises data from the DB and maps the 'command' to an actual executable (with a hard-coded map, so commandA executes A, commandB executes foo, and anything else would get flagged as an error). The account can be locked down using AppArmor (or SELinux, I imagine) to prevent it from executing, reading or writing anything you don't expect it to. Have a system in place to alert you of any errors from either the daemon or AppArmor/SELinux.
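The dispatch step in that daemon might look roughly like this shell fragment ($cmd and $arg come from the queue row; the symbolic command names and target scripts are invented for illustration):

# Hard-coded map from symbolic command to executable; anything unknown is an error.
case "$cmd" in
    restart-app) sudo /usr/local/sbin/restart-app "$arg" ;;
    clear-cache) sudo /usr/local/sbin/clear-cache ;;
    *)           logger -t admin-daemon "rejected unknown command: $cmd" ;;
esac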
The executables which the daemon runs can be setuid'd if appropriate, or you can use the sudoers mechanism to allow the unprivileged account to execute them without a password.
I already started with Ubuntu, installing LAMP and giving the www-data user root privileges
Don't do this.
If you really need to execute some very specific scripts with root privileges, create predefined, very limited scripts, allow their password-less execution with sudo for a specific user, run them from your web application, and don't forget authentication.
Generally, this is a bad idea.
SSH is your best friend.