I am looking for some help on how to dockerize user sessions in Linux. What I want is this: when someone SSHes into an account and does anything, nothing they did is saved when they exit, so the next person who SSHes in gets the account exactly as I originally set it up.
It's for a CTF event I've been tasked with setting up, and since I have little knowledge of most of what I have to do, this whole process is a learning experience for me.
A good explanation of how I am hoping to have it set up is explained here: http://overthewire.org/help/sshinfra.html
You can do that by creating a Docker-based shell for the user.
Creating the user
First, create the user with the commands below:
sudo useradd --create-home --shell /usr/local/bin/dockershell tarun
echo "tarun:tarunpass" | sudo chpasswd
sudo usermod -aG docker tarun
Creating the shell
Next, create a shell script at /usr/local/bin/dockershell:
#!/bin/bash
docker run -it --rm ubuntu:latest /bin/bash
Then make it executable with chmod +x /usr/local/bin/dockershell. Now you can SSH to your VM as the new user:
$ ssh tarun@vm
tarun@vm's password:
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-66-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
0 packages can be updated.
0 updates are security updates.
Last login: Sun Oct 1 06:50:06 2017 from 192.168.33.1
Starting shell for tarun
root@79c12f002708:/#
This takes me into the Docker container, and no session changes are saved. If you want to secure it even more, you should use user namespace remapping:
https://success.docker.com/KBase/Introduction_to_User_Namespaces_in_Docker_Engine
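As a minimal sketch (assuming Docker's default dockremap setup; check the guide above for the subordinate UID/GID details), remapping is enabled in the daemon configuration:
# /etc/docker/daemon.json -- map container root to an unprivileged host user range
{
  "userns-remap": "default"
}
# apply it with: sudo systemctl restart docker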
when they exit anything they did isn't saved
That is because the writable layer of a container is discarded when the container stops.
You should make sure your container is run with a bind mount or (better) a volume: that way, modifications made during the SSH session, if made in the right (mounted) path, would persist.
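For example, a minimal sketch building on the dockershell script above (the volume name ctf-data and the mount path /data are assumptions, not part of the original setup):
docker run -it --rm -v ctf-data:/data ubuntu:latest /bin/bash
# anything written under /data lands in the named volume and survives exit;
# everything outside /data is still discarded with the container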
I am using an AWS RHEL 8 Linux instance with the Tomcat folder mounted from EFS. The ec2-user account has full permissions on the NFS-mounted data (the Tomcat folder) and is able to create and delete files in the /opt/mytomcat directory.
When I start Tomcat directly from the bin directory, /opt/mytomcat/tomcat/bin/startup.sh works fine, but when I try to run it from /etc/systemd/system/tomcat.service it does not work, and I am sure my settings are correct. If I disable SELinux, tomcat.service works fine.
Error from sudo systemctl start tomcat.service:
tomcat.service: Failed to execute command: Permission denied
Dec 14 10:58:02 ip-xxxxxxxx-2.compute.internal systemd[8827]: tomcat.service: Failed at step EXEC spawning /opt/mytomcat/bin/startup.sh: Permission denied
/opt/mytomcat/tomcat/startup.sh has context system_u:object_r:nfs_t:s0, should be system_u:object_r:usr_t:s0
When I run audit2allow for more details, it shows the following about enabling access to nfs_t files:
sudo audit2allow -a
#============= init_t ==============
allow init_t nfs_t:file { execute open read };
Based on the above, I tried to change the label on my NFS drive (an AWS EFS volume):
sudo semanage fcontext -a -t init_t "/opt/mytomcat(/.*)?"
sudo restorecon -R -v /opt/mytomcat/
The commands above run, but nothing changes: when I check, the context still shows system_u:object_r:nfs_t:s0 instead of init_t. I am not sure how to fix it.
My NFS drive mounts and works fine; I am able to edit and update files without any issues.
[ec2-user@ip-10-xxxxx ~]$ ls -lZ /opt/mytomcat
total 2273208
drwxr-x---. 2 ec2-user ec2-user system_u:object_r:nfs_t:s0 6144 Dec 10 18:11 audit
-rw-r-----. 1 ec2-user ec2-user system_u:object_r:nfs_t:s0 7727677 Dec 9 21:58 mytomcat
Can someone please help me, as I am new to the SELinux security bits?
A few things you could try doing are:
Check whether the startup.sh file is executable by the root user (chmod +x startup.sh).
Consider changing the owner or group of your Tomcat files so that they are accessible by the service (using chown).
Check the Tomcat service configuration and see if there are any issues in it.
In my experience these kinds of problems seem to have a very simple root cause that may have been overlooked.
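On the SELinux side, one commonly suggested route (a sketch, not verified against this exact setup; the module name tomcat_nfs is arbitrary) is to turn the denials audit2allow already reported into a loadable local policy module:
sudo audit2allow -a -M tomcat_nfs   # generates tomcat_nfs.te and tomcat_nfs.pp from the audit log
sudo semodule -i tomcat_nfs.pp      # loads the module, allowing init_t to execute nfs_t files
This would also explain why the semanage/restorecon attempt changed nothing: plain NFS mounts do not store per-file SELinux labels, so every file keeps the mount-wide nfs_t context (and init_t is a process domain, not a file type, in any case).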
We have an application, and to get it started we need to perform several steps each time the server gets security patches. I wonder if it is possible to start the application automatically each time the server boots.
Currently we are doing this:
Login in to the server with putty
sudo su - user
Now comes the tricky part: during this sudo su - user, the user's .profile is loaded, and in this .profile the following is done:
export JAVA_HOME="/opt/java"
. /opt/application/config/profile
umask 077
And then we start the applications:
/opt/tomcat/bin/startup.sh
/opt/config/start
/opt/config/start-base prod
Does anybody know if this is possible?
The last three steps are no problem, but I am unsure about the step that loads the extra profile referenced in the .profile of the user "user".
Is it possible to put this all into a script, so that we only have to execute the script during the startup of the server?
Try using the cron daemon:
sudo crontab -e
This will open an editor; then you have to write these lines:
@reboot /opt/tomcat/bin/startup.sh
@reboot /opt/config/start
@reboot /opt/config/start-base prod
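Alternatively, the profile-loading step the question asks about can be folded into a single wrapper script (a sketch; the path /usr/local/bin/start-app.sh is an assumption), reproducing what sudo su - user sets up before starting the applications:
#!/bin/bash
# Hypothetical wrapper start-app.sh: replicate the relevant parts of user's .profile
export JAVA_HOME="/opt/java"
. /opt/application/config/profile
umask 077
# then start the applications exactly as in the manual procedure
/opt/tomcat/bin/startup.sh
/opt/config/start
/opt/config/start-base prod
A single entry of @reboot /usr/local/bin/start-app.sh in the crontab of the "user" account would then replace the three lines above.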
Use the five lines below to install gcsfuse on a brand-new Ubuntu 14 instance:
export GCSFUSE_REPO=gcsfuse-`lsb_release -c -s`
echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" | sudo tee /etc/apt/sources.list.d/gcsfuse.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install gcsfuse
Now create a folder on a local disk (this folder is to be used as the mount point for the Google bucket). Give this folder full access:
sudo mkdir /home/shared
sudo chmod 777 /home/shared
Using the gcsfuse command, mount the Google bucket onto the mount-point folder we created earlier. But first, list the names of the Google buckets that are linked to your Google project:
gsutil ls
The Google project I work on has a single bucket named "my_bucket". Knowing the bucket name, I can run the gcsfuse command that will mount the my_bucket bucket onto the local /home/shared mount folder:
gcsfuse my_bucket /home/shared
The execution of this command logs that it was successful:
Using mount point: /home/shared
Opening GCS connection...
Opening bucket...
Mounting file system...
File system has been successfully mounted.
But now, when I try to create another folder inside the mounted /home/shared mount-point folder, I get an error message:
mkdir /home/shared/test
Error:
mkdir: cannot create directory ‘/home/shared/test’: Input/output error
Trying to fix the problem, I successfully unmounted it using:
fusermount -u /home/shared
and mounted it back, but now using another gcsfuse command line:
mount -t gcsfuse -o rw,user my_bucket /home/shared
But it results in exactly the same permission issue.
Finally, I attempted to fix the permission issue by editing the /etc/fstab configuration file with:
sudo nano /etc/fstab
and then appending a line to the end of the file:
my_bucket /home/shared gcsfuse rw,noauto,user
but it did not help to solve this issue.
What needs to be changed to allow all the users a full access to the mapped Google Bucket so the users are able to create, delete and modify the files and folders stored on Google Bucket?
I saw your question because I was having exactly the same problem and had taken the same steps as you.
The solution to give the root user full control of the mounted cloud folder:
Go to your Google Cloud console, search for "Service account" and click on it.
Then export the key file of your service account (.json).
(I created a new service account from the Google Cloud Shell console using this command: gcloud auth application-default login
and then followed the steps when prompted by the shell.)
Click on Create Key and choose JSON.
Upload the .json key file to your Linux server.
Then, on your Linux server, run this command: gcsfuse -o allow_other --gid 0 --uid 0 --file-mode 777 --dir-mode 777 --key-file /path_to_your_keyFile_that_you_just_uploaded.json nameOfYourBucket /path/to/mount
To find your root user's UID and GID, log in to your server as root and in a terminal type id -u root for the UID and id -g root for the GID.
Hope this helps; I struggled with this for a long time and no resource on the internet really helped. Cheers.
The answer @Keytrap gave is a correct one. But since 2017, gcsfuse as well as GCP have evolved, and there are some more (maybe easier) options to let gcsfuse authenticate with a Google account:
If you are running on a Google Compute Engine instance with scope storage-full configured, then Cloud Storage FUSE can use the Compute Engine built-in service account.
If you installed the Google Cloud SDK and ran gcloud auth application-default login, then Cloud Storage FUSE can use these credentials.
If you set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of a service account's JSON key file, then Cloud Storage FUSE will use these credentials.
Source: Cloud Storage FUSE
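For example, the third option can look like this (a sketch; the key-file path is a placeholder):
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
gcsfuse my_bucket /home/shared   # credentials are picked up from the environment variable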
I am having a problem keeping SSH running on the Windows Subsystem for Linux. It seems that if no shell is open and running bash, all processes in the subsystem are killed. Is there a way to stop this?
I have tried to create a service using nssm but have not been able to get it working. Now I am attempting to start a shell and then just send it to the background, but I haven't quite figured out how.
You have to keep at least one bash console open in order for background tasks to keep running: as soon as you close your last open bash console, WSL tears down all running processes.
And, yes, we're working on improving this scenario in the future ;)
Update 2018-02-06
In recent Windows 10 Insider builds, we added the ability to keep daemons and services running in the background, even if you close all your Linux consoles!
One remaining limitation with this scenario is that you do have to manually start your services (e.g. $ sudo service ssh start in Ubuntu), though we are investigating how we might be able to allow you to configure which daemons/services auto-start when you login to your machine. Updates to follow.
To maintain WSL processes, I place this file in C:\Users\USERNAME\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\wsl.vbs
set ws=wscript.createobject("wscript.shell")
ws.run "C:\Windows\System32\bash.exe -c 'sudo /etc/rc.local'",0
In /etc/rc.local I kick off some services and finally "sleep" to keep the whole thing running:
/usr/sbin/sshd
/usr/sbin/cron
#block on this line to keep WSL running
sleep 365d
In /etc/sudoers.d I added a 'rc-local' file to allow the above commands without a sudo password prompt:
username * = (root) NOPASSWD: /etc/rc.local
username * = (root) NOPASSWD: /usr/sbin/cron
username * = (root) NOPASSWD: /usr/sbin/sshd
This worked well on 1607 but after the update to 1704 I can no longer connect to wsl via ssh.
Once you have cron running you can use 'sudo crontab -e -u username' to define cron jobs with @reboot to launch at login.
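For instance, a minimal sketch of such an entry, reusing the sudoers rules above so no password prompt blocks it:
@reboot sudo /usr/sbin/sshd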
I just read through this thread earlier today and used it to get sshd running without having a WSL console open.
I am on Windows 10 Version 1803 and using Ubuntu 16.04.5 LTS in WSL.
I needed to make a few changes to get it working. Many thanks to google search and communities like this.
I modified /etc/rc.local as such:
mkdir /var/run/sshd
/usr/sbin/sshd
#/usr/sbin/cron
I needed to add the directory for sshd, or I would get the error "Missing privilege separation directory /var/run/sshd".
I commented out cron because I was getting similar errors and haven't had the time or need yet to fix it.
I also changed the sudoers entries a little bit in order to get them to work:
username ALL = ....
Hope this is useful to someone.
John Butler
This is my shell script:
scp -r -i ~/.ssh/id_rsa_mbox /home/mbox/Desktop/qtworkspace/mbox_gui/Debug.tar.gz mbox@111.11.11.118:/mbox/deployment/mbox_gui/
The Jenkins console output is:
Started by user Vikash
Building on master in workspace /var/lib/jenkins/jobs/Copy_Mbox_Gui_Files/workspace
next nodes: []
[workspace] $ /bin/sh -xe /tmp/hudson6656909050940929806.sh
+ scp -r -i /home/mbox/.ssh/id_rsa_mbox /home/mbox140/Desktop/qtworkspace/mbox_gui/Debug.tar.gz mbox@111.11.11.118:/mbox/deployment/mbox_gui/
Host key verification failed.
lost connection
Build step 'Execute shell' marked build as failure
Finished: FAILURE
On Ubuntu, Jenkins runs under its own user. There are two ways of achieving what you want to achieve.
1) From a regular terminal emulator, log in as the Jenkins user, SSH to the target host, and accept the host key.
2) Use JSch
I vote for #2.
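A sketch of option 1, assuming Jenkins runs as the usual jenkins user (the IP comes from the question):
sudo su - jenkins                                # become the user Jenkins runs as
ssh mbox@111.11.11.118                           # answer "yes" to store the host key in ~/.ssh/known_hosts
# or, non-interactively:
ssh-keyscan 111.11.11.118 >> ~/.ssh/known_hosts  # append the host key directly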