I am having a problem keeping SSH running on the Windows Subsystem for Linux. It seems that if a shell is not open and running bash, all processes in the subsystem are killed. Is there a way to stop this?
I have tried to create a service using nssm but have not been able to get it working. Now I am attempting to start a shell and then send it to the background, but I haven't quite figured out how.
You have to keep at least one bash console open in order for background tasks to keep running: as soon as you close your last open bash console, WSL tears down all running processes.
And, yes, we're working on improving this scenario in the future ;)
Update 2018-02-06
In recent Windows 10 Insider builds, we added the ability to keep daemons and services running in the background, even if you close all your Linux consoles!
One remaining limitation with this scenario is that you do have to manually start your services (e.g. $ sudo service ssh start in Ubuntu), though we are investigating how we might allow you to configure which daemons/services auto-start when you log in to your machine. Updates to follow.
To maintain WSL processes, I place this file in C:\Users\USERNAME\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\wsl.vbs
set ws=wscript.createobject("wscript.shell")
ws.run "C:\Windows\System32\bash.exe -c 'sudo /etc/rc.local'",0
In /etc/rc.local I kick off some services and finally "sleep" to keep the whole thing running:
/usr/sbin/sshd
/usr/sbin/cron
#block on this line to keep WSL running
sleep 365d
In /etc/sudoers.d I added an 'rc-local' file to allow the above commands without a sudo password prompt:
username * = (root) NOPASSWD: /etc/rc.local
username * = (root) NOPASSWD: /usr/sbin/cron
username * = (root) NOPASSWD: /usr/sbin/sshd
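Since a syntax error in a sudoers drop-in can lock you out of sudo entirely, it may be worth validating the file before relying on it; a quick check, assuming the same file path as above:

```shell
# Check /etc/sudoers.d/rc-local for syntax errors without activating anything
visudo -cf /etc/sudoers.d/rc-local
```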
This worked well on 1607, but after the update to 1703 I can no longer connect to WSL via SSH.
Once you have cron running you can use 'sudo crontab -e -u username' to define cron jobs with @reboot to launch at login.
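A minimal crontab entry of that shape, assuming sshd is the daemon you want started (note the prefix is the special string @reboot; a line starting with # would just be a comment to cron):

```shell
# Opened via: sudo crontab -e -u username
# @reboot fires when cron itself starts, i.e. when WSL comes up
@reboot /usr/sbin/sshd
```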
I just read through this thread earlier today and used it to get sshd running without having a WSL console open.
I am on Windows 10 Version 1803 and using Ubuntu 16.04.5 LTS in WSL.
I needed to make a few changes to get it working. Many thanks to Google and communities like this.
I modified /etc/rc.local as such:
mkdir /var/run/sshd
/usr/sbin/sshd
#/usr/sbin/cron
I needed to add the directory for sshd or I would get the error "Missing privilege separation directory: /var/run/sshd".
I commented out cron because I was getting similar errors and haven't had the time or need yet to fix it.
I also changed the sudoers entries a little bit in order to get them to work:
username ALL = ....
Hope this is useful to someone.
John Butler
Related
When our users want to schedule a job using the at command, it doesn't work on our SLES 11 server.
If they do exactly the same on our RedHat Enterprise Linux server, it works perfectly.
I've tested it on both servers with their account:
at 11:50
ls -al >/home/USERS/username/justtesting.txt
<<Ctrl+D>>
and on the RHEL server it creates that file, and a subsequent atq command gives an empty list.
If I do exactly the same on the SuSE machine, the file is never created, and the atq command lists all the attempts we made in the following format:
23 2020-03-05 11:50 a USERS\username
or
24 2020-03-05 11:50 = USERS\username
The user is in the /etc/at.allow file on the SuSE machine (there was no /etc/at.allow or /etc/at.deny file to start with, but I added it anyway), and while scheduling the job there is NO error message whatsoever.
If I try the at command as my admin user, it works flawlessly on the SLES machine, so it is probably related to user rights somewhere. But again: the user doesn't get any error message indicating they don't have the needed permissions.
I have two questions:
First of all, obviously: how do I get this to work? Any help would be greatly appreciated
Second: what does the 'a' or '=' mean in the atq list? I've searched but can't seem to find the answer. (The 'at' command is an annoying one to google... :) )
best regards, and thanks for any and all help.
'at' jobs are executed by the 'atd' daemon. Check whether the daemon is up and running; the default SuSE configuration seems to be set not to run the daemon during startup.
Quick and dirty: ps ax | grep atd
More thorough: systemctl status atd
(This also answers your second question: in the atq listing, the letter is the queue name, 'a' being the default queue, while '=' marks a job that is currently running.)
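If the daemon turns out to be stopped, it can be started and enabled persistently. A sketch for both init systems (systemctl applies to systemd-based releases such as SLES 12+; SLES 11 itself uses the SysV tools):

```shell
# systemd (SLES 12+): start now and enable at boot
systemctl start atd
systemctl enable atd

# SysV init (SLES 11): the same idea with chkconfig and the rc script
chkconfig atd on
/etc/init.d/atd start
```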
As per Andrew's suggestion, I've posted this question on unix.stackexchange.com:
https://unix.stackexchange.com/questions/571490/at-command-on-suse-sles-11-does-nothing-works-perfectly-on-other-rhel-serve
We have an application, and to get it started we need to repeat several steps each time the server gets security patches. I wonder if it is possible to start the application automatically each time the server boots.
Currently we are doing this:
Login in to the server with putty
sudo su - user
Now comes the tricky point: during this "sudo su - user", the .profile of user is loaded, and in this .profile the following is done:
export JAVA_HOME="/opt/java"
. /opt/application/config/profile
umask 077
And then we start the applications:
/opt/tomcat/bin/startup.sh
/opt/config/start
/opt/config/start-base prod
Does anybody know if this is possible?
The last three steps are no problem, but I don't know how to handle the step that loads the extra profile from the .profile of the user "user".
Is it possible to put this all into a script, so that we only have to execute the script during the startup of the server?
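One way to fold the whole procedure into a single boot-time script: since su - starts a login shell, the user's .profile (and the extra profile it sources) is loaded exactly as in the manual procedure. A sketch, using the paths from the question and a hypothetical script location:

```shell
#!/bin/sh
# /usr/local/bin/start-app.sh (hypothetical name) - run by root at boot,
# e.g. from an init script or a root @reboot crontab entry.
# "su - user" starts a login shell, so the .profile of "user" is read and
# JAVA_HOME, the extra profile, and the umask are set up as usual.
su - user -c '/opt/tomcat/bin/startup.sh'
su - user -c '/opt/config/start'
su - user -c '/opt/config/start-base prod'
```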
Try the cron daemon:
sudo crontab -e -u user
This will open an editor; add these lines (the prefix must be @reboot, since a line starting with # is just a comment to cron):
@reboot /opt/tomcat/bin/startup.sh
@reboot /opt/config/start
@reboot /opt/config/start-base prod
I have downloaded the sandbox from Hortonworks (CentOS), then tried to follow the tutorial. It seemed the ambari-admin-password-reset command was missing. I also tried to log in with putty; the console asked me to change the password, so I did.
Now the command is there, but I have different passwords for the same user: one for the console and one for putty.
I have tried to find out why, for the same user 'root', there are two different passwords I can log in with (one for the VirtualBox console and one for putty). I see different commands on each box. More than that, when I share a folder I can only see it in the VirtualBox console and not in the putty session, which is really frustrating.
How can I make sure that what I see from putty is the same as what I see from the VirtualBox console?
I think it is somehow related to TTY, but I am not sure.
EDIT:
running commands from the virtual box machine output:
grep "^passwd" /etc/nsswitch.conf
OUT: passwd: files sss
grep root /etc/passwd
OUT: root:x:0:0:root:/root:/bin/bash
operator:x:11:0:operator:/root:/sbin/nologin
getent passwd root
OUT: root:x:0:0:root:/root:/bin/bash
EDIT:
I think this is all about docker containers. It seems that port 2222 on the machine is the SSH port for the HDP 2.5 container and not for the hosting machine.
Now I get another problem: when running
docker exec sandbox ls
it gets stuck. Any help?
Thanks to all helpers.
So now I had the time to analyze the sandbox vm, and write it up for other users.
As you stated correctly in your edit of the question, it's the docker container setup of the sandbox that causes the confusion, with two separate root users:
via ssh root@127.0.0.1 -p 2222 you get into the docker container called "sandbox". This is a CentOS release 6.8 (Final), containing all the HDP services, especially the ambari service. The configuration enforces a password change at the first login of the root user. Inside this container you can also execute ambari-admin-password-reset and set a password for the ambari admin there.
via console access you reach the docker host running CentOS 7.2; here you can log in with the default root password for the VM as found in the HDP docs.
Coming to your sub-question with the hanging docker exec, it seems to be a bug in that specific docker version. If you google that, you will find issues discussing this or similar problems with docker.
So I thought it would be a good idea to just update the host via yum update. However, this turned out to be a difficult path.
yum tried to update the kernel, but complained that there is not enough space on the boot partition.
So I moved the boot partition to the root partition:
edit /etc/fstab and comment out the boot entry
umount /boot
mv /boot
cp -a /boot.org /boot
grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-install /dev/sda
reboot
After that I have found out that the docker configuration is broken and docker does not start anymore. In the logs it complained about
"Error starting daemon: error initializing graphdriver:
\"/var/lib/docker\" contains other graphdrivers: devicemapper; Please
cleanup or explicitly choose storage driver (-s )"
So I edited /etc/systemd/system/multi-user.target.wants/docker.service and changed the ExecStart setting to:
ExecStart=/usr/bin/dockerd --storage-driver=overlay
After a service docker start and a docker start sandbox, the container worked again, I could log in to it, and after an ambari-server restart everything worked again.
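Editing the unit file directly risks being overwritten by package updates; an alternative, assuming an engine new enough to read /etc/docker/daemon.json (1.12+), is to select the storage driver there instead:

```shell
# Equivalent to the --storage-driver flag above, but survives package updates
cat > /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "overlay"
}
EOF
systemctl restart docker
```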
And now, with the new docker version 1.12.2, docker exec sandbox ls works again.
So to sum up: the docker exec command has a bug in that specific version of the sandbox, but you should think twice about whether you really want to upgrade your sandbox.
I ran into the same issue.
The HDP 2.5 sandbox runs all of its components in a docker container, but commands like docker exec -it sandbox /bin/bash or docker attach sandbox got stuck.
When I ran a simple ps aux, I found several /usr/bin/docker-proxy commands which looked like:
/usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 60000 -container-ip 172.17.0.2 -container-port 60000
They probably forward the HTTP ports of the various UIs of HDP components.
I could ssh into the container IP (here 172.17.0.2) using root/hadoop to authenticate. From there, I could use all the "missing" commands like ambari-admin-password-reset.
$ ssh root@172.17.0.2
... # change password
$ ambari-admin-password-reset
NB: I am new to docker, so there's probably a better way to deal with this.
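Rather than reading the address out of the docker-proxy arguments, the container IP can be queried directly (assuming the container is named sandbox, as above):

```shell
# Print the IP address of the "sandbox" container on each of its networks
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' sandbox
```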
I'd like to post the instructions for 3.0.1 here.
I followed the instructions of installing hortonworks version 3.0.1 here: https://youtu.be/5TJMudSNn9c
After running the docker container, go to your browser and enter "localhost:4200"; that will take you to the in-browser terminal of the container that hosts Ambari. Enter "root" for the login and "hadoop" for the password, change the root password, and then run "ambari-admin-password-reset" to reset the Ambari password.
In order to be able to use sandbox-hdp.hortonworks.com, you need to add the line "127.0.0.1 sandbox-hdp.hortonworks.com" at the end of the /private/etc/hosts file on your Mac.
Then, from the top-right corner, click the power button >> power-off drop-down >> Restart. When it boots up, press the Esc key to get into the recovery menu.
Select "Advanced options" and hit Enter.
Select "Recovery mode" and hit Enter.
Select "root" and hit Enter to drop into a root shell.
Then run these commands:
mount -o remount,rw /
ls /home
passwd username
(Use your own username; ls /home shows the available users. The remount is needed because recovery mode mounts the root filesystem read-only.)
Enter the new password twice.
Hopefully you changed the password (:
I am running RStudio Server version 0.98.978 with R 3.0.2 on an Ubuntu 14 server. Yesterday I executed a command that caused the session to freeze. Since then, every time I try to load that user profile (and that user profile only), the browser hangs. I generally get an error message saying the browser has become unresponsive (regardless of whether I use Chrome or IE).
Simple commands like R.Version() take several minutes to complete. I have tried rebooting the server and killing all processes related to the RStudio account in question. So far nothing has resolved the problem. My searches have only brought up solutions to fix the problem on Windows. What else can I try to fix this problem?
I had the same problem and none of the above solutions worked for me. I tried the following steps and the problem was solved:
1- Suspend all sessions:
`$ sudo rstudio-server force-suspend-all`
2- Stop RStudio Server:
`$ sudo rstudio-server stop`
3- Delete the .rstudio folder in /home/user/.rstudio (it is a directory, so -r is needed):
`$ rm -r /home/user/.rstudio`
4- Start RStudio Server:
`$ sudo rstudio-server start`
5- Open your browser and start using RStudio Server :)
You might want to do a ps -u user, where user is the one whose session is hanging in the browser, and kill the rsession related to it.
I had the same issue and was about to give up and delete my user when, almost serendipitously, I couldn't delete it due to an ongoing process tied to my user. I did a ps -u jp_smasher, where jp_smasher is my user name, and found an offending rsession.
[root]# ps -u jp_smasher
PID TTY TIME CMD
39774 ? 00:00:00 sshd
39776 pts/0 00:00:00 bash
39888 ? 00:00:00 sshd
39889 pts/1 00:00:00 bash
39999 pts/0 00:02:24 R
54230 ? 00:00:00 sshd
54231 pts/3 00:00:00 bash
54372 ? 00:00:11 rsession
54503 ? 00:00:00 sshd
54504 ? 00:00:00 sftp-server
69992 ? 00:00:00 sshd
69993 pts/4 00:00:00 bash
[root]# kill 54372
Killing the process doesn't solve the main problem (which could be some memory-hogging process etc), but it does resolve the symptom of the hanging browser.
I suspect that the ~/.rstudio folder applies to versions of RStudio < 1.3. For newer versions, look at ~/.local/share/rstudio/sessions instead, and remove the folder within it.
See https://support.rstudio.com/hc/en-us/articles/218730228-Resetting-a-user-s-state-on-RStudio-Server, which mentions: "Older versions (version 1.3 of RStudio Server Pro and earlier) used a non-configurable folder named .rstudio to store state".
It looks like you have a huge .RData file under your home directory, so RStudio tries to reload it every time you restart your session. Delete that file and you should be good to go.
It may be because you have processed larger files. Just kill the particular R session; that will suffice.
Command: ps -u username
Find the rsession and kill it.
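If pgrep and pkill are available (they ship with procps on most distributions), the find-and-kill step can be shortened; username is a placeholder here:

```shell
# List the rsession PIDs owned by the user, then kill them
pgrep -u username rsession
pkill -u username rsession
```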
I solved this problem (RStudio Server hanging for one user only), but I could not solve it with the above methods. I think it was a network problem, since I received no response from a <~~~~~~.cache.js> call. In this case, you can see the <~~~~~~~~~.cache.js> request getting no response before you even click the log-in button.
Anyway, here is my way.
Reset your network with the following commands; enter them in a cmd terminal running as administrator:
netsh winsock reset
netsh int ip reset
Reboot
The IP information may be erased, so if you're using a fixed IP address, re-enter your previous settings afterwards.
That's all.
Since it could be a network issue when you cannot log in to RStudio Server, you may try this approach to recover the connection.
What I've done so far, following these instructions, is unzipped and moved JBoss into my /usr/local/ directory. Then I put the jboss_init_redhat.sh script in /etc/init.d/ as jboss and edited the script to match my configuration. I then run /etc/init.d/jboss start and all it says is
JBOSS_CMD_START = cd /usr/local/jboss-4.2.3.GA//bin; /usr/local/jboss-4.2.3.GA//bin/run.sh -c default -b 0.0.0.0
and then nothing happens. Also, if I go into /usr/local/jboss-4.2.3.GA/bin and run run.sh, it starts the server, but when I go to the VM's IP:8080 in my browser I still get nothing. Any help would be appreciated; I don't know much about doing this, so excuse my inexperience.
Init scripts should be owned and started by root.
The init script you use calls su (runuser would be better) to change to the jboss user.
The jboss user itself does not have permission to do that.
The jboss user also does not have permission to write to /var/run etc.
So run sudo /etc/init.d/jboss start (you need to set up sudo first to allow this) or change to the root account and execute /etc/init.d/jboss start.
If it still fails check the logs at /usr/local/jboss-4.2.3.GA/server/default/log.
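When the service dies silently like this, it may also help to watch the server log while starting it from another terminal (the path follows from the log directory above; server.log is the JBoss 4.x default log file name):

```shell
# Follow the boot messages while the init script runs
tail -f /usr/local/jboss-4.2.3.GA/server/default/log/server.log
# Afterwards, confirm something is actually listening on 8080
netstat -tlnp | grep 8080
```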
Hope this helps.