I am trying to connect to my Linux server.
After entering the password, it shows the output below:
Last login: Mon Jun 24 12:22:48 2013 from xxx.xxx.xxx.xxx
/bin/bash: No such file or directory
Connection to xxx.xxx.x.xx closed.
How do I connect to the server?
You are trying to log in to your server with ordinary user privileges, and the system cannot find /bin/bash. If the user is chrooted, you can reach /bin/bash inside the chroot, and you can also add your user to sudo.
Then you should see the directory /home/username/bin/bash/.
Edit:
When you chroot, the named directory becomes /. The correct shell path inside the chroot is then /bin/bash, not /home/username/bin/bash.
You will also need to make sure there's enough other stuff inside the chroot for the system to work. You can test this with sudo chroot /home/username /bin/bash and see what works and what doesn't.
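For example, here is a rough sketch of populating a minimal chroot with bash and its libraries (run with appropriate privileges; the paths are illustrative and library locations vary between distributions):
mkdir -p /home/username/bin
cp /bin/bash /home/username/bin/
# ldd lists the shared libraries (and the dynamic loader) that bash needs
for lib in $(ldd /bin/bash | grep -o '/[^ ]*'); do
    mkdir -p "/home/username$(dirname "$lib")"
    cp "$lib" "/home/username$lib"
done
sudo chroot /home/username /bin/bash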
There is also good information available about chroot configuration.
Your user is associated with an incorrect shell. The path to the associated shell "/bin/bash" doesn't exist on the system.
Correct your user's shell as root, or ask the administrator to do it.
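For example, as root you could point the account at a shell that actually exists (a sketch; "username" is a placeholder for the affected account):
usermod -s /bin/sh username
If /bin/bash is simply missing from the system, reinstalling the bash package is the other option.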
Similar question: changing default shell in linux
I created a bash script on Red Hat Linux with vim.
My script works well as a normal user.
In the same script I need to execute this command as root (su - root with password):
vgdisplay -v | grep "LV Status"
The problem is that when I execute my script, the part for the normal user is done, but the part that needs root is not.
My question: how can I switch to root within the same script?
Best regards
I'd recommend not switching to the root user inside the script. Instead, grant sudo privileges to the user that runs the script, and use sudo in the script for the specific commands that need elevation.
Hint: with sudo you can run a process on behalf of another user (usually root).
If you are not familiar with sudo or with granting sudo privileges to a user, see the following link:
https://phoenixnap.com/kb/linux-sudo-command
PS: sudo privileges can be granted for all commands or only for specific ones. For testing purposes you may allow the user to run all commands with sudo, but for production use it is strongly recommended to limit the user to only the commands that are actually needed.
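For example, a minimal sudoers entry for the vgdisplay case might look like this (a sketch only; edit it with visudo, "scriptuser" is a placeholder for the account that runs the script, and the vgdisplay path may differ on your system):
# /etc/sudoers.d/lvm-status
scriptuser ALL=(root) NOPASSWD: /usr/sbin/vgdisplay
The script then calls the command through sudo instead of su:
sudo /usr/sbin/vgdisplay -v | grep "LV Status"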
I have a shell script like the one below for switching to another ISP connection.
#!/bin/bash
/sbin/route add default gw 192.168.1
/sbin/route del default gw 192.168.1.2
/sbin/route del default gw 192.168.1.3
/sbin/route -n
I have root access to my Ubuntu machine but I need to run the above shell script as a normal user. How can I do that?
NOTE:
Case-1: Our local machines authenticate against an LDAP server, so I can't add my Linux username to sudoers (via visudo).
Case-2: I have already moved the script to the /bin directory and added the SUID special permission to it, but a normal user still can't run the script.
I fixed the issue. The fix was as follows:
I added the SUID permission to the route command, added execute permission to the shell script, and moved the script to the /bin directory. Now you can run the script as a command.
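Note that Linux ignores the setuid bit on shell scripts themselves, which is why setting SUID on the script (Case-2) did not work; the bit has to go on the route binary. A sketch of the commands run as root ("switch-isp.sh" is a placeholder name for the script):
chmod u+s /sbin/route          # route now runs with root privileges for any caller
cp switch-isp.sh /bin/
chmod a+rx /bin/switch-isp.sh
Keep in mind that a setuid /sbin/route lets any local user modify the routing table.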
I work on a shared Linux environment (CentOS), but for some reason one of my logins has been locked.
When I do a cat /etc/passwd | grep "/home", I can find my user:
roaming:x:579:579::/home/roaming:/bin/nologin
I've got root permission but don't know what to do to be able to log in again.
What should I do about this 'nologin' thing?
The shell for this user is set to a program that doesn't exist (on CentOS, nologin normally lives at /sbin/nologin, not /bin/nologin), which prevents the user from logging in with an interactive shell (SSH, local login). The user can still authenticate for other things, like copying files over FTP or SMB.
Just run the following as root to put a normal shell back:
chsh -s /bin/bash roaming
As root, enter
chsh -s /bin/sh roaming
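Either way, you can check the result afterwards:
getent passwd roaming
The last field of the output should now show the new shell instead of /bin/nologin.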
For work, I have to connect to dozens of Linux machines via SSH (to perform maintenance, monitor the system, install software, etc).
I have a small arsenal of scripts that help me do some of these tasks, and these are located in a folder on my Mac in /Users/me/bin. I want to be able to run these scripts on the remote Linux machine, but for several reasons I do not want these scripts permanently located on these machines (e.g., other people also connect to these remote machines, and it would be unwise to let them execute these files).
So, is it possible to share scripts across an SSH connection for the lifetime of the session only?
I have a couple of ideas on how to do this, but I don't know if any of them will work. Firstly, if SSH allows file mounting, I could automatically mount me@mymac:/Users/me/bin to me@linux:/remote_bin when I connect to the remote Linux box, and set my PATH variable to "$PATH:/remote_bin". Secondly, I could set up port forwarding in the connection string (e.g., ssh me@linux -R 9999:127.0.0.1:<SMBPORT|ETC>) and every time I connect mount the share and set the $PATH variable.
EDIT: I've come up with a semi-solution. On the Linux machine, edit /etc/ssh/sshd_config to add the following subsystem: Subsystem shareduserbinary sudo su -l -c "/bin/mount -t cifs -o port=9999,username=me,nounix,sec=ntlmssp //127.0.0.1/exported_bin /mnt/remote_bin" && bash -l -i -s. When connecting to the remote machine, set up a reverse port forward and invoke the subsystem. E.g.: ssh -R 9999:127.0.0.1:445 -s shareduserbinary me@linux.
EDIT 2: You can make the solution above cleaner by removing the -l from the sudo command and changing the path from /mnt/remote_bin to $HOME/rbin.
Interesting question. Perhaps you can add a command to ~/.bash_login (assuming you are using bash) to copy the scripts from a remote host (such as your Mac) when you log in, then add a command to ~/.bash_logout to delete the scripts when you log out. But, as bmargulies points out, it would be a good idea to go a step further and make sure that nobody else has permissions to read or execute the scripts.
You can use OpenSSH's LocalCommand to upload the files (using e.g. scp or rsync) when initiating an SSH session (see man ssh_config and this):
Host server1 server2 [...]
PermitLocalCommand yes
LocalCommand scp -q /Users/me/bin/* %h:temp_bin/
and use .bash_logout or an EXIT-trap that you specify in your .bashrc to delete the contents of the directory on logout.
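For the cleanup side, a small sketch (the temp_bin path matches the LocalCommand above):
# in ~/.bash_logout on the server:
rm -rf "$HOME/temp_bin"
# or, as an EXIT trap set in ~/.bashrc:
trap 'rm -rf "$HOME/temp_bin"' EXIT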
I seem to be stuck between an NFS limitation and a Cron limitation.
So I've got root cron (on RHEL5) running a shell script that, among other things, needs to rsync some files over an NFS mount. The files on the NFS mount are owned by the apache user with mode 700, so only the apache user can run the rsync command -- running as root yields a permission error (NFS with root_squash being the rare case where the root user is not all-powerful).
When I just want to run the rsync by hand, I can use "sudo -u apache rsync ...". But sudo doesn't work in cron -- it says "sudo: sorry, you must have a tty to run sudo".
I don't want to run the whole script as apache (i.e. from apache's crontab) because other parts of the script do require root -- it's just that one command that needs to run as apache. And I would really prefer not to change the mode on the files, as that will involve significant changes to other applications.
There's gotta be a way to accomplish "sudo -u apache" from cron??
thanks!
rob
su --shell=/bin/bash --session-command="/path/to/command -argument=something" username &
Works for me (CentOS)
Use su instead of sudo:
su -c "rsync ..." apache
By default on RHEL, sudo isn't allowed for processes without a terminal (tty). That's set in /etc/sudoers.
You can allow tty-less sudo for particular users with these instructions:
https://serverfault.com/questions/111064/sudoers-how-to-disable-requiretty-per-user
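If you go that route, the relevant sudoers change is a one-liner (edit with visudo; here root is the user that cron runs the script as):
Defaults:root !requiretty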
If you want to permanently enable yourself to work interactively as apache:
chsh apache
This lets you change the shell for that user (the apache account normally has a non-login shell).
Place the cron entry in /etc/crontab and specify apache instead of root in the user field.
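A sketch of such an /etc/crontab entry, which (unlike a user crontab) has a user field (the script path is a placeholder):
# m  h  dom  mon  dow  user    command
0 2 * * * apache /usr/local/bin/nfs-rsync.sh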