Changing a hostname permanently in Ubuntu - linux

I want to create a shell script that can change the hostname of my Ubuntu machine permanently. Whenever I use the hostname New_hostname command, it reverts to the original hostname after I restart the machine.
I found out that the only way I can change this permanently is by modifying the file /etc/hostname and saving it. Is there some way I can do this using a shell script only? I also have the password.

The hostnamectl command combines setting the hostname via the hostname command and editing /etc/hostname. Unfortunately, editing /etc/hosts still has to be done separately.
hostnamectl set-hostname <new-hostname>
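A minimal sketch of a script that wraps both steps (the script name and argument handling are assumptions; the sed call assumes the old name actually appears in /etc/hosts; run as root):
#!/bin/sh
# usage (hypothetical): ./set-hostname.sh new-hostname
NEW="$1"
OLD="$(hostname)"
hostnamectl set-hostname "$NEW"
# rewrite any occurrence of the old name in /etc/hosts
sed -i "s/\b$OLD\b/$NEW/g" /etc/hosts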

Type
echo "myNewHostName" > /etc/hostname
in any shell with root access.
You may also want to take a look at the file /etc/hosts, cf. http://pricklytech.wordpress.com/2013/04/24/ubuntu-change-hostname-permanently-using-the-command-line/.

In Ubuntu 18.04 LTS
On Ubuntu 18.04, a hostname changed via SSH reverts after a reboot. Make the change permanent in the following way (a combined script is sketched after the list).
1. Edit /etc/cloud/cloud.cfg
sudo nano /etc/cloud/cloud.cfg
Set preserve_hostname to true
preserve_hostname: true
2. Run hostnamectl
hostnamectl set-hostname new-host-name
3. Reboot
sudo reboot
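For reference, a hedged sketch of steps 1 and 2 as a single script (it assumes cloud.cfg ships with a "preserve_hostname: false" line, as stock Ubuntu cloud images do; run as root):
#!/bin/sh
# flip the cloud-init setting, then set the name
sed -i 's/^preserve_hostname: false/preserve_hostname: true/' /etc/cloud/cloud.cfg
hostnamectl set-hostname new-host-name
# reboot afterwards to confirm the change survives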

Change hostname permanently without reboot
/etc/hosts
127.0.0.1 persistent_host_name
/etc/hostname
persistent_host_name
Apply changes immediately
$ sudo hostname persistent_host_name
Check changes
$ hostname
persistent_host_name
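A short sketch tying the three steps together (persistent_host_name is the placeholder from above):
NEW=persistent_host_name
echo "$NEW" | sudo tee /etc/hostname
# add a hosts entry only if one is not already present
grep -qw "$NEW" /etc/hosts || echo "127.0.0.1 $NEW" | sudo tee -a /etc/hosts
sudo hostname "$NEW"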

Typically, you would need to change it in these files:
/etc/hostname
/etc/hosts
If you are using some advanced printers, also here:
/etc/printcap
This is why I would recommend doing it manually, but search for the old hostname first. To find all occurrences in /etc:
sudo grep -iRI "_OLDHOSTNAME_" /etc 2>/dev/null
Then change _OLDHOSTNAME_ in every occurrence, as sketched below.
Done.
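A hedged sketch of that replace (review the grep matches before running it; _NEWHOSTNAME_ is a placeholder):
OLD="_OLDHOSTNAME_"; NEW="_NEWHOSTNAME_"
# replace the old name in every file under /etc that contains it
for f in $(sudo grep -ilRI "$OLD" /etc 2>/dev/null); do
  sudo sed -i "s/$OLD/$NEW/g" "$f"
done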

To change the hostname permanently on an Ubuntu machine:
Go to:
# vim /etc/hostname
Type the hostname you want to set for the machine inside the file,
then save the file.
After saving the file, run this command:
# hostname -F /etc/hostname
Then edit the /etc/hosts file:
# vim /etc/hosts
and type the IP and hostname inside the file (e.g. 127.0.1.1 new-hostname).
Then log out of the machine and log back in.

If you just want to change the host name because it is getting displayed in the command prompt in the terminal, then you can replace \h in PS1 with "desired_host_name" in ~/.bashrc.
For example, in ~/.bashrc put these lines at the end of the file:
export PS2="continue-> ";
export PS1="\u@desired_host_name:~$ ";

Change Hostname on Ubuntu 18.04
Definition
A hostname is a label that identifies a machine on the network. You shouldn’t use the same hostname on two different machines on the same network.
Prerequisites
You need a user with sudo privileges.
Change the Hostname
Change the hostname using the hostnamectl command. For example, to change the hostname to new_hostname:
sudo hostnamectl set-hostname new_hostname
On systems provisioned with cloud-init, this alone will not persist. If you want to preserve the change permanently, then you have to edit the cloud.cfg file:
sudo nano /etc/cloud/cloud.cfg
# This will cause the set+update hostname module to not operate (if true)
preserve_hostname: true
Save the file and close your editor.
Verify your Changes
You can verify your changes using the hostnamectl command; it will show new_hostname under "Static hostname".
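For example, after the change has been applied (output trimmed to the relevant line):
$ hostnamectl | grep 'Static hostname'
   Static hostname: new_hostname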

Related

how to share ssh keys programmatically on ubuntu?

I often have difficulty setting up password-less SSH connections on my clusters.
So I wrote a script; I thought it was working on Ubuntu 14.04. I tried it today on an Ubuntu 15 cluster, and it didn't work.
I am not really sure then if it ever worked on Ubuntu 14.04 :-/
It is based on this page: http://mah.everybody.org/docs/ssh
I put the code on this github: https://github.com/romainjouin/formation_spark/blob/master/ubuntu_excange_ssh_keys.sh
The idea is to call the script, passing as the first parameter the user@ip where we want to set up a password-less SSH connection.
Can someone have a look at the github script: is there an obvious thing I am missing?
Furthermore, before launching the script I:
Change /etc/ssh/sshd_config to uncomment
AuthorizedKeysFile %h/.ssh/authorized_keys
and then do:
$ /etc/init.d/ssh restart
EDIT
The code seems to work fine, but I was missing the authorisation on the home directory to make it work (home dir was 777 instead of 700).
Suppose you ssh remote-user@remote-host frequently. First:
ssh-copy-id -i ~/.ssh/id_rsa.pub remote-host
then, edit your ~/.ssh/config file:
cat ~/.ssh/config
Host foobar
HostName remote-host-ip
Port 22
User remote-user
IdentityFile ~/.ssh/id_rsa
now you can ssh like this
ssh foobar
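If ssh-copy-id is not available, a rough manual equivalent (assuming the default key path) is:
cat ~/.ssh/id_rsa.pub | ssh remote-user@remote-host 'mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'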

Editing /etc/hosts file OS X

I have a weird problem that is becoming rather annoying, as I can't work out why my edits to the /etc/hosts file don't save. When I edit with sudo permissions and save, it looks fine. As soon as I open a new terminal, or quit a terminal and check again, the edits I made are gone.
sudo vim /etc/hosts or sudo vim /private/etc/hosts
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
and I try to add this line: 127.0.0.1 ac-decountsv.example.com
I also tried editing /private/etc/hosts. Could someone tell me what I am overlooking?
Thanks
OS X has a lock on changes to system files; remove the lock & make your changes.
This may help:
https://superuser.com/questions/40749/command-to-unlock-locked-files-on-os-x
You can unlock the file, edit it, and lock it again. Note that chattr is Linux-only; the OS X equivalent is chflags with the uchg flag:
sudo chflags nouchg /etc/hosts; sudo vi /etc/hosts; sudo chflags uchg /etc/hosts

Bash to append domain in search field of resolv.conf after vpnc connection in linux

After the VPN connection, I am not able to resolve any machines in the remote gateway. It looks like the remote domain is added to the domain field instead of the search field in /etc/resolv.conf.
I want the remote domain added to the search field along with the local domain. Can anyone suggest a bash script to do it?
Use a bash script with this content: echo "nameserver 8.8.8.8" > /etc/resolv.conf, put it in /etc/network/if-up.d/, and chmod the script with 755.
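That replaces the whole file, though. A hedged sketch that instead merges the remote domain into the existing search line (VPN_DOMAIN is a placeholder; it assumes /etc/resolv.conf already has a search line and is not being rewritten by resolvconf):
#!/bin/sh
VPN_DOMAIN="remote.example.com"
# append the domain to the search line if it is not already listed
grep -q "^search.*$VPN_DOMAIN" /etc/resolv.conf || \
  sed -i "/^search/ s/$/ $VPN_DOMAIN/" /etc/resolv.conf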

On Linux, how can I share scripts across an SSH connection for the session only?

For work, I have to connect to dozens of Linux machines via SSH (to perform maintenance, monitor the system, install software, etc).
I have a small arsenal of scripts that help me do some of these tasks, and these are located in a folder on my Mac in /Users/me/bin. I want to be able to run these scripts on the remote Linux machine, but for several reasons I do not want these scripts permanently located on these machines (e.g., other people also connect to these remote machines, and it would be unwise to let them execute these files).
So, is it possible to share scripts across an SSH connection for the lifetime of the session only?
I have a couple of ideas on how to do this, but I don't know if any of them will work. Firstly, if SSH allows file mounting, I could automatically mount me@mymac:/Users/me/bin to me@linux:/remote_bin when I connect to the remote Linux box, and set my PATH variable to "$PATH:/remote_bin". Secondly, I could set up port forwarding in the connection string (e.g., ssh me@linux -R 9999:127.0.0.1:<SMBPORT|ETC>) and every time I connect mount the share and set the $PATH variable.
EDIT: I've come up with a semi-solution. On the Linux machine, edit /etc/ssh/sshd_config to add the following subsystem: Subsystem shareduserbinary sudo su -l -c "/bin/mount -t cifs -o port=9999,username=me,nounix,sec=ntlmssp //127.0.0.1/exported_bin /mnt/remote_bin" && bash -l -i -s. When connecting to the remote machine, set up a reverse port forward and invoke the subsystem. E.g.: ssh -R 9999:127.0.0.1:445 -s shareduserbinary me@linux.
EDIT 2: You can make the solution above cleaner, by removing the -l from the sudo command and changing the path from /mnt/remote_bin to $HOME/rbin.
Interesting question. Perhaps you can add a command to ~/.bash_login (assuming you are using bash) to copy the scripts from a remote host (such as your mac) when you login, then add a command to ~/.bash_logout to delete the scripts when you logout. But, as bmargulies points out, it would be a good idea to go a step further and make sure that nobody else has permissions to read or execute the scripts.
You can use OpenSSH's LocalCommand to upload the files (using e.g. scp or rsync) when initiating an SSH session (see man ssh_config and this):
Host server1 server2 [...]
PermitLocalCommand yes
LocalCommand scp -q /Users/me/bin/* %h:temp_bin/
and use .bash_logout or an EXIT-trap that you specify in your .bashrc to delete the contents of the directory on logout.
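For the cleanup half, a minimal sketch of the remote ~/.bash_logout (temp_bin matches the LocalCommand above):
# remove the uploaded scripts when the session ends
rm -rf ~/temp_bin/*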

why is password-less ssh not working?

I connected 3 data nodes in my cluster (password-less ssh works fine on all of them), but when I try to connect another data node, password-less ssh does not work on that fourth data node.
IP addresses of the name node and first three data nodes:
172.20.93.192(name node)
172.20.94.189(data node)
172.20.94.145(data node)
172.20.94.193(data node)
Now, my fourth data node's IP address is 172.20.95.6, where password-less ssh is not working.
I am generating keys with
ssh-keygen -t rsa
I am following the same process for the fourth data node as for the three data nodes above, but it is not working. Why? What may be the reason?
I had a very similar problem today with CentOS servers. The problem turned out to be that the /root folder had the wrong permissions. In fact, the /var/log/secure log file showed this error:
Sep 3 09:10:40 nec05 sshd[21858]: Authentication refused: bad ownership or modes for directory /root
This is the incorrect state it was in:
[root@nec05 ~]# ls -ld /root
drwxrwxrwx. 32 root root 4096 Sep 3 09:54 /root
Using chmod fixed it:
[root@nec05 ~]# chmod 550 /root
[root@nec05 ~]# ls -ld /root
dr-xr-x---. 32 root root 4096 Sep 3 09:54 /root
After that, passwordless login worked on this particular server.
More information would be required to get at the "real" cause. However, here are two of the most common problems I have found that are not related to the key configuration itself (taking into account that you use Linux :)):
SSHD on the remote machine is configured in restricted mode for "root" and you are trying to ssh as root. SOLUTION: Copy /etc/ssh/sshd_config from one of the working machines to the faulty one and restart the ssh server.
The home folder of the user used for remote login has invalid permissions. Many default configurations for SSH daemons contain restrictions on the permissions of the user's home folder for security purposes. SOLUTION: Compare with the working nodes and fix. (Sometimes you will see a warning/error log in /var/log/messages.)
If you follow the process to integrate the keys from scratch and review the permissions of all the files involved, you should face no issues.
Please answer back with the sshd_config file as well as the logs from a remote login with -v (ssh -v IPADDR) for a better analysis.
I went through the same errors recently. All my file permissions were set up correctly but ssh still asked for a password. Finally I figured out it was due to one line missing in /etc/ssh/sshd_config: you should add "AuthorizedKeysFile %h/.ssh/authorized_keys", so that sshd will look for the public key file in your home directory.
After doing this the problem was gone.
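A sketch of making that change non-interactively (it assumes the directive is absent from sshd_config; the restart command varies by distro):
echo 'AuthorizedKeysFile %h/.ssh/authorized_keys' | sudo tee -a /etc/ssh/sshd_config
sudo service ssh restart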
You would have to elaborate on your problem more, i.e. whether you are using the same private-public key pair for all servers.
Secondly, you should try ssh with the -v flag; it will give you some hints, such as which private key it is using for authentication and what the cause of the authentication failure is.
Thirdly, verify the permissions of .ssh/authorized_keys at the server end. It should not be writable by group or other users.
You can simply use
ssh-keygen                 # to generate an ssh key pair
ssh-copy-id user@server    # to copy the public key into the server's authorized keys
troubleshooting checklist:
example: machine A password-less login to machine B
turn off SELinux on B
for both A and B: make sure the permissions are correct for .ssh (700) and .ssh/authorized_keys (600); see the sketch after this list
check on B: /etc/ssh/sshd_config has PubkeyAuthentication yes
check the firewall on B
check the log /var/log/secure
if you've renamed id_rsa/id_rsa.pub to, for example, id_rsa_b/id_rsa_b.pub, you should do ssh -i .ssh/id_rsa_b user@MachineB
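A quick sketch of those permission fixes (run as the login user on both machines; paths assumed to be the defaults):
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys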
I am going to explain with an example:
Suppose there are two servers, server1 (192.168.43.21) and server2 (192.168.43.33). If you want password-less ssh between server1 and server2, where the user is admin, then follow the steps below:
To install, run: yum install openssh-server openssh-clients
To create an ssh key, run ssh-keygen -t rsa on server1 and server2
Disable SELinux in: vim /etc/selinux/config
SELINUX=disabled
After changing SELINUX you need to reboot.
Add the user to AllowUsers and AllowGroups, and set PermitEmptyPasswords on, in:
vim /etc/ssh/sshd_config
AllowUsers admin
AllowGroups admin
After the update, restart sshd: systemctl restart sshd
Go to the home directory of the admin user: cd ~
Go to the ssh folder: cd .ssh, copy the id_rsa.pub key from server1, and paste it into server2's authorized_keys file in the .ssh folder.
note: instead of copying manually, we can use:
From server2 use the command: `ssh-copy-id admin@server1`
From server1 use the command: `ssh-copy-id admin@server2`
Now try ssh from server1 to server2 and from server2 to server1.
From server1: `ssh admin@server2`
From server2: `ssh admin@server1`
If it is not working, then check the firewall:
To check the status of the firewall, run: firewall-cmd --state
If it is running, check whether the ssh port is added using the command below:
firewall-cmd --list-all
If the port is not added, then you need to add it to the desired zone.
If the firewall is not required to be active, you can stop it and
mask it using the commands below:
systemctl stop firewalld
systemctl disable firewalld
systemctl mask --now firewalld
Please check whether SELinux is disabled.
In my case, it worked after SELinux was disabled.
The method on Linux is to generate an encrypted key (with either RSA or DSA) for that user, save the key in authorized_keys, and assign the right permissions to that folder and the file in it.
1: Generate a key with the command
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Your public key has been saved in /home/username_of_pc/.ssh/id_dsa.pub
2: Add that key to the authorized keys:
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Set permissions for the folder where it is saved.
If you need it on another server, then simply copy it to the other machine.
3: Check ssh by simply typing
ssh localhost
It should not ask for a password and should only display the last login time; then it is set up correctly. Remember not to use root for ssh.
