Is there a way to dynamically change hostname to IP address mapping in Linux (without having to restart)?

In a Linux system, I suppose you can configure hostname-to-IP-address mappings in /etc/hosts, but I guess that if you change the mapping for a particular hostname, you would have to restart for the change to take effect.
Is there a way to dynamically (without restarting) change the mapping of a hostname to a different IP address?

In Linux, administrators can specify the order of the sources an application consults for domain name information.
The file is
/etc/nsswitch.conf
and the default setting for host lookups is:
hosts: files dns
So yes, you can add your hosts to /etc/hosts and your application will follow that order. You don't need to restart anything; the change takes effect dynamically.
For more info type:
man nsswitch.conf
Note, however, that an application can bypass this mechanism: for example, if it queries a DNS server directly (through a remote resolver) or if it wasn't built to use the operating system's gethostbyname/gethostbyaddr calls.
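If you want to verify a change immediately (a minimal sketch; the hostname and address here are placeholders), getent follows the nsswitch.conf lookup order:
$ echo '203.0.113.10 myapp.example.test' | sudo tee -a /etc/hosts
$ getent hosts myapp.example.test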

You just need to change the IP address in /etc/hosts. Most of the time this change will propagate into the name-server cache automatically. However, sometimes you need to flush the name-server cache on your system. Depending on what you've got running, the actual steps may vary. I'll list a few popular ones:
NSCD
$ sudo /etc/init.d/nscd restart
OR
$ sudo service nscd restart
OR
$ sudo systemctl restart nscd
dnsmasq
$ sudo /etc/init.d/dnsmasq restart
OR
$ sudo service dnsmasq restart
OR
$ sudo systemctl restart dnsmasq
BIND server dns cache
(unrelated to the OP's question, but listed in case someone ends up here)
$ sudo rndc restart
OR
$ sudo rndc flushname foo.local
Where foo.local is the particular hostname you wish to remove from the cache.
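systemd-resolved
If your system uses systemd-resolved (a common default on recent distributions), its cache can usually be flushed with:
$ sudo resolvectl flush-caches
OR (on older releases)
$ sudo systemd-resolve --flush-caches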

Related

nameservers update differently with openconnect and openconnect-gnome in ubuntu 18.04

This seems to be a new issue with network-manager-openconnect-gnome in Ubuntu 18.04+.
I installed it with sudo apt install network-manager-openconnect-gnome to get GNOME integration with openconnect and the Cisco AnyConnect Compatible VPN (openconnect).
As an aside (which may actually be relevant), I do this to get *.local addresses to resolve:
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf as per the systemd docs
Move dns before mdns4_minimal in /etc/nsswitch.conf (sketched below)
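Roughly, that reordering looks like this (the exact default hosts line varies between releases, so treat the first line as approximate):
# before (approximate Ubuntu default):
hosts: files mdns4_minimal [NOTFOUND=return] dns
# after moving dns forward:
hosts: files dns mdns4_minimal [NOTFOUND=return]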
If I connect to the VPN with openconnect through the gnome network manager, VPN addresses (sites for work) do not resolve. Regular sites continue to work as expected.
If I connect to the VPN with openconnect on the command line with sudo openconnect vpn.mycompany.com, VPN addresses (sites for work) do resolve. Regular sites continue to work as expected.
I thought I would check to see if there were any differences between /etc/resolv.conf with each of these VPN connection methods and sure enough, there is one:
openconnect on the command line (working):
#@VPNC_GENERATED@ -- this file is generated by vpnc
# and will be overwritten by vpnc
# as long as the above mark is intact
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known uplink DNS servers. This file lists all configured search domains.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 10.10.10.10
nameserver 10.10.10.11
search broadband mycompany.com
openconnect gnome integration (not working):
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known uplink DNS servers. This file lists all configured search domains.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 192.168.1.1
nameserver 10.10.10.10
nameserver 10.10.10.11
search broadband mycompany.com
If I remove (or comment out) the nameserver 192.168.1.1 line, which is the only difference in content between the working and non-working files, everything works as expected: I can resolve addresses within the company, and regular sites work as expected.
This does not happen with Fedora; everything works out of the box. I'm not sure why the network-manager-openconnect-gnome package works differently, or whether there's a way I can make it work without either:
Editing the file by hand.
Using the openconnect tool from the command line and keeping a terminal open running that command.

Docker build command fails to resolve domains

I set up a Debian 10 server to host my containers running on Docker version 19.03.2.
It currently hosts 3 DNS containers (pi-hole => bind9 => dnscrypt-proxy) which means my Debian 10 server acts as a DNS server for my LAN.
I want to add a new container. However, I can't build it because it fails when it comes to RUN apt-get update. I checked the content of the container's /etc/resolv.conf, and the content seems right (nameserver 1.1.1.1 and nameserver 9.9.9.9, which matches what I wrote in /etc/docker/daemon.json).
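For reference, that override in /etc/docker/daemon.json looks like this (using the servers mentioned above):
{
  "dns": ["1.1.1.1", "9.9.9.9"]
}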
If I'm correct, the build step uses - by default - the DNS of the host, except if you specify DNS servers in /etc/default/docker or /etc/docker/daemon.json.
If the DNS servers in /etc/resolv.conf are correct, and the container has Internet access (I tried RUN ping 8.8.8.8 -c1 and it works), shouldn't the build succeed?
I tried several things, like overwriting the content of /etc/resolv.conf with other DNS, I also rebooted the server, restarted Docker, pruned downloaded images, used the --no-cache option... I also reinstalled Docker. Nothing seems to work.
It must be somehow related to my DNS containers I guess.
Below is the content of the /etc/resolv.conf of the host (the first one is itself, as it redirects to Pi-hole).
Do you have any leads to solve this issue?
I can provide the docker-compose file of my DNS containers and the Dockerfile of my new container if you need them.
Thanks in advance.
I have found this fix:
RUN chmod o+r /etc/resolv.conf && apt-get [....]
It works when I change the permissions.
I do not really understand why it behaves like this; if you have any leads, I would be glad to know more!

How to keep the /etc/hosts file from being overwritten

I am deploying a Kops Kubernetes cluster on AWS with Debian Jessie image.
Mine is a hybrid environment where my Artifactory instance lives in a physical environment in our DC. I have been facing an issue: my worker nodes are unable to pull images from my Artifactory unless I specify the Artifactory FQDN and IP in the /etc/hosts file.
This is a manual edit, and it all works fine after I do it. So I went ahead and added the entry to the additional userdata of the Kops worker node group, but after some time the hosts file on the worker nodes gets overwritten, and the same thing happens on node reboot.
So how can I resolve this?
The real answer is to run your own DNS server, or at least use DNS hostnames to resolve. If your router supports it, you can set local hostnames (machine-1.local).
If that isn't possible, you could try a solution like Puppet if you own the virtual machines. Kubernetes also has a DNS add-on. You could also use a cron job on boot to write to the hosts file, but that's a dirty solution.
In addition, your hosts file would get rewritten for every DHCP renew. You could use static IPs, but again, DNS is the way to go.
Another workaround for this is to put it in your /etc/rc.local file:
If the file exists, add this to the end:
echo '<ip-address-of-artifactory> <fqdn-of-artifactory>' >> /etc/hosts
If the file doesn't exist, create it:
$ cat << EOF > /etc/rc.local
#!/bin/sh -e
#
echo '<ip-address-of-artifactory> <fqdn-of-artifactory>' >> /etc/hosts
EOF
$ chmod 755 /etc/rc.local
$ reboot # check that it works
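A small optional refinement: guard the echo so repeated boots don't keep appending duplicate lines, for example:
grep -qF '<fqdn-of-artifactory>' /etc/hosts || echo '<ip-address-of-artifactory> <fqdn-of-artifactory>' >> /etc/hosts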

Setting up FTP on Amazon Cloud Server [closed]

I am trying to set up FTP on an Amazon Cloud Server, but without luck.
I've searched the net and there are no concrete steps on how to do it.
I found those commands to run:
$ yum install vsftpd
$ ec2-authorize default -p 20-21
$ ec2-authorize default -p 1024-1048
$ vi /etc/vsftpd/vsftpd.conf
# --- Add the following lines at the end of the file ---
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=<Public IP of your instance>
$ /etc/init.d/vsftpd restart
But I don't know where to write them.
Jaminto did a great job of answering the question, but I recently went through the process myself and wanted to expand on Jaminto's answer.
I'm assuming that you already have an EC2 instance created and have associated an Elastic IP Address to it.
Step #1: Install vsftpd
SSH to your EC2 server. Type:
> sudo yum install vsftpd
This should install vsftpd.
Step #2: Open up the FTP ports on your EC2 instance
Next, you'll need to open up the FTP ports on your EC2 server. Log in to the AWS EC2 Management Console and select Security Groups from the navigation tree on the left. Select the security group assigned to your EC2 instance. Then select the Inbound tab, then click Edit:
Add two Custom TCP Rules with port ranges 20-21 and 1024-1048. For Source, you can select 'Anywhere'. If you decide to set Source to your own IP address, be aware that your IP address might change if it is being assigned via DHCP.
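If you prefer the AWS CLI over the console, the equivalent is roughly the following (sg-xxxxxxxx is a placeholder for your security group ID, and 'Anywhere' corresponds to 0.0.0.0/0):
> aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 20-21 --cidr 0.0.0.0/0
> aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 1024-1048 --cidr 0.0.0.0/0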
Step #3: Make updates to the vsftpd.conf file
Edit your vsftpd conf file by typing:
> sudo vi /etc/vsftpd/vsftpd.conf
Disable anonymous FTP by changing this line:
anonymous_enable=YES
to
anonymous_enable=NO
Then add the following lines to the bottom of the vsftpd.conf file:
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=<Public IP of your instance>
Your vsftpd.conf file should look something like the following - except make sure to replace the pasv_address with your public facing IP address:
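The relevant lines boil down to something like this (the rest of the file keeps its defaults; 203.0.113.25 stands in for your Elastic IP):
anonymous_enable=NO
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=203.0.113.25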
To save changes, press escape, then type :wq, then hit enter.
Step #4: Restart vsftpd
Restart vsftpd by typing:
> sudo /etc/init.d/vsftpd restart
You should see a message confirming that vsftpd restarted.
If this doesn't work, try:
> sudo /sbin/service vsftpd restart
Step #5: Create an FTP user
If you take a peek at /etc/vsftpd/user_list, you'll see the following:
# vsftpd userlist
# If userlist_deny=NO, only allow users in this file
# If userlist_deny=YES (default), never allow users in this file, and
# do not even prompt for a password.
# Note that the default vsftpd pam config also checks /etc/vsftpd/ftpusers
# for users that are denied.
root
bin
daemon
adm
lp
sync
shutdown
halt
mail
news
uucp
operator
games
nobody
This is basically saying, "Don't allow these users FTP access." vsftpd will allow FTP access to any user not on this list.
So, in order to create a new FTP account, you may need to create a new user on your server. (Or, if you already have a user account that's not listed in /etc/vsftpd/user_list, you can skip to the next step.)
Creating a new user on an EC2 instance is pretty simple. For example, to create the user 'bret', type:
> sudo adduser bret
> sudo passwd bret
You'll be prompted to enter and confirm a password for the new user.
Step #6: Restricting users to their home directories
At this point, your FTP users are not restricted to their home directories. That's not very secure, but we can fix it pretty easily.
Edit your vsftpd conf file again by typing:
> sudo vi /etc/vsftpd/vsftpd.conf
Uncomment the line:
chroot_local_user=YES
Once you're done, the line should no longer be commented out.
Restart the vsftpd server again like so:
> sudo /etc/init.d/vsftpd restart
All done!
Appendix A: Surviving a reboot
vsftpd doesn't automatically start when your server boots. If you're like me, that means that after rebooting your EC2 instance, you'll feel a moment of terror when FTP seems to be broken - but in reality, it's just not running! Here's a handy way to fix that:
> sudo chkconfig --level 345 vsftpd on
Alternatively, if you are using Red Hat, another way to manage your services is this nifty graphical user interface for controlling which services should start automatically:
> sudo ntsysv
Now vsftpd will automatically start up when your server boots up.
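On newer, systemd-based images the equivalent is usually:
> sudo systemctl enable vsftpd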
Appendix B: Changing a user's FTP home directory
* NOTE: Iman Sedighi has posted a more elegant solution for restricting users' access to a specific directory. Please refer to his excellent solution posted as an answer. *
You might want to create a user and restrict their FTP access to a specific folder, such as /var/www. In order to do this, you'll need to change the user's default home directory:
> sudo usermod -d /var/www/ username
In this specific example, it's typical to give the user permissions to the 'www' group, which is often associated with the /var/www folder:
> sudo usermod -a -G www username
To enable passive ftp on an EC2 server, you need to configure the ports that your ftp server should use for inbound connections, then open a list of available ports for the ftp client data connections.
I'm not that familiar with Linux, but the commands you posted are the steps to install the FTP server, configure the EC2 firewall rules (through the AWS API), and then configure the FTP server to use the ports you allowed on the EC2 firewall.
So this step installs the FTP server (vsftpd):
> yum install vsftpd
These steps configure the FTP server:
> vi /etc/vsftpd/vsftpd.conf
-- Add following lines at the end of file --
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=<Public IP of your instance>
> /etc/init.d/vsftpd restart
But the other two steps are more easily done through the Amazon console under EC2 Security Groups. There you need to configure the security group that is assigned to your server to allow connections on ports 20-21 and 1024-1048.
Thanks @clone45 for the nice solution. But I had just one important problem with Appendix B of his solution. Immediately after I changed the home directory to /var/www/html, I couldn't connect to the server through SSH and SFTP because it always showed the following errors:
permission denied (public key)
or in FileZilla I received this error:
No supported authentication methods available (server: public key)
But I could access the server through a normal FTP connection.
If you encounter the same error, just undo Appendix B of @clone45's solution by setting the default home directory for the user:
sudo usermod -d /home/username/ username
But when you set the user's default home directory, the user has access to many other folders outside /var/www/html. So to secure your server, follow these steps:
1- Make an sftponly group
Make a group for all the users whose access you want to restrict to FTP and SFTP access to /var/www/html. To make the group:
sudo groupadd sftponly
2- Jail the chroot
To restrict this group's access to the server via SFTP, you must set up a chroot jail so the group's users cannot access any folder except the html folder inside their home directory. To do this, open /etc/ssh/sshd_config in vim with sudo.
At the end of the file, comment out this line:
Subsystem sftp /usr/libexec/openssh/sftp-server
And then add this line below that:
Subsystem sftp internal-sftp
So we replaced the subsystem with internal-sftp. Then add the following lines below it:
Match Group sftponly
ChrootDirectory /var/www
ForceCommand internal-sftp
AllowTcpForwarding no
After adding these lines, I saved my changes and then restarted the SSH service:
sudo service sshd restart
3- Add the user to the sftponly group
Any user whose access you want to restrict must be a member of the sftponly group. Add the user to it (-a appends the group so existing group memberships are kept):
sudo usermod -a -G sftponly username
4- Restrict user access to just /var/www/html
To restrict the user's access to just the /var/www/html folder, we need to make a directory named 'html' in that user's home directory and then bind-mount /var/www onto /home/username/html as follows:
sudo mkdir /home/username/html
sudo mount --bind /var/www /home/username/html
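Note that a bind mount doesn't survive a reboot on its own; to make it persistent, the usual approach is an /etc/fstab entry along these lines (username is a placeholder):
/var/www  /home/username/html  none  bind  0  0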
5- Set write access
If the user needs write access to /var/www/html, then you must jail the user at /var/www, which must have root:root ownership and permissions of 755. You then need to give /var/www/html ownership of root:sftponly and permissions of 775 by running the following commands:
sudo chmod 755 /var/www
sudo chown root:root /var/www
sudo chmod 775 /var/www/html
sudo chown root:sftponly /var/www/html
6- Block shell access
If you want to block shell access to make things more secure, just change the user's default shell to /bin/false as follows:
sudo usermod -s /bin/false username
Great article... worked like a breeze on the Amazon Linux AMI.
Two more useful commands:
To change the default FTP upload folder
Step 1:
edit /etc/vsftpd/vsftpd.conf
Step 2: Create a new entry at the bottom of the file:
local_root=/var/www/html
To apply read, write, and delete permissions to the files under the folder so that you can manage them with an FTP client:
find /var/www/html -type d -exec chmod 777 {} \;
In case you have ufw enabled, remember to allow ftp:
> sudo ufw allow ftp
It took me 2 days to realise that I had ufw enabled.
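If you also use the passive port range from the accepted answer, those ports likely need to be allowed as well, for example:
> sudo ufw allow 20:21/tcp
> sudo ufw allow 1024:1048/tcp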
It will not work until you add your user to the www group with the following command:
sudo usermod -a -G www <USER>
This solves the permission problem.
Set the default path by adding this:
local_root=/var/www/html
Don't forget to update your iptables firewall, if you have one, to allow the 20-21 and 1024-1048 port ranges in.
Do this in /etc/sysconfig/iptables by adding lines like these:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 20:21 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 1024:1048 -j ACCEPT
And restart iptables with the command:
sudo service iptables restart
I've simplified clone45's steps:
Open the ports as he mentioned
sudo su
sudo yum install vsftpd
echo -n "Public IP of your instance: " && read publicip
echo -e "anonymous_enable=NO\npasv_enable=YES\npasv_min_port=1024\npasv_max_port=1048\npasv_address=$publicip\nchroot_local_user=YES" >> /etc/vsftpd/vsftpd.conf
sudo /etc/init.d/vsftpd restart
I followed clone45's answer all the way to the end. A great article! Since I needed FTP access to install plug-ins on one of my WordPress sites, I changed the home directory to /var/www/mysitename. Then I added my FTP user to the apache (or www) group like this:
sudo usermod -a -G apache myftpuser
After this I still saw this error on WP's plugin installation page: "Unable to locate WordPress Content directory (wp-content)". I searched and found this solution in a wp.org support thread: https://wordpress.org/support/topic/unable-to-locate-wordpress-content-directory-wp-content and added the following to the end of wp-config.php:
if(is_admin()) {
    add_filter('filesystem_method', create_function('$a', 'return "direct";' ));
    define( 'FS_CHMOD_DIR', 0751 );
}
After this my WP plugin was installed successfully.
Maybe worth mentioning in addition to clone45's answer:
Fixing Write Permissions for Chrooted FTP Users in vsftpd
The vsftpd version that comes with Ubuntu 12.04 Precise does not
permit chrooted local users to write by default. By default you will
have this in /etc/vsftpd.conf:
chroot_local_user=YES
write_enable=YES
In order to allow local users to write, you need to add the following parameter:
allow_writeable_chroot=YES
Note:
Issues with write permissions may show up as the following FileZilla errors:
Error: GnuTLS error -15: An unexpected TLS packet was received.
Error: Could not connect to server
References:
Fixing Write Permissions for Chrooted FTP Users in vsftpd
VSFTPd stopped working after update
In case you are getting a "530 password incorrect" error, one more step is needed:
In the file /etc/shells, add the following line:
/bin/false
FileZilla is a good FTP tool to set up with Amazon Cloud.
Download the FileZilla client from https://filezilla-project.org/
Click on File -> Site Manager ->
New Site
Host: the IP address of your Amazon cloud instance (and port, if any)
Protocol: SFTP (may change based on your requirements)
Logon Type: Normal (so the system will not ask for the password each time)
Provide your user name and password.
Connect.
You need to do these steps only once; later it will upload content to the same IP address and site.

DHCP overwrites Cisco VPN resolv.conf on Linux

I'm using an Ubuntu 8.04 (x86_64) machine to connect to my employer's Cisco VPN. (The client didn't compile out of the box, but I found patches to update the client to compile on kernels released in the last two years.) This all works great, until my DHCP client decides to renew its lease and updates /etc/resolv.conf, replacing the VPN-specific name servers with my general network servers.
Is there a good way to prevent my DHCP client from updating /etc/resolv.conf while my VPN is active?
If you are running without NetworkManager handling the connections, use the resolvconf package to act as an intermediary to programs tweaking /etc/resolv.conf: sudo apt-get install resolvconf
If you are using NetworkManager it will handle this for you, so get rid of the resolvconf package: sudo apt-get remove resolvconf
I found out about this when setting up vpnc on Ubuntu last week. A search for vpn resolv.conf on ubuntuforums.org has 250 results, many of which are very related!
If you are using the Ubuntu default with NetworkManager, try removing the CiscoVPN client and use the NetworkManager vpnc plugin to connect to the Cisco VPN. This should avoid all problems, since NetworkManager then knows about your VPN connection.
I would advise following the advice from @Sean, but if that fails for whatever reason, it should be possible to configure dhclient in /etc/dhcp3/dhclient.conf not to request DNS servers.
chattr +i /etc/resolv.conf should work (chattr -i to undo).
But the better thing is to configure your dhclient.conf:
https://calomel.org/dhclient.html
Look at superseding domain-name-servers and domain-name.
Also look at "send host-name;".
If it works at your workplace, you will have a cool hostname for your PC and not some weird name that DHCP servers assign.
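For illustration, the relevant dhclient.conf directives look roughly like this (the addresses and hostname are placeholders):
supersede domain-name-servers 10.10.10.10, 10.10.10.11;
supersede domain-name "mycompany.com";
send host-name "my-laptop";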
vpnc seems to be doing the right thing for my employer's Cisco concentrator. I jump on and off the VPN, and it seems to update everything smoothly.
The DHCP client daemon can be told not to update resolv.conf with a command-line switch (-r, I think, depending on the client).
That's less dynamic, because you'd have to restart/reconfigure DHCP when you connect, but not too hard. Similarly, you could just stop the service, but you might lose your IP in the meantime, so I wouldn't really recommend that.
Alternatively, you could run the dhcpclient from within a cron job, adding the appropriate process checks.
This problem is much more noticeable on networks with low DHCP lease ages. There is a bug filed in Ubuntu's dhcp3 package launchpad:
https://bugs.launchpad.net/ubuntu/+source/dhcp3/+bug/90681
Which includes this patch in the description:
--- /sbin/dhclient-script.orig 2007-03-08 19:19:56.000000000 +0000
+++ /sbin/dhclient-script 2007-03-08 19:19:46.000000000 +0000
@@ -13,6 +13,10 @@
# The alias handling in here probably still sucks. -mdz
make_resolv_conf() {
+ # don't overwrite resolv.conf at RENEW time, since a VPN/PPTP tunnel may
+ # have updated it with remote DNS servers
+ [ "$reason" = "RENEW" ] && return
+
if [ -n "$new_domain_name" -o -n "$new_domain_name_servers" ]; then
# Find out whether we are going to mount / rw
exec 9>&0 </etc/fstab
This change to /sbin/dhclient-script stops the DHCP client from overwriting /etc/resolv.conf when it renews its lease.
