I'm facing a pretty strange issue:
Here is my config:
Docker 17-CE
Ubuntu 16.04
I work from two different places with different internet providers.
At the first place everything works just fine: I can run Docker out of the box and access the internet without any problems.
But at the second place I cannot access the internet while Docker is running, more precisely while the two virtual bridges created by Docker are up.
At this place the internet connection behaves very strangely: I can ping Google's DNS at 8.8.8.8, but nearly all DNS requests fail, and most of the time the connection goes down completely after a few seconds.
(The only difference between the first and the second place is the internet provider.)
At first I thought I could fix this by changing the default network bridge IP, but that did not solve the problem at all.
The point is that the --bip option of the Docker daemon changes the IP of the default Docker bridge docker0, but Docker also creates another bridge called br-1a0208f108d9 which does not reflect the settings passed to --bip.
I guess this second bridge is causing trouble on my network because it overlaps my wifi adapter's configuration.
I'm having a hard time diagnosing this.
My questions are:
How can I be sure that my assumptions are right and that this second bridge conflicts with my wifi adapter?
What is this second bridge? It's easy to find documentation about the docker0 bridge, but I cannot find anything about this second bridge br-1a0208f108d9.
How can the exact same setup work in one place but not the other?
With this problem I feel like I'm close to leveling up my Docker knowledge, but before that I have to improve my network administration knowledge.
Hope you can help.
I managed to solve this issue after reading this:
https://success.docker.com/Architecture/Docker_Reference_Architecture%3A_Designing_Scalable%2C_Portable_Docker_Container_Networks
The second Docker bridge br-1a0208f108d9 was created by Docker because I was using a docker-compose file which involves the creation of another custom network.
This network was using a fixed IP range:
networks:
  my_network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.16.0.0/16
          gateway: 172.16.0.1
At my home, the physical wifi network adapter was automatically assigned the address 192.168.0.x via DHCP.
But at the other place the same wifi adapter got the address 172.16.0.x, which collides with the custom Docker network.
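An overlap like this is easy to confirm from the routing table; a quick sketch (wlan0 is a placeholder for your wifi interface name, the bridge name is the one from above):

ip addr show wlan0    # wifi adapter's address and subnet, e.g. 172.16.0.x
ip route              # the same 172.16.0.0/16 prefix shows up on both br-1a0208f108d9 and wlan0
brctl show            # lists docker0 plus the br-* bridges Docker created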
The solution was simply to change the IP range of the custom Docker network.
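For example, moving the compose network to a range that is free at both locations (the subnet below is an arbitrary choice; pick anything that does not clash with your LANs):

networks:
  my_network:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.240.0/24   # arbitrary unused range
          gateway: 192.168.240.1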
You have to tell Docker to use a different subnet. Edit /etc/docker/daemon.json and use something like this:
{
  "bip": "198.18.251.1/24",
  "default-address-pools": [
    {
      "base": "198.18.252.0/22",
      "size": 26
    }
  ]
}
Information is a bit hard to come by, but it looks like the bip option controls the IP and subnet assigned to the docker0 interface, while default-address-pools controls the addresses used for the br-* interfaces. You can omit bip, in which case Docker will grab an allocation from the pool; bip doesn't have to reside in the pool, as shown above.
The size is how big a subnet to allocate to each Docker network. For example, if your base is a /24 and you also set size to 24, then you'll be able to create exactly one Docker network. If you try to create another (for example by bringing up a second compose project) you'll get the message could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network, which means you've run out of IP addresses in the pool.
In the above example I have allocated a /22 (1024 addresses) with each network taking a /26 (64 addresses) from that pool. 1024 ÷ 64 = 16, so you can run up to 16 Docker networks with this config; each network can then hold as many containers as its /26 allows. Since I rarely have more than two or three running containers at any one time this is fine for me.
In my example I'm using part of the 198.18.0.0/15 subnet as listed in RFC 3330 (but fully documented in RFC 2544) which is reserved for performance testing. It is unlikely that these addresses will appear on the real Internet, and no professional network provider will use these subnets in their private network either, so in my opinion they are a good choice for use with Docker as conflicts are very unlikely. But technically this is a misuse of this IP range so just be aware of potential future conflicts if you also choose to use these subnets.
The defaults listed in the documentation are:
{
  "bip": "",
  "default-address-pools": [
    { "base": "172.80.0.0/16", "size": 24 },
    { "base": "172.90.0.0/16", "size": 24 }
  ]
}
As mentioned above, the default empty bip means it will just grab an allocation from the pool, like any other network/container will.
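Note that changes to /etc/docker/daemon.json only take effect after the daemon is restarted; on a systemd-based distribution that would look like:

sudo systemctl restart docker
docker network inspect bridge   # check that docker0 now uses the configured bip/subnet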
In my case I would not apply Clement's solution, because I have the network conflict only on my dev PC, while the container is delivered to many servers which are not affected.
In my opinion this problem should be resolved as suggested here.
I tried this workaround:
I stopped the containers with "docker-compose down", which destroys the bridge.
I started the containers while on the "bad" network, so compose picked another subnet.
Since then, if I restart the containers on any network they don't try to use the "bad" subnet; they normally keep the last one used.
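In terms of commands the workaround is just this, with the second command run while connected to the "bad" network:

docker-compose down     # removes the containers and the br-* bridge
docker-compose up -d    # compose allocates a fresh, non-conflicting subnet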
I currently have a Cisco switch (CBS-350) which is autoconfigured with isc-dhcp-server and atftpd on Ubuntu 18.04.
This is the dhcpd config:
subnet 192.168.0.0 netmask 255.255.255.0 {
  option routers 192.168.0.1;
  range 192.168.0.100 192.168.0.120;
  option tftp-server-name "192.168.0.1";
  option bootfile-name "config/cisco-switch1.cfg";
}
I would like to have another, different switch model on the same subnet that should also be autoconfigured. Any idea how I can achieve this?
The two switches will not be running at the same time; I just would like each switch to pick up the right autoconfig file.
Thanks for your help
Typically you would autoconfigure just enough of a device configlet to make it manageable: VLANs, SNMP, username/password, routing, etc. You may not need to push a full config. The 'template' of this baseline config could then be used for multiple devices, especially if DHCP-allocated IPs are an option.
Then you would use some second-stage configuration management system to push the remaining, device-specific parts.
Otherwise you have to start using distinguishing identifiers (e.g. MAC address or serial number) and use those to map a device to a waiting config on the file server.
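For the MAC-address route, isc-dhcp-server host declarations can hand each switch its own bootfile; a sketch with placeholder MAC addresses (substitute each switch's real MAC):

host cisco-switch1 {
  hardware ethernet 00:11:22:33:44:55;   # placeholder: switch 1's MAC
  option bootfile-name "config/cisco-switch1.cfg";
}
host cisco-switch2 {
  hardware ethernet 66:77:88:99:aa:bb;   # placeholder: switch 2's MAC
  option bootfile-name "config/cisco-switch2.cfg";
}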
I've found that using a SubClass might be an option:
subclass "Vendor-Class" "CiscoPnP" {
option bootfile-name "config/cisco-switch1.cfg";
}
Unfortunately, both switches have the same Vendor-Class (CiscoPnP), so I cannot use this to distinguish them.
At the moment an nginx balancer (CentOS 7, a virtual machine with a public IP address) proxies to a large number of backend Apache servers. It is necessary to implement a failover cluster of two nginx balancers. Fault tolerance is trivially implemented using a virtual IP address (keepalived is used). Can you suggest what to read about running nginx balancers as a pair, or how it can be implemented: all requests coming to a single virtual IP address are evenly distributed between the two of them, but if one of them fails, the remaining one takes everything on itself?
At the moment there are two identical balancers, and the benefit of the second one is only insurance: while the main one (master) is fully working, the second (backup) sits uselessly idle.
What you are describing is active-active HA. You can find something on Google for nginx+, but from a brief look I don't really see it as true active/active, since there is not just one virtual (floating) IP. Instead, active/active is achieved by using two floating IPs (two VRRP groups, with one VIP active on each nginx) and then using a round-robin DNS A record containing both addresses.
As far as I know keepalived uses the VRRP protocol, which in some implementations can provide 'true' active/active on a single VIP; anyway, I'm not sure keepalived supports this. Based on the information I'm able to look up, it's not possible.
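For reference, the two-VIP layout described above would look roughly like this in keepalived.conf on the first balancer (the interface name, router IDs and the 203.0.113.x documentation addresses are placeholders); the second balancer carries the mirror image, MASTER for VI_2 and BACKUP for VI_1:

vrrp_instance VI_1 {
    state MASTER            # this node normally owns VIP 1
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    virtual_ipaddress {
        203.0.113.10
    }
}

vrrp_instance VI_2 {
    state BACKUP            # VIP 2 normally lives on the other node
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    virtual_ipaddress {
        203.0.113.11
    }
}

A DNS A record pointing at both 203.0.113.10 and 203.0.113.11 then spreads clients across the two balancers; if one node dies, its VIP fails over to the survivor, which takes all the traffic.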
Is it possible to ping a mininet IP? I found that mininet's IP is 10.0.2.15. I can ping from mininet to other hosts; however, I fail to ping mininet from anywhere else. How can I set this up?
10.0.0.0/8, which is 10.0.0.0 - 10.255.255.255, is a range of IP addresses used only locally; they cannot be reached from the internet (other networks). Here is some info from IANA:
These addresses are in use by many millions of independently operated networks, which might be as small as a single computer connected to a home gateway, and are automatically configured in hundreds of millions of devices. They are only intended for use within a private context and traffic that needs to cross the Internet will need to use a different, unique address.
These addresses can be used by anyone without any need to coordinate with IANA or an Internet registry. The traffic from these addresses does not come from ICANN or IANA. We are not the source of activity you may see on logs or in e-mail records.
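So to reach the mininet host from outside, traffic has to be forwarded through whatever machine owns a routable address. A minimal sketch using iptables DNAT on that gateway (eth0 and port 8080 are assumptions; 10.0.2.15 is the mininet address from the question):

sysctl -w net.ipv4.ip_forward=1   # allow the gateway to forward packets
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to-destination 10.0.2.15:8080
iptables -A FORWARD -p tcp -d 10.0.2.15 --dport 8080 -j ACCEPT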
I'm trying to set up both Confluence and PostgreSQL in Docker. I've got them both up and running on my fully up to date CentOS 6 machine, with volume-mapping to the host file system so I can back them up easily. I can connect to PostgreSQL using pgAdmin from another machine just fine, and I can get into Confluence from a browser from that same machine. So, basically, both apps seem to be running as expected inside their respective containers and are accessible to the outside world, which of course eliminates a whole bunch of possibilities for my issue.
And that issue is that Confluence can't talk to PostgreSQL during initial setup, which is necessary for it to function. I'm getting connection failed errors (to be specific: "Can't reach database server or port : SQLState - 08001 org.postgresql.util.PSQLException: The connection attempt failed").
PostgreSQL is using the default 5432 port, which of course is exposed, otherwise I wouldn't be able to connect to it via pgAdmin, and of course I know the ID/password I'm trying is correct for the same reason (and besides, if it was an auth problem I wouldn't expect to see this error message). When I try to configure the database connection during Confluence's initial setup, I specify the IP address of the host machine, just like from pgAdmin on the other machine, but that doesn't work. I also tried some things that I basically knew wouldn't work (0.0.0.0, 127.0.0.1 and localhost).
I'm not sure what I need to do to make this work. Is there maybe some special method to specify the IP to a container from the same host machine, some nomenclature I'm not aware of?
At this point, I'm "okay" with Docker in terms of basic operations, but I'm far from an expert, so I'm a bit lost. I'm also not a big-time *nix user generally, though I can usually fumble my way through most things... but any hints would be greatly appreciated because I'm at a loss right now otherwise.
Thanks,
Frank
EDIT 1: As requested by someone below, here's my pg_hba.conf file, minus comments:
local   all           all                    trust
host    all           all    127.0.0.1/32    trust
host    all           all    ::1/128         trust
local   replication   all                    trust
host    replication   all    127.0.0.1/32    trust
host    replication   all    ::1/128         trust
host    all           all    all             md5
Try changing the second line of the pg_hba.conf file to the following:
host all all 0.0.0.0/0 trust
This will cause PostgreSQL to start accepting connections from any source address (note the /0 mask; 0.0.0.0/32 would match no real source). Since a Docker container technically does not operate on localhost but on its own IP, the current configuration makes PostgreSQL reject connections coming from it.
Also check whether Confluence is searching for the database on localhost. If that is the case, change it to the IP of the host machine within the Docker network.
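A sketch of how to find that address (on the default network, the docker0 gateway, typically 172.17.0.1, is the host as seen from containers):

ip addr show docker0            # host's address on the default docker bridge
docker network inspect bridge   # same information, from Docker's point of view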
Success! The solution was to create a custom network and then use the container name in the connection string from the Confluence container to the PostgreSQL container. In other words, I ran this:
docker network create -d bridge docker-net
Then, on both of the docker run commands for the PostgreSQL and Confluence containers, I added:
--network=docker-net
That way, when I ran through the Confluence configuration wizard and it asked for the hostname of the PostgreSQL server, I used postgres (the name I gave the container) rather than an IP address or actual hostname. Docker makes that work through the DNS it provides on custom networks. This also leaves the containers available via the IP of the host machine, so for example I can still connect to PostgreSQL via 192.168.123.12:5432, and of course I can launch Confluence in the browser via 192.168.123.12:8080.
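The two run commands looked roughly like this (image names, password and published ports are placeholders; adjust them to your own images and setup):

docker run -d --name postgres --network=docker-net \
  -e POSTGRES_PASSWORD=changeme -p 5432:5432 postgres
docker run -d --name confluence --network=docker-net \
  -p 8080:8090 atlassian/confluence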
FYI, I didn't even have to alter the pg_hba.conf file; I just used the official PostgreSQL image (latest) as it was, which is ideal.
Thanks very much to RSloeserwij for the suggestions... while none of them proved to be the solution I needed, they did put me on the right track in the Docker docs, which, after some reading, led me to understand a few things I didn't before and figure out the config magic I needed.
I understand that:
a. one can maintain multiple routing tables in Linux using "ip route ... table <table>"
b. the forwarding decision for packets that ingress from an outside network can be made using "ip rule add iif <dev> table <table>"
However, if I want a user app to talk to the outside world using a specific routing table, I don't see an easy way out except to use "ip netns".
Is there a way to tell the system to do the route lookup using a specific routing table?
My understanding is that "ip rules" apply only after a packet has been generated, but user apps consult the routing table even before the packet is generated, so that the ARP for the gateway can be sent.
This is a somewhat complicated matter. You should familiarize yourself with SELinux labels and containers.
The Docker documentation for Red Hat states:
By default, Docker creates a virtual ethernet card for each container. Each container has its own routing tables and iptables. In addition to this, when you ask for specific ports to be forwarded, Docker creates certain host iptables rules for you. The Docker daemon itself does some of the proxying. The takeaway here is that if you map applications to containers, you provide flexibility to yourself by limiting network access on a per-application basis.
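The "ip netns" approach mentioned in the question does by hand exactly what those containers do: the app is started inside a namespace that has its own interfaces and its own routing table. A minimal sketch (interface names and addresses are arbitrary, and some-user-app stands in for your application):

ip netns add appns
ip link add veth0 type veth peer name veth1
ip link set veth1 netns appns
ip addr add 10.200.0.1/24 dev veth0
ip link set veth0 up
ip netns exec appns ip addr add 10.200.0.2/24 dev veth1
ip netns exec appns ip link set veth1 up
ip netns exec appns ip link set lo up
ip netns exec appns ip route add default via 10.200.0.1
ip netns exec appns some-user-app   # route lookups now use the namespace's own table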