TFTP config for autoconfiguration of 2 different switches - Cisco

I currently have a Cisco switch (CBS-350) which is autoconfigured with isc-dhcp-server and atftpd on Ubuntu 18.04.
This is the dhcpd config:
subnet 192.168.0.0 netmask 255.255.255.0 {
    option routers 192.168.0.1;
    range 192.168.0.100 192.168.0.120;
    option tftp-server-name "192.168.0.1";
    option bootfile-name "config/cisco-switch1.cfg";
}
I would like to add a second switch of a different model on the same subnet, also autoconfigured. Any idea how I can achieve this?
The two switches will not be running at the same time. I just want each switch to pick up the correct autoconfig file.
Thanks for your help

Typically you would autoconfigure just enough of a device configlet to make it manageable: VLANs, SNMP, username/password, routing, etc. You may not need to push a full config. The 'template' of this baseline config can then be used for multiple devices, especially if DHCP-allocated IPs are an option.
THEN you would use some 2nd stage configuration management system to push the remaining, device-specific parts.
Otherwise you have to start using distinguishing identifiers (e.g. MAC address or serial number) and use them to map a device to a waiting config on the file server.
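With isc-dhcp-server, that MAC-based mapping can be done with host declarations, which override the subnet-level bootfile. A sketch with made-up MAC addresses (replace them with the real ones from each switch's label):

```
host switch1 {
    hardware ethernet 00:11:22:33:44:55;  # MAC of the CBS-350 (example value)
    option bootfile-name "config/cisco-switch1.cfg";
}
host switch2 {
    hardware ethernet 66:77:88:99:aa:bb;  # MAC of the second switch (example value)
    option bootfile-name "config/cisco-switch2.cfg";
}
```

Each host block can sit alongside the existing subnet declaration; a switch that matches neither block falls back to the subnet's default bootfile-name.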

I've found that using a SubClass might be an option:
subclass "Vendor-Class" "CiscoPnP" {
    option bootfile-name "config/cisco-switch1.cfg";
}
Unfortunately, both switches have the same Vendor-Class (CiscoPnP), so I cannot use this to distinguish them.

Related

Docker creates two bridges that corrupt my internet access

I'm facing a pretty strange issue:
Here is my config:
docker 17-ce
ubuntu 16.04.
I work from two different places with different internet providers.
At the first place everything works just fine: I can run Docker out of the box and access the internet without any problems.
But at the second place I cannot access the internet while Docker is running, more precisely while the two virtual bridges created by Docker are up.
At that place the internet connection behaves very strangely: I can ping Google's DNS at 8.8.8.8, but nearly all DNS requests fail, and most of the time the connection goes down completely after a few seconds.
(The only difference between the first and the second place is the internet provider.)
At first I thought I could fix this by changing the default network bridge IP, but that did not solve the problem at all.
The point is that the --bip option of the Docker daemon changes the IP of the default Docker bridge docker0, but Docker also creates another bridge called br-1a0208f108d9 which does not reflect the settings passed to --bip.
I guess this second bridge is causing trouble for my network because it overlaps my wifi adapter's configuration.
I'm having a hard time trying to diagnose this.
My questions are:
How can I be sure that my assumptions are right and that this second bridge is in conflict with my wifi adapter?
What is this second bridge? It's easy to find documentation about the docker0 bridge, but I cannot find anything related to this second bridge br-1a0208f108d9.
How can the exact same setup work in one place and not in the other?
With this trouble I feel like I'm pretty close to leveling up my Docker knowledge, but before that I have to improve my network administration knowledge.
Hope you can help.
I managed to solve this issue after reading this:
https://success.docker.com/Architecture/Docker_Reference_Architecture%3A_Designing_Scalable%2C_Portable_Docker_Container_Networks
The second Docker bridge br-1a0208f108d9 was created by Docker because I was using a docker-compose file which involves the creation of another custom network.
This network was using a fixed IP range:
networks:
  my_network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.16.0.0/16
          gateway: 172.16.0.1
At my home, the physical wifi network adapter was automatically assigned the address 192.168.0.x by DHCP.
But at the other place the same wifi adapter got the address 172.16.0.x,
which collided with the custom Docker network.
The solution was simply to change the IP of the custom docker network.
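In docker-compose terms that simply means moving the custom network onto a subnet that cannot collide with any LAN you work on; for example (the subnet value here is only an illustration):

```
networks:
  my_network:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.240.0/24
          gateway: 192.168.240.1
```

After changing the subnet, a `docker-compose down` followed by `docker-compose up` recreates the br-* bridge with the new range.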
You have to tell Docker to use a different subnet. Edit /etc/docker/daemon.json and use something like this:
{
    "bip": "198.18.251.1/24",
    "default-address-pools": [
        {
            "base": "198.18.252.0/22",
            "size": 26
        }
    ]
}
Information is a bit hard to come by, but it looks like the bip option controls the IP and subnet assigned to the docker0 interface, while default-address-pools controls the addresses used for the br-* interfaces. You can omit bip in which case it will grab an allocation from the pool, and bip doesn't have to reside in the pool, as shown above.
The size is how big of a subnet to allocate to each Docker network. For example if your base is a /24 and you also set size to 24, then you'll be able to create exactly one Docker network, and probably you'll only be able to run one Docker container. If you try to start another you'll get the message could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network, which means you've run out of IP addresses in the pool.
In the above example I have allocated a /22 (1024 addresses), with each network/container taking a /26 (64 addresses) from that pool. 1024 ÷ 64 = 16, so you can run up to 16 Docker networks with this config (so at most 16 containers running at the same time, or more if some of them share the same network). Since I rarely have more than two or three containers running at any one time, this is fine for me.
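The subnet arithmetic above can be sanity-checked in a shell; the exponents mirror the /22 pool and /26 size from the example:

```shell
# Number of /26 networks that fit in a /22 pool: 2^(26-22)
networks=$(( 1 << (26 - 22) ))
# Addresses in each /26 network: 2^(32-26)
per_network=$(( 1 << (32 - 26) ))
echo "$networks networks of $per_network addresses"   # prints: 16 networks of 64 addresses
```

The same two expressions work for any base/size pair, so you can check a candidate daemon.json before applying it.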
In my example I'm using part of the 198.18.0.0/15 subnet as listed in RFC 3330 (but fully documented in RFC 2544) which is reserved for performance testing. It is unlikely that these addresses will appear on the real Internet, and no professional network provider will use these subnets in their private network either, so in my opinion they are a good choice for use with Docker as conflicts are very unlikely. But technically this is a misuse of this IP range so just be aware of potential future conflicts if you also choose to use these subnets.
The defaults listed in the documentation are:
{
    "bip": "",
    "default-address-pools": [
        {"base": "172.80.0.0/16", "size": 24},
        {"base": "172.90.0.0/16", "size": 24}
    ]
}
As mentioned above, the default empty bip means it will just grab an allocation from the pool, like any other network/container will.
In my case I would not apply Clement's solution, because I have the network conflict only on my dev PC, while the container is deployed to many servers which are not affected.
This problem, in my opinion, should be resolved as suggested here.
I tried this workaround:
I stopped the container with "docker-compose down", which destroys the bridge
Started the container while on the "bad" network, so the container uses another subnet
Since then, if I restart the container on any network, it doesn't try to use the "bad" one; it normally gets the last one used.

Using netcat for an external loop-back test between two ports

I am writing a test script to exercise processor boards for a burn-in cycle during manufacturing. I would like to use netcat to transfer files from one process, out one Ethernet port and back into another Ethernet port to a receiving process. It looks like netcat would be an easy tool to use for this.
The problem is that if I set up the Ethernet ports with IP addresses on separate IP subnets and attempt to transfer data from one to the other, the kernel's protocol stack detects an internal route and, although the data transfer completes as expected, it does NOT go out over the wire. The packets are routed internally.
That's great for network optimization but it foils the test I want to do.
Is there easy way to make this work? Is there a trick with iptables that would work? Or maybe things you can do to the route table?
I use network name spaces to do this sort of thing. With each of the adapters in a different namespace the data traffic definitely goes through the wire instead of reflecting in the network stack. The separate namespaces also prevent reverse packet filters and such from getting in the way.
So presume eth0 and eth1, with iperf3 as the reflecting agent (ping server or whatever). [DISCLAIMER: text from memory, all typos are typos, YMMV]
ip netns add target
ip link set dev eth1 netns target
ip netns exec target ip address add dev eth1 xxx.xxx.xxx.xxx/y
ip netns exec target ip link set dev eth1 up
ip netns exec target iperf3 --server
So now you've created the namespace "target", moved one of your adapters into that namespace, set its IP address, and finally run your application in that target namespace.
You can now run any (compatible) program in the native namespace, and if it references the xxx.xxx.xxx.xxx IP address (which clearly must be reachable via some route) it will generate on-wire traffic that, with a proper loop-back path, will find the adapter within the other namespace as if it were a different computer altogether.
Once finished, you kill the daemon server and delete the namespace by name and then the namespace members revert and you are back to vanilla.
killall iperf3
ip netns delete target
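The same namespace trick works with netcat itself, which is what the question asked for. A sketch assuming eth1 gets the made-up address 10.0.0.2/24 and eth0 (left in the default namespace, on the same on-wire subnet) is cabled back to it; OpenBSD netcat syntax is used here, traditional netcat wants `-l -p 9000` instead:

```
# Receiver side: move eth1 into its own namespace and listen there (run as root)
ip netns add target
ip link set dev eth1 netns target
ip netns exec target ip address add 10.0.0.2/24 dev eth1
ip netns exec target ip link set dev eth1 up
ip netns exec target nc -l 10.0.0.2 9000 > received.bin &

# Sender side: traffic to 10.0.0.2 must now leave via the wire on eth0
nc 10.0.0.2 9000 < testfile.bin
```

Comparing checksums of testfile.bin and received.bin afterwards verifies the transfer went through the loop-back path.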
This also works with "virtual functions" of a single interface, but that example requires teasing out one or more virtual functions (e.g. on SR-IOV type adapters) and handing out local MAC addresses. I haven't done that enough to have a sample code tidbit ready.
Internal routing is preferred because, with the default routing behaviour, all the internal routes are marked as scope link in the local table. Check this out with:
ip rule show
ip route show table local
If your kernel supports multiple routing tables you can simply alter the local table to achieve your goal. You don't need iptables.
Let's say 192.168.1.1 is your target ip address and eth0 is the interface where you want to send your packets out to the wire.
ip route add 192.168.1.1/32 dev eth0 table local

Configuring multiple FastEthernet interfaces using GNS3

I'm doing a lab for my internetworking course and I'm using GNS3 as the emulator. I can configure single FastEthernet interfaces on each router but I need to have two per router. I am using the c7200 image and router.
This is my attempt to configure the router.
Router>enable
Router#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Router(config)#interface f0/0
Router(config-if)#ip address 172.16.1.1 255.255.255.192
Router(config-if)#no shutdown
Router(config-if)#exit
Router(config)#interface f0/1
^
% Invalid input detected at '^' marker.
Router(config)#
Every time I try to do the f0/1 interface I get the invalid input detected message.
WHY?!!?
Any help would be appreciated, thanks :-)
It seems the interface FastEthernet0/1 does not exist.
Use the command Router#show ip interface brief to verify that the interface exists.
If it doesn't exist, you can add additional interfaces in GNS3 through the router's configure menu (Slots tab).
I had the same problem.
Right-click on the router, click on Configure, go to Slots, click on slot 0, and choose the second one (2FE).
Apply and OK; now you have 2 FastEthernet interfaces.

bond on software-bridge connection issue

What you have:
bond (bond0) interface (all modes except 4) with at least 2 ifaces (say eth0 / eth1) connected on the same external switch
bond0 interface joined on a software bridge (br0)
virtual machine (vm0) (eg LibVirt::LXC) with an interface on br0
What you get:
vm0 is not able to connect to (most) IP addresses via bond0 over br0
"bond0: received packet with own address as source address" in syslog
Why you get this:
When vm0 wants to contact an external IP address, it will send out an ARP request. This L2 broadcast with the source MAC of vm0 will leave through (depending on bonding mode) e.g. eth0 but, via the external switch, re-enter through eth1 and thus bond0. Hence the bridge br0 will learn the MAC address of vm0 on the port connected to bond0. As a consequence, the ARP reply is never received by vm0.
What can you do to resolve:
The reason I post this, next to sharing the info, is that I wasn't able to figure out a good enough solution. Those I did find are:
On vm0 set static ARP entry
Use bond0 mode=4 but your external switch must support this
Configure your external switch to use private VLAN on eth0/eth1, but this only works in some use-cases and adds complexity
Add both physical interfaces to the bridge with spanning tree enabled, instead of using bond driver
Statically configuring the MAC of vm0 on the correct port of br0 is not an option on Linux (works on OpenBSD though)
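For reference, the static ARP entry from the first option can be set inside vm0 like this (gateway IP and MAC are placeholders for your own values):

```
# Pin the gateway's MAC so vm0 never depends on the ARP reply that gets lost
ip neigh replace 192.168.1.254 lladdr aa:bb:cc:dd:ee:ff dev eth0 nud permanent
```

The entry survives until changed or deleted, but it has to be updated by hand if the gateway's MAC ever changes, which is why it scales poorly.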
I'm really hoping for a more elegant solution here... Anyone?
Thanks
I've got the same problem and I came up with the same analysis.
The only non-invasive/scalable solution I've found is to use active/backup bonding (mode 1). The tradeoff is that you lose the aggregation.
IMO the best solution is to use 802.3ad, but I can't always use it because I'm limited to 6 port-channels on most of my switches.
Try these options in bridge:
bridge_fd 0
bridge_stp off # switch on if more systems are bridged like this
bridge_maxage 0
bridge_ageing 0
bridge_maxwait 0
Taken from this thread:
kvm bridge also in proxmox

gsoap client multiple ethernets

I have a Linux system with two Ethernet cards, eth0 and eth1. I am creating a client that sends
to endpoint 1.2.3.4.
I send my web-service request with the soap_call_ functions. How can I select eth1 instead of eth0?
the code is like that
soap_call_ns__add(&soap, server, "", a, b, &result);
How can I set eth0 or eth1 inside the &soap variable?
(gsoap does not have a bind for clients, like soap_bind for servers)
You want outgoing packets from your host to take a specific route (in this case a specific NIC)? If that's the case, then you have to adjust the kernel's routing tables.
Shorewall has excellent documentation on that kind of setup. You'll find info there about how to direct certain traffic through a particular network interface.
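A minimal sketch of that routing-table approach, assuming eth1 carries the made-up address 10.0.1.5 and the endpoint is the 1.2.3.4 from the question:

```
# Send everything destined for the endpoint out of eth1, sourced from its address
ip route add 1.2.3.4/32 dev eth1 src 10.0.1.5
```

This keeps the application code unchanged; the kernel picks eth1 for that destination regardless of which interface the default route uses.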
For gsoap you need to manually bind(2) before connect(2) in tcp_connect.
