I've been looking into LXC containers and I was wondering whether it is possible to use an LXC container like an ordinary VPS.
What I mean is:
How do I assign an external IP address to an LXC container?
How do I ssh into an LXC container directly?
I'm quite new to LXC containers, so please let me know if there are any other differences I should be aware of.
lxc-create -t download -n cn_name   # create a container from the "download" template
lxc-start -n cn_name -d             # start it in the background
lxc-attach -n cn_name               # get a shell inside it
Then, inside the container cn_name, install the OpenSSH server so you can use ssh, and either reboot the container or restart the ssh service.
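As a minimal sketch (assuming the downloaded template is Debian/Ubuntu-based; adjust the package manager and service name for other distributions), the in-container steps look something like this:
# inside the container, after lxc-attach -n cn_name
apt-get update
apt-get install -y openssh-server
# restart the service instead of rebooting the whole container
service ssh restart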
To make any container services available to the world, configure port forwarding from the host to the container.
For instance, if you had a web server in a container, you could forward port 80 from the host IP 192.168.1.1 to a container with IP 10.0.3.1 using the iptables rule below.
iptables -t nat -I PREROUTING -i eth0 -p TCP -d 192.168.1.1/32 --dport 80 -j DNAT --to-destination 10.0.3.1:80
Now the web server on port 80 of the container will be available via port 80 of the host OS.
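If you want to confirm the rule is in place (and see whether it is actually matching traffic), listing the nat table with counters is usually enough; these are standard iptables options:
# show the PREROUTING chain of the nat table with packet/byte counters
iptables -t nat -L PREROUTING -n -v --line-numbers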
It sounds like what you want is to bridge the host NIC to the container. In that case, the first thing you need to do is create a bridge. Do this by first ensuring bridge-utils is installed on the system, then open /etc/network/interfaces for editing and change this:
auto eth0
iface eth0 inet dhcp
to this:
auto br0
iface br0 inet dhcp
    bridge-interfaces eth0
    bridge-ports eth0
    up ifconfig eth0 up

iface eth0 inet manual
If your NIC is not named eth0, you should replace eth0 with whatever your NIC is named (mine is named enp5s0). Once you've made the change, you can start the bridge by issuing the command
sudo ifup br0
Assuming all went well, you should keep internet access, and even your SSH session should stay up during the process. I recommend having physical access to the host, because getting the steps above wrong can cut the host off from the network. You can verify the setup is correct by running ifconfig and checking that br0 has an assigned IP address while eth0 does not.
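For example, assuming bridge-utils and iproute2 are installed, the two checks can be done like this:
# br0 should hold the IP address and eth0 should be listed as its port
ip addr show br0
brctl show br0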
Once that's all set up, open up /etc/lxc/default.conf and change
lxc.network.link = lxcbr0
to
lxc.network.link = br0
And that's it. Any containers that you launch will automatically bridge to eth0 and will effectively exist on the same LAN as the host. At this point, you can install ssh if it isn't already installed and ssh into the container using its newly assigned IP address.
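A quick way to see which LAN address each container picked up (so you know what to ssh to) is lxc-ls in fancy mode; the user name and address below are purely illustrative:
# list containers with their state and IPv4 address
lxc-ls -f
# then ssh straight to the container's LAN address (illustrative values)
ssh ubuntu@192.168.1.50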
"Converting eth0 to br0 and getting all your LXC or LXD onto your LAN"
I use Proxmox and I need to set up port forwarding for virtual machines and containers. For virtual machines I use:
qm set 100 -args "--redir tcp:1000::1001"
This command works well for port forwarding on VMs, but it doesn't work for containers. The error when I use it for a container is:
Configuration file '100.conf' does not exist.
How can I set up port forwarding for containers in Proxmox?
The qm command in Proxmox is used for QEMU virtual machines (KVM), not for LXC containers. It's normal for it not to work for LXC: when executed, it tries to find a KVM virtual machine configuration for that ID, and since that ID belongs to an LXC container rather than a KVM machine, there is no such configuration file.
In order to map ports to an LXC container you'll have to use iptables (as far as I know there is no qm-style redirect option for LXC). Log in to your Proxmox server via SSH and become root; the syntax for port forwarding looks like this:
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport PORT -j DNAT --to-destination LXC-CONTAINER-IP:PORT
For example, if you want to map port 9999 on the host to port 9999 of your LXC container (assuming the container has IP 1.1.1.1 for the sake of the example), your iptables rule is:
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 9999 -j DNAT --to-destination 1.1.1.1:9999
Please keep in mind that your default Ethernet device might not be eth0 but vmbr0 or something else, so replace eth0 with the corresponding device.
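If you're not sure which device or container IP to use, recent Proxmox versions ship tools that can show both; pct is Proxmox's LXC management command (it has no redir-style option, hence the iptables rule above), and VMID 100 below is just the ID from the question:
# list host interfaces; on Proxmox the outward-facing device is typically the vmbr0 bridge
ip -br addr
# show the container's configuration, including the IP on its net0 interface
pct config 100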
I am trying to implement a wireless access point on my embedded platform. I have implemented some parts, such as running the wireless card as an access point, a DHCP server, and some forwarding rules (via iptables).
I have tried several iptables rule sets; the results are all the same. The last one I decided to use is:
# flush the nat and filter tables
iptables -t nat -F
iptables -F
# masquerade traffic leaving via the wired interface
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# allow forwarding from eth1 to eth0
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
# enable IP forwarding
echo '1' > /proc/sys/net/ipv4/ip_forward
The access point runs successfully; clients can connect to it and get an IP address. However, there is a DNS problem: clients cannot resolve hostnames, but they can connect via IP addresses.
The DHCP configuration is below:
interface wlan0
start 192.168.7.11
end 192.168.7.20
max_leases 10
option subnet 255.255.255.0
option router 192.168.7.1
#option dns 192.168.7.2 192.168.7.4
option domain local
option lease 864000
lease_file /conf/udhcpd.leases
#pidfile /tmp/udhcpd.pid
With this configuration, if I use 'option dns 8.8.8.8 8.8.4.4' the problem is solved, but I need to use the DNS of the network. Is there any way to forward the DNS address 192.168.7.2 to the DNS address of the wired network (e.g. 192.168.0.2)?
I could not find a way to do the DNS redirection (e.g. 192.168.7.2 to 192.168.0.2), but I did find a way to hand the embedded platform's own DNS addresses to the clients.
In the DHCP server configuration, I used this option:
option dns 192.168.0.2 192.168.0.4 (the conf file is generated when the access point is started, so the DNS addresses are obtained from the system)
After the DHCP server is running, I run these commands so the DNS traffic is forwarded:
iptables -A FORWARD --in-interface eth1 -p tcp --sport 53 -j ACCEPT
iptables -A FORWARD --in-interface eth1 -p udp --sport 53 -j ACCEPT
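An alternative sketch that is closer to the original idea (keeping 192.168.7.2 as the advertised DNS server and redirecting it to the wired network's resolver) would be a DNAT rule on port 53; this assumes the wireless clients come in on wlan0 and 192.168.0.2 is the upstream DNS server:
# rewrite DNS queries from wireless clients so they go to the upstream resolver
iptables -t nat -A PREROUTING -i wlan0 -p udp --dport 53 -j DNAT --to-destination 192.168.0.2
iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 53 -j DNAT --to-destination 192.168.0.2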
I am trying to configure networking on a QEMU Malta MIPS guest, which is running on a VMware host (Ubuntu), using a tap/tun device and a bridge interface. My QEMU guest is unable to retrieve an IP address from the DHCP server. If I assign one manually, it can only connect to its host. Using tcpdump I came to know that outgoing traffic works perfectly but incoming traffic does not.
Can anyone suggest how to solve this kind of issue?
Thank you.
If you use NAT mode, then your host machine will act as a router for your guest VM. This means you must enable routing on your host.
Assuming you start QEMU and link it to the tap0 interface, and your outgoing internet interface is eth0, then you should:
Create the tap0 virtual interface:
tunctl -t tap0
ifconfig tap0 192.168.0.1 netmask 255.255.255.0 up
Activate routing
# activate ip forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
# Create forwarding rules, where
# tap0 - virtual interface
# eth0 - net connected interface
iptables -A FORWARD -i tap0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o tap0 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
Start your VM with something like this:
qemu [..] -net nic,model=e1000,vlan=0 -net tap,ifname=tap0,vlan=0,script=no
In your VM, configure an interface with IP 192.168.0.2/24 and default gateway 192.168.0.1.
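Inside the guest, assuming the interface shows up as eth0 and iproute2 is available, that configuration might look like this (the 8.8.8.8 resolver is only an example):
# address on the guest side of the tap link
ip addr add 192.168.0.2/24 dev eth0
ip link set eth0 up
# route everything through the host end of tap0
ip route add default via 192.168.0.1
# a resolver is still needed for name lookups
echo "nameserver 8.8.8.8" > /etc/resolv.conf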
In NAT mode you can't achieve this. You need to configure the VM in bridge mode. I hope you know the steps to configure it; if not, see the link here.
Step #2 of catalin.me's answer could be even simpler:
# activate ip forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
The first two iptables rules are needed only if the default policy for the FORWARD chain is DROP.
E.g.:
iptables -P FORWARD DROP
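You can check which case applies on your host by looking at the chain's policy line:
# the -P line shows the FORWARD chain's default policy (ACCEPT or DROP)
iptables -S FORWARD | head -n 1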
Does anybody have any experience running Meteor on multihomed servers? We're bringing an app into production and have some servers with two network cards each. One interface, eth0, connects to our internal network with our Mongo cluster, and the other interface, eth1, connects to our DMZ. We're well past development and are in a post-bundle workflow. So, it's a question of running the following command only on eth1:
MONGO_URL='mongodb://mongodb:27017/?replicaSet=meteor' PORT='80' ROOT_URL='http://app.domain.org' node main.js
I don't know enough about node to know exactly how to specify a single interface. Is this specified with an environment variable? In our /etc/network/interfaces file? iptables? Something else?
I'm finding resources like the following on the web, but I'm not sure if I'm on the right track with them. Does getting a node.js server running on a specific interface require this kind of fussing? Is there something easier?
https://gist.github.com/logicalparadox/2142595
how to set node.js as a service on a private server?[can't access the node application]
Any help would be much appreciated! Thanks!
Abigail
Meteor will listen on 0.0.0.0 (all interfaces) unless you specify the environment variable BIND_IP.
Explicitly: the value of BIND_IP is passed as the hostname parameter to http://nodejs.org/api/http.html#http_server_listen_port_hostname_backlog_callback
Source: https://github.com/meteor/meteor/blob/master/packages/webapp/webapp_server.js#L541
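In practice that means prefixing the start command from the question with BIND_IP set to the DMZ-facing address (aaa.bbb.ccc.fff here stands for the eth1 address, as in the question):
BIND_IP='aaa.bbb.ccc.fff' MONGO_URL='mongodb://mongodb:27017/?replicaSet=meteor' PORT='80' ROOT_URL='http://app.domain.org' node main.js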
Okay, so I got things working. The second Ethernet card wasn't configured.
sudo nano /etc/network/interfaces
auto eth0
iface eth0 inet static
    address aaa.bbb.ccc.ddd
    gateway aaa.bbb.ccc.eee

auto eth1
iface eth1 inet static
    address aaa.bbb.ccc.fff
    gateway aaa.bbb.ccc.ggg
sudo ifconfig eth1 up
sudo /etc/init.d/networking restart
Then I had to make sure the firewall rules were in place...
sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -p tcp --dport ssh -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT
sudo iptables -L -n -v
Then I confirmed the site was running on the correct IP address with a bit of curl...
curl -XGET http://aaa.bbb.ccc.fff/main.js
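To double-check which address node is actually bound to, you can also inspect the listening sockets on the host (ss is part of iproute2; netstat -tlnp works as well):
# list listening TCP sockets with owning processes and look for node on port 80
sudo ss -ltnp | grep ':80'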
I'm writing a testing tool that requires known traffic to be captured from a NIC (using libpcap), then fed into the application we are trying to test.
What I'm attempting to set-up is a web server (in this case, lighttpd) and a client (curl) running on the same machine, on an isolated test network. A script will drive the entire setup, and the goal is to be able to specify a number of clients as well as a set of files for each client to download from the web server.
My initial approach was to simply use the loopback (lo) interface... run the web server on 127.0.0.1, have the clients fetch their files from http://127.0.0.1, and run my libpcap-based tool on the lo interface. This works ok, apart from the fact that the loopback interface doesn't emulate a real Ethernet interface. The main problem with that is that packets are all inconsistent sizes... 32kbytes and bigger, and somewhat random... it's also not possible to lower the MTU on lo (well, you can, but it has no effect!).
I also tried running it on my real interface (eth0), but since it's an internal web client talking to an internal web server, traffic never leaves the interface, so libpcap never sees it.
So then I turned to tun/tap. I used socat to bind two tun interfaces together with a tcp connection, so in effect, i had:
10.0.1.1/24 <-> tun0 <-socat-> tcp connection <-socat-> tun1 <-> 10.0.2.1/24
This seems like a really neat solution... tun/tap devices emulate real Ethernet devices, so I can run my web server on tun0 (10.0.1.1) and my capture tool on tun0, and bind my clients to tun1 (10.0.2.1)... I can even use tc to apply shaping rules to this traffic and create a virtual WAN inside my Linux box... but it just doesn't work...
Here are the socat commands I used:
$ socat -d TCP-LISTEN:11443,reuseaddr TUN:10.0.1.1/24,up &
$ socat TCP:127.0.0.1:11443 TUN:10.0.2.1/24,up &
Which produces 2 tun interfaces (tun0 and tun1), with their respective IP addresses.
If I run ping -I tun1 10.0.1.1, there is no response, but when I run tcpdump -n -i tun0, I see the ICMP echo requests making it to the other side, just no sign of the response coming back.
# tcpdump -i tun0 -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tun0, link-type RAW (Raw IP), capture size 65535 bytes
16:49:16.772718 IP 10.0.2.1 > 10.0.1.1: ICMP echo request, id 4062, seq 5, length 64
<--- insert sound of crickets here (chirp, chirp)
So am I missing something obvious, or is this the wrong approach? Is there something else I can try (e.g. two physical interfaces, eth0 and eth1)?
The easiest way is just to use two machines, but I want all of this self-contained, so it can all be scripted and automated on a single machine, without any other dependencies...
UPDATE:
There is no need for the two socat processes to be connected with a TCP connection; it's possible (and preferable for me) to do this:
socat TUN:10.0.1.1/24,up TUN:10.0.2.1/24,up &
The same problem still exists though...
OK, so I found a solution using Linux network namespaces (netns). There is a helpful article about how to use it here: http://code.google.com/p/coreemu/wiki/Namespaces
This is what I did for my setup....
First, download and install CORE: http://cs.itd.nrl.navy.mil/work/core/index.php
Next, run this script:
#!/bin/sh
core-cleanup.sh > /dev/null 2>&1
ip link set vbridge down > /dev/null 2>&1
brctl delbr vbridge > /dev/null 2>&1
# create a server node namespace container - node 0
vnoded -c /tmp/n0.ctl -l /tmp/n0.log -p /tmp/n0.pid > /dev/null
# create a virtual Ethernet (veth) pair, installing one end into node 0
ip link add name veth0 type veth peer name n0.0
ip link set n0.0 netns `cat /tmp/n0.pid`
vcmd -c /tmp/n0.ctl -- ip link set n0.0 name eth0
vcmd -c /tmp/n0.ctl -- ifconfig eth0 10.0.0.1/24 up
# start web server on node 0
vcmd -I -c /tmp/n0.ctl -- lighttpd -f /etc/lighttpd/lighttpd.conf
# create client node namespace container - node 1
vnoded -c /tmp/n1.ctl -l /tmp/n1.log -p /tmp/n1.pid > /dev/null
# create a virtual Ethernet (veth) pair, installing one end into node 1
ip link add name veth1 type veth peer name n1.0
ip link set n1.0 netns `cat /tmp/n1.pid`
vcmd -c /tmp/n1.ctl -- ip link set n1.0 name eth0
vcmd -c /tmp/n1.ctl -- ifconfig eth0 10.0.0.2/24 up
# bridge together nodes using the other end of each veth pair
brctl addbr vbridge
brctl setfd vbridge 0
brctl addif vbridge veth0
brctl addif vbridge veth1
ip link set veth0 up
ip link set veth1 up
ip link set vbridge up
This basically sets up two virtual, isolated, namespaced networks on your Linux host, in this case node 0 and node 1. A web server is started on node 0.
All you need to do now is run curl on node 1:
vcmd -c /tmp/n1.ctl -- curl --output /dev/null http://10.0.0.1
And monitor the traffic with tcpdump:
tcpdump -s 1514 -i veth0 -n
This seems to work quite well... still experimenting, but looks like it will solve my problem.
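For reference, a similar topology can be built with the stock ip netns tooling instead of CORE's vnoded/vcmd wrappers; this is only a rough sketch under the assumption of a reasonably recent iproute2 and the same addresses as above, not a drop-in replacement for the script:
# namespaces created with ip netns (instead of vnoded)
ip netns add n0
ip netns add n1
# veth pair for node 0: host end veth0, namespace end renamed to eth0
ip link add veth0 type veth peer name n0.0
ip link set n0.0 netns n0
ip netns exec n0 ip link set n0.0 name eth0
ip netns exec n0 ip addr add 10.0.0.1/24 dev eth0
ip netns exec n0 ip link set eth0 up
# veth pair for node 1
ip link add veth1 type veth peer name n1.0
ip link set n1.0 netns n1
ip netns exec n1 ip link set n1.0 name eth0
ip netns exec n1 ip addr add 10.0.0.2/24 dev eth0
ip netns exec n1 ip link set eth0 up
# bridge the host ends together, as in the script above
brctl addbr vbridge
brctl addif vbridge veth0
brctl addif vbridge veth1
ip link set veth0 up
ip link set veth1 up
ip link set vbridge up
# run the server in n0, the client in n1, and capture on veth0 as before
ip netns exec n0 lighttpd -f /etc/lighttpd/lighttpd.conf
ip netns exec n1 curl --output /dev/null http://10.0.0.1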