Establish IPSEC Tunnel between two Ubuntu 12.04 PCs over LAN - security

I have two Ubuntu 12.04 32-bit PCs between which I want an IPsec tunnel to be set up. I have set up ipsec on both systems and ipsec verify runs fine on both. Since I have no prior experience with Openswan, I am finding it hard to write the config files.
Here is the snippet of ipsec.conf:
config setup
    # Do not set debug options to debug configuration issues!
    # plutodebug / klipsdebug = "all", "none" or a combination from below:
    # "raw crypt parsing emitting control klips pfkey natt x509 dpd private"
    # eg:
    # plutodebug="control parsing"
    # Again: only enable plutodebug or klipsdebug when asked by a developer
    #
    # enable to get logs per-peer
    # plutoopts="--perpeerlog"
    #
    # Enable core dumps (might require system changes, like ulimit -C)
    # This is required for abrtd to work properly
    # Note: incorrect SElinux policies might prevent pluto writing the core
    dumpdir=/var/run/pluto/
    #
    # NAT-TRAVERSAL support, see README.NAT-Traversal
    nat_traversal=yes
    # exclude networks used on server side by adding %v4:!a.b.c.0/24
    # It seems that T-Mobile in the US and Rogers/Fido in Canada are
    # using 25/8 as "private" address space on their 3G network.
    # This range has not been announced via BGP (at least up to 2010-12-21)
    virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12,%v4:25.0.0.0/8,%v6:fd00::/8,%v6:fe80::/10
    # OE is now off by default. Uncomment and change to on, to enable.
    oe=off
    # which IPsec stack to use. auto will try netkey, then klips then mast
    protostack=netkey
    # Use this to log to a file, or disable logging on embedded systems (like openwrt)
    #plutostderrlog=/dev/null

# Add connections here

# sample VPN connection
# for more examples, see /etc/ipsec.d/examples/
conn linux-to-linux
    # # Left security gateway, subnet behind it, nexthop toward right.
    left=192.168.58.17
    # leftsubnet=172.16.0.0/24
    # leftnexthop=10.22.33.44
    # # Right security gateway, subnet behind it, nexthop toward left.
    right=192.168.58.32
    # rightsubnet=192.168.0.0/24
    # rightnexthop=10.101.102.103
    # # To authorize this connection, but not actually start it,
    # # at startup, uncomment this.
    auto=start
Queries:
1) Now, based on the given topology of my network (see image), is the above config correct for both PCs?
2) Does it have to be the same for both the left and right PCs?
3) After it is set up, how do I confirm that the secure tunnel is working? What is the best tool to check the algorithms being used and the packets' content?
4) Inside a LAN the secure IPsec tunnel is called a host-to-host tunnel, and site-to-site refers to when the VPN kicks in, right?

Ques 1) Now, based on the given topology of my network (see image), is the above config correct for both PCs?
Ans) You have to provide an ipsec.secrets file and the method of authentication, like PSK/RSA.
Ques 2) Does it have to be the same for both the left and right PCs?
Ans) Left and right should be interchanged.
Ques 3) After it is set up, how do I confirm that the secure tunnel is working? What is the best tool to check the algorithms being used and the packets' content?
Ans) Try to ping any system on the central site.
Ques 4) Inside a LAN the secure IPsec tunnel is called a host-to-host tunnel, and site-to-site refers to when the VPN kicks in, right?
Ans) No, host-to-host and site-to-site are two different VPN configurations, depending on the network topology.
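As an added example (not from the question or the answer above), here is a shell script that renders a complete Openswan host-to-host configuration together with the ipsec.secrets file the answer mentions, installs both, and checks the tunnel. It assumes the two addresses from the question, eth0 as the interface and PSK authentication; the algorithm choices and the connection name are mine, not the page's. Openswan works out which side is local from the host's own addresses, so the same ipsec.conf can be installed on both hosts unchanged (swapping left and right, as the answer suggests, also works).

```shell
#!/bin/sh
# Added example, not from the original thread: write matching Openswan
# host-to-host configs for the two peers in the question and verify the SA.
# Assumed: peer addresses 192.168.58.17 / 192.168.58.32 (from the question),
# interface eth0, PSK authentication, and the algorithm choices below.

# Render ipsec.conf. Openswan requires section parameters to be indented.
# It decides which side is local from the interface addresses, so this same
# file can be installed unchanged on both hosts.
render_ipsec_conf() {   # $1 = left address, $2 = right address
    cat <<EOF
version 2.0

config setup
    protostack=netkey
    nat_traversal=no
    oe=off

conn host-to-host
    type=transport
    authby=secret
    left=$1
    right=$2
    ike=aes128-sha1;modp2048
    phase2alg=aes128-sha1
    pfs=yes
    auto=start
EOF
}
# type=transport suits two hosts with no subnets behind them; the default
# (tunnel) also works. authby=secret selects the PSK from ipsec.secrets.

# Render ipsec.secrets: "<left> <right> : PSK \"<secret>\"". The file must be
# root-only (0600) or pluto will refuse it and the tunnel never comes up.
render_secrets() {      # $1 = left, $2 = right, $3 = pre-shared key
    printf '%s %s : PSK "%s"\n' "$1" "$2" "$3"
}

install_config() {      # $1 = left, $2 = right, $3 = PSK, $4 = target dir (default /etc)
    dir="${4:-/etc}"
    render_ipsec_conf "$1" "$2" > "$dir/ipsec.conf"
    (umask 077; render_secrets "$1" "$2" "$3" > "$dir/ipsec.secrets")
}

# Confirm the tunnel is up and see which algorithms were negotiated.
verify_tunnel() {       # $1 = peer address to ping
    ipsec auto --status | grep -E 'ISAKMP SA established|IPsec SA established'
    ip xfrm state           # netkey stack: shows the ESP SAs with their algorithms
    # Capture on the wire while pinging: you should see only ESP, no ICMP.
    timeout 5 tcpdump -ni eth0 -c 4 esp &
    ping -c 3 "$1"
    wait
}

case "${1:-}" in
    install) install_config 192.168.58.17 192.168.58.32 "$2" ;;   # $2 = PSK
    verify)  verify_tunnel "$2" ;;                                 # $2 = peer address
esac
```

Run `install <psk>` as root on both hosts with the same key, then `service ipsec restart` and `verify <peer-ip>`: `ipsec auto --status` reports the established SAs, `ip xfrm state` lists the negotiated ESP cipher and integrity algorithms, and the capture should show ESP packets only, with no cleartext ICMP.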


Routing HTTP through specific network interface

I'm very unfamiliar with Linux so forgive me if this has been answered before, I've read quite a few answers but I'm never sure if they actually relate to my question.
I have a headless raspberry pi that connects to my phone's bluetooth automatically, my phone shares its internet access by tethering. I use this initial and reliable connection to SSH to my raspberry pi, and use the desktop with VNC viewer.
I would like to connect to a WiFi network that uses a captive portal, but the browser always uses the bluetooth connection so it never redirects me to the portal page. The bluetooth connection is just to be able to use the desktop so I can get through the portal, then I would like to either disconnect bluetooth or just not use it, mainly because of the low bandwidth it provides.
I've added wlan0 as a priority interface with ifmetric, but that hasn't worked.
I was thinking that forcing all HTTP connections through the wlan0 interface could solve the problem, but there may be a simpler way, feel free to tell me.
Can you explain in "simple" terms the best way to achieve this?
Of course, there are multiple solutions. The simplest is making sure that there is only one correct default route.
There are 3 situations:
You are only connected via bluetooth via ssh
You are connected via bluetooth and via wifi, but not yet through the splash
You are through the splash
Each will require a different network configuration.
In 1, your network config will probably be:
some IP address (let's call it IP-bt) and network mask
Default gateway is your phone
With route -n you can verify this.
In 2, the network config will depend a bit on the wifi network, but in general, your network config will be:
you'll still have IP-bt
you will have a new address on the wifi adapter (which we call IP-wifi)
the default gateway should be the gateway on the wifi network.
When you verify this with route -n, you might still see a route with destination 0.0.0.0 towards your phone. You can delete this route. Your phone should be on a directly connected network and your ssh session should therefore not break.
If the default gw is not on the wifi network, you can still remove the route that sets your phone as default gw.
Under 3, the default gw must be on the wifi network, and not on the phone. You will still be able to use your phone, because it is directly connected.
Something to watch out for in this scenario is that your phone will act as a DHCP server. That means once in a while your DHCP lease will refresh, and the bluetooth default route may re-appear. Disconnecting bluetooth will prevent this.
The second solution is to use ifmetric. Instead of making wlan0 a lower metric, make your bluetooth a higher metric. Again verify with route -n that the metrics are as you want them to be. Verify with a traceroute how the packets are moving.
A third, and most complex option would be to install Quagga and configure correct routing.
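The route handling described in the answer, as an added example that is not part of the original answer. It assumes the Bluetooth PAN interface is named bnep0 and the WiFi one wlan0 (both assumptions), uses `ip route` (it shows the same table as `route -n`), and the route-table text it parses is constructed. Instead of deleting the phone's default route it raises that route's metric, which keeps the phone reachable while WiFi carries the traffic.

```shell
#!/bin/sh
# Added example, not from the original answer. Assumes the Bluetooth PAN
# interface is bnep0 and the WiFi interface is wlan0 (both assumptions).

# Constructed sample of `ip route` output for situation 2 in the answer:
# both links up, and the phone still holds a default route.
SAMPLE_ROUTES='default via 192.168.44.1 dev bnep0
default via 10.0.0.1 dev wlan0 metric 600
10.0.0.0/24 dev wlan0 proto kernel scope link src 10.0.0.23
192.168.44.0/24 dev bnep0 proto kernel scope link src 192.168.44.2'

# Print the default gateway reached through interface $1, reading the route
# table from $2 (or from the live system when $2 is empty).
default_gw_for() {
    iface="$1"
    if [ -n "${2:-}" ]; then printf '%s\n' "$2"; else ip route; fi |
        awk -v dev="$iface" '$1 == "default" && $2 == "via" {
            for (i = 1; i <= NF; i++) if ($i == "dev" && $(i+1) == dev) print $3 }'
}

# Count default routes: more than one means the kernel picks the one with
# the lowest metric, so the metrics decide which link gets the traffic.
count_default_routes() {
    if [ -n "${1:-}" ]; then printf '%s\n' "$1"; else ip route; fi |
        awk '$1 == "default" { n++ } END { print n+0 }'
}

# Demote the Bluetooth default route instead of deleting it: the phone stays
# reachable (it is on a directly connected network anyway) but WiFi wins.
# Requires root. Re-run after a DHCP renewal re-adds the metric-0 route.
demote_bluetooth_default() {
    gw=$(default_gw_for bnep0)
    [ -n "$gw" ] || { echo "no default route via bnep0"; return 0; }
    ip route del default via "$gw" dev bnep0
    ip route add default via "$gw" dev bnep0 metric 1000
}

# Show which interface the kernel would now use for an outside address
# (203.0.113.1 is a documentation address, chosen only as a test target).
show_egress() {
    ip route get 203.0.113.1
}

case "${1:-}" in
    demote) demote_bluetooth_default && show_egress ;;
    check)  show_egress ;;
esac
```

`check` before and after `demote` shows the egress interface changing from bnep0 to wlan0; a `traceroute` to an outside host, as the answer suggests, confirms the same thing hop by hop.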

Suricata HOME_NET config question (SPAN port)

As a project I have a physical firewall (IP: 10.0.0.2) with a SPAN port configured to a physical linux (CentOS 6) (IP: 10.0.0.3) on which I am running Suricata IDS.
Theoretically I should receive all the traffic to the box through an interface I called "span0". I can confirm this by running ifconfig and seeing traffic. So all good.
When running Suricata as follows: sudo suricata -c /etc/suricata/suricata.yaml -i span0, I am not getting any errors. Also good.
The question here is how to configure the suricata.yaml file.
Should I have the HOME_NET on 10.0.0.2 or on 10.0.0.0/8?
Looking forward to hearing your feedback, Jan (Honza) Novak
I am not a great IDS setup specialist, but I would suggest that the configuration depends on the network setup.
If the firewall simply broadcasts everything through itself, then you should choose 10.0.0.0/8 to protect the entire network. On the other hand, with this setting, events within the network may go unnoticed.
If NAT is configured, then I would suggest choosing 10.0.0.2 to track possible malicious activity both outside and inside the network.
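An added example, not from the original thread, of how the two candidate values are written into suricata.yaml and how to check the choice against what the SPAN port actually carries. It assumes the interface name span0 from the question and the default config path /etc/suricata/suricata.yaml. HOME_NET is the set of addresses Suricata treats as "inside"; rules written with $HOME_NET and $EXTERNAL_NET only match in the direction they expect, so HOME_NET normally lists the ranges of the hosts being protected, and EXTERNAL_NET is written as its negation so it follows automatically.

```shell
#!/bin/sh
# Added example, not from the original thread. Assumes the SPAN interface is
# span0 (from the question) and the config lives at /etc/suricata/suricata.yaml.

# Render the address-groups section for a given HOME_NET. EXTERNAL_NET is the
# negation of HOME_NET, so it follows whichever value is chosen.
render_vars() {         # $1 = HOME_NET value, e.g. "[10.0.0.0/8]"
    cat <<EOF
vars:
  address-groups:
    HOME_NET: "$1"
    EXTERNAL_NET: "!\$HOME_NET"
EOF
}

# Survey what actually crosses the SPAN port before committing to a value:
# tally source addresses of the first 500 IP packets, most frequent first.
# Addresses that show up here but fall outside HOME_NET are a hint that the
# range is too narrow (or that the mirror carries more than you expect).
survey_span_sources() { # $1 = interface (default span0)
    tcpdump -ni "${1:-span0}" -c 500 -q ip 2>/dev/null |
        awk '{ print $3 }' | cut -d. -f1-4 | sort | uniq -c | sort -rn | head -20
}

# `suricata -T` parses the config and rule files, then exits; run it after
# every edit so a bad YAML indent does not silently take the sensor down.
validate_config() {     # $1 = path to suricata.yaml
    suricata -T -c "$1"
}

case "${1:-}" in
    render)   render_vars "${2:-[10.0.0.0/8]}" ;;
    survey)   survey_span_sources "${2:-span0}" ;;
    validate) validate_config "${2:-/etc/suricata/suricata.yaml}" ;;
esac
```

`render '[10.0.0.0/8]'` prints the wide setting from the question and `render '[10.0.0.2]'` the narrow one; paste the chosen block over the existing vars section, then `validate` before restarting Suricata.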

Translation of Fortinet configuration

We have a small case of a security breach at one of our sites. We have a contractor that is supposed to stay out of our Fortinet firewall; today I noticed these two paragraphs that look fishy. My site network administrator bailed on us a few months ago and I am trying to wrap my head around these paragraphs without the need of paying someone to do it. I need your help, experts!
> edit "Mycompany_to_Contractor"
> set vdom "root"
> set type tunnel
> set snmp-index 6
> set interface "wan1"
> next
> edit "Mycompany to Contractor2"
> set vdom "root"
> set type tunnel
> set snmp-index 8
> set interface "wan1"
> next
Any explanation would be appreciated!
Thank you
These snippets from the config interface section of a FortiOS config file show two virtual IPsec tunnel interfaces. When you create an IPsec config, these virtual interfaces are set up so that you can use them in policies to allow/filter traffic to or from the tunnel to any local interface.
The tunnel definition is kept in config vpn ipsec phase1-interface and starts with edit "Mycompany_to_Contractor". In this phase1 part you can see the IP address of the remote gateway, which may give you a clue as to whom the tunnel is connecting.
The rest of the VPN definition, including local and remote subnets, is defined in config vpn ipsec phase2-interface.
To quickly disable remote access from these two contractors / remote sites, disable the policies referring to the tunnel interfaces. Without a policy the tunnels cannot be established. For forensic purposes I would back up the config first.
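As an added example (not from the original answer), a shell script that does the review the answer describes on a saved FortiOS config: look up the remote gateway of the phase1 interface, find every firewall policy whose srcintf or dstintf is the tunnel interface, and print the CLI that disables (rather than deletes) those policies. The tunnel name comes from the question; the remote gateway address, the policy ids and the whole config text are constructed for the sample, and the second tunnel is left out for brevity.

```shell
#!/bin/sh
# Added example, not from the original answer. Works on a FortiOS config
# backup, or on the saved output of `show vpn ipsec phase1-interface` and
# `show firewall policy`. All addresses and policy ids below are constructed.

SAMPLE_CONFIG='config vpn ipsec phase1-interface
    edit "Mycompany_to_Contractor"
        set interface "wan1"
        set remote-gw 203.0.113.10
        set psksecret ENC REDACTED
    next
end
config firewall policy
    edit 12
        set srcintf "Mycompany_to_Contractor"
        set dstintf "internal"
        set action accept
    next
    edit 13
        set srcintf "internal"
        set dstintf "wan1"
        set action accept
    next
    edit 14
        set srcintf "internal"
        set dstintf "Mycompany_to_Contractor"
        set action accept
    next
end'

# Remote gateway of the named phase1 interface: who is at the other end.
# (On the box itself, `get vpn ipsec tunnel summary` and
# `diagnose vpn ike gateway list` show the same peer plus live state.)
remote_gw_of() {        # $1 = tunnel name, $2 = config text
    printf '%s\n' "$2" | awk -v name="$1" '
        /^config vpn ipsec phase1-interface/ { in_p1 = 1 }
        /^end/                               { in_p1 = 0; cur = "" }
        in_p1 && $1 == "edit" { cur = $0
                                sub(/^[ \t]*edit[ \t]+"?/, "", cur)
                                sub(/"?[ \t]*$/, "", cur) }
        in_p1 && cur == name && $2 == "remote-gw" { print $3 }'
}

# Policy ids whose srcintf or dstintf is the tunnel interface. Each one is a
# path between the tunnel and a local interface; without them the tunnel
# carries nothing. Quoted names may contain spaces, hence the substring test.
policies_using() {      # $1 = tunnel name, $2 = config text
    printf '%s\n' "$2" | awk -v name="$1" '
        /^config firewall policy/ { in_pol = 1 }
        /^end/                    { in_pol = 0 }
        in_pol && $1 == "edit"    { id = $2 }
        in_pol && ($2 == "srcintf" || $2 == "dstintf") && index($0, "\"" name "\"") {
            if (!seen[id]++) print id }'
}

# Emit the FortiOS CLI that disables the given policies. Disabling keeps the
# policy in the config as evidence; deleting it would not.
disable_policy_cli() {  # $@ = policy ids
    echo "config firewall policy"
    for id in "$@"; do
        echo "    edit $id"
        echo "        set status disable"
        echo "    next"
    done
    echo "end"
}

case "${1:-}" in
    report)
        t="Mycompany_to_Contractor"
        echo "$t -> remote gateway $(remote_gw_of "$t" "$SAMPLE_CONFIG")"
        ids=$(policies_using "$t" "$SAMPLE_CONFIG")
        echo "policies using it: $(echo "$ids" | tr '\n' ' ')"
        disable_policy_cli $ids ;;
esac
```

`report` on a real backup (replace SAMPLE_CONFIG with `cat backup.conf`) gives the reviewer the peer address and the exact policy ids before anything is changed; back up the config first, as the answer says, then paste the printed CLI into the FortiGate.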

Redis cluster creating cannot connect to the server, what's wrong?

I have 3 different servers deployed on Aliyun, each of them running 2 redis instances on ports 6379 and 6380.
I was trying to build a redis cluster with these 6 nodes (Redis version 3.2.0). But it failed and said "Sorry, cannot connect to the node 10.161.94.215:6379" (10.161.94.215 is the LAN IP address of my first server).
Obviously the servers were running quite well, and I could reach them with redis-cli.
Gem is installed.
Requirepass is banned, no auth is needed.
No ip bind
No protected-mode as well.
error pic
All the configuration options about cluster are well set.
What's wrong with this?
I think i know why now.
Use the IP of the local host.
src/redis-trib.rb create 127.0.0.1:6379 127.0.0.1:6380 h2:p1 h2:p2 h3:p1 h3:p2
I think you are creating cluster from a different subnet. That might be a problem.
Looks like protected mode is a new security feature in redis 3.2. The short version is if you don't explicitly bind to an ip address it will only allow access to localhost.
If you only wish to create a cluster on a single host, this may be ok. If you're using multiple hosts to create a cluster you'll either need to turn off protected mode or explicitly bind to an ip address.
From redis.conf file:
# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
#
# When protected mode is on and if:
#
# 1) The server is not binding explicitly to a set of addresses using the
# "bind" directive.
# 2) No password is configured.
# The server only accepts connections from clients connecting from the
# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
# sockets.
#
# By default protected mode is enabled. You should disable it only if
# you are sure you want clients from other hosts to connect to Redis
# even if no authentication is configured, nor a specific set of interfaces
# are explicitly listed using the "bind" directive.
protected-mode yes
There are instructions on how to correct this if you attempt to connect to it using something aside from the loopback interface:
DENIED Redis is running in protected mode because protected mode is enabled, no bind address was specified, no authentication password is requested to clients. In this mode connections are only accepted from the loopback interface. If you want to connect from external computers to Redis you may adopt one of the following solutions: 1) Just disable protected mode sending the command 'CONFIG SET protected-mode no' from the loopback interface by connecting to Redis from the same host the server is running, however MAKE SURE Redis is not publicly accessible from internet if you do so. Use CONFIG REWRITE to make this change permanent. 2) Alternatively you can just disable the protected mode by editing the Redis configuration file, and setting the protected mode option to 'no', and then restarting the server. 3) If you started the server manually just for testing, restart it with the '--protected-mode no' option. 4) Setup a bind address or an authentication password. NOTE: You only need to do one of the above things in order for the server to start accepting connections from the outside.
The output of redis-trib.rb is fairly terse (probably appropriately so).
sudo nano /etc/redis/6379.conf
Replace #bind 127.0.0.1 or bind 127.0.0.1 with bind 0.0.0.0
sudo service redis_6379 restart
This allows access to redis from anywhere.
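The protected-mode text quoted above lists an explicit `bind` as one way to satisfy it, and that is narrower than `bind 0.0.0.0`: each node listens on loopback plus its own LAN address and nowhere else, with protected mode left on. The following shell script is an added example, not from the thread; it rewrites the bind line of a redis.conf that way and queries a running node for what it is actually doing. It assumes the LAN address and ports from the question, and the redis.conf text it rewrites is constructed.

```shell
#!/bin/sh
# Added example, not from the original thread. Rather than `bind 0.0.0.0`,
# bind each node to loopback plus its LAN address; protected mode is then
# satisfied without disabling it, and the instance is not exposed on other
# interfaces. Assumes 10.161.94.215 and ports 6379/6380 from the question.

SAMPLE_CONF='port 6379
#bind 127.0.0.1
protected-mode yes
cluster-enabled yes
cluster-config-file nodes-6379.conf'

# Rewrite the bind line of a redis.conf read from stdin so it lists loopback
# and the LAN address only (this also replaces a `bind 0.0.0.0`); adds one
# if the file has none; re-enables protected-mode if someone switched it off.
harden_bind() {         # $1 = this node's LAN address
    lan_ip="$1"
    conf=$(cat)
    if printf '%s\n' "$conf" | grep -q '^#*bind '; then
        printf '%s\n' "$conf" | sed "s/^#*bind .*/bind 127.0.0.1 $lan_ip/"
    else
        printf '%s\nbind 127.0.0.1 %s\n' "$conf" "$lan_ip"
    fi | sed 's/^protected-mode no$/protected-mode yes/'
}

# Ask a running node what it is actually doing; bind and protected-mode must
# be right on every node or redis-trib will report "cannot connect".
check_node() {          # $1 = host, $2 = port
    redis-cli -h "$1" -p "$2" CONFIG GET bind
    redis-cli -h "$1" -p "$2" CONFIG GET protected-mode
    redis-cli -h "$1" -p "$2" CLUSTER INFO | grep cluster_state
}

case "${1:-}" in
    harden) harden_bind "$2" < "$3" ;;     # $2 = LAN address, $3 = conf path; redirect stdout to the new file
    check)  check_node "$2" "$3" ;;        # $2 = host, $3 = port
esac
```

After `harden 10.161.94.215 /etc/redis/6379.conf > 6379.conf.new` and a restart of each instance (the answer's `service redis_6379 restart`), run `check` against every host:port pair, then create the cluster with the LAN addresses rather than 127.0.0.1, for example `redis-trib.rb create --replicas 1 10.161.94.215:6379 10.161.94.215:6380 ...` with the other two hosts appended. A host firewall restricting 6379/6380 (and the cluster bus ports, 10000 higher) to the three servers is still worth adding, since with requirepass off the bind list is the only thing limiting who can talk to the nodes.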

RabbitMQ Cluster on EC2: Hostname Issues

I want to set up a 3-node Rabbit cluster on EC2 (Amazon Linux). We'd like to have recovery implemented so that if we lose a server it can be replaced by another new server automagically. We can set the cluster up manually easily using the default hostname (ip-xx-xx-xx-xx), so that the broker id is rabbit#ip-xx-xx-xx-xx. This is because the hostname is resolvable over the network.
The problem is: this hostname will change if we lose/reboot a server, invalidating the cluster. We haven't had luck in setting a custom static hostname because they are not resolvable by other machines in the cluster; that's the only part of that article that doesn't make sense.
Has anyone accomplished a RabbitMQ Cluster on EC2 with a recovery implementation? Any advice is appreciated.
You could create three A records in an external DNS service for the three boxes and use them in the config. E.g., rabbit1.alph486.com, rabbit2.alph486.com and rabbit3.alph486.com. These could even be the ec2 private IP addresses. If all of the boxes are in the same region it'll be faster and cheaper. If you lose a box, just update the DNS record.
Additionally, you could assign elastic IPs to the three boxes. Then, when you lose a box, all you'd need to do is assign the elastic IP to its replacement.
Of course, if you have a small number of clients, you could just add entries into the /etc/hosts file on each box and update as needed.
Chrskly gave good answers that are the general consensus of the Rabbit community:
Init scripts that handle DNS or identification of other servers are mainly what I hear.
Elastic IPs we could not get to work without the aid of DNS or hostname aliases because the Internal IP/DNS on amazon still rotate and the public IP/DNS names that stay static cannot be used as the hostname for rabbit unless aliased properly.
Hosts file manipulation via a script is also an option. This needs to be accompanied by a script that can identify the DNS names of the other servers upon launch, so it doesn't save much work in terms of making things more "solid state" config-wise.
What I'm doing:
Due to some limitations on the DNS front, I am opting to use bootstrap scripts to initialize the machine and cluster with any other available machines using the default internal dns assigned at launch. If we lose a machine, a new one will come up, prepare rabbit and lookup the DNS names of machines to cluster with. It will then remove the dead node from the cluster for housekeeping.
I'm using some homebrew init scripts in Python. However, this could easily be done with something like Chef/Puppet.
Update: Detail from Docs
From:
http://www.rabbitmq.com/ec2.html
Issues with hostname
RabbitMQ names the database directory using the current hostname of
the system. If the hostname changes, a new empty database is created.
To avoid data loss it's crucial to set up a fixed and resolvable
hostname. For example:
sudo -s # become root
echo "rabbit" > /etc/hostname
echo "127.0.0.1 rabbit" >> /etc/hosts
hostname -F /etc/hostname
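A bootstrap sketch of the approach described in "What I'm doing", as an added example that is not the poster's script or RabbitMQ's own code: pin the hostname the way the docs excerpt shows, install the shared Erlang cookie, join a running peer, and forget any node that the cluster still lists but that is no longer running. It assumes RabbitMQ 3.x `rabbitmqctl`, node names of the form rabbit@<hostname>, a rabbitmq system user, and that peer hostnames resolve on every box (via DNS or /etc/hosts); the cluster_status text it parses is constructed in the Erlang-term layout that 3.x prints.

```shell
#!/bin/sh
# Added example, not from the original answers. Assumes RabbitMQ 3.x,
# node names rabbit@<hostname>, and resolvable peer hostnames.

# Constructed cluster_status output: three known disc nodes, one not running.
SAMPLE_STATUS="Cluster status of node 'rabbit@ip-10-0-0-1' ...
[{nodes,[{disc,['rabbit@ip-10-0-0-1','rabbit@ip-10-0-0-2','rabbit@ip-10-0-0-3']}]},
 {running_nodes,['rabbit@ip-10-0-0-3','rabbit@ip-10-0-0-1']},
 {cluster_name,<<\"rabbit@ip-10-0-0-1\">>},
 {partitions,[]}]"

# Pin the hostname (the docs' recipe) so the mnesia directory name stays put.
set_static_hostname() { # $1 = short hostname, $2 = address to map it to
    echo "$1" > /etc/hostname
    grep -q " $1\$" /etc/hosts || echo "$2 $1" >> /etc/hosts
    hostname -F /etc/hostname
}

# Erlang cookie: every node must hold the same value, and it is the only
# thing authenticating cluster peers and rabbitmqctl, so keep it 0400.
install_cookie() {      # $1 = path to the shared cookie file
    install -o rabbitmq -g rabbitmq -m 0400 "$1" /var/lib/rabbitmq/.erlang.cookie
}

# All node names listed in a cluster_status dump ($1), one per line.
cluster_nodes() {
    printf '%s' "$1" | tr -d '\n ' |
        sed -n 's/.*{nodes,\[\(.*\)]},{running_nodes.*/\1/p' |
        grep -o "rabbit@[^']*"
}

# Nodes known to the cluster but not currently running, one per line.
dead_nodes() {
    running=$(printf '%s' "$1" | tr -d '\n ' |
        sed -n 's/.*{running_nodes,\[\(.*\)]},{cluster_name.*/\1/p' |
        grep -o "rabbit@[^']*" | tr '\n' ' ')
    for n in $(cluster_nodes "$1"); do
        case " $running " in *" $n "*) ;; *) echo "$n" ;; esac
    done
}

# Join this node to a running peer, then forget any dead members.
join_and_cleanup() {    # $1 = peer node name, e.g. rabbit@ip-10-0-0-1
    rabbitmqctl stop_app
    rabbitmqctl join_cluster "$1"
    rabbitmqctl start_app
    for d in $(dead_nodes "$(rabbitmqctl cluster_status)"); do
        rabbitmqctl forget_cluster_node "$d"
    done
}

case "${1:-}" in
    join) join_and_cleanup "$2" ;;
    dead) dead_nodes "$(rabbitmqctl cluster_status)" ;;
esac
```

On a replacement instance the launch script would call `set_static_hostname`, `install_cookie` (fetching the cookie from wherever the fleet keeps its secrets), look up one live peer and run `join <peer>`; `dead` alone is a harmless dry run that only prints what `forget_cluster_node` would be given.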
