How to enable VRF on startup on Linux/Systemd/Networkd

Using systemd-networkd I've created two VRFs:
File : /etc/systemd/network/vrf20.netdev
[NetDev]
Name=vrf-mpls-red
Kind=vrf
[VRF]
Table=20
File : /etc/systemd/network/vrf30.netdev
[NetDev]
Name=vrf-mpls-green
Kind=vrf
[VRF]
Table=30
Each VRF has some network interfaces associated with it.
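(Each member interface is attached to its VRF with a .network file along these lines; eth1 is just a placeholder name:)
[Match]
Name=eth1
[Network]
VRF=vrf-mpls-red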
After system startup, both VRFs are "DOWN":
3: vrf-mpls-red: <NOARP,MASTER> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/ether fe:ab:58:be:29:ab brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 1280 maxmtu 65535
vrf table 20 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
4: vrf-mpls-green: <NOARP,MASTER> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/ether 26:a8:71:a0:9b:b2 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 1280 maxmtu 65535
vrf table 30 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
I have to use ip link set dev vrf-mpls-red up and ip link set dev vrf-mpls-green up to bring the VRFs "UP" and to get network communication between interfaces inside the same VRF.
How do I configure networkd to automatically bring the VRFs "UP"?

You need .network files to pair with your .netdev files.
For example, 99-vrf.network with:
[Match]
Name=vrf-*
[Link]
ActivationPolicy=up
RequiredForOnline=no
will bring up both of your VRFs. Also be careful with file naming: always use a numeric prefix such as 00-, 10-, or 90- at the start of the name.
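As a quick check after a reboot (a rough sketch; networkctl reload needs a reasonably recent systemd, and the names are the ones from the .netdev files above):
# Reload networkd configuration and inspect the VRF links
networkctl reload
networkctl status vrf-mpls-red vrf-mpls-green
# Kernel view of the VRF devices and their routing tables
ip -d link show type vrf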

Related

wpa_supplicant forcing wlan0 to use a given bssid not working

I want to force my WLAN to connect to network ID 0 with the given BSSID, but it automatically moves to network 1 even though that network is disabled in wpa_supplicant. What am I missing here?
cat /etc/wpa_supplicant/wpa_supplicant.conf
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
ssid="Parter"
bssid=b0:98:2b:cf:a0:f2
psk=hash
}
network={
ssid="Parter"
bssid=64:70:02:6b:81:d8
disabled=1
}
Output of wpa_cli list_networks and the logs:
network id / ssid / bssid / flags
0 Parter b0:98:2b:cf:a0:f2 [CURRENT]
1 Parter 64:70:02:6b:81:d8 [DISABLED]
<3>CTRL-EVENT-CONNECTED - Connection to b0:98:2b:cf:a0:f2 completed [id=0 id_str=]
<3>CTRL-EVENT-SUBNET-STATUS-UPDATE status=0
<3>CTRL-EVENT-REGDOM-CHANGE init=COUNTRY_IE type=COUNTRY alpha2=PL
<3>Associated with 64:70:02:6b:81:d8
<3>CTRL-EVENT-CONNECTED - Connection to 64:70:02:6b:81:d8 completed [id=0 id_str=]
<3>CTRL-EVENT-SUBNET-STATUS-UPDATE status=0
> list_networks
network id / ssid / bssid / flags
0 Parter b0:98:2b:cf:a0:f2 [CURRENT]
1 Parter 64:70:02:6b:81:d8 [DISABLED]
> status
bssid=64:70:02:6b:81:d8
freq=2437
ssid=Parter
id=0
mode=station
wifi_generation=4
pairwise_cipher=CCMP
group_cipher=CCMP
key_mgmt=WPA2-PSK
wpa_state=COMPLETED
ip_address=192.168.0.103
p2p_device_address=ba:27:eb:a5:fd:b3
address=b8:27:eb:a5:fd:b3
uuid=501fe555-b789-59f6-a164-6e40c0bc36f0
>

DPDK for general purpose workload

I have deployed OpenStack and configured OVS-DPDK on the compute nodes for high-performance networking. My workload is general purpose: haproxy, mysql, apache, XMPP, etc.
When I did load testing I found performance was only average, and beyond a 200 kpps packet rate I noticed packet drops. I have heard and read that DPDK can handle millions of packets per second, but in my case that is not true. In the guest I am using virtio-net, which processes packets in the kernel, so I believe my bottleneck is the guest VM.
I don't have any guest-based DPDK application like testpmd. Does that mean OVS+DPDK isn't useful for my cloud? How do I take advantage of OVS+DPDK with a general-purpose workload?
Updates
We have our own load-testing tool which generates audio RTP traffic: pure UDP, 150-byte packets. Beyond 200 kpps the audio quality goes down and becomes choppy. In short, the DPDK host hits high PMD CPU usage and the load test shows bad audio quality. When I run the same test against an SR-IOV based VM, performance is really good.
$ ovs-vswitchd -V
ovs-vswitchd (Open vSwitch) 2.13.3
DPDK 19.11.7
Intel NIC X550T
# ethtool -i ext0
driver: ixgbe
version: 5.1.0-k
firmware-version: 0x80000d63, 18.8.9
expansion-rom-version:
bus-info: 0000:3b:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
In the following output, what do the queue-id values 0 through 7 mean, and why is only one queue in use while the others always show 0% pmd usage?
ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 2:
isolated : false
port: vhu1c3bf17a-01 queue-id: 0 (enabled) pmd usage: 0 %
port: vhu1c3bf17a-01 queue-id: 1 (enabled) pmd usage: 0 %
port: vhu6b7daba9-1a queue-id: 2 (disabled) pmd usage: 0 %
port: vhu6b7daba9-1a queue-id: 3 (disabled) pmd usage: 0 %
pmd thread numa_id 1 core_id 3:
isolated : false
pmd thread numa_id 0 core_id 22:
isolated : false
port: vhu1c3bf17a-01 queue-id: 3 (enabled) pmd usage: 0 %
port: vhu1c3bf17a-01 queue-id: 6 (enabled) pmd usage: 0 %
port: vhu6b7daba9-1a queue-id: 0 (enabled) pmd usage: 54 %
port: vhu6b7daba9-1a queue-id: 5 (disabled) pmd usage: 0 %
pmd thread numa_id 1 core_id 23:
isolated : false
port: dpdk1 queue-id: 0 (enabled) pmd usage: 3 %
pmd thread numa_id 0 core_id 26:
isolated : false
port: vhu1c3bf17a-01 queue-id: 2 (enabled) pmd usage: 0 %
port: vhu1c3bf17a-01 queue-id: 7 (enabled) pmd usage: 0 %
port: vhu6b7daba9-1a queue-id: 1 (disabled) pmd usage: 0 %
port: vhu6b7daba9-1a queue-id: 4 (disabled) pmd usage: 0 %
pmd thread numa_id 1 core_id 27:
isolated : false
pmd thread numa_id 0 core_id 46:
isolated : false
port: dpdk0 queue-id: 0 (enabled) pmd usage: 27 %
port: vhu1c3bf17a-01 queue-id: 4 (enabled) pmd usage: 0 %
port: vhu1c3bf17a-01 queue-id: 5 (enabled) pmd usage: 0 %
port: vhu6b7daba9-1a queue-id: 6 (disabled) pmd usage: 0 %
port: vhu6b7daba9-1a queue-id: 7 (disabled) pmd usage: 0 %
pmd thread numa_id 1 core_id 47:
isolated : false
$ ovs-appctl dpif-netdev/pmd-stats-clear && sleep 10 && ovs-appctl
dpif-netdev/pmd-stats-show | grep "processing cycles:"
processing cycles: 1697952 (0.01%)
processing cycles: 12726856558 (74.96%)
processing cycles: 4259431602 (19.40%)
processing cycles: 512666 (0.00%)
processing cycles: 6324848608 (37.81%)
Do these processing cycles mean my PMD is under stress, even though I am only hitting a 200 kpps rate?
These are my dpdk0 and dpdk1 port statistics:
sudo ovs-vsctl get Interface dpdk0 statistics
{flow_director_filter_add_errors=153605,
flow_director_filter_remove_errors=30829, mac_local_errors=0,
mac_remote_errors=0, ovs_rx_qos_drops=0, ovs_tx_failure_drops=0,
ovs_tx_invalid_hwol_drops=0, ovs_tx_mtu_exceeded_drops=0,
ovs_tx_qos_drops=0, rx_128_to_255_packets=64338613,
rx_1_to_64_packets=367, rx_256_to_511_packets=116298,
rx_512_to_1023_packets=31264, rx_65_to_127_packets=6990079,
rx_broadcast_packets=0, rx_bytes=12124930385, rx_crc_errors=0,
rx_dropped=0, rx_errors=12, rx_fcoe_crc_errors=0, rx_fcoe_dropped=12,
rx_fcoe_mbuf_allocation_errors=0, rx_fragment_errors=367,
rx_illegal_byte_errors=0, rx_jabber_errors=0, rx_length_errors=0,
rx_mac_short_packet_dropped=128, rx_management_dropped=35741,
rx_management_packets=31264, rx_mbuf_allocation_errors=0,
rx_missed_errors=0, rx_oversize_errors=0, rx_packets=71512362,
rx_priority0_dropped=0, rx_priority0_mbuf_allocation_errors=1096,
rx_priority1_dropped=0, rx_priority1_mbuf_allocation_errors=0,
rx_priority2_dropped=0, rx_priority2_mbuf_allocation_errors=0,
rx_priority3_dropped=0, rx_priority3_mbuf_allocation_errors=0,
rx_priority4_dropped=0, rx_priority4_mbuf_allocation_errors=0,
rx_priority5_dropped=0, rx_priority5_mbuf_allocation_errors=0,
rx_priority6_dropped=0, rx_priority6_mbuf_allocation_errors=0,
rx_priority7_dropped=0, rx_priority7_mbuf_allocation_errors=0,
rx_undersize_errors=6990079, tx_128_to_255_packets=64273778,
tx_1_to_64_packets=128, tx_256_to_511_packets=43670294,
tx_512_to_1023_packets=153605, tx_65_to_127_packets=881272,
tx_broadcast_packets=10, tx_bytes=25935295292, tx_dropped=0,
tx_errors=0, tx_management_packets=0, tx_multicast_packets=153,
tx_packets=109009906}
sudo ovs-vsctl get Interface dpdk1 statistics
{flow_director_filter_add_errors=126793,
flow_director_filter_remove_errors=37969, mac_local_errors=0,
mac_remote_errors=0, ovs_rx_qos_drops=0, ovs_tx_failure_drops=0,
ovs_tx_invalid_hwol_drops=0, ovs_tx_mtu_exceeded_drops=0,
ovs_tx_qos_drops=0, rx_128_to_255_packets=64435459,
rx_1_to_64_packets=107843, rx_256_to_511_packets=230,
rx_512_to_1023_packets=13, rx_65_to_127_packets=7049788,
rx_broadcast_packets=199058, rx_bytes=12024342488, rx_crc_errors=0,
rx_dropped=0, rx_errors=11, rx_fcoe_crc_errors=0, rx_fcoe_dropped=11,
rx_fcoe_mbuf_allocation_errors=0, rx_fragment_errors=107843,
rx_illegal_byte_errors=0, rx_jabber_errors=0, rx_length_errors=0,
rx_mac_short_packet_dropped=1906, rx_management_dropped=0,
rx_management_packets=13, rx_mbuf_allocation_errors=0,
rx_missed_errors=0, rx_oversize_errors=0, rx_packets=71593333,
rx_priority0_dropped=0, rx_priority0_mbuf_allocation_errors=1131,
rx_priority1_dropped=0, rx_priority1_mbuf_allocation_errors=0,
rx_priority2_dropped=0, rx_priority2_mbuf_allocation_errors=0,
rx_priority3_dropped=0, rx_priority3_mbuf_allocation_errors=0,
rx_priority4_dropped=0, rx_priority4_mbuf_allocation_errors=0,
rx_priority5_dropped=0, rx_priority5_mbuf_allocation_errors=0,
rx_priority6_dropped=0, rx_priority6_mbuf_allocation_errors=0,
rx_priority7_dropped=0, rx_priority7_mbuf_allocation_errors=0,
rx_undersize_errors=7049788, tx_128_to_255_packets=102664472,
tx_1_to_64_packets=1906, tx_256_to_511_packets=68008814,
tx_512_to_1023_packets=126793, tx_65_to_127_packets=1412435,
tx_broadcast_packets=1464, tx_bytes=40693963125, tx_dropped=0,
tx_errors=0, tx_management_packets=199058, tx_multicast_packets=146,
tx_packets=172252389}
Update - 2
dpdk interface
# dpdk-devbind.py -s
Network devices using DPDK-compatible driver
============================================
0000:3b:00.1 'Ethernet Controller 10G X550T 1563' drv=vfio-pci unused=ixgbe
0000:af:00.1 'Ethernet Controller 10G X550T 1563' drv=vfio-pci unused=ixgbe
Network devices using kernel driver
===================================
0000:04:00.0 'NetXtreme BCM5720 2-port Gigabit Ethernet PCIe 165f' if=eno1 drv=tg3 unused=vfio-pci
0000:04:00.1 'NetXtreme BCM5720 2-port Gigabit Ethernet PCIe 165f' if=eno2 drv=tg3 unused=vfio-pci
0000:3b:00.0 'Ethernet Controller 10G X550T 1563' if=int0 drv=ixgbe unused=vfio-pci
0000:af:00.0 'Ethernet Controller 10G X550T 1563' if=int1 drv=ixgbe unused=vfio-pci
OVS
# ovs-vsctl show
595103ef-55a1-4f71-b299-a14942965e75
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Bridge br-tun
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
datapath_type: netdev
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port vxlan-0a48042b
Interface vxlan-0a48042b
type: vxlan
options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.72.4.44", out_key=flow, remote_ip="10.72.4.43"}
Port vxlan-0a480429
Interface vxlan-0a480429
type: vxlan
options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.72.4.44", out_key=flow, remote_ip="10.72.4.41"}
Port vxlan-0a48041f
Interface vxlan-0a48041f
type: vxlan
options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.72.4.44", out_key=flow, remote_ip="10.72.4.31"}
Port vxlan-0a48042a
Interface vxlan-0a48042a
type: vxlan
options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.72.4.44", out_key=flow, remote_ip="10.72.4.42"}
Bridge br-vlan
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
datapath_type: netdev
Port br-vlan
Interface br-vlan
type: internal
Port dpdkbond
Interface dpdk1
type: dpdk
options: {dpdk-devargs="0000:af:00.1", n_txq_desc="2048"}
Interface dpdk0
type: dpdk
options: {dpdk-devargs="0000:3b:00.1", n_txq_desc="2048"}
Port phy-br-vlan
Interface phy-br-vlan
type: patch
options: {peer=int-br-vlan}
Bridge br-int
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
datapath_type: netdev
Port vhu87cf49d2-5b
tag: 7
Interface vhu87cf49d2-5b
type: dpdkvhostuserclient
options: {vhost-server-path="/var/lib/vhost_socket/vhu87cf49d2-5b"}
Port vhub607c1fa-ec
tag: 7
Interface vhub607c1fa-ec
type: dpdkvhostuserclient
options: {vhost-server-path="/var/lib/vhost_socket/vhub607c1fa-ec"}
Port vhu9a035444-83
tag: 8
Interface vhu9a035444-83
type: dpdkvhostuserclient
options: {vhost-server-path="/var/lib/vhost_socket/vhu9a035444-83"}
Port br-int
Interface br-int
type: internal
Port int-br-vlan
Interface int-br-vlan
type: patch
options: {peer=phy-br-vlan}
Port vhue00471df-d8
tag: 8
Interface vhue00471df-d8
type: dpdkvhostuserclient
options: {vhost-server-path="/var/lib/vhost_socket/vhue00471df-d8"}
Port vhu683fdd35-91
tag: 7
Interface vhu683fdd35-91
type: dpdkvhostuserclient
options: {vhost-server-path="/var/lib/vhost_socket/vhu683fdd35-91"}
Port vhuf04fb2ec-ec
tag: 8
Interface vhuf04fb2ec-ec
type: dpdkvhostuserclient
options: {vhost-server-path="/var/lib/vhost_socket/vhuf04fb2ec-ec"}
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
ovs_version: "2.13.3"
I have created guest VMs using OpenStack and I can see they are connected using vhost sockets (e.g. /var/lib/vhost_socket/vhuf04fb2ec-ec).
When I did load testing I found performance was only average, and beyond a 200 kpps packet rate I noticed packet drops. In short, the DPDK host hit high PMD CPU usage and the load test showed bad audio quality; the same test against an SR-IOV based VM performs really well.
[Answer] This observation is not true, based on the live debugging done so far, for the reasons stated below:
The QEMU instances launched were not pinned to specific cores.
The comparison of PCIe pass-through (VF) against vhost-client is not an apples-to-apples comparison.
With the OpenStack approach there are at least 3 bridges that packets flow through before reaching the VM.
The OVS PMD threads were not pinned, which led to all the PMD threads running on the same core (causing latency and drops) at each bridge stage; a pinning sketch follows this list.
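As a rough sketch of what the pinning looks like (the CPU masks, domain name and CPU numbers are placeholders and must be chosen per host topology; pmd-cpu-mask and dpdk-lcore-mask are the standard OVS-DPDK other_config knobs):
# Pin the OVS PMD threads to dedicated cores
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x0c
# Keep the non-PMD OVS/DPDK lcore threads on a separate core
ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x02
# Pin the guest's vCPUs as well (domain name and CPU list are examples)
virsh vcpupin instance-00000001 0 4
# Re-check which rx queues land on which PMD afterwards
ovs-appctl dpif-netdev/pmd-rxq-show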
To have a fair comparison against the SR-IOV approach, the following changes have been made with respect to the similar question:
External Port <==> DPDK Port0 (L2fwd) DPDK net_vhost <--> QEMU (virtio-pci)
The numbers achieved with iperf3 (bidirectional) are around 10 Gbps.
Note: it was requested to run TRex or pktgen to try out Mpps rates; the expectation is to reach at least 8 Mpps with the current setup.
Hence this is not a DPDK, virtio-client, qemu-kvm or SR-IOV related issue, but rather a configuration or platform setup issue.
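For reference, a minimal sketch of the bidirectional iperf3 run mentioned above (the address and duration are placeholders; --bidir requires iperf3 3.7 or newer):
# On the receiver VM
iperf3 -s
# On the sender, towards the receiver's IP
iperf3 -c 192.0.2.10 --bidir -t 60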

Is there a way to check the Socket Priority with Wireshark or Tcpdump?

I am changing the SO_PRIORITY of a socket that sends UDP packets, using setsockopt(). Is there a way to see those changes with Wireshark or tcpdump?
I read that it could be the DSF (Differentiated Services Field), but I am not sure, because when I make the changes I see that this field is 00.
I am running Linux Mint 19.3.
It is part of the 802.1Q header. For example:
> 802.1Q Virtual LAN, PRI: 5, DEI: 0, ID: 4
101. .... .... .... = Priority: Voice, < 10ms latency and jitter (5)
...0 .... .... .... = DEI: Ineligible
.... 0000 0000 0100 = ID: 4
Type: IPv6 (0x86dd)
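As a rough sketch of how to make this visible on the wire (the interface names and VLAN id 4 are assumptions matching the capture above; the priority set with SO_PRIORITY typically only shows up as the 802.1Q PCP when the traffic egresses a VLAN interface with an egress QoS map):
# Map socket priority 5 to PCP 5 when creating the VLAN sub-interface
sudo ip link add link eth0 name eth0.4 type vlan id 4 egress-qos-map 5:5
# Capture with the link-level header so tcpdump prints the 802.1Q tag and PRI
sudo tcpdump -i eth0 -e -nn 'vlan and udp'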

Get IP address by subnet in Puppet

I have a couple machines, each with multiple network interfaces:
lead$ ip addr
2: enp0s2: ...
inet 10.1.1.11/24 brd 10.1.1.255 scope global enp0s2
3: enp0s3: ...
inet 10.2.2.11/24 brd 10.2.2.255 scope global enp0s3
iron$ ip addr
2: enp0s99: ...
inet 10.1.1.12/24 brd 10.1.1.255 scope global enp0s99
3: enp0s3: ...
inet 10.2.2.12/24 brd 10.2.2.255 scope global enp0s3
Note that on lead, 10.1.1.0/24 is on enp0s2, but on
iron, 10.1.1.0/24 is on enp0s99.
In Puppet, how would I get the IP address(es) (or interface name)
corresponding to the subnet 10.1.1.0/24? Using $::ipaddress_enp0s2
clearly won't work, as the interface name is different across machines.
What I want is something like $::ipaddress_10_1_1_0 (with the value
10.1.1.11 on lead and the value 10.1.1.12 on iron).
For reference: In Ansible, I would do something like:
- shell: "ip route get 10.1.1.0 | awk '{print $6}'"
  register: ipaddr
- debug:
    msg: "{{ ipaddr.stdout }} is my IP on 10.1.1.0/24"
You will need to take the same approach in Puppet, creating your own custom function or fact.
Puppet Labs actually has an implementation of exactly this in their puppetlabs-openstack module:
require "ipaddr"
module Puppet::Parser::Functions
newfunction(:ip_for_network, :type => :rvalue, :doc => <<-EOS
Returns an ip address for the given network in cidr notation
ip_for_network("127.0.0.0/24") => 127.0.0.1
EOS
) do |args|
addresses_in_range = []
range = IPAddr.new(args[0])
facts = compiler.node.facts.values
ip_addresses = facts.select { |key, value| key.match /^ipaddress/ }
ip_addresses.each do |pair|
key = pair[0]
string_address = pair[1]
ip_address = IPAddr.new(string_address)
if range.include?(ip_address)
addresses_in_range.push(string_address)
end
end
return addresses_in_range.first
end
end
Rather than making a custom fact, this loops over the existing facts and finds ones that look like IP addresses.
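A hedged usage sketch, assuming the function file is placed under lib/puppet/parser/functions/ in a module on the modulepath (the notify resource is just for illustration):
# Quick one-off test from the shell
puppet apply -e 'notify { ip_for_network("10.1.1.0/24"): }'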

Configure local virtual network with multihoming

I need to test an application with more than 64K connections, and I want to use a single host.
Today I start a server that listens on 127.0.0.1 and connect to it from the same machine, but of course that is limited to about 64K connections.
I want to simulate a situation where I have one server and many clients connecting to that single server on a specific single IP.
Server Listen: 1.2.3.4
Client Connect to 1.2.3.4 From 2.1.2.1
Client Connect to 1.2.3.4 From 2.1.2.2
Client Connect to 1.2.3.4 From 2.1.2.3
Client Connect to 1.2.3.4 From 2.1.2.4
So I need to set up a virtual network with multihoming, so that clients can connect from several addresses to a server listening on a single address.
How can this be configured on Linux?
Step 1: Add addresses to your loopback; I will add 10.0.0.1, 10.0.0.2, 10.0.0.3, 10.0.0.4 and 10.0.0.5 to lo
[mpenning@tsunami ~]$ ip addr show lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
[mpenning@tsunami ~]$ sudo ip addr add 10.0.0.1/24 dev lo
[mpenning@tsunami ~]$ sudo ip addr add 10.0.0.2/24 dev lo
[mpenning@tsunami ~]$ sudo ip addr add 10.0.0.3/24 dev lo
[mpenning@tsunami ~]$ sudo ip addr add 10.0.0.4/24 dev lo
[mpenning@tsunami ~]$ sudo ip addr add 10.0.0.5/24 dev lo
[mpenning@tsunami ~]$ sudo ip addr show lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet 10.0.0.1/24 scope global lo
inet 10.0.0.2/24 scope global secondary lo
inet 10.0.0.3/24 scope global secondary lo
inet 10.0.0.4/24 scope global secondary lo
inet 10.0.0.5/24 scope global secondary lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
[mpenning@tsunami ~]$
Step 2: Increase max connections
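(A rough sketch of the kind of tuning this step implies; the exact values below are illustrative, not recommendations:)
# Widen the ephemeral port range used for outgoing connections
sysctl -w net.ipv4.ip_local_port_range="1025 65535"
# Raise the system-wide and per-shell file descriptor limits
sysctl -w fs.file-max=1048576
ulimit -n 1048576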
Step 3: Start lots of connections...
from socket import socket, AF_INET, SOCK_STREAM
import select

class Server(object):
    def __init__(self, addr="0.0.0.0", port=2525):
        self.listen_addr = addr
        self.listen_port = port
        self.server = socket(AF_INET, SOCK_STREAM)
        self.server.setblocking(0)
        self.server.bind(self.socket_tuple)
        self.server.listen(5)
    def __repr__(self):
        return "TCP Server listening on %s:%s" % (self.listen_addr,
            self.listen_port)
    @property
    def socket_tuple(self):
        return (self.listen_addr, self.listen_port)
    def close(self):
        self.server.close()

class Client(object):
    def __init__(self, addr="0.0.0.0"):
        self.listen_addr = addr
        self.server_addr = None
        self.server_port = None
        self.client = socket(AF_INET, SOCK_STREAM)
        self.client.setblocking(0)
        finished = False
        while not finished:
            try:
                ### The point of my answer is here...
                self.client.bind((addr, 0))  # <- Bind to specific IP and rnd port
                finished = True
            except:
                pass
    def __repr__(self):
        return "TCP Client %s->%s:%s" % (self.listen_addr,
            self.server_addr, self.server_port)
    def connect(self, socket_tuple=("0.0.0.0", 0)):
        self.server_addr = socket_tuple[0]
        self.server_port = socket_tuple[1]
        self.client.connect_ex(socket_tuple)
    def close(self):
        self.client.close()

READ_ONLY = select.POLLIN | select.POLLPRI | select.POLLHUP | select.POLLERR
clients = list()
servers = list()
for server_addr, port in [('10.0.0.1', 2525), ('10.0.0.2', 2526)]:
    ss = Server(addr=server_addr, port=port)
    servers.append(ss)
    for client_addr in ['10.0.0.3', '10.0.0.4', '10.0.0.5']:
        for ii in xrange(0, 25000):
            cc = Client(addr=client_addr)
            connection = cc.connect(socket_tuple=ss.socket_tuple)
            finished = False
            print "  %s conns" % len(clients)
            while not finished:
                poller = select.poll()
                poller.register(ss.server, READ_ONLY)
                # Map file descriptors to socket objects...
                fd_to_socket = {ss.server.fileno(): ss.server,}
                events = poller.poll(1)
                for fd, flag in events:
                    s = fd_to_socket[fd]
                    if flag & (select.POLLIN|select.POLLPRI):
                        if s is ss.server:
                            # server socket is ready to accept a connection
                            connection, client_address = s.accept()
                            connection.setblocking(0)
                            fd_to_socket[connection.fileno()] = connection
                            poller.register(connection, READ_ONLY)
                            finished = True
            clients.append(cc)
            print cc
print "Total Clients:", len(clients)
for cc in clients:
    cc.close()
for ss in servers:
    ss.close()
Adjust the values above to your preferences; however, keep in mind that tuning a Linux server to accept so many connections may be challenging. But at this point, that isn't the question you are asking...
