I have created two Standard HB120rs_v2 (RDMA-capable) virtual machines on Azure with the CentOS-based HPC image.
Running ifconfig on the two machines gives the IPs below.
I tried testing rping using the below commands
On instance 1 I executed:
rping -v -s -a 172.16.1.66 -C 10
On instance 2 I executed:
rping -v -c -a 172.16.1.66 -C 10
On doing this I get the error below:
cma event RDMA_CM_EVENT_ADDR_ERROR, error -110
waiting for addr/route resolution state 1
rping is not working. How can I connect the two Mellanox devices between the two VMs in Azure?
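For what it's worth, a first sanity check (a diagnostic sketch, assuming the CentOS HPC image ships the usual rdma-core/libibverbs utilities) is to confirm that the Mellanox device is visible to the verbs stack and that the address passed to rping -a actually belongs to the RDMA-capable interface:
# list the RDMA devices the verbs layer can see
ibv_devices
# show port state and link layer for each device
ibv_devinfo
# check which interface owns the address used with rping -a
ip -o addr show | grep 172.16.1.66
Error -110 is ETIMEDOUT, i.e. address resolution in the RDMA connection manager timed out.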
I am deploying Cassandra across two public networks. When the nodes are started I can see that all of them have joined the ring, and nodetool describecluster shows all nodes as reachable.
After some time the nodes can no longer connect to each other, and nodetool describecluster shows all nodes in the unreachable list.
FYI, I have used the public IP as broadcast_address and rpc_address; listen_address is the private IP.
One reason this can happen is that firewalls are sometimes configured to find and kill idle connections. The Linux kernel has default TCP "keepalive" settings that it can use to refresh long-running connections. The default values for these settings can be seen using sysctl:
$ sudo sysctl -a | grep keepalive
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200
In an effort to combat this problem, DataStax recommends adjusting these values in production deployments:
$ sudo sysctl -w \
net.ipv4.tcp_keepalive_time=60 \
net.ipv4.tcp_keepalive_probes=3 \
net.ipv4.tcp_keepalive_intvl=10
You can also add each of those values to your system's equivalent of the /etc/sysctl.conf file (minus the backslashes) and apply them via sysctl as well:
sudo sysctl -p /etc/sysctl.conf
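For reference, the persisted entries in /etc/sysctl.conf would look like this:
net.ipv4.tcp_keepalive_time=60
net.ipv4.tcp_keepalive_probes=3
net.ipv4.tcp_keepalive_intvl=10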
Server: Intel® Core™ i7-3930K 6 core, 64 GB DDR3 RAM, 2 x 3 TB 6 Gb/s HDD SATA3
OS (uname -a): Linux *** 3.2.0-4-amd64 #1 SMP Debian 3.2.65-1+deb7u1 x86_64 GNU/Linux
Redis server 2.8.19
The server runs redis-server, whose job is to serve requests from two PHP servers.
Problem: the server cannot cope with peak loads and either stops processing incoming requests or processes them very slowly.
Here are the optimizations I have tried on the server:
cat /etc/rc.local
echo never > /sys/kernel/mm/transparent_hugepage/enabled
ulimit -SHn 100032
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
cat /etc/sysctl.conf
vm.overcommit_memory=1
net.ipv4.tcp_max_syn_backlog=65536
net.core.somaxconn=32768
fs.file-max=200000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
cat /etc/redis/redis.conf
tcp-backlog 32768
maxclients 100000
These are the settings I found on redis.io and in various blogs.
Tests
redis-benchmark -c 1000 -q -n 10 -t get,set
SET: 714.29 requests per second
GET: 714.29 requests per second
redis-benchmark -c 3000 -q -n 10 -t get,set
SET: 294.12 requests per second
GET: 285.71 requests per second
redis-benchmark -c 6000 -q -n 10 -t get,set
SET: 175.44 requests per second
GET: 192.31 requests per second
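Note that redis-benchmark's -n flag is the total number of requests for the whole run, so with -n 10 each run above issues only ten requests in total; a larger request count gives more stable numbers, for example:
redis-benchmark -c 1000 -q -n 100000 -t get,set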
As the number of clients increases, query performance drops, and worst of all, redis-server stops processing incoming requests and the PHP servers throw dozens of exceptions such as:
Uncaught exception 'RedisException' with message 'Connection closed' in [no active file]:0\n
Stack trace:\n
#0 {main}\n thrown in [no active file] on line 0
What should I do? What else can I optimize? How many clients should a machine like this be able to handle?
Thank you!
I have been using Wireshark to analyse the packets of socket programs. Now I want to see the traffic of other hosts. I found that I need to use monitor mode, which is only supported on Linux, so I tried it, but I couldn't capture any packets transferred on my network; it reports 0 packets captured.
Scenario:
I have a network consisting of 50+ hosts (all running Windows except mine), and my IP address is 192.168.1.10. When I initiate communication with any 192.168.1.xx host, the captured traffic is shown.
But my requirement is to monitor the traffic between 192.168.1.21 and 192.168.1.22 from my host, i.e. from 192.168.1.10.
1: Is it possible to capture the traffic as I described?
2: If it is possible, is Wireshark the right tool for it (or should I use a different one)?
3: If it is not possible, why not?
Just adapt this a bit with your own filters and IPs (on the local host):
ssh -l root <REMOTE HOST> tshark -w - not tcp port 22 | wireshark -k -i -
or using bash:
wireshark -k -i <(ssh -l root <REMOTE HOST> tshark -w - not tcp port 22)
You can use tcpdump instead of tshark if needed:
ssh -l root <REMOTE HOST> tcpdump -U -s0 -w - -i eth0 'not port 22' |
wireshark -k -i -
You are connected to a switch which is "switching" traffic. It bases the traffic you see on your mac address. It will NOT send you traffic that is not destined to your mac address. If you want to monitor all the traffic you need to configure your switch to use a "port mirror" and plug your sniffer into that port. There is no software that you can install on your machine that will circumvent the way network switching works.
http://en.wikipedia.org/wiki/Port_mirroring
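For completeness, a rough sketch of what a mirror (SPAN) port configuration looks like on a Cisco IOS switch, from global configuration mode (vendor syntax differs, and the port numbers here are purely hypothetical):
! copy everything seen on the two monitored ports to the port the sniffer is plugged into
monitor session 1 source interface GigabitEthernet0/1
monitor session 1 source interface GigabitEthernet0/2
monitor session 1 destination interface GigabitEthernet0/24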
Facing a peculiar problem when doing load testing on my laptop with 2000 concurrent users using CometD, following all the steps in http://cometd.org/documentation/2.x/howtos/loadtesting.
These tests run fine for about 1000 concurrent clients.
But when I increase the load to about 2000 CCUs, the terminal just shuts down.
Any idea what's happening here?
BTW, I have applied all the OS-level settings as per the site, i.e.:
# ulimit -n 65536
# ifconfig eth0 txqueuelen 8192 # replace eth0 with the ethernet interface you are using
# /sbin/sysctl -w net.core.somaxconn=4096
# /sbin/sysctl -w net.core.netdev_max_backlog=16384
# /sbin/sysctl -w net.core.rmem_max=16777216
# /sbin/sysctl -w net.core.wmem_max=16777216
# /sbin/sysctl -w net.ipv4.tcp_max_syn_backlog=8192
# /sbin/sysctl -w net.ipv4.tcp_syncookies=1
Also, I have noticed this happens even when I run load tests for other platforms. I know this has to be something related to the OS, but I cannot figure out what it could be.
Has the ulimit command been executed correctly? I read something about it in the Ubuntu forum archive and in an Ubuntu Apache problem thread.
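If the file-descriptor limit is the problem, one common way (an illustrative sketch, not something from the CometD guide) to make it persistent for login sessions rather than only the current shell is /etc/security/limits.conf:
# /etc/security/limits.conf
*    soft    nofile    65536
*    hard    nofile    65536
The new limit applies after logging in again; daemons started outside a login session need the limit set in their own init script.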
I'm writing a testing tool that requires known traffic to be captured from a NIC (using libpcap), then fed into the application we are trying to test.
What I'm attempting to set-up is a web server (in this case, lighttpd) and a client (curl) running on the same machine, on an isolated test network. A script will drive the entire setup, and the goal is to be able to specify a number of clients as well as a set of files for each client to download from the web server.
My initial approach was to simply use the loopback (lo) interface... run the web server on 127.0.0.1, have the clients fetch their files from http://127.0.0.1, and run my libpcap-based tool on the lo interface. This works OK, apart from the fact that the loopback interface doesn't emulate a real Ethernet interface. The main problem is that the packets are all of inconsistent sizes... 32 kbytes and bigger, and somewhat random... it's also not possible to lower the MTU on lo (well, you can, but it has no effect!).
I also tried running it on my real interface (eth0), but since it's an internal web client talking to an internal web server, traffic never leaves the interface, so libpcap never sees it.
So then I turned to tun/tap. I used socat to bind two tun interfaces together with a TCP connection, so in effect I had:
10.0.1.1/24 <-> tun0 <-socat-> tcp connection <-socat-> tun1 <-> 10.0.2.1/24
This seems like a really neat solution... tun/tap devices emulate real Ethernet devices, so I can run my web server on tun0 (10.0.1.1) and my capture tool on tun0, and bind my clients to tun1 (10.0.2.1)... I can even use tc to apply shaping rules to this traffic and create a virtual WAN inside my Linux box... but it just doesn't work...
Here are the socat commands I used:
$ socat -d TCP-LISTEN:11443,reuseaddr TUN:10.0.1.1/24,up &
$ socat TCP:127.0.0.1:11443 TUN:10.0.2.1/24,up &
Which produces 2 tun interfaces (tun0 and tun1), with their respective IP addresses.
If I run ping -I tun1 10.0.1.1, there is no response, but when I run tcpdump -n -i tun0, I see the ICMP echo requests making it to the other side, just no sign of the response coming back.
# tcpdump -i tun0 -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tun0, link-type RAW (Raw IP), capture size 65535 bytes
16:49:16.772718 IP 10.0.2.1 > 10.0.1.1: ICMP echo request, id 4062, seq 5, length 64
<--- insert sound of crickets here (chirp, chirp)
So am I missing something obvious, or is this the wrong approach? Is there something else I can try (e.g. two physical interfaces, eth0 and eth1)?
The easiest way would be just to use two machines, but I want all of this self-contained, so it can all be scripted and automated on a single machine, without any other dependencies...
UPDATE:
There is no need for the two socat processes to be connected with a TCP connection; it's possible (and preferable for me) to do this:
socat TUN:10.0.1.1/24,up TUN:10.0.2.1/24,up &
The same problem still exists though...
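One thing worth checking in this same-host setup (a diagnostic sketch only, not necessarily the whole story) is how the kernel routes these addresses, since it owns both of them locally:
ip route get 10.0.1.1
If this reports a "local" route via lo, traffic to that address is handled internally by the host rather than being sent back out through the tun devices, which would explain replies never showing up on tun0.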
OK, so I found a solution using Linux network namespaces (netns). There is a helpful article about how to use it here: http://code.google.com/p/coreemu/wiki/Namespaces
This is what I did for my setup....
First, download and install CORE: http://cs.itd.nrl.navy.mil/work/core/index.php
Next, run this script:
#!/bin/sh
core-cleanup.sh > /dev/null 2>&1
ip link set vbridge down > /dev/null 2>&1
brctl delbr vbridge > /dev/null 2>&1
# create a server node namespace container - node 0
vnoded -c /tmp/n0.ctl -l /tmp/n0.log -p /tmp/n0.pid > /dev/null
# create a virtual Ethernet (veth) pair, installing one end into node 0
ip link add name veth0 type veth peer name n0.0
ip link set n0.0 netns `cat /tmp/n0.pid`
vcmd -c /tmp/n0.ctl -- ip link set n0.0 name eth0
vcmd -c /tmp/n0.ctl -- ifconfig eth0 10.0.0.1/24 up
# start web server on node 0
vcmd -I -c /tmp/n0.ctl -- lighttpd -f /etc/lighttpd/lighttpd.conf
# create client node namespace container - node 1
vnoded -c /tmp/n1.ctl -l /tmp/n1.log -p /tmp/n1.pid > /dev/null
# create a virtual Ethernet (veth) pair, installing one end into node 1
ip link add name veth1 type veth peer name n1.0
ip link set n1.0 netns `cat /tmp/n1.pid`
vcmd -c /tmp/n1.ctl -- ip link set n1.0 name eth0
vcmd -c /tmp/n1.ctl -- ifconfig eth0 10.0.0.2/24 up
# bridge together nodes using the other end of each veth pair
brctl addbr vbridge
brctl setfd vbridge 0
brctl addif vbridge veth0
brctl addif vbridge veth1
ip link set veth0 up
ip link set veth1 up
ip link set vbridge up
This basically sets up two virtual, isolated, namespaced networks on your Linux host, in this case node 0 and node 1. A web server is started on node 0.
All you need to do now is run curl on node 1:
vcmd -c /tmp/n1.ctl -- curl --output /dev/null http://10.0.0.1
And monitor the traffic with tcpdump:
tcpdump -s 1514 -i veth0 -n
This seems to work quite well... still experimenting, but looks like it will solve my problem.
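For anyone who wants the same topology without installing CORE, a minimal sketch using plain iproute2 network namespaces should also work (the namespace names and addresses below are just illustrative):
# create two namespaces
ip netns add n0
ip netns add n1
# one veth pair per namespace; the peer ends stay in the root namespace
ip link add name veth0 type veth peer name veth0p
ip link add name veth1 type veth peer name veth1p
ip link set veth0p netns n0
ip link set veth1p netns n1
# assign addresses and bring up the namespaced ends
ip netns exec n0 ip addr add 10.0.0.1/24 dev veth0p
ip netns exec n0 ip link set veth0p up
ip netns exec n1 ip addr add 10.0.0.2/24 dev veth1p
ip netns exec n1 ip link set veth1p up
# bridge the root-namespace ends together
brctl addbr vbridge
brctl addif vbridge veth0
brctl addif vbridge veth1
ip link set veth0 up
ip link set veth1 up
ip link set vbridge up
# quick sanity check, then capture on veth0 exactly as above
ip netns exec n1 ping -c 1 10.0.0.1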