debug *spanning-tree events* not showing up on 2960 router? - cisco

I'm doing my CCNA3 lab 5.5.1 in Packet Tracer. The lab asks for a debug spanning-tree events command on the 2960 router, but the command does not work. All of the routers have STP enabled.
Below is a capture from the switch that holds the root bridge. The command fails on all 3 switches, and the book clearly says to run it on all 3.
Any help would be greatly appreciated.
S2#show spanning-tree
VLAN0001
  Spanning tree enabled protocol ieee
  Root ID    Priority    32769
             Address     0001.643C.50E9
             This bridge is the root
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32769  (priority 32768 sys-id-ext 1)
             Address     0001.643C.50E9
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time  20

Interface        Role Sts Cost      Prio.Nbr Type
---------------- ---- --- --------- -------- --------------------------------
Fa0/1            Desg FWD 19        128.1    P2p
Fa0/2            Desg FWD 19        128.2    P2p
Fa0/6            Desg FWD 19        128.6    P2p
Fa0/11           Desg FWD 19        128.11   P2p
Fa0/18           Desg FWD 19        128.18   P2p
S2#debug ?
ip IP information
sw-vlan vlan manager
S2#debug spanning-tree events
^
% Invalid input detected at '^' marker.

This is an old thread, but it's probably a limitation of Packet Tracer. Judging from the output, this isn't a router but actually a switch, in which case the missing command is simply a Packet Tracer limitation.
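For what it's worth, on a real 2960 running IOS (outside Packet Tracer) the command is accepted; a minimal session would be something along these lines:
terminal monitor
debug spanning-tree events
undebug all
terminal monitor is only needed when you are connected over a vty rather than the console, and undebug all turns all debugging back off when you're done.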

interpret ntpd configuration used on raspberry pi

I am still very new to the ntpd program, so my question is relatively general. I understand that the program slowly adjusts the device's clock rate and syncs to server time through polling. In the result below, the IP address with a * in front is the actual time source, whereas a + marks a backup time source. The backup has a better offset than the actual time source. My question is: the configuration file lists 4 servers, so which server is the one the program is actually syncing with?
server 0.us.pool.ntp.org iburst
server 1.us.pool.ntp.org iburst
server 2.us.pool.ntp.org iburst
server 3.us.pool.ntp.org iburst
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+173.255.215.209 80.72.67.48      3 u   60   64  377   30.713   -0.762   2.808
-64.113.44.55    129.6.15.29      2 u  142   64  274   75.356    3.479   0.970
*144.172.126.201 129.7.1.66       2 u   62   64  375   47.863    2.927   0.418
+108.61.56.35    216.218.254.202  2 u    6   64  377   83.564    1.172   1.423
After some digging: when you run ntpq -pn, the * in the first column marks the peer the daemon is currently synced to (144.172.126.201 in the output above); the + entries are candidates it can fall back to.
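If you want to pull that out programmatically, a minimal sketch (assuming ntpq is on the PATH) would be:
ntpq -pn | awk '$1 ~ /^\*/ {print substr($1, 2)}'
which strips the leading * and prints the address of the peer ntpd is currently synchronized to.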

Traffic Shaping tc-htb, burst has no effect

I'm doing some tests to try to understand the tc-htb arguments. I'm using VMware Player (version 2.0.5) with Windows 7 as the host and Ubuntu (kernel 4.4.0-93) as the guest.
My plan is to use iperf to generate a known data stream (UDP, 100 Mbit/s) over localhost and then limit the bandwidth with tc-htb, monitoring the result with Wireshark.
Iperf setup:
server:
iperf -s -u -p 12345
client:
iperf -c 127.0.0.1 -u -p 12345 -t 30 -b 100m
Testing rate argument:
I start Wireshark and begin sending data with iperf; after 10 seconds I execute a script with the tc commands:
tc qdisc add dev lo root handle 1: htb
tc class add dev lo parent 1: classid 1:1 htb rate 50mbit ceil 75mbit
tc filter add dev lo protocol ip parent 1: prio 1 u32 match ip dport 12345 0xffff flowid 1:1
The I/O Graph in Wireshark shows that the bandwidth drops from 100 Mbit/s to 50 Mbit/s. Ok.
Testing burst argument:
I’m starting with the same bandwidth limitation as above and after another 10 sec I run a script with the command:
tc class change dev lo parent 1: classid 1:1 htb rate 50mbit ceil 75mbit burst 15k
In the I/O Graph I'm expecting a peak from 50 Mbit/s (the rate level) up to 75 Mbit/s (the ceil level), but the change command has no effect; the level stays at 50 Mbit/s.
I have also tested with larger burst values, with no effect. What am I doing wrong?
'ceil' specifies how much bandwidth a traffic class can borrow from its parent class when spare bandwidth is available from peer classes. However, when applied to a class sitting directly on the root qdisc there is no parent to borrow from, so specifying a ceil different from rate is meaningless there.
'burst' specifies the amount of data that is sent (at full link speed) from one class before stopping to serve another class, with the rate shaping achieved by averaging the bursts over time. If applied at the root with no child classes, it only affects the accuracy of that averaging (the smoothing) and does nothing to the true average rate.
Try adding child classes:
tc qdisc add dev lo root handle 1: htb
tc class add dev lo parent 1: classid 1:1 htb rate 100mbit
tc class add dev lo parent 1:1 classid 1:10 htb rate 50mbit ceil 100mbit
tc class add dev lo parent 1:1 classid 1:20 htb rate 50mbit ceil 75mbit
tc filter add dev lo protocol ip parent 1: prio 1 u32 match ip dport 12345 0xffff flowid 1:10
tc filter add dev lo protocol ip parent 1: prio 1 u32 match ip dport 54321 0xffff flowid 1:20
An iperf session to port 12345 should hit 100 Mbit/s, then both flows should drop to 50 Mbit/s each when an iperf session to port 54321 is started. Stop the iperf session to port 12345, and traffic to 54321 should climb to 75 Mbit/s (its ceil).
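A rough way to verify this (just a sketch, assuming the same iperf 2 syntax used in the question and that both servers run on the same box; the timings are arbitrary):
iperf -s -u -p 12345 &
iperf -s -u -p 54321 &
iperf -c 127.0.0.1 -u -p 12345 -b 100m -t 60 &
sleep 20
iperf -c 127.0.0.1 -u -p 54321 -b 100m -t 20
tc -s class show dev lo
tc -s class show dev lo prints bytes, packets and rate per class, so you can confirm which class each stream is actually hitting while the two streams overlap.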

How to exclude port ranges via ematch in Linux traffic control (tc)?

I am currently facing a problem with my code.
I am emulating the connection between two computers that are connected via an Ethernet bridge (Raspberry Pi, Raspbian), so I am able to influence parameters of this connection (bandwidth, latency and much more) via tc qdisc.
This works out fine, as you can see in the code down below.
But now to my problem:
I am also trying to exclude specific port ranges, meaning ports that aren't influenced by my given parameters (latency etc.).
For that I created two prio bands. Prio band 0 (higher priority) handles my port exclusion (directly under the parent root).
In prio band 1 (lower priority) I then apply a latency via netem.
The general traffic passes through my shaped prio band 1, while the remaining (excluded) traffic passes uninfluenced through prio band 0.
I don't get any kernel errors while executing my code, but after typing sudo tc filter show dev eth1 all I get back is filter parent 1: protocol ip pref 1 basic.
My match is not even mentioned. What did I do wrong?
Can you explain why I don't get my expected output?
THIS IS MY CODE (in the right order of execution):
PARENT ROOT
sudo tc qdisc add dev eth1 root handle 1: prio bands 2 priomap 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
This creates two priobands (1:1 and 1:2)
BAND 0 [PORT EXCLUSION | port 100 - 800]
sudo tc qdisc add dev eth1 parent 1:1 handle 10: tbf rate 512kbit buffer 1600 limit 3000
Creates a tbf (Token Bucket Filter) to set bandwidth
sudo tc filter add dev eth1 parent 1: protocol ip prio 1 handle 0x10 basic match "cmp(u16 at 0 layer transport lt 100) and cmp(u16 at 0 layer transport gt 800)" flowid 1:1
Creates a filter with a specific handle that excludes ports 100 to 800 from prio band 1 (the shaped packets) by steering them into band 0
BAND 1 [NET EMULATION]
sudo tc qdisc add dev eth1 parent 1:2 handle 20: tbf rate 1024kbit buffer 1600 limit 3000
Compare with tbf above
sudo tc qdisc add dev eth1 parent 20:1 handle 21: netem delay 200ms
Creates via netem a delay of 200ms
The question again:
My filter match is not even mentioned. What did I do wrong?
Can you explain why I don't get my expected output?
I appreciate any kind of help! Thanks for your efforts!
~rotsechs
It seems I simply have to ignore the missing filter output; nevertheless, it works perfectly.
I established an SSH connection to my Ethernet bridge (via MobaXterm) and then applied a delay of 400 ms to it. The console input slowed down as expected.
Finally I created the filter and excluded the port range from 20 to 24 (SSH uses port 22). The delay on my SSH connection disappeared immediately!
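For reference, a filter along these lines should achieve that exclusion. This is only a sketch reusing the basic/cmp ematch syntax from the question; the offsets and bounds are my assumptions (u16 at 2 in the transport header is the destination port, at 0 would be the source port, and gt 19 / lt 25 selects ports 20-24 inclusive):
sudo tc filter add dev eth1 parent 1: protocol ip prio 1 basic match "cmp(u16 at 2 layer transport gt 19) and cmp(u16 at 2 layer transport lt 25)" flowid 1:1
Packets that match are steered to flowid 1:1, i.e. prio band 0, which has no netem attached, so they bypass the delay.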

How to automate measuring of bandwidth usage between two hosts

I have an application with a TCP client and a server. I have set the client and server up on separate machines. Now I want to measure how much bandwidth is consumed (bytes sent and received) during a single run of the application. I have discovered that Wireshark is one tool that can give me this statistic; however, Wireshark seems to be GUI-dependent. What I want is a way to automate the measuring and reporting of this statistic. I don't care about the per-packet information Wireshark captures; I don't need it. Is there some way to run Wireshark so that all it does is write the total bytes sent and received between the two hosts to a file while the application is running on both ends?
Also, is there a better way to capture this statistic, through netstat, /proc/net/dev or any other tool?
Both my machines run Ubuntu 10.04 or later.
Bro is an appropriate tool for measuring connection-oriented statistics. You can either record a trace of your application's communication or analyze it in real time:
bro -r <trace>
bro -i <interface>
Thereafter, have a look at the connection log (conn.log) in the same directory for the number of bytes sent and received by the application. Specifically, you're interested in the TCP payload sizes, which conn.log exposes via the columns orig_bytes and resp_bytes. Here is an example:
bro-cut id.orig_h id.resp_h conn_state orig_bytes resp_bytes < conn.log | head
which yields the following output:
192.168.1.102 192.168.1.1 SF 301 300
192.168.1.103 192.168.1.255 S0 350 0
192.168.1.102 192.168.1.255 S0 350 0
192.168.1.103 192.168.1.255 S0 560 0
192.168.1.102 192.168.1.255 S0 348 0
192.168.1.104 192.168.1.255 S0 350 0
192.168.1.104 192.168.1.255 S0 549 0
192.168.1.103 192.168.1.1 SF 303 300
192.168.1.102 192.168.1.255 S0 - -
192.168.1.104 192.168.1.1 SF 311 300
Each row represents a single connection, transport-layer ports omitted. The last two columns represent the bytes sent by the originator (first column) and responder (second column). The column conn_state represents the connection status. Please refer to the documentation for all possible field values. Some important values are:
S0: Connection attempt seen, no reply.
S1: Connection established, not terminated.
SF: Normal establishment and termination. Note that this is the same symbol as for state S1. You can tell the two apart because for S1 there will not be any byte counts in the summary, while for SF there will be.
REJ: Connection attempt rejected.
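To get one total for a specific client/server pair (which is what the question is after), you can sum those two columns. A minimal sketch, assuming the host addresses from the example output above and a conn.log in the current directory:
bro-cut id.orig_h id.resp_h orig_bytes resp_bytes < conn.log \
    | awk '$1 == "192.168.1.102" && $2 == "192.168.1.1" { sent += $3; rcvd += $4 } END { print sent, rcvd }'
Connections without byte counts show up as "-" and are simply counted as 0 here.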

Traffic shaping with tc is inaccurate with high bandwidth and delay

I'm using tc with kernel 2.6.38.8 for traffic shaping. Limiting bandwidth works and adding delay works, but when shaping bandwidth together with delay, the achieved bandwidth is always much lower than the limit whenever the limit is above roughly 1.5 Mbps.
Example:
tc qdisc del dev usb0 root
tc qdisc add dev usb0 root handle 1: tbf rate 2Mbit burst 100kb latency 300ms
tc qdisc add dev usb0 parent 1:1 handle 10: netem limit 2000 delay 200ms
This yields a delay of 201 ms (from ping), but a capacity of just 1.66 Mbps (from iperf). If I eliminate the delay, the bandwidth is precisely 2 Mbps. If I specify a bandwidth of 1 Mbps and 200 ms RTT, everything works. I've also tried ipfw + dummynet, which yields similar results.
I've tried rebuilding the kernel with HZ=1000 in Kconfig; that didn't fix the problem. Other ideas?
It's actually not a problem; it behaves just as it should. Because you've added 200 ms of latency, the full 2 Mbps pipe isn't used to its full potential. I'd suggest studying the TCP/IP protocol in more detail, but here is a short summary of what is happening with iperf: your default window size might be, say, 3 packets (likely 1500 bytes each). You fill the pipe with 3 packets, but then have to wait for an acknowledgement to come back (this is part of the congestion control mechanism). Since you delay the sending by 200 ms, this takes a while. The window then doubles and you can send 6 packets, but again you wait 200 ms; then it doubles again, and so on. TCP throughput is bounded by window size divided by round-trip time, so by the time the window is fully open, the default 10-second iperf test is nearly over and your average bandwidth is obviously smaller.
Think of it like this:
Suppose you set your latency to 1 hour, and your speed to 2 Mbit/s.
2 Mbit/s requires (for example) 50 kbit/s of TCP ACKs. Because the ACKs take over an hour to reach the source, the source can't continue sending at 2 Mbit/s; the TCP window is still stuck waiting on the first acknowledgement.
Latency and bandwidth are more closely related than you might think (in TCP at least; UDP is a different story).
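One way to convince yourself the shaper itself is fine (a sketch; the window size and duration are arbitrary assumptions) is to run a longer iperf test with a larger TCP window, so the transfer spends most of its time with the window fully open:
iperf -s
iperf -c <server-ip> -t 60 -w 256k
With a 256 KB window and a 200 ms RTT, the window/RTT bound is about 10 Mbps, well above the limit, so the measured average should converge on the 2 Mbps tbf rate.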
