Difference between TX and RX? - VoIP

With Asterisk I can set the volume of TX and RX, but what are those options? I've already googled this but can't find anything.
What is the difference between TX and RX?

RX is receive, i.e. the incoming direction.
TX is transmit, i.e. the outgoing direction.
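For example, one place these show up in Asterisk is the VOLUME dialplan function from func_volume, which sets gain per direction. The extension number, gain values, and peer below are made up for illustration:

exten => 100,1,Set(VOLUME(RX)=2)    ; gain applied in the receive direction
exten => 100,n,Set(VOLUME(TX)=2)    ; gain applied in the transmit direction
exten => 100,n,Dial(SIP/peer)       ; hypothetical peer, for illustration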

Related

RX Packets sent from dummy Linux network device driver are dropped

I am having two (maybe related) issues, but I will describe the one mentioned in the title first.
I am modifying the dummy network device driver to echo transmitted UDP packets back to the transmitting interface. In the ndo_start_xmit callback, I have added the following piece of code to echo back the transmitted packet:
struct sk_buff *skb2;
unsigned char *ptr;

skb2 = netdev_alloc_skb(dev, pkt_len + 2);
if (skb2) {
    ptr = skb_put(skb2, pkt_len);
    memcpy(ptr, skb->data, pkt_len);
    /* code to swap source and destination IP & ports and increment TX/RX counts here */
    netif_rx(skb2);
}
Now if I assign an IP to the interface after inserting this module, send packets on this interface, and then run ifconfig dummy0, I get the following output:
dummy0 Link encap:Ethernet HWaddr 42:cd:19:7d:52:3f
inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::40cd:19ff:fe7d:523f/64 Scope:Link
UP BROADCAST RUNNING NOARP MTU:1500 Metric:1
RX packets:4 errors:0 dropped:4 overruns:0 frame:0
TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:192 (192.0 B) TX bytes:258 (258.0 B)
Here we can see that along with the TX and RX packet counts, the RX dropped count is also increasing. Can someone point out the reason why packets are being dropped?
Now coming to the second issue: if I try to run tcpdump to capture the packets, as soon as a packet arrives on the RX side of this dummy0 interface, the whole virtual machine hangs (I guess the kernel panics). Is there something I am missing in the code that causes this issue?
I was able to solve both issues by adding:
skb2->protocol = eth_type_trans(skb2, dev);
just before netif_rx(skb2).
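Putting it together, a minimal sketch of the fixed echo path. The function name is illustrative, pkt_len is assumed to be skb->len, and the header swap is left as a placeholder as in the question:

#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>

static netdev_tx_t dummy_xmit(struct sk_buff *skb, struct net_device *dev)
{
    unsigned int pkt_len = skb->len;
    struct sk_buff *skb2 = netdev_alloc_skb(dev, pkt_len + 2);

    if (skb2) {
        unsigned char *ptr = skb_put(skb2, pkt_len);

        memcpy(ptr, skb->data, pkt_len);
        /* swap source and destination IP & ports here */

        /* Set the protocol and pull the Ethernet header so the stack
         * can dispatch the frame; without this the packet is counted
         * as an RX drop. */
        skb2->protocol = eth_type_trans(skb2, dev);
        netif_rx(skb2);
    }

    dev_kfree_skb(skb);
    return NETDEV_TX_OK;
}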

RealTek 8168 (r8169 linux driver) - Rx descriptor ring confusion

I'm working on control system stuff where the NIC interrupt is used as a trigger. This works very well for cycle times greater than 3 microseconds. Now I want to run some performance tests and measure the transmission time, i.e. the shortest time between two interrupts.
The sender sends 60-byte packets as fast as possible. The receiver should generate one interrupt per packet. I'm testing with 256 packets, the size of the Rx descriptor ring. The packet data is not handled during the test; only the interrupt is of interest.
The trouble is that reception is very fast, down to less than 1 microsecond between two interrupts, but only for around 70 interrupts / descriptors. Then the NIC sets the RDU (Rx Descriptor Unavailable) bit and stops receiving before reaching the end of the ring. The confusing thing is that when I increase the size of the Rx descriptor ring to 2048 (for example), the number of interrupts increases too (to around 800). I don't understand this behavior; I thought it should stop again after 70 interrupts.
It seems to be a timing problem, but why? I'm overlooking something, but what? Can somebody help me?
Thanks in advance!
What I think is that due to the large RX packet rate, your receive interrupts are being missed. Don't count interrupts to see how many packets were received; rely on the "own" bit of the receive descriptors.
Rx Descriptor Unavailable will be set only when you reach the end of the ring, unless you have made some error in programming the RX descriptors (e.g. forgot to set the ownership bit).
So if your RX ring has 256 descriptors, you should receive 256 packets without recycling RX descriptors.
If you are doubtful whether you are reaching the end of the ring or not, try setting the interrupt-on-completion bit of only the last RX descriptor. That way you receive only one interrupt, at the end of the ring.
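A minimal sketch of that approach, counting completed descriptors by their OWN bit instead of counting interrupts. The OWN bit position follows the RTL8168 datasheet, but the struct and field names here are illustrative, not the actual r8169 driver's:

#include <stdint.h>

#define DESC_OWN  (1u << 31)   /* set = descriptor still owned by the NIC */
#define RING_SIZE 256

struct rx_desc {
    volatile uint32_t opts1;   /* OWN, EOR, status, frame length */
    uint32_t opts2;
    uint64_t addr;             /* DMA address of the receive buffer */
};

/* Count descriptors the NIC has handed back to the host. */
static unsigned count_received(const struct rx_desc *ring)
{
    unsigned i, n = 0;

    for (i = 0; i < RING_SIZE; i++)
        if (!(ring[i].opts1 & DESC_OWN))  /* host owns it again: packet done */
            n++;
    return n;
}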

How to get RSSI in a linux AP (iw station dump doesn't work)

I'm trying to measure the RSSI of a station connected to my AP, which is running OpenWRT. I know that by using iw wlan0 station dump or iw wlan0 station get [MAC] I should be able to see it, though for some reason it doesn't show the RSSI on my AP.
Here is the output that I get:
~# iw wlan0 station get 40:b0:fa:c1:75:41
Station 40:b0:fa:c1:75:41 (on wlan0)
inactive time: 75 ms
rx bytes: 17588
rx packets: 134
tx bytes: 10771
tx packets: 76
tx retries: 3
tx failed: 0
tx bitrate: 6.0 MBit/s
rx bitrate: 6.0 MBit/s
authorized: yes
authenticated: yes
preamble: short
WMM/WME: yes
MFP: no
TDLS peer: no
I'm running hostapd and dnsmasq. Any ideas on how I can get the RSSI? Maybe somehow in C?
Thanks!
UPDATE
I was checking the code of iw, and for some reason NL80211_STA_INFO_SIGNAL comes up NULL. If anyone has an idea of why this could be happening, it would be a great help!
UPDATE 2
Apparently the source of iw in the project I was working on had been changed, and the line with the RSSI had for some reason been commented out. This change was never documented. Thank you to everyone who answered this question.
Sounds like either you are using a radio card/driver that does not provide the RSSI to the kernel, or you are using an out-of-date kernel module (package mac80211).
Did you try the command "iwinfo wlan0 assoc"? You might have better luck with this.
Although this was posted long ago, it may be helpful.
Did you try:
sudo iw dev wlan0 station get [MAC]
(change [MAC] to the STA's MAC address)?
There is a "signal" field, if that helps.
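Since the question asks about doing this in C: below is a rough sketch of querying a station's signal over nl80211 with libnl, roughly what iw station get does internally. Error handling is trimmed, and the interface name and MAC address (taken from the question) are hard-coded assumptions; treat it as a starting point, not a complete program.

#include <net/if.h>
#include <stdint.h>
#include <stdio.h>
#include <netlink/netlink.h>
#include <netlink/genl/genl.h>
#include <netlink/genl/ctrl.h>
#include <linux/nl80211.h>

static int station_cb(struct nl_msg *msg, void *arg)
{
    struct nlattr *tb[NL80211_ATTR_MAX + 1];
    struct nlattr *sinfo[NL80211_STA_INFO_MAX + 1];
    struct genlmsghdr *gnlh = nlmsg_data(nlmsg_hdr(msg));

    nla_parse(tb, NL80211_ATTR_MAX, genlmsg_attrdata(gnlh, 0),
              genlmsg_attrlen(gnlh, 0), NULL);
    if (!tb[NL80211_ATTR_STA_INFO])
        return NL_SKIP;
    if (nla_parse_nested(sinfo, NL80211_STA_INFO_MAX,
                         tb[NL80211_ATTR_STA_INFO], NULL))
        return NL_SKIP;
    if (sinfo[NL80211_STA_INFO_SIGNAL])     /* signed dBm, stored as u8 */
        printf("signal: %d dBm\n",
               (int8_t)nla_get_u8(sinfo[NL80211_STA_INFO_SIGNAL]));
    return NL_SKIP;
}

int main(void)
{
    unsigned char mac[6] = {0x40, 0xb0, 0xfa, 0xc1, 0x75, 0x41}; /* from the question */
    struct nl_sock *sk = nl_socket_alloc();
    int family, ifidx = if_nametoindex("wlan0");
    struct nl_msg *msg;

    genl_connect(sk);
    family = genl_ctrl_resolve(sk, "nl80211");

    msg = nlmsg_alloc();
    genlmsg_put(msg, 0, 0, family, 0, 0, NL80211_CMD_GET_STATION, 0);
    nla_put_u32(msg, NL80211_ATTR_IFINDEX, ifidx);
    nla_put(msg, NL80211_ATTR_MAC, 6, mac);

    nl_socket_modify_cb(sk, NL_CB_VALID, NL_CB_CUSTOM, station_cb, NULL);
    nl_send_auto(sk, msg);
    nl_recvmsgs_default(sk);

    nlmsg_free(msg);
    nl_socket_free(sk);
    return 0;
}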

Linux kernel and realtek rtl8139 driver

I'm trying to write a driver for the RTL8139 for Linux 2.6 from scratch. I've already written the TX path, but I have some problems with RX.
I put the NIC into promiscuous mode and am receiving RX IRQs. I set RBSTART to the physical address of memory allocated with kmalloc.
I don't know how to find out how many packets were received and how long they are.
I thought the ERBCR, CAPR, and CBR registers would tell me, but they are all 0.
Maybe I'm doing something wrong? How do I find out anything about the received packets?
I'll answer my own question.
The received packets are located starting at RBSTART. The first two bytes of a received packet are status bytes, and the next two are the length of the frame plus 4 bytes of CRC.
Maybe someone will find this info helpful.
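A small sketch of walking that layout, assuming the 4-byte per-packet header described above (status word, then length including CRC) and the dword alignment the RTL8139 datasheet specifies. The ROK bit value is per the datasheet; the helper itself is illustrative, and the new offset must also be written back to CAPR:

#include <linux/types.h>
#include <linux/printk.h>
#include <asm/byteorder.h>

#define RX_STATUS_OK 0x0001   /* ROK bit of the per-packet status word */

static void rtl8139_rx_walk(u8 *rx_buf, unsigned int *offset)
{
    u16 status = le16_to_cpup((__le16 *)(rx_buf + *offset));
    u16 len    = le16_to_cpup((__le16 *)(rx_buf + *offset + 2)); /* incl. CRC */

    if (status & RX_STATUS_OK) {
        u8 *frame = rx_buf + *offset + 4;   /* payload after the 4-byte header */
        unsigned int frame_len = len - 4;   /* strip the CRC */

        pr_info("rx: frame %p, %u bytes\n", frame, frame_len);
        /* normally: allocate an skb, copy the frame, netif_rx() it */
    }
    /* advance to the next packet, dword-aligned, then update CAPR
     * so the NIC can reuse the buffer space */
    *offset = (*offset + len + 4 + 3) & ~3;
}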
On receiving a packet, the data coming in from the line is stored in the receive FIFO. When the Early Receive Threshold is met, the data is moved from the FIFO to the receive buffer.
So, once you get an interrupt, you need to check the Interrupt Status Register for ROK. Then check the Early RX Status Register, which gives you the status of the received packet. If EROK is set, check the receive buffer status for ROK. Check for any errors in the ISR and ERSR. Also check your RX Configuration Register for the RX FIFO threshold configuration and the RX buffer length.
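A bare-bones sketch of that interrupt-status check, using the ISR offset and bit positions from the RTL8139 datasheet (ROK = bit 0, RER = bit 1). The ioaddr variable stands in for the register base mapped in your probe routine:

#include <linux/interrupt.h>
#include <linux/io.h>

#define RTL_ISR  0x3E          /* Interrupt Status Register offset */
#define ISR_ROK  (1 << 0)      /* receive OK */
#define ISR_RER  (1 << 1)      /* receive error */

static void __iomem *ioaddr;   /* mapped BAR, set up in the probe routine */

static irqreturn_t rtl8139_interrupt(int irq, void *dev_id)
{
    u16 isr = readw(ioaddr + RTL_ISR);

    writew(isr, ioaddr + RTL_ISR);   /* writing the bits back acks them */

    if (isr & ISR_ROK) {
        /* walk the RX buffer at RBSTART as in the answer above */
    }
    if (isr & ISR_RER) {
        /* receive error: check ERSR / the per-packet status */
    }
    return IRQ_HANDLED;
}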

lpcxpresso UART and DMA

Does anybody have experience with the UART and DMA on the LPC1769 LPCXpresso board without the baseboard? I can get it to transmit the FIFO buffer, but I am not able to get it to acknowledge any received chars. If it matters, I am using the MCB1700 DMA-UART example code. Is there anywhere I can get code examples to get this to work?
I am using the MCB1700 example code which comes with the LPCXpresso Code Red tools. It uses UART0 and UART1 through an RS-232 cable and two DMA channels, but the RS-232 connection that links the two ports is on the baseboard, which I don't have. I am just using wires on a breadboard to hook up UART0 TX to UART1 RX and UART1 TX to UART0 RX. I am notified that I received the 16-byte FIFO buffer from UART1, but when UART0 transmits to UART1, the flag saying I received the characters on the other DMA channel is never set. I am wondering if something is not being initialized correctly.
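This usually comes down to the RX side never generating DMA requests. Here is a rough sketch of the pieces that have to be in place for UART0 RX DMA on the LPC17xx, per UM10360. Register and field names follow the CMSIS LPC17xx.h header; the channel number, buffer, and transfer size are illustrative, and terminal-count/error interrupt setup is omitted:

#include "LPC17xx.h"

#define UART0_RX_REQ  9u       /* GPDMA request line for UART0 RX (UM10360) */
#define XFER_SIZE     16u

static uint8_t rx_buf[XFER_SIZE];

void uart0_rx_dma_init(void)
{
    LPC_SC->PCONP |= (1u << 29);              /* power up the GPDMA block */
    LPC_GPDMA->DMACConfig = 1u;               /* enable the DMA controller */

    /* FIFOs must be enabled and the UART put in DMA mode (FCR bit 3),
       otherwise received characters never raise a DMA request. */
    LPC_UART0->FCR = (1u << 0) | (1u << 3);

    LPC_GPDMACH0->DMACCSrcAddr  = (uint32_t)&LPC_UART0->RBR;
    LPC_GPDMACH0->DMACCDestAddr = (uint32_t)rx_buf;
    LPC_GPDMACH0->DMACCLLI      = 0;
    LPC_GPDMACH0->DMACCControl  = XFER_SIZE        /* bits 11:0, transfer size */
                                | (1u << 27);      /* DI: increment destination */
    LPC_GPDMACH0->DMACCConfig   = (1u << 0)                /* channel enable */
                                | (UART0_RX_REQ << 1)      /* source peripheral */
                                | (2u << 11);              /* peripheral-to-memory */
}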
