I have a 'server' process that produces some logs. I want the user (or some other service) to be able to view that log stream (like tail -f), but I don't want to write those logs to the filesystem. Can I do this on Linux?
My first attempt was to use UDP on the loopback interface. The server sends packets to localhost on port 12345, and clients bind to that port to receive them. That doesn't work, because only one client can bind to the port at a time. Ah, but you might say: use SO_REUSEADDR! That lets two clients bind to the same port, but only one of them receives the messages.
Next up, I tried UDP multicast on the loopback interface. That one didn't get very far, as my kernel doesn't support multicast on the loopback interface. According to ifconfig:
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:186 errors:0 dropped:0 overruns:0 frame:0
TX packets:186 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:11904 (11.6 KiB) TX bytes:11904 (11.6 KiB)
Note the lack of MULTICAST (or BROADCAST, indeed), above.
Does anyone have any ideas? Could I use named pipes or Unix domain sockets to solve this?
I'd like to avoid anything that allows the (unprivileged) listeners to affect the (privileged) server. I'd rather drop logs than block the server, for example.
I'm doing all this in Python, if that makes any difference at all.
You could take a look at ZeroMQ. What you're describing is a need for a publisher/subscriber pattern, which is exactly the kind of thing ZeroMQ does really quite elegantly. It has the added advantage of being very flexible about what sort of transport is used underneath: IPC, TCP, etc. That makes putting bits of your program elsewhere on a network quite simple. Using ZeroMQ you will end up with very simple source code, the complexity all being hidden inside the zmq library. You could start by taking a look at this part of the guide.
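As a flavor of how little code that is, here is a minimal PUB/SUB sketch with pyzmq; the endpoint name and messages are placeholders. Conveniently for your requirements, a PUB socket never blocks on its subscribers: messages go to everyone connected and are silently dropped for anyone who isn't there or can't keep up.

# publisher (the privileged server)
import zmq
ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("ipc:///tmp/logs.zmq")   # hypothetical endpoint; "tcp://127.0.0.1:5556" also works
pub.send_string("a log line")     # dropped, not queued forever, if nobody is keeping up

# subscriber (any client, in a separate process)
import zmq
sub = zmq.Context().socket(zmq.SUB)
sub.connect("ipc:///tmp/logs.zmq")
sub.setsockopt_string(zmq.SUBSCRIBE, "")   # empty prefix: receive everything
print(sub.recv_string())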
You could also consider NanoMSG (the up-and-coming ZeroMQ-done-better), though I'm not sure it has Python bindings yet.
I can think of two generic approaches.
1) Using POSIX shared memory objects.
See the shm_open(3) manual page for more information.
Your application would create a shared memory object where it writes its log messages, and any client application can open the same shared memory object and read it. Although the POSIX shared memory API looks like a filesystem-based API, it's not.

Now, bear in mind that you're going to get just a chunk of memory of some size that you request. You'll have to figure out how your application will structure and manage this chunk of memory in some meaningful way that your client applications can parse, and poll for changed contents.
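In Python specifically, the standard library's multiprocessing.shared_memory module (3.8+) does the shm_open plumbing for you. A minimal sketch, with the segment name, size, and layout all placeholders you would design yourself:

from multiprocessing import shared_memory

# server: create a fixed-size segment (hypothetical name and size)
shm = shared_memory.SharedMemory(name="app_logs", create=True, size=65536)
shm.buf[0:5] = b"hello"   # you must invent your own record/ring-buffer format

# client: attach to the same segment by name and poll it for changes
peer = shared_memory.SharedMemory(name="app_logs")
print(bytes(peer.buf[0:5]))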
2) Using a local socket.

Your application bears the burden of opening and listening on a localhost socket, or a Unix domain socket on the filesystem, that any client can connect to, and your application simply writes its log messages to every client connection that currently exists.
This is a bit tricky to get right. Your application will need to accept new connections from clients whenever they come in, and write messages to all concurrent connections. It must also detect when a client gets stuck and stops reading from its end of the socket, which fills the socket's internal buffers, so that a blocking write would hang the main application. All writes must therefore be non-blocking writes, and the application should automatically close any socket whose buffer becomes full, etc.
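A rough sketch of that approach with a non-blocking Unix domain socket in Python; the socket path is a placeholder, and stuck or dead readers are simply dropped so the server never blocks:

import os, selectors, socket

SOCK_PATH = "/tmp/app-logs.sock"   # hypothetical path
try:
    os.unlink(SOCK_PATH)
except FileNotFoundError:
    pass

sel = selectors.DefaultSelector()
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCK_PATH)
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)
clients = []

def broadcast(line):
    for c in clients[:]:
        try:
            c.send(line)   # non-blocking: raises instead of hanging when a buffer is full
        except OSError:    # stuck, full, or disconnected client: drop it, never block
            clients.remove(c)
            c.close()

while True:
    for key, _ in sel.select(timeout=0.1):   # accept new listeners as they appear
        conn, _ = server.accept()
        conn.setblocking(False)
        clients.append(conn)
    broadcast(b"a log line\n")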
Take a look at the message queue pattern; popular solutions are RabbitMQ and Redis.

They both have Python clients!
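For instance, Redis pub/sub via the redis-py client fits the original question nicely, since PUBLISH is fire-and-forget and never blocks on slow or absent subscribers. A minimal sketch, assuming a Redis server on localhost and a hypothetical "logs" channel:

import redis   # third-party redis-py client

r = redis.Redis()                 # assumes Redis on localhost:6379
r.publish("logs", "a log line")   # no subscribers? the message is simply dropped

# in a listener process:
p = redis.Redis().pubsub()
p.subscribe("logs")
for message in p.listen():
    print(message)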
Related
Hello, I'm working on network programming and it's been hard to create logic that streams video from a single server to multiple clients with no delay.
That means I have to implement parallel execution during the stream to all connected clients, so that the images are displayed at the same time.
The reason this is important for my project is that I'm expecting a large number of clients (from 200 to approximately 700). With 10 clients the delay is negligible, but with 700 clients it could grow significantly, possibly to several minutes (not sure, but possible).
For those who don't know the cause of the delay: it's the for loop I'm using, which contains the send call for each frame, and that is serial execution.
I tried threading and multiprocessing and even scheduled functions, but everything got messy. Previously I was using socket & OpenCV, but for some reason that caused issues during the streaming; now I've switched to NetGear & VidGear, but I'm still struggling.
Hope someone can help.
PS: multicast is just not right for the job. After I tried it, I was receiving errors because of the length of the transmitted images; UDP will NOT accept more than 65,535 bytes per datagram.
Per your comment, everything is in the same network, and we have multicast for exactly your problem. Rather than sending the same data over and over to multiple hosts, you can send a single stream of traffic to many receivers.
You set up the clients to subscribe to a multicast group, normally a group in the 239.0.0.0/8 Organization-Local scope. Your server then sends its traffic to the same multicast group to which the clients have subscribed. The single traffic stream will be received and processed by every client subscribed to the multicast group.
Because multicast sends to multiple clients, you must use a connectionless transport protocol, e.g. UDP. Connection-oriented transport protocols, e.g. TCP, create connections between two hosts, so they cannot be used with multicast, which is one-to-many.
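In Python, the client-side group join and the server's plain UDP send look roughly like this (group, port, and payload are placeholders):

import socket, struct

GROUP, PORT = "239.1.2.3", 5004   # hypothetical group in 239.0.0.0/8

# client: bind the port, then join the multicast group
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
data, sender = rx.recvfrom(65535)

# server: an ordinary UDP sendto aimed at the group address
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)   # keep it on the local network
tx.sendto(b"frame data", (GROUP, PORT))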
By default, multicast only works in the same network. We do have multicast routing to send traffic to other networks, but it is very different from the usual unicast routing. Also, you cannot multicast on the public Internet, because the ISPs do not have multicast routing. You can multicast to a different site across the Internet by using a tunnel that supports multicast, e.g. GRE. Both the source and destination routers need to be configured for multicast routing, as do any routers in the path of the multicast packets. The Internet routers only see the unicast tunnel packets, not the multicast packets, which is how you can send the multicast across the Internet.
Hey #zaki-lazhari, I'm the creator of the VidGear Video Processing Python Project. NetGear is actually not the right API choice for a multi-client task; you should be using the WebGear API instead. WebGear can act as a powerful video streaming server that transfers live video frames to any web browser on the network. You can set up a WebGear server in a few lines of code, as follows:
# import required libraries
import uvicorn
from vidgear.gears.asyncio import WebGear

# various performance tweaks
options = {
    "frame_size_reduction": 40,
    "frame_jpeg_quality": 80,
    "frame_jpeg_optimize": True,
    "frame_jpeg_progressive": False,
}

# initialize WebGear app
web = WebGear(source="foo.mp4", logging=True, **options)

# run this app on a Uvicorn server at address http://0.0.0.0:8000/
uvicorn.run(web(), host="0.0.0.0", port=8000)

# close app safely
web.shutdown()
So every device on the same network (even a smartphone with any browser installed) can access real-time frames in its browser, without any extra dependencies. More code samples can be found here: https://abhitronix.github.io/vidgear/gears/webgear/advanced/
Hope it helps. Good luck!
I've written my own packet sniffer in Linux.
I open a socket with socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL)) and then process the Ethernet packets - unpacking ARP packets, IP packets (and ICMP / TCP / UDP packets inside those).
This is all working fine so far.
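(For reference, the Python equivalent of that setup, with the first bit of Ethernet unpacking, is only a few lines; Linux-only, and it needs root:)

import socket

ETH_P_ALL = 0x0003   # from <linux/if_ether.h>
s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
frame, ancillary = s.recvfrom(65535)
dst_mac, src_mac = frame[0:6], frame[6:12]
ethertype = int.from_bytes(frame[12:14], "big")   # 0x0800 = IPv4, 0x0806 = ARP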
Now I can read packets like this - and I can also inject packets by wrapping up a suitable Ethernet packet and sending it.
But what I'd like is a means to block packets - to consume them, as it were, so that they don't get further delivered into the system.
That is, if a TCP packet is being sent to port 80, then I can see the packet with my packet sniffer and it'll get delivered to the web server in the usual fashion.
But, basically, I'd like it that if I spot something wrong with the packet - not coming from the right MAC address, malformed in some way, or just breaking security policy - that I can just "consume" the packet, and it won't get further delivered onto the web server.
Because I can read packets and write packets - if I can also just block packets as well, then I'll have all I need.
Basically, I don't just want to monitor network traffic, but sometimes have control over it. E.g. "re-route" a packet by consuming the original incoming packet and then writing out a new slightly-altered packet to a different address. Or just plain block packets that shouldn't be being delivered at all.
My application is to be a general "network traffic management" program. Monitors and logs traffic. But also controls it too - blocking packets as a firewall, re-routing packets as a load balancer.
In other words, I've got a packet sniffer - but if it sniffs something that smells bad, then I'd like it to be able to stop that packet. Discard it early, so it's not further delivered anywhere.
(Being able to alter packets on the way through might be handy too - but if I can block, then there's always the possibility to just block the original packet completely, but then write out a new altered packet in its place.)
What you are looking for is libnetfilter_queue. The documentation is still incredibly bad, but the code in this example should get you started.
I used this library to develop a project that queued network packets and replayed them at a later time.
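If you're in Python, the third-party netfilterqueue bindings expose the same mechanism. A minimal sketch, assuming you first steer traffic into queue 1 with an iptables NFQUEUE rule (e.g. iptables -I INPUT -p tcp --dport 80 -j NFQUEUE --queue-num 1), and where looks_bad() stands in for your own policy check:

from netfilterqueue import NetfilterQueue   # third-party Python bindings

def inspect(pkt):
    data = pkt.get_payload()   # the raw IP packet bytes
    if looks_bad(data):        # hypothetical check: wrong MAC, malformed, policy violation...
        pkt.drop()             # consume the packet; it is never delivered further
    else:
        pkt.accept()           # hand it back to the kernel for normal delivery

nfq = NetfilterQueue()
nfq.bind(1, inspect)           # queue number must match the iptables rule
nfq.run()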
A bit of a tangent, but it was relevant when I was solving my own version of this problem: blocking raw packets is relatively complicated, so it might make sense to do it at a different layer instead. In other words, does your cloud provider let you set up firewall rules to drop specific kinds of traffic?
In my case it was easier to do, which is why I'm suggesting such a lateral solution.
Suppose an application is writing to a udp multicast, and all subscribers quit (or perhaps no processes ever register to read the multicast). Does anything go out on the wire?
The source host always sends the datagram. It is up to the router to decide whether there are group members on the other side and, if so, to forward the datagram; otherwise it drops it.
The packet will always be sent out. IGMP messages, which contain information about hosts joining/leaving multicast groups, are typically only processed by routers so they know where to route multicast traffic. So hosts generally don't have that information.
Even then, routers may not forward IGMP messages but may have static multicast routes set up to forward certain traffic anyway. In that situation, multicast traffic could pass through routers to an intended destination even in the absence of IGMP.
Regarding which interface(s) the source host sends on, that's application-defined behavior. The sending socket sets the IP_MULTICAST_IF or IPV6_MULTICAST_IF socket option to dictate which interface multicast traffic is sent out on. If this option is not set, the system chooses a default interface to send multicast packets on.
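In Python, pinning the egress interface looks like this; the local address and group are placeholders:

import socket

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# choose the outgoing interface by one of its local IPv4 addresses (hypothetical)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF, socket.inet_aton("192.168.1.10"))
tx.sendto(b"payload", ("239.1.2.3", 5004))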
I'm getting thousands of dropped packets on a Broadcom network card:
eth1 Link encap:Ethernet HWaddr 01:27:B0:14:DA:FE
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:2746252626 errors:0 dropped:1151734 overruns:0 frame:0
TX packets:4109502155 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:427998700000 (408171.3 Mb) TX bytes:3530782240047 (3367216.3 Mb)
Interrupt:40 Memory:d8000000-d8012700
Here is the installed version:
filename: /lib/modules/2.6.27.54-0.2-default/kernel/drivers/net/bnx2.ko
version: 1.8.0
license: GPL
description: Broadcom NetXtreme II BCM5706/5708/5709 Driver
The packets get dropped in bulks ranging from 500 to 5,000 packets, several times an hour. The server (running Postgres) is running fine; just the drops are annoying.
After trying lots of different things, I'm asking: How may I find out where the packets came from and why were they dropped?
A dropped packet means that the buffer that is used to store the packet for forwarding/processing is full. The act of looking into the packet's data for information implies that you have the data to look at in the first place (which you don't, because there was no room to store it).
A nice way around this, so you can see what data is being dropped, is to look through a dump of your traffic for the TCP retransmission requests leaving your server. When a TCP packet is missing, for whatever reason, your server is going to ask for it to be re-sent. The retransmit will give you the conversation context that you're looking for.
I'd actually suggest taking a look at the switch/router that your server is connected to. It will be able to give you a nice idea of the loss and throughput over the interface to your server, letting you diagnose, for example, if your card is too slow for the wire.
EDIT
This blog post cites a tool called dropwatch, which may give you some clues as well.
You may have run into https://www.novell.com/support/kb/doc.php?id=7007165.
quote:
Beginning with kernel 2.6.37, the meaning of the dropped packet count changed. Before, dropped packets were most likely due to an error. Now, the rx_dropped counter shows statistics for frames dropped because of:
Softnet backlog full -- (Measured from /proc/net/softnet_stat)
Bad / Unintended VLAN tags
Unknown / Unregistered protocols
IPv6 frames when the server is not configured for IPv6
If any frames meet those conditions, they are dropped before the protocol stack and the rx_dropped counter is incremented.
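If you want to check the softnet backlog counter mentioned above, the per-CPU values in /proc/net/softnet_stat are hex. A quick way to read them:

# column 0 = packets processed, column 1 = drops due to a full softnet backlog
with open("/proc/net/softnet_stat") as f:
    for cpu, line in enumerate(f):
        cols = line.split()
        print("cpu%d processed=%d dropped=%d" % (cpu, int(cols[0], 16), int(cols[1], 16)))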
(For the benefit of those that come to this via a search) I've seen the same problem (also with a bnx2 module, IIRC).
You might try turning off the irqbalance service. In my case, that completely stopped the drops.
Please also note that not long ago there were plenty of updates (RHEL 6) for irqbalance. Firmware updates should also be checked, for both the main system and the Ethernet board(s).
We were seeing this only on a very large subnet with a very large amount of broadcast/multicast activity. We weren't seeing it on the same equipment on a less noisy, but still very active, part of the network.
Potentially, setting the Ethernet ring buffer size for the NIC can also be of use. I know there were some sysctl alterations on that busy network...
I have several programs listening to the same multicast stream. I'm wondering: will this double the traffic compared with only one program listening, or is the traffic/bandwidth usage the same? Thanks!
The short answer is no, the amount of traffic is the same. I'll caveat that with "in most cases". Multicast packets are written to the wire using a MAC address constructed from the multicast group address. Joining a multicast group is essentially telling the NIC to listen to the appropriate MAC address. This makes each listener receive the same ethernet frame. The caveat has to do with how multicast routing may or may not work. If you have a multicast aware router then multicast traffic may traverse the router onto other networks if someone has joined the group on another subnet.
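For the curious, the IPv4 group-to-MAC mapping is mechanical: the fixed 01:00:5e prefix plus the low 23 bits of the group address. A quick illustration:

import socket

def mcast_mac(group):
    # RFC 1112 mapping: 01:00:5e + low 23 bits of the IPv4 group address
    b = socket.inet_aton(group)
    return "01:00:5e:%02x:%02x:%02x" % (b[1] & 0x7F, b[2], b[3])

print(mcast_mac("239.1.2.3"))   # -> 01:00:5e:01:02:03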
I recommend reading "TCP/IP Illustrated, Volume 1" if you plan on doing a lot of network programming. This is the best way to really understand how all of the protocols fit together.
Are the clients on the same network?
For wireless 802.11 multicast, it depends on the implementation of Multicast at the wireless access point.
Some wireless access points do multicast-to-unicast conversion at the data link layer and thus send the data separately to EACH client that has joined the multicast group.
If the AP is not doing unicast conversion, your network utilization generally does not increase.