How to limit the log size assigned to each device/host in Graylog?

I use Graylog for log management. Logs are sent from multiple hosts to Graylog, and I want to:
1- limit the storage allocated to each host
2- limit the number of logs received from any host
e.g.:
1- if the total size is 250 GB, the maximum size of host1's logs is 100 GB, host2's 100 GB, and host3's 50 GB
2- if Graylog processes 5000 msg/s, the maximum log rate host1 can send is 3000 msg/s, host2 1500 msg/s, and host3 100 msg/s

You can use an individual index set for each device you have. Within an index set, you can configure the rotation and retention strategies for the data it contains.
See http://docs.graylog.org/en/2.4/pages/configuration/index_model.html for more details.
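A minimal sketch of that setup for the sizes in the question, assuming the hosts can be told apart by the source field (all names and numbers are examples to adapt):
1- Create one stream per host (System → Streams) with a rule such as: field source, type "match exactly", value host1.
2- Create one index set per host (System → Indices) and assign it to that stream, e.g. for host1: rotation strategy "Index Size" with a max index size of 10 GB, and retention strategy "Delete" keeping at most 10 indices (≈ 100 GB in total).
As far as I know, index sets only cap storage per host; Graylog has no built-in per-host msg/s quota, so the rate limits in the question would have to be enforced on the senders.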

Related

How do I apply a traffic limit to those connecting from a specific port with iptables or tc?

My CentOS server accepts TCP connections on port 643 and UDP connections on port 6194. I want to add a 1-hour drop rule for each IP address that consumes 50 MB of traffic on either of these ports.
Can I do this using iptables or tc? If so, how? I don't know much about the subject; can you help me, please?
It is possible to limit incoming and outgoing bandwidth and latency with tc (Traffic Control). This means you can control only the throughput, i.e. the amount of data over time.
According to your description, 50 MB/hr works out to about 14 kB/s, so you would need to set something like 125 kbit/s for your rate. Since this is a bandwidth limitation that ensures only a specific amount of traffic can pass per unit of time, there is no time limitation.
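For example, a sketch of such a limit with a tbf qdisc for outgoing traffic and a policing filter for incoming traffic (eth0 and the burst/latency values are assumptions to adapt):
# Outgoing: shape egress on eth0 to ~125 kbit/s
tc qdisc add dev eth0 root tbf rate 125kbit burst 10kb latency 70ms
# Incoming: police ingress to the same rate; packets above it are dropped
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 match ip src 0.0.0.0/0 police rate 125kbit burst 10k drop flowid :1
Note these commands limit the whole interface; matching only ports 643/6194 would need u32 match rules on the destination port instead of the catch-all above.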
Further questions and answers on this topic:
Limiting interface bandwidth with tc
Limit network bandwith for an IP
How to limit network bandwidth
A more advanced solution could be:
Rate limit network but allow bursting per TCP connection before limiting
Even though it is possible to cut off TCP/IP connections, for example with the cutter tool, or to set a block time with iptables, I am not aware of any production-ready solution for controlling the duration of a network session. You may also have a look at wondershaper or trickle.
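If an approximation is acceptable, the hashlimit match in iptables can police a per-source-IP byte rate, which roughly maps the 50 MB/hour budget to a sustained ~14 kB/s. A sketch, assuming a kernel/iptables recent enough for byte-based hashlimit; it drops traffic above the rate rather than enforcing an exact hourly quota with a one-hour ban:
# Drop packets from any source IP that exceeds ~14 kB/s on the given port
iptables -A INPUT -p tcp --dport 643 -m hashlimit --hashlimit-mode srcip --hashlimit-above 14kb/s --hashlimit-name tcp643 -j DROP
iptables -A INPUT -p udp --dport 6194 -m hashlimit --hashlimit-mode srcip --hashlimit-above 14kb/s --hashlimit-name udp6194 -j DROP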

How to limit just the incoming traffic rate using netem?

How can I limit the incoming bandwidth to, say, 1 Mbps using netem? I have searched for tc commands, but I could not understand whether the commands I found limit incoming traffic, outgoing traffic, or both.
In my case I just need my incoming bandwidth to be set at 1 Mbps for 2 minutes, then at 2 Mbps for the next 2 minutes, and so on, without any delay or other impairment on the packets.
I will be using Google's QUIC protocol instead of TCP.
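One common approach, since tc qdiscs only shape egress, is to redirect ingress traffic through an ifb device and apply netem there; a sketch assuming eth0 and the ifb module (names and rates are examples):
# Redirect incoming traffic on eth0 to ifb0
modprobe ifb
ip link set dev ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 action mirred egress redirect dev ifb0
# Rate-limit what arrives, then switch the rate in place every 2 minutes
tc qdisc add dev ifb0 root netem rate 1mbit
sleep 120
tc qdisc change dev ifb0 root netem rate 2mbit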

Linux TCP: high Send-Q on sender, zero Recv-Q on receiver

How can it be that:
There is a TCP socket between two machines
After some successful bidirectional communication, the sender application is stuck writing to the socket and the receiver is stuck reading from it
netstat reports high Send-Q (a few megabytes) for the socket on the sender (and the value does not change even after a couple of hours of waiting)
netstat reports zero Recv-Q for the socket on the receiver
tcpdump reports that the only activity on the socket is a periodic (every two minutes) ACK with no data from the sender and an immediate ACK response with no data from the receiver
Why doesn't the sender machine attempt to send queued data to the receiver?
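(For reference, the queue, retransmit, and timer state of such a socket can also be inspected with ss; the port below is a placeholder:
ss -tmio '( sport = :5000 or dport = :5000 )'
where -m shows socket memory, -i internal TCP counters such as unacked/retrans, and -o pending timers.)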
In my case, the client was writing data in 8 kB chunks and the server was reading 8 kB at a time, then writing it to RAID0 disks. I faced a similar situation when uploading large files, and increasing the amount of data read from the socket on the server side helped: I bumped the internal read buffer from 8 kB to 1 MB. I don't know for sure whether the cause was the RAID or TCP, but it is another thing you might want to try.
This is more likely caused by some other problem, but the following might help if you haven't tried it yet (these numbers are examples; find your own):
Estimate your sender's and receiver's file-system read/write speeds as well as the network speed, and set an appropriate bandwidth limit in rsync: --bwlimit=1024 (1024 kB/s)
If the sender and receiver have dedicated NICs on this local network, do yourself a favor and increase the MTU on those NICs: ifconfig eth1 mtu 65744
Increase the sender's transmission queue length: ifconfig eth1 txqueuelen 4096
Increase the kernel send/receive memory: add these to the /etc/sysctl.conf file
net.core.wmem_max=16777216
net.core.rmem_max=16777216
net.ipv4.tcp_rmem=4096 262144 16777216
net.ipv4.tcp_wmem=4096 262144 16777216
Run sysctl -p afterwards.
If you rsync a very large file system, make sure fs.file-max is large enough. To check it: sysctl fs.file-max
To increase it, add a line fs.file-max=327679 to the file /etc/sysctl.conf,
and as your rsync user, run: ulimit -n 327679
I had the same problem; maybe the conntrack entry has been deleted on the receiver side. Check your
/proc/net/nf_conntrack file: if there is no information about this socket there, that is the problem.
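For example (the sender address is a placeholder):
grep 10.0.0.5 /proc/net/nf_conntrack   # raw table; requires the nf_conntrack module to be loaded
conntrack -L -s 10.0.0.5               # the same check with conntrack-tools installed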
See also: Connection Reset By Peer - with driver 2.8.0 and mongo 4.0.9 on a k8s cluster

apr_socket_recv: connection reset by peer (104) nodejs Ubuntu 14.10 x64 8GB RAM 4 Core (VPS)

I am working on a project in Node.js. The project is about location (GPS) tracking. The ultimate target for my project is to serve 1M concurrent requests. What I have done so far:
Created a server in Node.js listening on port 8000
An HTML document with a Google map to show user positions/GPS locations
A single socket connection between the server and the HTML document to pass location information
An API to receive user locations from client devices (it can be a mobile app)
Once the server receives a user location via the API mentioned above, it emits that information to the client (the HTML document) via the socket.
It is working well.
Problem
I am using apachebench to load-test my server. When I increase the concurrency, the benchmark frequently breaks with the error
apr_socket_recv: Connection reset by peer (104)
How can I solve this? What is the actual cause of this problem?
Note: if I run the same server on my local Windows machine, it serves 20K requests successfully.
I have changed
ulimit -n 9999999
and the soft and hard open-file limits to 1000000;
neither solves my problem.
Please help me understand the problem clearly. How can I increase the concurrency to 1M? Is it possible with some more hardware/network tuning?
Edit:
I am using SocketCluster on the server with the number of workers equal to the number of cores/CPUs, i.e. 4 workers in my case
CPU usage, per the htop command in the server terminal, is 45%
Memory usage is around 4 GB / 8 GB, and swap space is not used
The ab command I used to load the server was
ab -n 20000 -c 20000 "http://IP_ADDRESS:8000/API_INFO"
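(One avenue not covered above: resets at high concurrency are often the kernel's listen backlog overflowing rather than the application itself. A sketch of commonly raised limits for this symptom, values being examples rather than a confirmed fix:
sysctl -w net.core.somaxconn=65535
sysctl -w net.ipv4.tcp_max_syn_backlog=65535
sysctl -w net.ipv4.ip_local_port_range="1024 65535"
Node's server.listen() also accepts a backlog argument, which otherwise defaults to far below 20K.)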

Can DHCP server give out a lease time greater than requested by DHCP client?

I have a CentOS DHCP server configured to give out a lease of 20 minutes, but a client is requesting a 10-minute lease. Is it possible to configure dhcpd to give out a 20-minute lease even if the client requests a 10-minute one?
Yes, it can. The DHCP server will assign the lease based on these dhcpd options:
min-lease-time 120; <- the minimum lease length the server will assign is 120 seconds
max-lease-time 120; <- the maximum lease length the server will assign is 120 seconds
default-lease-time 120; <- the lease length assigned when the client does not request one is 120 seconds
You want to set all three options for every network range you set up; otherwise clients may get inconsistent leases.
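For example, a minimal dhcpd.conf subnet block that forces 20-minute leases even when the client asks for 10 minutes (subnet and range values are placeholders):
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  min-lease-time 1200;     # never assign less than 20 minutes, even if the client requests less
  default-lease-time 1200; # used when the client does not request a specific lease time
  max-lease-time 1200;     # never assign more than 20 minutes
}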
You can check the man pages on the Linux server where you have dhcpd configured:
man 5 dhcpd
or check https://linux.die.net/man/5/dhcpd.conf
See dhcpd.conf(5):
The min-lease-time statement
min-lease-time time;
Time should be the minimum length in seconds that will be assigned to a lease.
