FreeRADIUS: how to exclude certain URLs from accounting packets (Cisco)

I have many users doing authentication + accounting; the accounting packets are sent from the network device [LNS] to the FreeRADIUS server.
In each accounting packet, the client reports how many KB were used in the current session.
Each user has a limit, the limit is decreased on each accounting packet, and I stop the user when the limit is reached.
How can I exclude certain URLs from being counted in the accounting packets?

You can't do that in RADIUS - it only receives what the NAS sends to it. You can do it on a few network access types that use queues for accounting, e.g. hotspots, by adding a walled-garden rule. But not L2TP - it reports the bytes seen on the interface.
The usual way to account for this is NetFlow, which exports accounting data for each connection.

Related

FreeRADIUS in combination with a vulnerability scan / software status check

What I have:
I am running a FreeRADIUS server fully configured the way I need it. Everything works just fine right now.
What I need:
I need RADIUS to put devices into a separate VLAN before authentication and to run a vulnerability scan (Nessus/OpenVAS etc.) on the devices in this VLAN to check their software status (antivirus etc.).
If a device passes the test, authentication should proceed normally.
If it fails, it should be put into a third (fourth, if you count the unauth VID) VLAN.
Can someone tell me if this is doable in FreeRADIUS?
Thanks in advance for your answers.
Yes. But this is a very broad question and is dependent on the networking equipment being used. I'll give you an overview of how I'd design such a system.
In general, you'll have an easier time if you can use the same DHCP server/IP range for your NAC and full access VLAN. That means you don't have to signal the higher networking layers in the client that there's been a state change, you can swap out VLANs behind the scenes to change what they can access.
You'd set up a database with an entry for each client. This doesn't have to be pre-populated, it could be populated during the first auth attempt. Part of each client entry would be a status field detailing when they last completed NAC.
You'd also need an accounting database, to store information about where each client is connected to the network.
If the client had never completed NAC checks before, you'd assign the client to the NAC VLAN, and signal your NAC processes to start interrogating it.
FreeRADIUS can act as both a RADIUS and a DHCPv4 server, so you'd probably signal the NAC process from the DHCPv4 side, because then you'd know what IP address the client received.
Binding the RADIUS and DHCPv4 sides can be done in a couple of ways. The most obvious is the MAC address; another common way is the NAS/Port ID, via the accounting table.
Once the NAC checks had completed, you'd have the NAC process write out a receipt in detail file format, and have that read back in by a detail file listener (there are examples of this in the 'decoupled-accounting' virtual server files in sites-available/). When reading those entries back in, you'd change the state in the database and send a CoA packet to the switch, using information from the accounting database to identify the client. This would flip the VLAN and allow the client access to the standard set of networking resources.
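For concreteness, a detail file entry is just a timestamp line followed by tab-indented Attribute = Value pairs, with a blank line between entries. A minimal sketch of the NAC process writing such a receipt (the attribute names here are illustrative, not a fixed schema):

```python
import time

def write_detail_receipt(path, attrs):
    """Append one detail-file entry: a timestamp line, then tab-indented
    Attribute = Value pairs, terminated by a blank line."""
    with open(path, "a") as f:
        f.write(time.strftime("%a %b %d %H:%M:%S %Y") + "\n")
        for name, value in attrs.items():
            f.write(f'\t{name} = "{value}"\n')
        f.write("\n")

# Illustrative attributes only; use whatever your detail listener expects.
write_detail_receipt("/var/log/radius/nac-detail", {
    "Calling-Station-Id": "02:00:00:aa:bb:cc",
    "NAC-Result": "pass",  # hypothetical attribute for the scan outcome
})
```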
I know this is very high level; documenting it properly would probably exceed StackOverflow's character limit. If you need more help with this, I suggest you research what I've described above and then ask the RADIUS-related questions on the FreeRADIUS users' mailing list: https://freeradius.org/support/.

DNS Amplification attacks

I have this question on my homework about DNS Amplification attacks that I am having trouble figuring out.
In order to implement a DNS amplification attack, the attacker must trigger the creation of a sufficiently large volume of DNS response packets from the intermediary to exceed the capacity of the link to the target organization. Consider an attack where the DNS response packets are 500 bytes in size (ignoring framing overhead). How many of these packets per second must the attacker trigger to flood a target organization using a 0.5 Mbps link? A 2 Mbps link? A 10 Mbps link? If the DNS request packet to the intermediary is 60 bytes in size, how much bandwidth does the attacker consume to send the necessary rate of DNS request packets for each of the three cases?
I know that an amplification attack has to do with turning a small request into a large response. The information in my book doesn't give an exact amplification factor, however; at one point it says "over 4000 bytes" and that's it. I assume the first part of the question is simply 0.5 Mbps / 500 bytes = how many packets per second it takes to flood the target, but that seems too simple (I'm new to this topic). That might just be me overthinking it. The second part, I assume, is just 60 bytes * the answer from the first part for each of the three cases, but I am unsure about the first part. Am I overthinking this, or do I already have it solved?
You are correct that they are simply divisions. But you must first understand how the amplification attack works: the attacker makes a request to the intermediary but spoofs the source address. The intermediary then responds to the spoofed source instead of the attacker.
This works only under a couple of conditions: the protocol is connectionless (like UDP or ICMP), the intermediary is well connected with lots of available bandwidth, and the responses are much larger than the request packets. That is the amplification part. So if the attacker sends a 60-byte request and in return triggers a 500-byte reply, the attacker needs much less bandwidth to overwhelm the victim's network connection.
Hopefully this is sufficient for you to figure out the rest of the homework.
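If you want to sanity-check your arithmetic afterwards, here's a quick script (my numbers, not part of the assignment): convert the link rate to bits per second, divide by the response size in bits to get the packet rate, then multiply that rate by the request size.

```python
RESPONSE_BITS = 500 * 8  # 500-byte DNS response
REQUEST_BITS = 60 * 8    # 60-byte DNS request

for link_mbps in (0.5, 2, 10):
    # Packets/second needed to fill the victim's link with responses.
    pps = link_mbps * 1_000_000 / RESPONSE_BITS
    # Bandwidth the attacker spends sending requests at that rate.
    attacker_kbps = pps * REQUEST_BITS / 1_000
    print(f"{link_mbps:>4} Mbps link: {pps:.0f} packets/s, "
          f"attacker uses {attacker_kbps:.0f} kbps")
```

That works out to 125, 500, and 2,500 packets per second, costing the attacker roughly 60, 240, and 1,200 kbps respectively - an amplification factor of 500/60, or about 8.3.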

How to temporarily buffer incoming network traffic for latency-sensitive HFT application?

We are running a Java-based trading application, and there are certain periods where we want to prioritize outgoing network traffic as much as possible for about 10 ms. Is there a way to temporarily buffer all incoming network traffic during such a short period, either on the network card or via a process or buffer on our Red Hat Linux box?
The rationale is that incoming network traffic spikes during this same period, and the application processing this traffic steals CPU cycles from the process we are trying to prioritize. We do not have fine-grained control over the application handling the incoming traffic.
We're on a 1 Gbps connection, so a buffer of about 1 MB should be sufficient. We would prefer not to drop the incoming traffic and request retransmission, as this would increase the load on our network during quite busy periods.
This may be possible using QoS on the router, or using trickle to control your bandwidth with a sample configuration in /etc/trickled.conf; see the example in the URL.
I am not sure whether I understand your problem correctly. Your concern is that sometimes you prioritize outgoing network traffic, and during that time incoming traffic builds up and might eventually cause packet drops or retransmissions, which you don't want. Therefore, you want to buffer your incoming traffic.
If my understanding is correct and you are using TCP, try making your TCP receive buffers bigger. http://kaivanov.blogspot.com/2010/09/linux-tcp-tuning.html explains how; then use netstat to check whether your change took effect.
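Beyond the kernel-wide sysctls in that article, a per-socket request can be made with setsockopt. A minimal sketch, assuming Linux (the kernel clamps the request to net.core.rmem_max):

```python
import socket

# Ask for a ~1 MB kernel receive buffer so short inbound bursts queue
# in the kernel instead of being dropped and retransmitted.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)

# Verify what the kernel actually granted; Linux reports roughly
# double the requested value to account for bookkeeping overhead.
print("receive buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
```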
Adrian, have you tried setting the priority of your outgoing communication process to be higher than that of the process receiving the incoming data? This can be achieved with the nice command. Note that in Unix/Linux, the lower the number, the higher the priority.
Otherwise, I am not sure this is possible without a direct tie-in between the sending and receiving applications, allowing you to effectively ignore incoming connections that are ready to be read from until any data you have is sent out.
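A minimal sketch of the nice suggestion from inside Python, assuming the sending process runs with sufficient privileges (a negative increment, i.e. raising priority, needs root or CAP_SYS_NICE):

```python
import os

try:
    # Lower our niceness by 10; lower niceness means higher scheduling priority.
    new_nice = os.nice(-10)
    print(f"sender now running at niceness {new_nice}")
except PermissionError:
    print("raising priority requires root or CAP_SYS_NICE")
```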

When establishing a BLE connection, can the first two data packets contain application data?

When a BLE device (M) is scanning and another one (S) is advertising, M must send a CONNECT_REQ packet to S to create a connection. Immediately afterwards, the connection is considered created.
In order to establish the connection, one data packet must be sent (by M) and acknowledged (by S).
I noticed that generally this very first data packet exchange consists of Empty PDU packets.
Question: does the standard (4.1) allow these very first packets to also contain application data? (e.g. an ATT request)
After long searches, it turns out it is indeed possible to send application data in the first two data packets.
The standard doesn't explicitly confirm or deny this, but Robin Heydon explains it in his book.

Cloud Architecture On Azure for Internet of Things

I'm working on a server architecture for sending/receiving messages from remote embedded devices, which will be hosted on Windows Azure. The front-facing servers are going to be maintaining persistent TCP connections with these devices, and I need a way to communicate with them on the backend.
Problem facts:
Devices: ~10,000
Frequency of messages each device sends up to the servers: 1/min
Frequency of messages originating server-side (e.g. from user actions, scheduled triggers, etc.): 100/day
Average size of message payload: 64 bytes
Upward communication
The devices send up messages very frequently (sensor readings). The constraints on that data are not very strict, since we can aggregate/insert the sensor readings in a batched manner and they don't require in-order guarantees. I think the best way of handling them is to put them in a Storage queue and have a worker process poll the queue at intervals and dump the data. Of course, I'll have to make sure the worker process does this frequently enough that the queue doesn't back up indefinitely. The max batch size for Azure Storage queues is 32, but I'm thinking of accumulating more than that before writing: something like publishing to the data store every 1,000 readings or every 30 seconds, whichever comes first.
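A rough sketch of that worker loop; queue.receive_batch, queue.delete, and store.bulk_insert are hypothetical stand-ins for the real SDK and data-store calls:

```python
import time

BATCH_LIMIT = 1000      # flush after this many readings...
FLUSH_INTERVAL = 30.0   # ...or after this many seconds, whichever comes first
MAX_DEQUEUE = 32        # Azure Storage queues hand back at most 32 messages per call

def drain_readings(queue, store):
    """Accumulate readings across many dequeues, then bulk-insert them."""
    batch, last_flush = [], time.monotonic()
    while True:
        for msg in queue.receive_batch(MAX_DEQUEUE):  # hypothetical helper
            batch.append(msg.body)
            queue.delete(msg)                          # hypothetical helper
        overdue = time.monotonic() - last_flush >= FLUSH_INTERVAL
        if batch and (len(batch) >= BATCH_LIMIT or overdue):
            store.bulk_insert(batch)                   # hypothetical data-store call
            batch, last_flush = [], time.monotonic()
        time.sleep(0.5)  # brief pause so an empty queue isn't polled in a hot loop
```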
Downward communication
The server sends down updates and notifications much less frequently. This is a slightly harder problem, as I can see two viable paradigms here (with some blending in between). I could either:
Create a Service Bus queue for each device (or one queue with thousands of subscriptions - the limit on the number of queues is 10,000)
Have a state table in a DB that contains the latest "state" of each message type to be sent to the devices
With option 1, the application server simply enqueues a message in a fire-and-forget manner. On the front-end servers, however, quite a few things have to happen. Concerns I can see include:
Monitoring 10k queues (or many subscriptions off of one queue - the Azure SDK apparently reuses connections for subscriptions to the same queue)
Connection management:
Should no longer monitor a queue if its device disconnects.
Need to expire messages if a device is disconnected for an extended period of time (so that the queue isn't backed up).
Need some type of "refresh" mechanism to update a device's complete state when it comes back online.
The good news is that Service Bus queues are durable, and with sessions messages can be made to arrive in FIFO order.
With option 2, the DB would house a table maintaining state for all of the devices. This table would be checked periodically (every few seconds or so) by the front-facing servers for state changes written by the application server. The front-facing servers would then dispatch to the devices. This removes the requirement for FIFO queueing: the message contains the latest state and doesn't have to compete with other messages destined for the same device. The message is ephemeral: if it fails, it will be resent when the device reconnects and requests a refresh, or at the next check interval of the front-facing server.
In this scenario, the need for queues seems to be removed, but the DB becomes the bottleneck here, and I fear it's not as scalable.
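A rough sketch of that option 2 polling loop on a front-facing server; db.changed_since and the connections registry are hypothetical stand-ins:

```python
import time

POLL_INTERVAL = 2.0  # check the state table every few seconds

def poll_and_dispatch(db, connections):
    """Push newly written states to connected devices; skip offline ones."""
    last_version = 0
    while True:
        # Hypothetical query, e.g. SELECT device_id, state, version
        # FROM device_state WHERE version > last_version.
        for device_id, state, version in db.changed_since(last_version):
            sock = connections.get(device_id)  # device_id -> open socket, or None
            if sock is not None:
                sock.send(state)  # connected: deliver the latest state now
            # Offline devices pick the state up on reconnect via the refresh path.
            last_version = max(last_version, version)
        time.sleep(POLL_INTERVAL)
```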
These are both viable approaches, and I feel this question is already becoming too large (although I can provide more detail if necessary). I just wanted to get a feel for what's possible, what's usually done, whether there's something fundamental I'm missing, and what things in the cloud I can take advantage of to avoid reinventing the wheel.
If you can identify the device (by device ID/IMEI/MAC address, say) from the message it sends, then you can reduce the number of queues from 10,000 to one, and avoid the 10,000 subscriptions too. This would also help with the downward communication, as you would be able to identify the device and send the message to the appropriate socket.
Since, as you mentioned, the connections are long-lived, you could deliver commands immediately to devices that are connected, and decide what to do with commands for devices that are not.
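A toy sketch of that single-queue fan-out; queue.receive and the connection registry are hypothetical stand-ins:

```python
connections = {}  # device_id -> socket, maintained on connect/disconnect

def route_commands(queue, pending):
    """Drain the one shared queue and route each command by device id."""
    for msg in queue.receive():                # each message carries its device_id
        sock = connections.get(msg.device_id)
        if sock is not None:
            sock.send(msg.payload)             # device online: deliver immediately
        else:
            pending[msg.device_id] = msg       # offline: keep only the latest command
```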
Hope it helps
