Sending files of varying lengths over LwM2M / CoAP

I am using Eclipse Leshan to access the resources of a Zolertia RE-Mote. Long story short, I want to send a binary file from my laptop to the board. However, I see that the Leshan server may not start the transmission, depending on the file size. More specifically, files of 64 B or 128 B can be transmitted, while a file of 705 bytes, for example, cannot. In addition, this limitation does not hold if the file is larger than 1 KB: in that case, all the files I have tested were transmitted successfully. Do you know what may be going wrong? Is this normal?

That depends first of all on your client: which one do you use?
Your client is required to implement RFC 7959 (CoAP blockwise transfer).
Leshan's CoAP communication is based on Eclipse/Californium. To limit misuse, Californium must be configured with the largest expected resource body in "Californium.properties" via the property MAX_RESOURCE_BODY_SIZE; the default is 8192.
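For example, the relevant line in "Californium.properties" would look like this (65536 is just an illustrative value; pick something at least as large as your biggest file):

    # Largest resource body (in bytes) the server will accept; the default is 8192
    MAX_RESOURCE_BODY_SIZE=65536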
If that doesn't help, please try to capture the traffic and post it (preferably as an issue against Eclipse/Californium).

Related

Buffer issue in Classic ASP

We have a classic ASP application running on Windows Server 2012 with an IIS 8 web server, and we had to modify a page to allow retrieval of a larger data set from the database. When we run this without amending any IIS settings, we receive an error in IE.
We have tried amending the buffer limit at the site level and the IIS application level from the standard 4194304 bytes (4 MB) to 20971520 bytes (20 MB), but when we do, IE shows a different error, and Chrome continually asks for credentials every 20 seconds or so.
Why is this happening, and how do we resolve it?
You're probably best off disabling the buffer using Response.Buffer = False.
By default, IIS buffers all output: as a webpage is being built it gets stored in memory (a buffer) until your script has finished executing, and then the whole page is sent from the buffer to the client's machine as one file. If you're constructing a very large page with a lot of data, you risk overflowing the buffer. Increasing the buffer size limit is one solution, although I can't see why it would start asking for credentials; you must have misconfigured something in IIS.
Another solution would be to use Response.Flush() to intermittently flush data from the buffer and send the HTML to the client's machine in chunks. But disabling the buffer entirely will do this for you without the need for Response.Flush().
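For illustration, a minimal sketch of both options (the loop below just stands in for your real data-set output; use only one approach, since Response.Flush requires buffering to be on):

    <%
    ' Option 1: disable buffering for the whole page (set before any output);
    ' output is then streamed to the client as it is written.
    'Response.Buffer = False

    ' Option 2: keep buffering on (the default) and flush periodically,
    ' sending the HTML to the client in chunks.
    Dim i
    For i = 1 To 100000
        Response.Write "row " & i & "<br>"
        If i Mod 1000 = 0 Then Response.Flush
    Next
    %>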

FreeRADIUS in combination with a vulnerability scan / software status check

What I have:
I am running a FreeRADIUS server fully configured the way I need it. Everything works just fine right now.
What I need:
I need RADIUS to put devices into a separate VLAN before authentication and to run a vulnerability scan (Nessus / OpenVAS etc.) on the devices in this VLAN to check their software status (antivirus etc.).
If the device passes the test, authentication should proceed normally.
If it fails, it should be put into a third (fourth, if you count the unauth VID) VLAN.
Can someone tell me if this is doable in FreeRADIUS?
Thanks in advance for your answers.
Yes. But this is a very broad question and is dependent on the networking equipment being used. I'll give you an overview of how I'd design such a system.
In general, you'll have an easier time if you can use the same DHCP server/IP range for your NAC and full-access VLAN. That means you don't have to signal the higher networking layers in the client that there's been a state change; you can swap out VLANs behind the scenes to change what they can access.
You'd set up a database with an entry for each client. This doesn't have to be pre-populated, it could be populated during the first auth attempt. Part of each client entry would be a status field detailing when they last completed NAC.
You'd also need an accounting database, to store information about where each client is connected to the network.
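As a rough illustration, the two tables might look something like this (the schema, table, and column names are entirely hypothetical):

    -- One row per client, populated on first auth attempt
    CREATE TABLE nac_clients (
        mac           VARCHAR(17) PRIMARY KEY,       -- client MAC address
        nac_status    VARCHAR(16) NOT NULL DEFAULT 'pending',
        last_nac_pass TIMESTAMP NULL                 -- when NAC checks last completed
    );

    -- Accounting: where each client is currently attached to the network
    CREATE TABLE nac_accounting (
        mac             VARCHAR(17) NOT NULL,
        nas_ip          VARCHAR(45) NOT NULL,        -- switch the client is on
        nas_port_id     VARCHAR(64),                 -- port on that switch
        acct_session_id VARCHAR(64),
        updated         TIMESTAMP NOT NULL
    );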
If the client had never completed NAC checks before, you'd assign the client to the NAC VLAN, and signal your NAC processes to start interrogating it.
FreeRADIUS can act as both a RADIUS and a DHCPv4 server, so you'd probably signal the NAC process from the DHCPv4 side, because then you'd know what IP the client received.
Binding the RADIUS and DHCPv4 sides can be done in a couple of ways. The most obvious is the MAC address; another common way is the NAS/Port ID via the accounting table.
Once the NAC checks had completed, you'd have the NAC process write out a receipt in detail file format, and have that read back in by a detail file listener (there are examples of this in sites-available/ in the 'decoupled-accounting' virtual server files). When reading those entries back in, you'd change the state in the database and send a CoA packet to the switch, using information from the accounting database to identify the client. This would flip the VLAN and give the client access to the standard set of networking resources.
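For illustration, the CoA step could be done with radclient; the switch address, shared secret, and attribute values below are hypothetical placeholders, with the session identified by the MAC from the accounting table:

    # Flip the identified session onto VLAN 100 via RFC 5176 CoA (port 3799)
    printf '%s\n' \
        'Calling-Station-Id = "02:00:00:00:00:01"' \
        'Tunnel-Type = VLAN' \
        'Tunnel-Medium-Type = IEEE-802' \
        'Tunnel-Private-Group-Id = "100"' | \
        radclient -x 192.0.2.1:3799 coa testing123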
I know this is very high level; documenting it properly would probably exceed Stack Overflow's character limit. If you need more help with this, I suggest you research what I've described above and then start asking RADIUS-related questions on the FreeRADIUS users' mailing list: https://freeradius.org/support/

How to set Azure Web App WebSocket output buffer/frame size?

We're sending rather large chunks of data over websockets from an Azure Web App. It all works fine, but for some reason the outgoing buffer size is 4096 bytes, which adds a lot of overhead for large data transmissions.
On a local developer machine this IIS/.NET buffer seems to be 16384 (or possibly 16383, because I'm getting the stream as one frame with 1 byte, then the next frame with 16383 bytes, and so on). The read buffer in the client is 65536 per frame.
All the code is written in .NET, so the sending side simply creates a large ArraySegment and sends it with ClientWebSocket.SendAsync, which is much too high up the chain to actually decide how the data is framed.
My question is: is it possible to change the size of the frames/buffers on the Azure Web App?
Clearly it's possible to change it at either the OS or IIS level (http.sys?), since our Windows 10 dev machines have a different send buffer, but I really can't find where or how.
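For reference, the sending side described above looks roughly like this sketch (the endpoint URL and payload size are hypothetical):

    using System;
    using System.Net.WebSockets;
    using System.Threading;
    using System.Threading.Tasks;

    class Sender
    {
        static async Task Main()
        {
            using var ws = new ClientWebSocket();
            await ws.ConnectAsync(new Uri("wss://example.azurewebsites.net/ws"),
                                  CancellationToken.None);

            // One large message; how it is split into frames is decided
            // below this API, which is exactly the problem.
            var payload = new byte[1_000_000];
            await ws.SendAsync(new ArraySegment<byte>(payload),
                               WebSocketMessageType.Binary,
                               endOfMessage: true,
                               CancellationToken.None);
        }
    }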

Communicate password securely to another program (separate shell/dbus)

I am writing a build script which has some password-protected files (keys). I need a way to prompt the user once for the password and then use this key across multiple scripts. These scripts do not live inside the same shell and may spawn other windows via D-Bus. I can then send them commands, one of which must have access to the password.
I have this working already, but at a few points the passphrase is either used directly on a command line (passed via D-Bus) or is put into a file (whose name is then passed to the other script). Both of these are less secure than I want*. The command line ends up in a history which may be stored in a file, as well as appearing in the process list, and the second option stores the passphrase in a file which can be read by somebody else.
Is there some standard way to create a temporary communications channel between two processes which could communicate the password and not be intercepted by another user on the system (including root)?
*Note: This is primarily an exercise to be fully secure. For my current project the temporary in-file storage of the password is okay.
Setting "root being all-powerful" aside, I would imagine that a Private DBus Connection would do the trick although the documentation I could find seems a little light on what exactly makes a private connection private.
However, the DBus Specification, more specifically, the Message Bus Specification subsection on eavesdropping says in part:
Receiving a unicast message whose DESTINATION indicates a different recipient is called eavesdropping. On a message bus which acts as a security boundary (like the standard system bus), the security policy should usually prevent eavesdropping, since unicast messages are normally kept private and may contain security-sensitive information.
So you may not even need to use private connections which incur more overhead costs. But on a risk/reward basis with security being paramount, that may be the more secure alternative for you. Hope that helps.
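For what it's worth, here is a minimal sketch using dbus-python; the bus name, object path, interface, and SetPassphrase method are hypothetical placeholders for whatever your helper process exposes:

    import getpass
    import dbus

    passphrase = getpass.getpass("Key passphrase: ")  # prompt once, never echoed

    # private=True gives this process its own connection to the bus rather
    # than the shared singleton, so other in-process modules can't reuse it.
    bus = dbus.SessionBus(private=True)
    helper = bus.get_object("com.example.BuildHelper", "/com/example/BuildHelper")
    iface = dbus.Interface(helper, "com.example.BuildHelper")
    iface.SetPassphrase(passphrase)  # unicast call; bus policy should prevent eavesdropping
    bus.close()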

Avahi Hostname Resolution: Is it caching somewhere?

I am using Fedora 18 with the avahi command line tools (version 0.6.31)
I use avahi-resolve-host-name to discover the IP address of units on my subnet, for testing purposes during development. I monitor the request and response with Wireshark. After one successful request and response, no further requests show up on Wireshark, but the tool still returns an IP address. Is it possible the computer/avahi daemon/something else is 'caching' the result?
The Question: I wish to send out the request packet with EVERY CALL of avahi-resolve-host-name. Is this possible?
The Reason: I'm getting 'false positives' so to speak. I try to resolve 'test1.local', and I am getting a resulting IP, but the unit is no longer located at this IP. I want the request sent every time so I can avoid seeing units at incorrect IP addresses.
I see that I'm a bit late to answer your question but I'm going to leave a generic answer in case someone else stumbles upon this.
My answer is based on avahi-0.6.32_rc.
Is it possible the computer/avahi daemon/something else is 'caching' the result?
Yes, avahi-daemon does cache lookup results. While this doesn't seem to be explicitly listed among its features, the avahi-daemon(8) manpage hints at it:
The daemon [...] provides two IPC APIs for local programs to make use of the mDNS record cache the avahi-daemon maintains.
I wish to send out the request packet with EVERY CALL of avahi-resolve-host-name. Is this possible?
Yes, it is. The relevant option is cache-entries-max (from avahi-daemon.conf(5)):
cache-entries-max= Takes an unsigned integer specifying how many resource records are cached per interface. Bigger values allow mDNS work correctly in large LANs but also increase memory consumption.
To achieve the desired effect, you can simply set:
cache-entries-max=0
This will disable the caching entirely and force avahi-daemon to reissue the MDNS packets on every request, therefore making it possible for you to monitor them.
However, I should note here that this will also render avahi pretty much useless for normal use. While avahi-daemon will be issuing lookup packets, it will be unable to store the results and every call of avahi-resolve-host-name (as well as other command-line tools, nss-mdns, D-Bus API…) will fail.
I just stumbled upon this problem myself and found a solution that doesn't require changing the config. It seems that simply killing the daemon (avahi-daemon --kill) flushes the cache. I'm on Ubuntu 18.04, where the daemon is restarted automatically. If on some other distro it isn't running after being killed, it can be restarted with avahi-daemon --daemonize.
Note that root is needed to kill the avahi daemon, so this might not be the best option in some cases.
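Putting that together, the flush-and-re-resolve sequence is simply:

    # Kill the daemon to flush its cache (systemd restarts it on Ubuntu 18.04);
    # the next lookup then goes out on the wire and is visible in Wireshark.
    sudo avahi-daemon --kill
    avahi-resolve-host-name test1.local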
