Audit netlink responses don't have the right packet length - Linux

I have been trying to read the Linux audit logs from Go using mdlayher/netlink. I am able to make a connection and set the PID so that I can receive logs from the netlink socket over both unicast and multicast.
The problem is that when the library tries to parse the messages from netlink, it fails, and not because of a bug in the library. I dumped the messages that were sent to my connection, and this is what I found.
([]uint8) (len=48 cap=4096) {
00000000 1d 00 00 00 28 05 00 00 00 00 00 00 00 00 00 00 |....(...........|
00000010 61 75 64 69 74 28 31 36 31 32 30 33 31 38 36 32 |audit(1612031862|
00000020 2e 36 33 31 3a 32 37 31 30 34 29 3a 20 00 00 00 |.631:27104): ...|
}
This is one of the messages from the log stream. According to the packet structure, the first 16 bytes are the netlink message header, nlmsghdr.
struct nlmsghdr {
    __u32 nlmsg_len;    /* Length of message including header */
    __u16 nlmsg_type;   /* Message content */
    __u16 nlmsg_flags;  /* Additional flags */
    __u32 nlmsg_seq;    /* Sequence number */
    __u32 nlmsg_pid;    /* Sending process port ID */
};
Note how nlmsg_len is documented as the length of the message including the header. If you look at the message dump, the first __u32 is 1d 00 00 00, which in host byte order (little endian on x86_64) is 29. That means the whole packet should be 29 bytes. But if you count the bytes, it's 45, plus 3 bytes of padding for 4-byte alignment. The message actually ends at byte 45, which is 29 + 16, i.e. 29 plus the size of the message header. So nlmsg_len here covers only the payload, not the header.
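For reference, here is that arithmetic as a runnable Go sketch; the header bytes are copied from the dump above, and the 45-byte wire length is counted from the same dump:
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	// First 16 bytes of the dump above: the nlmsghdr.
	hdr := []byte{
		0x1d, 0x00, 0x00, 0x00, // nlmsg_len
		0x28, 0x05, // nlmsg_type
		0x00, 0x00, // nlmsg_flags
		0x00, 0x00, 0x00, 0x00, // nlmsg_seq
		0x00, 0x00, 0x00, 0x00, // nlmsg_pid
	}

	const nlmsgHdrLen = 16 // same value as syscall.NLMSG_HDRLEN
	msgLen := binary.LittleEndian.Uint32(hdr[0:4])
	wireLen := 45 // header + "audit(1612031862.631:27104): ", before padding

	fmt.Println(msgLen)                             // 29
	fmt.Println(int(msgLen) + nlmsgHdrLen)          // 45
	fmt.Println(int(msgLen)+nlmsgHdrLen == wireLen) // true: nlmsg_len excludes the header
}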
But the strange thing is that this only happens on audit log messages, not on audit control message replies. Refer to https://play.golang.com/p/mA7_MJdVSv8 for examples of how the packet structures are parsed.
Is this expected? Looking at the Go stdlib's syscall.ParseNetlinkMessage, it expects nlmsg_len to cover the header plus the body. I can't find where this is handled in the userspace audit code that is responsible for auditd, auditctl and its family of tools.
Another popular library, slackhq/go-audit, doesn't rely on the header length at all and instead parses based on the size of the buffer read from the socket.
The following diff against the mdlayher/netlink library works around the issue (and also dumps the payload bytes), but this shouldn't be necessary.
diff --git a/conn_linux.go b/conn_linux.go
index ef18ef7..561ac69 100644
--- a/conn_linux.go
+++ b/conn_linux.go
@@ -11,6 +11,8 @@ import (
 	"time"
 	"unsafe"
 
+	"github.com/davecgh/go-spew/spew"
+	"github.com/josharian/native"
 	"golang.org/x/net/bpf"
 	"golang.org/x/sys/unix"
 )
@@ -194,7 +196,13 @@ func (c *conn) Receive() ([]Message, error) {
 	raw, err := syscall.ParseNetlinkMessage(b[:n])
 	if err != nil {
-		return nil, err
+		spew.Dump(b[:n])
+		bl := native.Endian.Uint32(b[:4]) + syscall.NLMSG_HDRLEN
+		native.Endian.PutUint32(b[:4], bl)
+		raw, err = syscall.ParseNetlinkMessage(b[:n])
+		if err != nil {
+			return nil, err
+		}
 	}
 	msgs := make([]Message, 0, len(raw))
Code for reproducing the above behaviour:
Code to read the audit log: https://play.golang.com/p/Kj4DOl0PKRQ
Code to see how different packets are parsed by golang: https://play.golang.com/p/mA7_MJdVSv8
The audit.h file: include/audit.h
uname -a: Linux linux-dev 4.15.0-135-generic #139-Ubuntu SMP Mon Jan 18 17:38:24 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Update 1
The above behavior seems to crop up when I alter the audit state via the AUDIT_SET message type. If I only connect to the read-only multicast group AUDIT_NLGRP_READLOG, it doesn't happen. But if I close the unicast connection and then try multicast, the issue is back. Basically, as long as my PID is bound to the socket, the issue comes back.
Sample dump when connecting only via the multicast group:
([]uint8) (len=76 cap=4096) {
00000000 49 00 00 00 1d 05 00 00 00 00 00 00 00 00 00 00 |I...............|
00000010 61 75 64 69 74 28 31 36 31 32 31 36 35 31 33 31 |audit(1612165131|
00000020 2e 30 31 36 3a 33 33 34 37 32 29 3a 20 61 72 67 |.016:33472): arg|
00000030 63 3d 32 20 61 30 3d 22 61 75 64 69 74 63 74 6c |c=2 a0="auditctl|
00000040 22 20 61 31 3d 22 2d 73 22 00 00 00 |" a1="-s"...|
}
Notice how the size is 0x49 = 73, the exact packet length including the header.

I don't know anything about this personally, but I found this question via a search, and another result for that search was this blog article: https://blog.des.no/2020/08/netlink-auditing-and-counting-bytes/
To summarize, someone else ran into this behavior too, and it appears to be a kernel bug.
Here's the relevant quote:
Bug #3: The length field on audit data messages does not include the
length of the header.
This is jaw-dropping. It is so fundamentally wrong. It means that
anyone who wants to talk to the audit subsystem using their own code
instead of libaudit will have to add a workaround to the Netlink layer
of their stack to either fix or ignore the error, and apply that
workaround only for certain message types.
How has this gone unnoticed? Well, libaudit doesn’t do much input
validation.
(...)
The odds of these bugs getting fixed is approximately zero, because
existing applications will break in interesting ways if the kernel
starts setting the length field correctly.
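If you do want to talk to the audit subsystem with your own code, a workaround along the lines the post describes could be applied before handing buffers to a strict parser. The sketch below is mine, not mdlayher/netlink's API; the >= 1100 cutoff is an assumption based on linux/audit.h, where the control replies (AUDIT_GET and friends, 1000-1099) include the header in nlmsg_len while the user and event messages above that range do not:
package auditfix

import "encoding/binary"

// fixAuditLength rewrites nlmsg_len in place so that it includes the
// 16-byte header, which strict parsers such as syscall.ParseNetlinkMessage
// expect. Netlink fields are in host byte order; little endian (x86_64)
// is assumed here.
func fixAuditLength(buf []byte) {
	const nlmsgHdrLen = 16 // syscall.NLMSG_HDRLEN
	if len(buf) < nlmsgHdrLen {
		return
	}
	msgLen := binary.LittleEndian.Uint32(buf[0:4])
	msgType := binary.LittleEndian.Uint16(buf[4:6])
	// Assumption: only message types above the control range carry the
	// payload-only length described in the blog post.
	if msgType >= 1100 && int(msgLen)+nlmsgHdrLen <= len(buf) {
		binary.LittleEndian.PutUint32(buf[0:4], msgLen+nlmsgHdrLen)
	}
}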

Related

How to convert Big Endian Hex to ASCII in Node?

I'm trying to interpret a hex message that is transmitted in network byte order (big endian). How should I proceed to convert it to ASCII?
server.on('message', function (message, remote) {
//I receive message via UDP in HEX.
});
The message comes in this format:
2B 41 43 4B 19 EF 24 10 01 02 03 02 56 50 22 00 0A 02 3B 01 00 00 4E 07 DD 02 17 11 21 20 46 AD 4E 1E 0D 0A
That being said, each parameter occupies a fixed number of bytes: let's say I have 4 bytes for parameter 1, 2 bytes for parameter 2 and 8 bytes for parameter 3. How would I interpret them?
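As a sketch of the general approach (in Go rather than Node, and using the hypothetical 4/2/8-byte widths from the question): slice the buffer at fixed offsets, decode numeric fields as big endian, and convert text fields with a plain string conversion.
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	// The first 14 bytes of the message from the question.
	msg := []byte{
		0x2B, 0x41, 0x43, 0x4B, 0x19, 0xEF, 0x24, 0x10,
		0x01, 0x02, 0x03, 0x02, 0x56, 0x50,
	}

	// Hypothetical layout from the question: 4 + 2 + 8 bytes.
	param1 := binary.BigEndian.Uint32(msg[0:4])
	param2 := binary.BigEndian.Uint16(msg[4:6])
	param3 := binary.BigEndian.Uint64(msg[6:14])
	fmt.Printf("param1=%#x param2=%#x param3=%#x\n", param1, param2, param3)

	// Bytes that hold text convert directly; 2B 41 43 4B is ASCII "+ACK".
	fmt.Println(string(msg[0:4]))
}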

Unknown Bittorrent Message

I have been receiving an odd/unknown message while attempting to communicate with some BitTorrent peers. In this particular case I am in the middle of downloading pieces, and all of a sudden this new/odd message pops up in front of a piece response. The message is odd because it doesn't appear to follow the protocol; all messages are supposed to look like this:
'<length prefix><message ID><payload>'
The length prefix is 4 bytes, the message ID is 1 byte, and then comes the payload. I am including a capture to show what I mean: on line 509 of the capture you will see a request for a piece, and on line 510 you will see the beginning of the response.
The first 4 bytes of the response are 00 00 00 00, i.e. a 0-length message (which is causing me issues); the next 4 bytes are the actual length of the message, which is 30. The actual response to the piece request starts on line 513, so I get the piece I was requesting, but this new/odd message is messing me up. I'm certain I can find a workaround, but I would really like to understand what this means.
Also, I have no idea what the actual message means, and cannot find any information about it anywhere.
Here is the Wireshark capture.
https://1drv.ms/u/s!Agj06pa-wu0tnFqsYn_KnHmVz3x2
Data from packet 510:
0000 00 00 00 00 00 00 00 1e 14 01 64 35 3a 61 64 64 ..........d5:add
0010 65 64 36 3a 63 f2 7a 48 17 f4 37 3a 64 72 6f 70 ed6:c.zH..7:drop
0020 70 65 64 30 3a 65 ped0:e
00 00 00 00    4-byte keep-alive message
00 00 00 1e    message length: 30 bytes
14             message type: extended message (BEP10)
01             extended message ID = 1, as specified by the previous extension handshake: ut_pex
64 35 3a 61 64 64 65 64 36 3a 63 f2 7a 48 17 f4 37 3a 64 72 6f 70 70 65 64 30 3a 65
d5:added6:c.zH..7:dropped0:e
ut_pex message data (bencoded)
d
5:added
6:c.zH..
7:dropped
0:
e
ut_pex message data (bencoded, with added whitespace)
The first 4 bytes of the response are 00 00 00 00, ie 0 length message (Which is causing me issues)
The bittorrent spec says
Messages of length zero are keepalives, and ignored.
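In other words, the practical fix is to treat a zero length prefix as a keep-alive rather than as a malformed message. A minimal Go sketch of such a reader, using only the <length prefix><message ID><payload> framing quoted above:
package peerwire

import (
	"encoding/binary"
	"io"
)

// readMessage reads one peer-wire message. keepAlive is true when the
// 4-byte big-endian length prefix is zero, in which case there is no
// message ID or payload to read.
func readMessage(r io.Reader) (id byte, payload []byte, keepAlive bool, err error) {
	var prefix [4]byte
	if _, err = io.ReadFull(r, prefix[:]); err != nil {
		return
	}
	length := binary.BigEndian.Uint32(prefix[:])
	if length == 0 {
		return 0, nil, true, nil // keep-alive: ignore it and read the next message
	}
	body := make([]byte, length)
	if _, err = io.ReadFull(r, body); err != nil {
		return
	}
	return body[0], body[1:], false, nil
}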

T=0 and T=1 and Get Response command relation?

I have a Java Card with an applet installed on it that returns the following response when I send 00 40 00 00 to it:
Connect successful.
Send: 00 40 00 00
Recv: 61 32
Time used: 15.000 ms
Send: 00 C0 00 00 32
Recv: 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F 20 21 22 23 24 25 26 27 28 29 2A 2B 2C 2D 2E 2F 30 31 32 90 00
Time used: 15.000 ms
The tool that I use (PyAPDUTool) has an option labeled "Auto Get Response". When I check this option, I don't need to send the GET RESPONSE command (00 C0 00 00 32) manually anymore:
Send: 00 40 00 00
Recv: 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F 20 21 22 23 24 25 26 27 28 29 2A 2B 2C 2D 2E 2F 30 31 32 90 00
Time used: 15.000 ms
Okay. Now I want to reproduce the above behavior on another Java Card, so I wrote the following program:
package testPrjPack;

import javacard.framework.*;

public class TestPrj extends Applet
{
    public static byte[] data = {
        (byte)0x01, (byte)0x02, (byte)0x03, (byte)0x04, (byte)0x05, (byte)0x06, (byte)0x07, (byte)0x08,
        (byte)0x09, (byte)0x0A, (byte)0x0B, (byte)0x0C, (byte)0x0D, (byte)0x0E, (byte)0x0F, (byte)0x10,
        (byte)0x11, (byte)0x12, (byte)0x13, (byte)0x14, (byte)0x15, (byte)0x16, (byte)0x17, (byte)0x18,
        (byte)0x19, (byte)0x1A, (byte)0x1B, (byte)0x1C, (byte)0x1D, (byte)0x1E, (byte)0x1F, (byte)0x20,
        (byte)0x21, (byte)0x22, (byte)0x23, (byte)0x24, (byte)0x25, (byte)0x26, (byte)0x27, (byte)0x28,
        (byte)0x29, (byte)0x2A, (byte)0x2B, (byte)0x2C, (byte)0x2D, (byte)0x2E, (byte)0x2F, (byte)0x30,
        (byte)0x31, (byte)0x32
    };

    public static void install(byte[] bArray, short bOffset, byte bLength)
    {
        new TestPrj().register(bArray, (short) (bOffset + 1), bArray[bOffset]);
    }

    public void process(APDU apdu)
    {
        if (selectingApplet())
        {
            return;
        }

        byte[] buf = apdu.getBuffer();
        switch (buf[ISO7816.OFFSET_INS])
        {
            case (byte)0x40:
                ISOException.throwIt((short)0x6132);
                break;
            case (byte)0xC0:
                Util.arrayCopyNonAtomic(data, (short)0, buf, (short)0, (short)0x32);
                apdu.setOutgoingAndSend((short)0, (short)0x32);
                break;
            default:
                ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
        }
    }
}
After installing the .cap file on the new Java Card, I get the same response whether the Auto Get Response option is checked or unchecked: Auto Get Response no longer works, and I need to send the GET RESPONSE command manually.
I'm curious to know what is wrong with this tool or my program. Is the issue related to the communication protocol? (The first card works with T=0 and the second one with T=1.)
Nothing is wrong. T=1 simply doesn't use GET RESPONSE at all, so there is no reason for PyAPDUTool to handle it automatically.
Important: note that Java Card also handles GET RESPONSE automatically, so you should never have to implement it explicitly.
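For context, the "Auto Get Response" option only has work to do under T=0, where the card signals pending data with SW1 = 61. A sketch of that host-side logic in Go (transmit is a placeholder for whatever function actually exchanges APDUs with the reader, not a real PyAPDUTool or PC/SC API):
package apdu

// transmitFunc stands in for the reader I/O.
type transmitFunc func(apdu []byte) ([]byte, error)

// autoGetResponse sends an APDU and, if the card answers 61 XX (meaning
// XX bytes of response data are pending, as happens under T=0), follows
// up with GET RESPONSE (00 C0 00 00 XX). Under T=1 the card returns the
// data directly, so the 61 XX branch is never taken.
func autoGetResponse(transmit transmitFunc, apdu []byte) ([]byte, error) {
	resp, err := transmit(apdu)
	if err != nil || len(resp) < 2 {
		return resp, err
	}
	sw1, sw2 := resp[len(resp)-2], resp[len(resp)-1]
	if sw1 == 0x61 {
		return transmit([]byte{0x00, 0xC0, 0x00, 0x00, sw2})
	}
	return resp, nil
}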

Would it be possible to read out physical keyboard strokes in node.js?

I have a node application running on a Raspberry Pi that keeps track of a bunch of UPnP players (Sonos), which I would like to control through a physical remote. I have a couple of air mice with small keyboards as well as volume buttons that I would like to use.
I have tried to get a grip on how to read physical keystrokes on a Linux machine, and have come to the conclusion that I need to read events from the input device, which in my case is:
/dev/input/by-id/usb-Dell_Dell_QuietKey_Keyboard-event-kbd
Finding the device and things like that is not a problem; the real issue is how to interpret the data that you read from it.
I know that you would receive a C struct, like this:
struct input_event {
    struct timeval time;
    unsigned short type;
    unsigned short code;
    unsigned int value;
};
But I'm not sure how I would go about reading this from node. If I could run an external app that is triggered by pre-defined keystrokes and then invokes an HTTP request against my node app, that would be my second option: a Python script or some native daemon. I have looked at some hotkey daemons, but none of them worked.
It would of course be nice if I could contain it all within node somehow.
EDIT: So I did some testing, and made a simple snippet:
var fs = require('fs');
var buffer = new Buffer(16);
fs.open('/dev/input/by-id/usb-HJT_Air_Mouse-event-kbd', 'r', function (err, fd) {
while (true) {
fs.readSync(fd, buffer, 0, 16, null);
console.log(buffer)
}
});
This outputs something like this (for space):
<Buffer a4 3e 5b 51 ab cf 03 00 04 00 04 00 2c 00 07 00>
<Buffer a4 3e 5b 51 c3 cf 03 00 01 00 39 00 01 00 00 00>
<Buffer a4 3e 5b 51 cb cf 03 00 00 00 00 00 00 00 00 00>
<Buffer a4 3e 5b 51 ba 40 06 00 04 00 04 00 2c 00 07 00>
<Buffer a4 3e 5b 51 cd 40 06 00 01 00 39 00 00 00 00 00>
<Buffer a4 3e 5b 51 d2 40 06 00 00 00 00 00 00 00 00 00>
I realize that the first four bytes are some sort of timestamp, and the following 3 bytes could be something like a micro/millisecond value.
Another odd thing is that not all keypresses produce output, but a subsequent press might send twice as much data, and most of the time it starts blasting out data which only stops after subsequent keypresses (or after about 20 seconds or so). I'm not really sure how to interpret that. I have tried to read the source of this daemon, https://github.com/baskerville/shkd/blob/master, but C is not my strongest language and I can't identify how he handles it (or if it even needs to be handled). And that daemon didn't even work for me (I compiled it on a Raspberry Pi).
Well, let's have a look at that struct.
struct input_event {
    struct timeval time;
    unsigned short type;
    unsigned short code;
    unsigned int value;
};
A struct timeval has this structure:
struct timeval
{
    __time_t tv_sec;        /* Seconds. */
    __suseconds_t tv_usec;  /* Microseconds. */
};
The definitions of those time types are
typedef signed long time_t;
typedef signed long suseconds_t;
A signed long is 4 bytes (well, not if you just go by the spec, but on a 32-bit platform like this, in practice, it is), so the first 8 bytes are a timestamp. Next, you have a type and a code. Both are short, so in practice they're 2 bytes each. Now there's just the value left, and that's an int again, which will be four bytes. Also, a compiler could theoretically add padding between the fields here, but I'm pretty sure it won't.
So, first chop the bytes you've read into chunks of 4+4+2+2+4 = 16 bytes. Each of those chunks is one event. This fits your sample data. Next, extract the values from the buffer as little-endian values (both the Raspberry Pi's ARM configuration and ordinary x86 PCs are little endian) and interpret them. For instructions on how to do that, read http://www.mjmwired.net/kernel/Documentation/input/event-codes.txt. The values of the constants aren't written down there, but you can usually find them using grep -R NAME_OF_CONSTANT /usr/include.
Let's chop up
<Buffer a4 3e 5b 51 ab cf 03 00 04 00 04 00 2c 00 07 00>
as an example.
<Buffer a4 3e 5b 51 ab cf 03 00 04 00 04 00 2c 00 07 00>
        |  tv_sec   |  tv_usec  |type |code |  value   |
tv_sec in hex is 0x515b3ea4 (reversed order because it's little endian), which is 1364934308 in decimal. A simple unix time converter reports that this means 02.04.2013 - 22:25:08. Looks good!
tv_usec is 0x0003cfab=249771, so actually, the event happened 249771 microseconds after that time.
Type is 0x0004 = 4. /usr/include/linux/input.h tells us that this is EV_MSC.
Given the type, we can also see that the code, 0x0004 = 4, means MSC_SCAN.
The value is 0x0007002c, which turns up nowhere in input.h. That's likely because it is the raw hardware scan code reported with MSC_SCAN; 0x0007002c matches the USB HID usage for the space bar (usage page 0x07, usage ID 0x2C).
I think what you're looking for is fs.createReadStream, so you can install some event handlers.
You can parse input events into structs by using the Buffer.readX routines:
var i = 0;
while ((buf.length - i) >= 16) {
    // One input_event is 16 bytes on a 32-bit system:
    // 4 (tv_sec) + 4 (tv_usec) + 2 (type) + 2 (code) + 4 (value).
    var event = {
        tssec:  buf.readUInt32LE(i + 0),  // timestamp, seconds
        tsusec: buf.readUInt32LE(i + 4),  // timestamp, microseconds
        type:   buf.readUInt16LE(i + 8),  // event type, e.g. EV_KEY, EV_MSC
        code:   buf.readUInt16LE(i + 10), // event code, e.g. MSC_SCAN
        value:  buf.readUInt32LE(i + 12)  // event value, e.g. key state
    };
    i += 16;
}

DNS CNAME type Records have incorrect RDLENGTH fields?

I've been using RFC 1035, section 4.1.3, as a reference for the DNS RR format:
http://www.freesoft.org/CIE/RFC/1035/42.htm
The RFC says that RDLENGTH is "an unsigned 16 bit integer that specifies the length in octets of the RDATA field", but in the datagrams I'm receiving, RDLENGTH is sometimes 2 less than it should be. I've checked with Wireshark to make sure that I'm reading the datagram correctly. Here's a CNAME record I got while looking up google:
C0 0C 00 05 00 01 00 03 95 FC 00 10 03 77 77 77
01 6C 06 67 6F 6F 67 6C 65 03 63 6F 6D 00
So that's the name: C0 0C (a pointer to www.google.com earlier in the datagram)
Then the type: 00 05 (CNAME)
Then the class: 00 01 (IN)
Then the TTL: 00 03 95 FC (whatever)
Then RDLENGTH: 00 10 (that's 16 bytes, yes?)
Then RDATA:
03 77 77 77 01 6C 06 67 6F 6F 67 6C 65 03 63 6F 6D 00 (www.l.google.com - format is correct)
As you can see, the RDATA is 18 bytes in length, and 18 is 0x12, not 0x10.
The type A records that come after it correctly report RDLENGTH 4 for the address data. Am I missing something here? I'd dismiss it as an error, but I get this from every DNS server and for every domain.
I guess what I'm really asking is why the RDATA is longer than RDLENGTH, and what rules I should follow to adapt to it so that I can parse any type of record. (Specifically, can I expect this kind of thing from other RR types?)
Thank you in advance to anyone who gives advice. :)
The response data appears to be messed up: either RDLENGTH should be 18 (0x00 0x12), or RDATA should be different.
I just ran a few google lookups from here, and I do not see that problem.
I get an RDLENGTH of 7 and RDATA to match (a compressed name).
Is something messing with your packet data?
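Whatever the cause, it is worth validating RDLENGTH against the message bounds before consuming RDATA. A small defensive sketch in Go, following the RR layout quoted from RFC 1035 above (the 2-byte compression pointer for the owner name is assumed, as in the question's dump):
package dnscheck

import (
	"encoding/binary"
	"fmt"
)

// checkRR walks one resource record starting at off and reports an
// RDLENGTH that does not fit inside the message. It returns the offset
// of the next record.
func checkRR(msg []byte, off int) (next int, err error) {
	// pointer (2) + TYPE (2) + CLASS (2) + TTL (4) + RDLENGTH (2)
	if len(msg) < off+12 {
		return 0, fmt.Errorf("truncated record at offset %d", off)
	}
	off += 2 // owner name: C0 XX compression pointer
	rrType := binary.BigEndian.Uint16(msg[off:])
	off += 2 + 2 + 4 // TYPE + CLASS + TTL
	rdlength := int(binary.BigEndian.Uint16(msg[off:]))
	off += 2
	if off+rdlength > len(msg) {
		return 0, fmt.Errorf("type %d: RDLENGTH %d overruns message", rrType, rdlength)
	}
	return off + rdlength, nil
}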
