Bro IDS signature_match trigger - signature

I am new to Bro and just started to test signatures with it. I have one script, main.bro, and a signature file, protosigs.sig. The idea is to match the signature and do something in the overridden event handler, signature_match. I used the steps below to test against a pcap file, but the test didn't generate a notice.log. It seems the signature_match function never gets called. Can anyone tell me what's going on here? Many thanks!
How I test the script and signature:
bro -r ./bittorrent.Transfer.pcap ./main.bro -s ./protosigs.sig
My signature:
signature torrent-request-tcp {
	ip-proto == tcp
	payload /^\x13/
	tcp-state originator
	event "torrent-request-tcp"
}
My script - main.bro:
@load base/frameworks/notice
@load base/frameworks/signatures/main
@load base/utils/addrs
@load base/utils/directions-and-hosts
@load-sigs ./protosigs.sig

module bittorrent;

export {
	redef enum Notice::Type += {
		Torrent,
	};
}
event signature_match(state: signature_state, msg: string, data: string) &priority=-5
	{
	print "Triggered!"; //at least this one should be triggered, but..
	if ( /torrent/ in state$sig_id )
		{
		print "TTTTTTTTTTTTTTTTTTT";
		NOTICE([$note=bittorrent::Torrent,
		        $msg="Torrent whatsoever",
		        $sub=data,
		        $conn=state$conn,
		        $identifier=fmt("%s%s", state$conn$id$orig_h, state$conn$id$resp_h)]);
		}
	}

Xifeng, your setup seems to work fine here (putting aside the "//" comment delimiter, where you'll want "#" instead). The signature should be fine too, since the BitTorrent protocol indeed starts with 0x13 from the originator. The pcap I used is this one:
https://pcapr.net/view/gerald/2009/0/3/10/Comcast_Bittorrent_no_RST.pcap.html
Are you sure your pcap is okay? Make sure it contains the TCP handshake so Bro can properly bootstrap the connection state.
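For reference, the payload pattern in the signature lines up with the BitTorrent handshake: the first byte sent by the originator is the length prefix 0x13 (19), followed by the 19-byte protocol string. A quick Python sketch of that layout (truncated after the reserved bytes; the real handshake continues with a 20-byte info hash and peer id):

```python
import re

# First bytes of a BitTorrent handshake: length prefix 0x13 (19),
# then the 19-byte protocol identifier, then 8 reserved bytes.
# This is exactly what the signature's payload /^\x13/ matches on
# the originator side of the connection.
pstr = b"BitTorrent protocol"
handshake = bytes([len(pstr)]) + pstr + b"\x00" * 8

assert handshake[0] == 0x13
assert re.match(rb"^\x13", handshake) is not None
```

So if the pcap contains a well-formed handshake (and the TCP three-way handshake, so Bro tracks the connection), the signature should fire on the very first payload byte.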

Related

system() function hangs in Linux

Situation: In my project, I want to modify the OVS source code to add some functions of my own. When OVS receives a specific packet, I want it to add a flow from userspace, without needing the controller. To do that, I used the system() function in a C source file to execute the following command:
ovs-ofctl add-flow s1 priority=5,tcp,in_port="s1-eth1",eth_src=32:3b:8c:9d:13:5f,eth_dst=d2:5f:67:a6:80:81,ipv4_src=10.0.0.1,ipv4_dst=10.0.0.2,tcp_src=25000,tcp_dst=59174,action=output:"s1-eth2"
Problem: After rebuilding the source code, whenever OVS reaches the functions I added, the system() call hangs (see the attached image). Even if I stop OVS, the system() process keeps running.
Can someone help me?
I added my functions to connmgr_send_async_msg() in ovs/ofproto/connmgr.c. This function receives packets coming into OVS.
It looks like :
void connmgr_send_async_msg(struct connmgr *mgr, const struct ofproto_async_msg *am)
{
    struct dp_packet packet_in;  /* the packet that arrived at OVS */
    /* ... then I extract the interface names, MACs, IPv4 addresses, and TCP
       ports of packet_in into in, out, sMAC, dMAC, sIP, dIP, sPort, dPort ... */
    char cmd[1000];  /* command I want to pass to system() */
    snprintf(cmd, sizeof(cmd),
             "ovs-ofctl add-flow s1 priority=5,tcp,in_port=\"%s\",eth_src=%s,eth_dst=%s,ipv4_src=%s,ipv4_dst=%s,tcp_src=%u,tcp_dst=%u,action=output:\"%s\"",
             in, sMAC, dMAC, sIP, dIP, sPort, dPort, out);  /* build cmd */
    /* call system() with cmd */
    int systemRet1 = system(cmd);
    /* log the result to my log file */
    FILE *log = fopen("/home/log_file.txt", "a");
    fprintf(log, "Status when add flow %d\n", systemRet1);
    fclose(log);
    /* ... normal OVS source code ... */
}

How do I get the data from Haxe's http.customRequest without crashing?

Forewarning: I’m very new to Haxe.
I’m trying to use http.customRequest (with the intention of later making PUT and DELETE requests). But when I try to access the result bytes, I get a segmentation fault on C++ and a NullPointerException on Java.
I’ve googled for some other uses of customRequest, and what I’m doing doesn’t seem wrong, but clearly it is.
class Main {
    static function main() {
        var req = new haxe.Http("https://httpbin.org/put");
        var responseBytes = new haxe.io.BytesOutput();
        req.onError = function(err) {
            trace("onError");
            trace(err); // Java says NullPointerException
        };
        req.onStatus = function(status) {
            trace("About to get bytes");
            // Removing these lines prevents the errors
            var b = responseBytes.getBytes();
            trace("Got the bytes");
            trace(b.length); // Shouldn't be empty, but is
        };
        req.customRequest(false, responseBytes, null, "PUT");
    }
}
I’ve tried this with the current release and with HEAD (via Brew).
I think my command lines are pretty basic:
$ haxe -cp src -main Main -java bin/java
$ java -jar bin/java/Main.jar
src/Main.hx:12: About to get bytes
src/Main.hx:16: Got the bytes
src/Main.hx:17: 0
src/Main.hx:7: onError
src/Main.hx:8: java.lang.NullPointerException
$ haxe -cp src -main Main -cpp bin/cpp
$ ./bin/cpp/Main
src/Main.hx:12: About to get bytes
src/Main.hx:16: Got the bytes
src/Main.hx:17: 0
[1] 54544 segmentation fault ./bin/cpp/Main
In case it’s useful, here’s the differently broken output for Python:
$ haxe -cp src -main Main -python bin/Main.py
$ python3 bin/Main.py
onError
True
About to trace err
SSLError(1, '[SSL: TLSV1_ALERT_INTERNAL_ERROR] tlsv1 alert internal error (_ssl.c:1045)')
I’d really appreciate any guidance. TIA
(Reposted from the Haxe forum, where I haven't had a response.)
That's quite an interesting interaction. Here's what happens, in order:
1. sys.Http receives the response and calls onStatus().
2. You call responseBytes.getBytes(), which ends up invalidating the internal buffer in haxe.io.BytesBuffer.getBytes(). The docs of that method state "Once called, the buffer can no longer be used", as it sets the internal buffer b to null.
3. The Http class then attempts to write to that same buffer, which is no longer valid.
4. Since there's a catch-all around the entire logic, the onError() callback is called due to the null reference.
The status code passed to onStatus() is 200 (OK), so apart from the getBytes() call, the request seems to work as expected. And according to apitester.com, data is empty for this particular request.
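The failure mode is easy to reproduce outside Haxe. Below is a small Python analogue of haxe.io.BytesBuffer (the class and method names are illustrative, not the real API): get_bytes() hands back the contents and nulls out the internal buffer, so any later write blows up on the dead reference, which is what the catch-all in sys.Http turns into an onError call.

```python
class OneShotBuffer:
    """Illustrative analogue of haxe.io.BytesBuffer: getBytes() invalidates it."""

    def __init__(self):
        self._buf = bytearray()

    def add(self, data: bytes):
        # After get_bytes(), self._buf is None and this raises TypeError,
        # analogous to the null-reference Http hits when it keeps writing.
        self._buf += data

    def get_bytes(self) -> bytes:
        out, self._buf = bytes(self._buf), None  # "can no longer be used"
        return out

buf = OneShotBuffer()
buf.add(b"partial response")
buf.get_bytes()            # called too early, e.g. from onStatus
try:
    buf.add(b"more data")  # Http's subsequent write
    crashed = False
except TypeError:
    crashed = True
assert crashed
```

The fix is the same in both worlds: don't drain the buffer until the request has actually finished writing into it (i.e. call getBytes() after customRequest() returns, not inside onStatus).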

Passing a stream only if digest passes

I've got a pipeline in an express.js module in which I take a file, decrypt it, pass it through a digest to ensure it is valid, and then want to return it as the response if the digest passes. The code looks something like this:
function GetFile(req, res) {
    ...
}).then(() => {
    var p1 = new Promise(function(resolve, reject) {
        digester = digestStream("md5", "hex", function(md5, len) {
            // compare md5 and length against expected values
            // what do I do if they don't match?
            resolve()
        })
    })
    infile.pipe(decrypter).pipe(digester).pipe(res)
    return p1
}).then(() => {
    ...
})
The problem is, once I pipe the output to res, it pipes it whether or not the digest passes. But if I don't pipe the output of the digester to anything, then nothing happens - I guess there isn't pressure from the right end to move the data through.
I could simply run the decryption pipeline twice, and in fact this is what was previously done, but I'm trying to speed things up so everything happens only once. One idea I had was to pipe the digester output to a buffer, and if the digest matches, send the buffer to res. This requires memory proportional to the size of the file, which isn't horrible in most cases. However, I couldn't find much on how to .pipe() directly to a buffer. The closest thing I could find was the bl module; however, in the section where it demonstrates piping to a function that collects the data, this caveat is mentioned:
Note that when you use the callback method like this, the resulting
data parameter is a concatenation of all Buffer objects in the list.
If you want to avoid the overhead of this concatenation (in cases of
extreme performance consciousness), then avoid the callback method and
just listen to 'end' instead, like a standard Stream.
I'm not familiar enough with bl to understand what this really means for efficiency. Specifically, I don't understand why it talks about concatenating buffer objects (why is there more than one buffer object that must be concatenated, for example?). I'm also not sure how I could follow its advice and still have a simple pipe.
The bl module collects buffers when it is piped to. How many buffers there are depends on how the input stream chunks its data. If you don't want to concatenate them, leave them stored in the BufferList, and if the hash passes, pipe the BufferList to your output.
Something like this works for me:
function GetFile(req, res) {
    ...
    var bl
}).then(() => {
    var p1 = new Promise(function(resolve, reject) {
        digester = digestStream("md5", "hex", function(md5, len) {
            // reject rather than throw: a throw inside this callback
            // would not settle the promise
            if (md5 != expectedmd5) return reject(new Error("bad md5"))
            if (len != expectedlen) return reject(new Error("bad length"))
            resolve()
        })
    })
    bl = new BufferList()
    infile.pipe(decrypter).pipe(digester).pipe(bl)
    return p1
}).then(() => {
    bl.pipe(res)
    ...
})
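Language aside, the underlying pattern is: hash while buffering, and only forward the data once the digest checks out. A minimal Python sketch of the same idea (the function name and expected values are made up for illustration):

```python
import hashlib
import io

def forward_if_digest_ok(chunks, expected_md5, sink):
    """Buffer the chunks while hashing; write to sink only if the MD5 matches."""
    h = hashlib.md5()
    buffered = []
    for chunk in chunks:
        h.update(chunk)
        buffered.append(chunk)       # kept as separate chunks, like a BufferList
    if h.hexdigest() != expected_md5:
        raise ValueError("bad md5")  # nothing has been sent downstream yet
    for chunk in buffered:
        sink.write(chunk)

data = [b"hello ", b"world"]
expected = hashlib.md5(b"hello world").hexdigest()
out = io.BytesIO()
forward_if_digest_ok(data, expected, out)
assert out.getvalue() == b"hello world"
```

The memory cost is proportional to the file size, as noted above, but the chunks are never concatenated until (unless) they reach the output.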

How can I do stdin.close() with the new Streams API in Dart?

I ask the user for input in my command-line (dart:io) app. After I get the answer from the user, I want to unsubscribe from the Stream. Then, later, I may want to listen to it again (but with a different listener, so pause() and resume() don't help me).
On startup, I have this:
cmdLine = stdin
.transform(new StringDecoder());
Later, when I want to gather input:
cmdLineSubscription = cmdLine.listen((String line) {
    try {
        int optionNumber = int.parse(line);
        if (optionNumber >= 1 && optionNumber <= choiceList.length) {
            cmdLineSubscription.cancel();
            completer.complete(choiceList[optionNumber - 1].hash);
        } else {
            throw new FormatException("Number outside the range.");
        }
    } on FormatException catch (e) {
        print("Input a number between 1 and ${choiceList.length}, please.");
    }
});
This works as intended, but it leaves stdin open at the end of program execution. With the previous API, closing stdin was as easy as calling stdin.close(). But with the new API, stdin is a Stream, and those don't have the close method.
I think that what I'm doing is closing (read: unsubscribing from) the transformed stream, but leaving the raw (stdin) stream open.
Am I correct? If so, how can I close the underlying stdin stream on program exit?
To close stdin, you just unsubscribe from it:
cmdLineSubscription.cancel();
This is the equivalent way of doing it. So your intuition was right. I'm not sure if I understood the question -- was there a problem with this approach?

Protocol Buffers to file?

I am new to Protocol Buffers and see this as a good approach to take.
I created a proto file and, using the compiler, generated Java beans.
Using these Java beans, I initialize the object and try to write it to a file.
The purpose is just to see how big the file is.
I don't have a client/server test ready at the moment to test via HTTP.
At this point, I am just trying to show my team what the request/response looks like using protocol buffers.
Code I have is something like this:
=== Proto file ===
message Profile {
    optional string name = 1;
    optional string id = 2;
    message DocGuids {
        required string docguids = 3;
    }
    repeated DocGuids docguids = 4;
}
=== Sample code ===
ProfileRequest.Builder profile = ProfileRequest.newBuilder();
profile.setName("John");
profile.setId("123");
for (int i = 0; i < 10; i++) {
    ProfileRequest.DocGuids.Builder docGuids = ProfileRequest.DocGuids.newBuilder();
    docGuids.setDocguids(GUID.guid());
    profile.addDocguids(docGuids);
}
// write to disk
try {
    // Write the new profile to disk.
    FileOutputStream output = new FileOutputStream("c:\\testProto.txt");
    DataOutputStream dos = new DataOutputStream(output);
    dos.write(profile.build().toByteArray());
    dos.close();
    output.close();
} catch (Exception e) {
    e.printStackTrace(); // at least log the failure instead of swallowing it
}
When I check testProto.txt, I see the file was written as a text file, not a binary file, even though I used toByteArray.
Could anyone help?
Thanks
By the way, this is the code that reads the file back:
// Read from disk
FileInputStream input = new FileInputStream("c:\\testProto.txt");
DataInputStream dis = new DataInputStream(input);
profileBuild.mergeFrom(dis);
dis.close();
input.close();
I am able to read the object and get the values just fine, but I'm wondering whether this is the correct approach.
I'm not sure why you're creating the DataOutputStream and calling dos.write in the first place...
I would normally use this:
profile.build().writeTo(output);
However, I would still have expected the file to be a binary file really. Sure, it would include the text "123" and "John" as those are just UTF-8 strings... but there should have been non-text in there as well. Are you sure it's just text? Could you post the output?
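On the "text vs binary" point: the protobuf wire format stores string fields as raw UTF-8 prefixed by a small tag and length, so a message consisting mostly of short ASCII strings can look deceptively like text in an editor. Here's a hand-rolled sketch of how a string field is framed on the wire (the framing shown is the real length-delimited encoding, but this toy encoder is illustrative only and handles just the single-byte-varint case):

```python
def encode_string_field(field_number: int, value: str) -> bytes:
    """Encode one length-delimited (wire type 2) protobuf field.

    Only handles field numbers < 16 and payloads < 128 bytes, so the
    tag and the length each fit in a single varint byte.
    """
    tag = (field_number << 3) | 2   # wire type 2 = length-delimited
    payload = value.encode("utf-8")
    return bytes([tag, len(payload)]) + payload

msg = encode_string_field(1, "John")
assert msg == b"\x0a\x04John"
# "John" shows up as text in an editor, but the leading \x0a (tag)
# and \x04 (length) bytes are binary framing, not text.
```

So a file full of strings like "John" and "123" will mostly look like text, with only a few framing bytes in between, which is probably what you're seeing.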
