Graylog content pack -> get fields

I have installed Graylog to better analyse the logs of my HAProxy. I've installed the content pack, and now I can see the HAProxy logs flowing in. However, the log message is a single string. I'm trying to extract the different fields which are defined in the content pack (https://github.com/Graylog2/graylog-contentpack-haproxy/blob/master/content_pack.json): Remote Address, server, .... How can I do this?
Thanks!

For those looking for the same thing:
Create an input
Manage extractors
Add an extractor
Type: Regular expression
Source: Message
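As a concrete example: Graylog's regular expression extractor stores the first capture group as the new field's value, so a pattern along these lines (a sketch; the exact layout of your HAProxy messages may differ) could pull the remote address into a field such as remote_addr:
haproxy\[\d+\]: (\d{1,3}(?:\.\d{1,3}){3}):\d+
Repeat with one extractor per field (server, status code, and so on) defined in the content pack.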


Custom audit log message sent from Oracle DB version 12c to SIEM via syslog

I need to send the audit log from Oracle DB version 12c to my SIEM via syslog on IBM AIX. The problem is that it does not include the information I need. For example:
<134>Mar 4 11:00:25 Message forwarded from abc: Oracle Audit[5374348]: LENGTH : '494' ACTION :[344] '9'),chartorowid('AAAAJCAABAAAA+nABn'),chartorowid('AAAAJCAABAAAA+nABv'),chartorowid('AAAAJCAABAAAisBAAP'),chartorowid('AAAAJCAABAAAisBAC9'),chartorowid('AAAAJCAABAAAisBADQ'),chartorowid('AAAAJCAABAAAisDACn'),chartorowid('AAAAJCAABAAAisEABG'),chartorowid('AAAAJCAABAAAisEABf'),chartorowid('AAAAJCAABAAAisEABn'),chartorowid('AAAAJCAABAAAisEACb'),' DATABASE USER:[3] 'SYS' PRIVILEGE :[4] 'NONE' CLIENT USER:[0] '' CLIENT TERMINAL:[7] 'UNKNOWN' STATUS:[1] '0' DBID:[10] '2346730987'
It does not contain the source IP, which is really needed for parsing the log for security purposes. Is it possible for us to modify the message and include the information we need? Thanks!
It is not possible to modify the information generated by Oracle's internal auditing. If you need to supplement the data going to your SIEM, then either:
the SIEM tool needs to generate SQL queries against the audit trail and any other necessary tables within the database, instead of relying on syslog; or
you need to write a custom PL/SQL function to run the appropriate queries and use UTL_FILE to write the output to an external log file that the SIEM can read (see the sketch below).
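For the second option, here is a minimal PL/SQL sketch. The directory object AUDIT_DIR, the file name, and the column selection are assumptions you would adapt to your environment:
CREATE OR REPLACE PROCEDURE export_audit_trail IS
  f UTL_FILE.FILE_TYPE;
BEGIN
  -- AUDIT_DIR is a hypothetical directory object pointing at a path the SIEM can read
  f := UTL_FILE.FOPEN('AUDIT_DIR', 'audit_export.log', 'a');
  FOR r IN (SELECT os_username, userhost, action_name, extended_timestamp
              FROM dba_audit_trail) LOOP
    -- one pipe-delimited line per audit record
    UTL_FILE.PUT_LINE(f, r.os_username || '|' || r.userhost || '|' ||
                         r.action_name || '|' ||
                         TO_CHAR(r.extended_timestamp, 'YYYY-MM-DD HH24:MI:SS'));
  END LOOP;
  UTL_FILE.FCLOSE(f);
END;
/
You could then schedule this with DBMS_SCHEDULER and point the SIEM at the resulting file.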
That said, it looks like your log sample is an audit of SYS actions, which may not even exist in the internal audit trail depending on your specific setup and version. If that is the case, what you see is all there is.

Azure Stream Analytics - no output events

I have a problem with an Azure Stream Analytics job. The job monitor shows incoming input events (from an Event Hub), but there are no output events or errors. The job is really simple, just writing every input to Azure Blob storage:
SELECT * FROM input
Any suggestions as to what could be wrong?
Update!
It was a bug in Azure Stream Analytics, and it has already been fixed by Microsoft.
Did you try including an INTO clause?
SELECT
*
INTO
[output]
FROM
[input]
Since you have verified that events are coming into the system, it's likely that the job is encountering an error while processing or writing to the output. Make sure that your input fields are in the set of supported data types, and use a CAST expression if they aren't. To home in on the root cause, you may also want to project a specific field or two instead of using SELECT *.
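For example (the field names here are made up; substitute your own), an explicit projection with CAST looks like this:
SELECT
    CAST(deviceId AS nvarchar(max)) AS deviceId,
    CAST(temperature AS float) AS temperature
INTO
    [output]
FROM
    [input]
If one of the casts is the problem, narrowing the projection this way tells you which field is at fault.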
You mentioned that there aren't any errors, but make sure to check the following sources of troubleshooting and diagnostic information:
Top-level status of your job (Processing, Degraded, etc.). Definitions for each status are here: http://azure.microsoft.com/en-us/documentation/articles/stream-analytics-developer-guide/
Use the "Test Connection" button on your inputs and outputs to verify connectivity
Check the "Diagnosis" value for your inputs and outputs and click the name of the input/output for more detail, if applicable
Look in the Operations Logs for any Warnings or Errors

Where can I see the Node.js logs after I deploy on Google App Engine?

I deployed a Node.js app on Google App Engine following this tutorial: https://github.com/GoogleCloudPlatform/appengine-nodejs-quickstart. It was successful, and now I want to check the logs of the Node.js server, as I would in development from the terminal console. The VMs are managed by Google, but even if I SSH into them I don't know where to look for the logs.
You can read the stdout of the Docker container that your app runs in by running docker logs <container id> on the VM instance. You can get the container id from docker ps.
No need to SSH into the instance though. You can simply fetch the logs from the Developers Console under Monitoring > Logs.
As @tamberg mentioned in a comment, the easiest option I've found for looking at logs produced by Google App Engine instances running Node.js is to simply use the log viewer at:
https://console.cloud.google.com/logs/viewer?resource=gae_app
Detailed instructions from https://cloud.google.com/appengine/docs/standard/nodejs/building-app/viewing-service-logs are as follows:
Open the Logs Viewer in the GCP Console: https://console.cloud.google.com/logs/viewer?resource=gae_app
In the first filter dropdown at the top of the page, ensure GAE Application is selected, and choose Default Service.
Use the second filter dropdown to select only stdout and click OK. This limits the viewer to logs sent to standard output.
Use the text field above the dropdowns to search for the name you used when you submitted your form. You should see the logs corresponding to your submissions.
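If you'd rather follow the logs from a terminal, as in development, the Cloud SDK can stream them as well (assuming a reasonably recent gcloud version):
gcloud app logs tail -s default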
The default logging is really awful. None of my console.log messages show up! There are a few ways you can fix this.
1) Write logs to a log file.
For example, /var/log/app_engine/custom_logs/applogs.log
https://cloud.google.com/appengine/articles/logging
"Cloud Logging and Managed VMs apps Applications using App Engine
Managed VMs should write custom log files to the VM's log directory at
/var/log/app_engine/custom_logs. These files are automatically
collected and made available in the Logs Viewer. Custom log files
must have the suffix .log or .log.json. If the suffix is .log.json,
the logs must be in JSON format with one JSON object per line. If the
suffix is .log, log entries are treated as plain text."
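As a minimal sketch of option 1 in Node.js (the file name applogs.log matches the example above; adjust to taste):
// append application logs to the directory App Engine collects from
var fs = require('fs');
var stream = fs.createWriteStream('/var/log/app_engine/custom_logs/applogs.log', { flags: 'a' });
function log(message) {
  // one plain-text entry per line, per the .log suffix rules quoted above
  stream.write(new Date().toISOString() + ' ' + message + '\n');
}
log('server started');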
2) Use winston with winston-gae.
Create a transport that will send the logs to App Engine.
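I haven't verified winston-gae's exact transport API, so as a fallback, here is a sketch using winston's standard File transport (winston 2.x style) pointed at the same App Engine log directory:
var winston = require('winston');
// built-in File transport writing where App Engine picks custom logs up
var logger = new winston.Logger({
  transports: [
    new winston.transports.File({ filename: '/var/log/app_engine/custom_logs/winston.log' })
  ]
});
logger.info('hello from winston');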
3) Use the gcloud-logging module
Too verbose for my liking, but it is another option.
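For reference, a rough sketch using the current incarnation of that module, @google-cloud/logging (package and log names are assumptions; adapt if you're on the older gcloud library):
const { Logging } = require('@google-cloud/logging');
const logging = new Logging();
const log = logging.log('my-app-log'); // hypothetical log name
// write one structured entry to Cloud Logging
const entry = log.entry({ resource: { type: 'global' } }, { message: 'hello' });
log.write(entry).then(() => console.log('entry written'));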

Not getting data in the Kibana UI with Elasticsearch/Logstash on Windows using IIS

I'm installing Logstash with Elasticsearch on Windows, with the Kibana UI. I'm using IIS for this, and I'm following this tutorial to configure all of these on my laptop.
I did exactly what was shown in the tutorial: I configured Elasticsearch and Logstash to auto-start, and I'm able to get the main page of Kibana when I go to the URL loghost.kibanaproject.net (actually 127.0.0.1:80), but I'm unable to get sample log data from a sample JSON file.
The file is placed in the root directory of the Kibana folder (C:/kibanaproject/kibana), but the data from the file is not showing, while according to the tutorial it should be displayed in the UI.
Also, when I go to IIS (Internet Information Services) Manager -> User -> Sites -> loghost.kibanaproject.net -> Basic Settings -> Test Settings, there is a warning icon on Authorization showing:
"Cannot verify access to path (C:/kibanaproject/kibana)."
I don't know how to resolve this, or whether I'm missing something. Any help will be highly appreciated.

Can I send data to a RemoteApp using Remote Desktop Services?

When I launch a RemoteApp via Remote Desktop Web Access, is there a way to send data to the remote app?
Desired scenario:
A user logs into a website with their credentials. They also provide demographic information such as first name, last name, address, etc.
The website connects to the RemoteApp via SSO and makes the demographic information available to the RemoteApp.
For example, if the RemoteApp is a Windows Forms app, can I get this information and display it in a message box?
Edit 1: TomTom's response in this question mentions using named pipes to send data. Is that applicable to this problem?
It turns out you can pass command-line parameters to the RemoteApp using the remoteapplicationcmdline property, like so:
remoteapplicationcmdline:s:/Parameter1: 5234 /Parameter2: true
(The names "/Parameter1" and "/Parameter2" are just examples. Your remote app will have to define and handle these as appropriate.)
This setting is part of the RdpFileContents property of the MsRdpClientShell object.
Here is a resource for other RdpFileContents properties.
Your code might end up looking something like this:
MsRdpClientShell.PublicMode = true;
MsRdpClientShell.RdpFileContents = 'redirectclipboard:i:1 redirectposdevices:i:0 remoteapplicationcmdline:s:/Parameter1: 5234 /Parameter2: true [Other properties here...]';
MsRdpClientShell.Launch();
For larger amounts of information, we might send preliminary data to a web service, retrieve an identifier back, pass this identifier to the RemoteApp via the command line, then have the RemoteApp query the web service to get all the information.
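As a sketch of that flow on the web page side (the token value and the /SessionToken parameter name are made up for illustration):
// Hypothetical: your web service stored the demographic data and returned this token
var sessionToken = 'abc123';
MsRdpClientShell.PublicMode = true;
MsRdpClientShell.RdpFileContents =
    'remoteapplicationcmdline:s:/SessionToken: ' + sessionToken + ' [Other properties here...]';
MsRdpClientShell.Launch();
The RemoteApp then uses the token to fetch the full record from the web service.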
Of course, for the parameters to be of use, the program must be looking for them. Setting up a database to query introduces a small security issue if the data is sensitive.
If the program (RemoteApp) is looking for data in the form of a CSV or a table or something similar, then you might be able to send a lot of data to be processed. It just depends on what parameters (and what form) the program is going to use.
