I'd like to do the following (from the log4net documentation) with NLog:
This example shows how to deliver only significant events. A LevelEvaluator is specified with a threshold of WARN. This means that an email will be sent for each WARN or higher level message that is logged. Each email will also contain up to 512 (BufferSize) previous messages of any level to provide context. Messages not sent will be discarded.
Is it possible?
I only found this on CodeProject.
But it uses a wrapper target that flushes based on the number of messages, not on the log level.
Thanks
Tobi
There is a QueuedTargetWrapper (a target that buffers log events and sends them in batches to the wrapped target) that seems to address the requirement. I haven't tried it yet.
There is a related discussion "The Benefits of Trace Level Logging in Production Without the Drawback of Enormous Files"
A simple solution that writes the last 200 log events when an error occurs:
<target name="String" xsi:type="AutoFlushWrapper" condition="level >= LogLevel.Error" flushOnConditionOnly="true">
<target xsi:type="BufferingWrapper"
bufferSize="200"
overflowAction="Discard">
<target xsi:type="wrappedTargetType" ...target properties... />
</target>
</target>
See also: https://github.com/nlog/NLog/wiki/BufferingWrapper-target
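For the exact scenario in the question (an email for every WARN or higher, with up to 512 buffered messages as context), the same wrapper chain could be combined with a Mail target roughly as below. This is just a sketch: the SMTP server, addresses, subject and layout are placeholders to replace with your own.

<target name="bufferedMail" xsi:type="AutoFlushWrapper"
        condition="level >= LogLevel.Warn"
        flushOnConditionOnly="true">
    <target xsi:type="BufferingWrapper"
            bufferSize="512"
            overflowAction="Discard">
        <!-- placeholder mail settings - adjust to your environment -->
        <target xsi:type="Mail"
                smtpServer="smtp.example.org"
                from="app@example.org"
                to="ops@example.org"
                subject="Significant events"
                layout="${longdate}|${level:uppercase=true}|${logger}|${message}" />
    </target>
</target>

Because the buffered events are handed to the Mail target as one batch when the flush fires, they should end up in a single email rather than one email per buffered message.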
I have a spring-integration flow that starts with a file inbound-channel-adapter activated by a transactional poller (tx is handled by atomikos).
The text in the file is processed and the message goes down through the flow until it gets sent to one of the JMS queues (JMS outbound-channel-adapter).
In the middle, there are some database writes within a nested transaction.
The system is meant to run 24/7.
It happens that the single message flow progressively slows down, and when I investigated, I found that the stage responsible for the increasing delay is the read from the filesystem.
Below is the first portion of the integration flow:
<logging-channel-adapter id="logger" level="INFO"/>
<transaction-synchronization-factory id="sync-factory">
<after-commit expression="payload.delete()" channel="after-commit"/>
</transaction-synchronization-factory>
<service-activator input-channel="after-commit" output-channel="nullChannel" ref="tx-after-commit-service"/>
<!-- typeb inbound from filesystem -->
<file:inbound-channel-adapter id="typeb-file-inbound-adapter"
auto-startup="${fs.typeb.enabled:true}"
channel="typeb-inbound"
directory="${fs.typeb.directory.in}"
filename-pattern="${fs.typeb.filter.filenamePattern:*}"
prevent-duplicates="${fs.typeb.preventDuplicates:false}" >
<poller id="poller"
fixed-delay="${fs.typeb.polling.millis:1000}"
max-messages-per-poll="${fs.typeb.polling.numMessages:-1}">
<transactional synchronization-factory="sync-factory"/>
</poller>
</file:inbound-channel-adapter>
<channel id="typeb-inbound">
<interceptors>
<wire-tap channel="logger"/>
</interceptors>
</channel>
I read something about issues related to the prevent-duplicates option that stores a list of seen files, but that is not the case because I turned it off.
I don't think it is related to the filter (filename-pattern), because the pattern I use in my config (*.RCV) is cheap to apply and the input folder does not contain many files (fewer than 100) at any one time.
Still, something gradually makes the read from the filesystem slower and slower over time, from a few milliseconds to over 3 seconds within a few days of uptime.
Any hints?
You should remove or move files after they have been processed; otherwise the whole directory has to be rescanned.
In newer versions, you can use a WatchServiceDirectoryScanner, which is more efficient.
But it's still best practice to clean up old files.
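A rough sketch of what plugging that scanner into the config from the question could look like, assuming Spring Integration 4.3.x (the class name, the scanner attribute and the filter property should be verified against your version; note that with a custom scanner, filtering attributes such as filename-pattern and prevent-duplicates move from the adapter onto the scanner):

<!-- sketch only: verify class and attribute names against your Spring Integration version -->
<bean id="watchScanner"
      class="org.springframework.integration.file.WatchServiceDirectoryScanner">
    <constructor-arg value="${fs.typeb.directory.in}"/>
    <property name="filter">
        <bean class="org.springframework.integration.file.filters.SimplePatternFileListFilter">
            <constructor-arg value="${fs.typeb.filter.filenamePattern:*}"/>
        </bean>
    </property>
</bean>

<file:inbound-channel-adapter id="typeb-file-inbound-adapter"
        channel="typeb-inbound"
        directory="${fs.typeb.directory.in}"
        scanner="watchScanner">
    <poller id="poller"
            fixed-delay="${fs.typeb.polling.millis:1000}"
            max-messages-per-poll="${fs.typeb.polling.numMessages:-1}">
        <transactional synchronization-factory="sync-factory"/>
    </poller>
</file:inbound-channel-adapter>

Even with a watch-based scanner, cleaning up processed files remains the important part.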
I finally found the solution.
The issue was related to the specific version of Spring I was using (4.3.4), which is affected by a bug I had not yet come across.
The problem lies in DefaultConversionService and its use of the converterCache (see https://jira.spring.io/browse/SPR-14929 for details).
Upgrading to a more recent version resolved it.
I am trying to analyze logs from snoopy.
For example:
Dec 2 07:58:31 local.server snoopy[14165]: [uid:1660 sid:14056 tty:/dev/pts/1 cwd:/home/myuser filename:/usr/bin/ssh]: ssh root@remote.server
I wrote a decoder:
<decoder name="snoopy-logger">
<program_name>^snoopy</program_name>
</decoder>
and:
<group name="snoopy-test">
<rule id="100040" level="0">
<decoded_as>snoopy-logger</decoded_as>
<description>Ignore Snoopy logger events</description>
</rule>
<rule id="100041" level="15">
<if_sid>100040</if_sid>
<match>ssh root@</match>
<description>snoopy root</description>
</rule>
</group>
And when I tested via logtest, I got:
**Phase 1: Completed pre-decoding.
full event: 'Dec 2 07:58:31 local.server snoopy[14165]: [uid:1660 sid:14056 tty:/dev/pts/1 cwd:/home/myuser filename:/usr/bin/ssh]: ssh root@remote.server'
hostname: 'local.server'
program_name: 'snoopy'
log: '[uid:1660 sid:14056 tty:/dev/pts/1 cwd:/home/myuser filename:/usr/bin/ssh]: ssh root#remote.server'
**Phase 2: Completed decoding.
decoder: 'snoopy-logger'
**Phase 3: Completed filtering (rules).
Rule id: '100041'
Level: '15'
Description: 'snoopy root'
**Alert to be generated.
So it works, but in the SIEM I get the event with src_ip and dst_ip = 0.0.0.0.
What did I miss? I need src_ip = local.server and dst_ip = remote.server.
Thanks in advance for any suggestions :)
Looks like my answer is a little bit late, but unfortunately the OSSEC rules are only half of the parsing issue in AlienVault.
Once OSSEC parses the event and it has a high enough level to generate an OSSEC alert, it gets written to /var/ossec/logs/alerts/alerts.log where it is then picked up by ossim-agent which is reading the alerts file. ossim-agent is the sensor process that is responsible for reading the raw text logs and then parsing them using regular expressions defined in a plugin (in this case, the ossec-single-line.cfg plugin in /etc/ossim/agent/plugins/).
You will probably need to add an additional rule for this by creating an ossec-single-line.cfg.local file in /etc/ossim/agent/plugins/, which adds rules on top of the original OSSEC plugin.
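Purely for illustration, a rule in ossec-single-line.cfg.local has roughly the shape sketched below. The section header, keys and the resolv() helper follow the pattern of the stock ossec-single-line.cfg, but the sid value and especially the regexp are assumptions on my part: build the pattern against the exact line your alert produces in /var/ossec/logs/alerts/alerts.log and keep the numbering consistent with the existing plugin rules.

# illustrative sketch - align sid numbering and regexp with the existing ossec-single-line.cfg
[0001 - snoopy ssh as root]
event_type=event
# named groups are placeholders; copy a real alerts.log line and write the pattern from it
regexp="\((?P<src>\S+)\).*snoopy.*ssh root@(?P<dst>\S+)"
plugin_sid=100041
src_ip={resolv($src)}
dst_ip={resolv($dst)}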
More info on creating rules and plugin files can be found in AlienVault's docs here:
https://www.alienvault.com/doc-repo/usm/security-intelligence/AlienVault-USM-Plugins-Management-Guide.pdf
Check out the Customizing Plugins section starting on page 35.
Happy Tuning!
I understand how aggregating based on size works, but I also want the release strategy to depend on another step in the pipeline that may still be running. The idea is that I move files to a certain directory, "source", aggregate enough files, then move them from "source" to "stage", and then process the staged files. While that processing is running I don't want to put more files into "stage", but I do want to keep adding files to the "source" folder (that part is handled by a dispatcher channel connected to a file inbound adapter before the aggregator).
<int:aggregator id="filesBuffered"
input-channel="sourceFilesProcessed"
output-channel="stagedFiles"
release-strategy-expression="size() == 10"
correlation-strategy-expression="'mes-group'"
expire-groups-upon-completion="true"
/>
<int:channel id="stagedFiles" />
<int:service-activator input-channel="stagedFiles"
output-channel="readyForMes"
ref="moveToStage"
method="move" />
So, as you can see, I don't want to release the aggregated messages while an instance of the moveToStage service activator is still running.
I thought about making the stagedFiles channel a queue channel, but that doesn't seem right, because I want the files passed to moveToStage as a Collection, not one at a time, which is what I assume a queue channel would do. Instead, I want to reach a threshold, e.g. 10 files, and pass those to stagedFiles so that moveToStage can process them; until that step is done, the aggregator should keep aggregating files and only then release everything it has accumulated.
Thanks
I suggest you keep a flag as an AtomicBoolean bean, update it from your moveToStage#move method, and check its state from:
release-strategy-expression="size() >= 10 and @stagingFlag.get()"
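A minimal sketch of the wiring, assuming the flag bean is named stagingFlag (referenced with SpEL's @ bean syntax) and that your moveToStage#move implementation sets it to false when it starts and back to true when it finishes; the bean name and starting value are placeholders:

<!-- staging is considered idle while the flag is true;
     moveToStage#move is expected to flip it to false on entry and back to true when done -->
<bean id="stagingFlag" class="java.util.concurrent.atomic.AtomicBoolean">
    <constructor-arg value="true"/>
</bean>

<int:aggregator id="filesBuffered"
    input-channel="sourceFilesProcessed"
    output-channel="stagedFiles"
    release-strategy-expression="size() >= 10 and @stagingFlag.get()"
    correlation-strategy-expression="'mes-group'"
    expire-groups-upon-completion="true" />

Keep in mind that the release expression is only re-evaluated when a new message arrives for the group, so if the inbound flow can pause you may also want a group-timeout so a ready group is not left waiting indefinitely.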
I've got the EventLog target set up like so:
<target xsi:type="EventLog"
name="EventLog"
layout="${longdate:universalTime=true}|${level:uppercase=true}|${logger}|${message}"
source="MyApp"
log="Application" />
Now, obviously not all my events will have the same ID, so I want to set the event ID on a per-message basis, rather than setting a static ID in the config. I believe this should work:
var logger = LogManager.GetCurrentClassLogger();
var logEvent = new LogEventInfo(LogLevel.Warn, logger.Name, "Test message");
logEvent.Properties.Add("EventID", 4444);
logger.Log(logEvent);
...but my events always have event ID set to 0. Anyone know how to get this working?
I figured it out - you have to use a layout in the eventId property of the target:
<target xsi:type="EventLog"
name="EventLog"
layout="${longdate:universalTime=true}|${level:uppercase=true}|${logger}|${message}"
source="MyApp"
eventId="${event-properties:EventID:whenEmpty=0}"
log="Application" />
I've also created a logging facade called Timber for both NLog and log4net, which makes logging messages with different event IDs very simple.
On the GitHub wiki there's an example config for the EventLog target that includes an eventId; the eventId uses a Layout that renders the event ID.
https://github.com/NLog/NLog/wiki/Eventlog-target
How do you configure CruiseControl to send out emails that contain the error log whenever a build fails? I've gotten it to send out emails to users when the build fails, but they do not include the actual error that caused the build to fail. I know that if I configure it to send emails only to the users who have made modifications, the error log is included in those emails. This is a sample of what I have:
<publishers>
    <rss/>
    <xmllogger/>
    <email from="abc@abc.com" mailhost="abc.abc.com" includeDetails="TRUE">
        <users>
            <user name="Joe" group="devs" address="joe@abc.com"/>
            <user name="Jim" group="devs" address="jim@abc.com"/>
        </users>
        <groups>
            <group name="devs" notification="Failed"/>
        </groups>
    </email>
</publishers>
You can check if \cruisecontrol.net\server\xsl\compile.xsl is the same as \cruisecontrol.net\webdashboard\xsl\compile.xsl.
Compile.xsl is the default file used to print the error messages from your error log. The one in \webdashboard\ is used for the web dashboard (as the name implies) and the one under \server\ is used for emails.
You can also check in ccnet.exe.config whether \cruisecontrol.net\server\xsl\compile.xsl is used for emails.
Mine, for example, points to compile.xsl under \server:
<!-- Specifies the stylesheets that are used to transform the build results when using the EmailPublisher -->
<xslFiles>
<file name="xsl\header.xsl" />
<file name="xsl\compile.xsl" />
<file name="xsl\unittests.xsl" />
<file name="xsl\fit.xsl" />
<file name="xsl\modifications.xsl" />
<file name="xsl\fxcop-summary.xsl" />
</xslFiles>
Your email publisher takes the buildlog.xml and transforms it against whatever XSLs are configured in your console or service config, depending on which one you use. There should be no difference in the content of the email no matter who you have it configured to be sent to and when, as long as you have the merge before the email publisher and the email inside the publishers section; a sketch of that ordering is shown below. I don't see how it could be different. Are you sure the same failure produces different emails? My guess would be that the build is failing badly somewhere and the build log is not being generated in one of the cases.
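For reference, the merge-before-email ordering mentioned above looks roughly like this in ccnet.config; the merged file pattern is only a placeholder for whatever extra result XML your build produces, and if the compile errors come straight from the build task they are already in the build log picked up by xmllogger.

<publishers>
    <!-- placeholder pattern: merge any extra result XML before logging and emailing -->
    <merge>
        <files>
            <file>C:\Builds\MyProject\results\*.xml</file>
        </files>
    </merge>
    <xmllogger/>
    <email from="abc@abc.com" mailhost="abc.abc.com" includeDetails="TRUE">
        <!-- users and groups exactly as in the config above -->
    </email>
</publishers>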
The build log is getting generated. I can see the error. It's just not included in the email.