Serilog writes the logs twice - asp.net-core-2.0

I'm using Serilog with the Elasticsearch sink, configured like this:
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Verbose()
    .MinimumLevel.Override("Microsoft", LogEventLevel.Verbose)
    .Enrich.FromLogContext()
    .Enrich.WithExceptionDetails()
    .Enrich.WithProperty("Application", "abc")
    .Enrich.WithProperty("Environment", env.EnvironmentName)
    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri(Configuration["LoggingEndpoint"]))
    {
        AutoRegisterTemplate = true,
        CustomFormatter = new ExceptionAsObjectJsonFormatter(renderMessage: true) // Better formatting for exceptions
    })
    .CreateLogger();
// and later:
services.AddLogging(loggingBuilder =>
    loggingBuilder.AddSerilog());
But I can see every log twice in Kibana, with a couple of milliseconds' difference in their timestamps. I tried the solutions provided here, just in case they might help, but no luck.

This is probably happening because you are configuring Serilog both in your appsettings.json and in code. That will log everything twice.
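As a minimal sketch of the fix (my own addition, assuming the duplicate sink really does come from reading the same settings twice; the "LoggingEndpoint" key and env variable are taken from the question), pick a single source of configuration and drop the other one:
// Option 1: configure only in code, and remove the "Serilog" section from appsettings.json
// so the Elasticsearch sink is not registered a second time.
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Verbose()
    .Enrich.FromLogContext()
    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri(Configuration["LoggingEndpoint"])))
    .CreateLogger();

// Option 2: configure only from appsettings.json and drop the WriteTo/Enrich calls in code.
// Log.Logger = new LoggerConfiguration()
//     .ReadFrom.Configuration(Configuration)   // requires the Serilog.Settings.Configuration package
//     .CreateLogger();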

Kafka to Elasticsearch consumption with node.js

I know there are quite a few node.js modules that implement a Kafka consumer that gets messages and writes them to Elasticsearch. But I only need some of the fields from each message, not all of them. Is there an existing solution I don't know about?
The question is asking for an example in node.js. The kafka-node module provides a very nice mechanism for getting a Consumer, which you can combine with the elasticsearch-js module:
// configure Elasticsearch client
var elasticsearch = require('elasticsearch');
var esClient = new elasticsearch.Client({
  // ... connection details ...
});

// configure Kafka Consumer
var kafka = require('kafka-node');
var Consumer = kafka.Consumer;
var client = new kafka.Client();
var consumer = new Consumer(
  client,
  [
    // ... topics / partitions ...
  ],
  { autoCommit: false }
);

consumer.on('message', function(message) {
  if (message.some_special_field === "drop") {
    return; // skip it
  }

  // drop fields (you can use delete message['field1'] syntax if you need
  // to parse a more dynamic structure)
  delete message.field1;
  delete message.field2;
  delete message.field3;

  esClient.index({
    index: 'index-name',
    type: 'type-name',
    id: message.id_field, // ID will be auto generated if none/unset
    body: message
  }, function(err, res) {
    if (err) {
      throw err;
    }
  });
});

consumer.on('error', function(err) {
  console.log(err);
});
NOTE: Using the Index API is not a good practice when you have tons of messages being sent through, because it requires that Elasticsearch create a thread per operation, which is obviously wasteful and will eventually lead to rejected requests if the thread pool is exhausted as a result. In any bulk-ingestion situation, a better solution is to look into something like Elasticsearch Streams (or Elasticsearch Bulk Index Stream, which builds on top of it), both of which sit on top of the official elasticsearch-js client. However, I've never used those client extensions, so I don't really know how well they do or do not work, but using them would simply replace the part where I am showing the indexing happening.
I'm not convinced that the node.js approach is actually better than the Logstash one below in terms of maintenance and complexity, so I've left both here for reference.
The better approach is probably to consume Kafka from Logstash, then ship it off to Elasticsearch.
You should be able to use Logstash to do this in a straightforward way using the Kafka input and the Elasticsearch output.
Each document in the Logstash pipeline is called an "event". The Kafka input assumes that it will receive JSON coming in (configurable by its codec), which will populate a single event with all of the fields from that message.
You can then drop the fields that you have no interest in handling, or conditionally drop the entire event.
input {
  # Receive from Kafka
  kafka {
    # ...
  }
}

filter {
  if [some_special_field] == "drop" {
    drop { } # skip the entire event
  }

  # drop specific fields
  mutate {
    remove_field => [
      "field1", "field2", ...
    ]
  }
}

output {
  # send to Elasticsearch
  elasticsearch {
    # ...
  }
}
Naturally, you'll need to configure the Kafka input (from the first link) and the Elasticsearch output (from the second link).
The previous answer is not scalable for production.
You will have to use the Elasticsearch bulk API. You can use this npm package: https://www.npmjs.com/package/elasticsearch-kafka-connect. It allows you to send data from Kafka to ES (a duplex connection from ES to Kafka was still in development as of May 2019).

Append to Azure Append Blob Using AppendTextAsync Results in Missing Data

I'm attempting to create a logger for an application in Azure using the new Azure append blobs and the Azure Storage SDK 6.0.0. So I created a quick test application to get a better understanding of append blobs and their performance characteristics.
My test program simply loops 100 times and appends a line of text to the append blob. If I use the synchronous AppendText() method, everything works fine; however, it appears to be limited to about 5-6 appends per second. So I attempted to use the asynchronous AppendTextAsync() method, but when I use it, the loop runs much faster (as expected) and the append blob is missing about 98% of the appended text, without any exception being thrown.
If I add a Thread.Sleep and sleep for 100 milliseconds between each append operation, I end up with about 50% of the data. Sleep for 1 second and I get all of the data.
This seems similar to an issue that was discovered in v5.0.0 but was fixed in v5.0.2: https://github.com/Azure/azure-storage-net/releases/tag/v5.0.2
Here is my test code if you'd like to try to reproduce this issue:
static void Main(string[] args)
{
    var accountName = "<account-name>";
    var accountKey = "<account-key>";
    var credentials = new StorageCredentials(accountName, accountKey);
    var account = new CloudStorageAccount(credentials, true);
    var client = account.CreateCloudBlobClient();
    var container = client.GetContainerReference("<container-name>");
    container.CreateIfNotExists();
    var blob = container.GetAppendBlobReference("append-blob.txt");
    blob.CreateOrReplace();

    for (int i = 0; i < 100; i++)
        blob.AppendTextAsync(string.Format("Appending log number {0} to an append blob.\r\n", i));

    Console.WriteLine("Press any key to exit.");
    Console.ReadKey();
}
Does anyone know if I'm doing something wrong with my attempt to append lines of text to an append blob? Otherwise, any idea why this would just lose data without throwing some kind of exception?
I'd really like to start using this as a repository for my application logs (since append blobs were largely created for that purpose). However, it would be quite unreliable if logs just went missing without warning whenever the rate of logging went above 5-6 logs per second.
Any thoughts or feedback would be greatly appreciated.
I now have a working solution based on the information provided by @ZhaoxingLu-Microsoft. According to the API documentation, the AppendTextAsync() method should only be used in a single-writer scenario, because the API internally uses the append-offset conditional header to avoid duplicate blocks, which does not work in a multiple-writer scenario.
Here is the documentation that specifies this behavior is by design:
https://msdn.microsoft.com/en-us/library/azure/mt423049.aspx
So the solution is to use the AppendBlockAsync() method instead. The following implementation appears to work correctly:
var tasks = new Task[100];

for (int i = 0; i < 100; i++)
{
    var message = string.Format("Appending log number {0} to an append blob.\r\n", i);
    var bytes = Encoding.UTF8.GetBytes(message);
    var stream = new MemoryStream(bytes);
    tasks[i] = blob.AppendBlockAsync(stream);
}

Task.WaitAll(tasks);
Please note that I am not explicitly disposing the memory streams in this example, because a proper solution would entail a using block with an async/await inside it, in order to wait for the async append operation to finish before disposing the stream... but that causes a completely unrelated issue.
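One possible workaround (my own sketch, not part of the original answer, and untested): keep the streams in a list and dispose them only after Task.WaitAll has confirmed that every append completed:
var tasks = new Task[100];
var streams = new List<MemoryStream>();

for (int i = 0; i < 100; i++)
{
    var bytes = Encoding.UTF8.GetBytes(string.Format("Appending log number {0} to an append blob.\r\n", i));
    var stream = new MemoryStream(bytes);
    streams.Add(stream);
    tasks[i] = blob.AppendBlockAsync(stream);
}

Task.WaitAll(tasks);

// Safe to dispose now: every append operation has completed.
foreach (var stream in streams)
    stream.Dispose();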
You are using the async method incorrectly. blob.AppendTextAsync() is non-blocking, but the operation hasn't actually finished when the call returns. You should wait for all of the async tasks to complete before exiting the process.
The following code is the correct usage:
var tasks = new Task[100];

for (int i = 0; i < 100; i++)
    tasks[i] = blob.AppendTextAsync(string.Format("Appending log number {0} to an append blob.\r\n", i));

Task.WaitAll(tasks);
Console.WriteLine("Press any key to exit.");
Console.ReadKey();
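As a small follow-up sketch (my own addition, assuming C# 7.1 or later is available), an async Main lets you await the batch instead of blocking on Task.WaitAll:
static async Task Main(string[] args)
{
    // ... same blob setup as above ...
    var tasks = new Task[100];

    for (int i = 0; i < 100; i++)
        tasks[i] = blob.AppendTextAsync(string.Format("Appending log number {0} to an append blob.\r\n", i));

    // Await all appends without blocking the calling thread.
    await Task.WhenAll(tasks);

    Console.WriteLine("Press any key to exit.");
    Console.ReadKey();
}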

When using gulp, is there any way to suppress the 'Started' and 'Finished' log entries for certain tasks

When using gulp, is there any way to suppress the 'Started' and 'Finished' log entries for certain tasks? I want to use the dependency tree, but a few tasks in the tree are intermediary steps with their own logging facilities, so I don't want gulp's logging for them.
You can use the --silent flag with the gulp CLI to disable all gulp logging.
https://github.com/gulpjs/gulp/blob/master/docs/CLI.md
[UPDATE]
As of July 2014, a --silent option has been added to gulp (https://github.com/gulpjs/gulp/commit/3a62b2b3dbefdf91c06e523245ea3c8f8342fa2c#diff-6515adedce347f8386e21d15eb775605).
This is demonstrated in @slamborne's answer above, and you should favor using it instead of the solution below if it matches your use case.
[/UPDATE]
Here is a way of doing it (inside your gulpfile):
var cl = console.log;
console.log = function () {
var args = Array.prototype.slice.call(arguments);
if (args.length) {
if (/^\[.*gulp.*\]$/.test(args[0])){
return;
}
}
return cl.apply(console, args);
};
... and that will ignore EVERY message sent using gutil.log.
The trick here, obviously, is to inspect messages sent to console.log for a first argument that looks like "[gulp]" (see the gulp-util log source code) and, if it matches, drop the message entirely.
Now, this really is dirty - you really shouldn't do that without parental supervision, and you have been warned :-)
A bit late, but I think it would be better to use noop from gulp-util, no?
var gutil = require('gulp-util');
// ...
gutil.log = gutil.noop;
// or
gutil.log = function() { return this; };
As addressed here

Trouble getting transaction working with SubSonic

I'm having a little trouble getting a multi-delete transaction working using SubSonic in an ASP.NET/SQL Server 2005 environment. It seems it always makes the change(s) in the database, even without a call to the Complete() method on the TransactionScope object.
I've been reading through the posts regarding this and tried various alternatives (switching the order of my using statements, using DTC, not using DTC, etc.) but no joy so far.
I'm going to assume it's my code that's the problem but I can't spot the issue - is anyone able to help? I'm using SubSonic 2.2. Code sample below:
using (TransactionScope ts = new TransactionScope())
{
    using (SharedDbConnectionScope sts = new SharedDbConnectionScope())
    {
        foreach (RelatedAsset raAsset in relAssets)
        {
            // grab the asset id:
            Guid assetId = new Select(RelatedAssetLink.AssetIdColumn)
                .From<RelatedAssetLink>()
                .Where(RelatedAssetLink.RelatedAssetIdColumn).IsEqualTo(raAsset.RelatedAssetId).ExecuteScalar<Guid>();

            // step 1 - delete the related asset:
            new Delete().From<RelatedAsset>().Where(RelatedAsset.RelatedAssetIdColumn).IsEqualTo(raAsset.RelatedAssetId).Execute();

            // more deletion steps...
        }

        // complete the transaction:
        ts.Complete();
    }
}
The order of your using statements is correct (I remember the order with this trick: the connection needs to know about the transaction while it is created, and it does that by checking System.Transactions.Transaction.Current).
One hint: you don't need the nested braces, and you don't need to keep a reference to the SharedDbConnectionScope instance.
This looks far more readable.
using (var ts = new TransactionScope())
using (new SharedDbConnectionScope())
{
    // some db stuff
    ts.Complete();
}
Anyway, I don't see why this shouldn't work.
If the problem were related to MSDTC, an exception would occur.
I can only imagine that there is a problem in the SQL Server 2005 configuration, but I am not a SQL Server expert.
Maybe you should try some demo code to verify that transactions work:
using (var conn = new SqlConnection("your connection string"))
{
    conn.Open();
    var tx = conn.BeginTransaction();

    using (var cmd = new SqlCommand("DELETE FROM table WHERE id = 1", conn, tx))
        cmd.ExecuteNonQuery();

    using (var cmd2 = new SqlCommand("DELETE FROM table WHERE id = 2", conn, tx))
        cmd2.ExecuteNonQuery();

    tx.Commit();
}
And SubSonic supports native transactions without using TransactionScope: http://subsonicproject.com/docs/BatchQuery

Is there a way to programmably flush the buffer in log4net

I'm using log4net with the AdoNetAppender. It seems that the AdoNetAppender has a Flush method. Is there any way I can call that from my code?
I'm trying to create an admin page to view all the entries in the database log, and I would like to set up log4net with bufferSize=100 (or more). I then want the administrator to be able to click a button on the admin page to force log4net to write the buffered log entries to the database (without shutting down log4net).
Is that possible?
Assuming you're using log4net out of the box, you can dig your way down & flush the appender like this:
public void FlushBuffers()
{
    ILog log = LogManager.GetLogger("whatever");
    var logger = log.Logger as Logger;
    if (logger != null)
    {
        foreach (IAppender appender in logger.Appenders)
        {
            var buffered = appender as BufferingAppenderSkeleton;
            if (buffered != null)
            {
                buffered.Flush();
            }
        }
    }
}
Edit: I wrote the above under the assumption that you wanted to flush the appenders for a specific ILog (probably a bad assumption now that I re-read the question), but as Stefan points out in a comment below, you can simplify the code a little if you want to flush all appenders across the whole repository as follows:
public void FlushBuffers()
{
    ILoggerRepository rep = LogManager.GetRepository();
    foreach (IAppender appender in rep.GetAppenders())
    {
        var buffered = appender as BufferingAppenderSkeleton;
        if (buffered != null)
        {
            buffered.Flush();
        }
    }
}
Today a simpler option is available:
LogManager.Flush();
Flushes logging events buffered in all configured appenders in the default repository.
https://logging.apache.org/log4net/release/sdk/html/M_log4net_LogManager_Flush.htm
It is highly recommended to add a timeout, for example:
LogManager.Flush(3000);
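A small usage sketch (my own addition; as I understand the API, the timeout overload returns a bool indicating whether every appender flushed within the given time, so an admin action could surface that):
public bool FlushAllLogBuffers()
{
    // Returns false if one or more appenders could not flush within 3 seconds.
    return log4net.LogManager.Flush(3000);
}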
Even if Flush is used or ImmediateFlush is set, changes are not reflected immediately in the log file. In order for FileSystemWatcher events to fire, you can do this:
if (appender.ImmediateFlush)
{
    using (FileStream fs = new FileStream(appender.File,
        FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    { }
}
