Disable SQL logging in OrmLite - ServiceStack

How do I turn off SQL logging?
I have NLog registered like so:
LogManager.LogFactory = new NLogFactory();
SetConfig(new HostConfig
{
    AddRedirectParamsToQueryString = true,
    DebugMode = false
});
SQL is being written to the log in production and I would like to turn it off. I am using OrmLite with PostgreSQL. How do I do that?

OrmLite logs its SQL using the configured logger at the Debug log level, so you'd need to disable the Debug level in NLog.
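For example, assuming a file-based NLog.config with a target named "file" (both the target name and the file setup are placeholders for whatever your configuration uses), raising the minimum level to Info drops OrmLite's Debug-level SQL:

```xml
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd">
  <targets>
    <target name="file" type="File" fileName="app.log" />
  </targets>
  <rules>
    <!-- minlevel="Info" discards Trace and Debug messages, including OrmLite's SQL -->
    <logger name="*" minlevel="Info" writeTo="file" />
  </rules>
</nlog>
```

Your own Info/Warn/Error logging is unaffected; only Debug-and-below output (which is where OrmLite writes its SQL) is filtered out.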

Related

Filtering out Azure ServiceBus logs from WebJob Application Insights

I have been trying to filter out the information messages that my ServiceBus-triggered WebJob is sending to Application Insights. These messages consist of these two logs:
ReceiveBatchAsync start. MessageCount = 1
ReceiveBatchAsync done. Received '0' messages. LockTokens =
My goal is to only log information traces that I am logging in my code and ignore the logs coming from Azure.Messaging.ServiceBus. How can I achieve this?
So far I have tried to add a filter using the following code in my Program.cs file:
b.AddFilter("Azure.Messaging.ServiceBus", LogLevel.Warning);
and I have also tried to add the following settings in my appsettings.json file
"Logging": {
  "LogLevel": {
    "Default": "Information",
    "Azure.Messaging.ServiceBus": "Warning"
  }
},
As for my set up I am using the following packages which are of concern:
Microsoft.Extensions.Logging.Console
Microsoft.Azure.WebJobs.Extensions.ServiceBus
The following code is my logging configuration in the Program.cs file.
I had the same issue filtering some Service Bus and EF Core logs. I was able to partially solve it by adding some hardcoded filters to the logging configuration code:
builder.ConfigureLogging((context, b) => {
    // If the key exists in settings, use it to enable Application Insights.
    string instrumentationKey = context.Configuration["EASY_APPINSIGHTS_INSTRUMENTATIONKEY"];
    if (!string.IsNullOrEmpty(instrumentationKey)) {
        b.AddApplicationInsightsWebJobs(o => o.InstrumentationKey = instrumentationKey);
    }
    b.AddConsole();
    b.AddFilter("Azure.Messaging.ServiceBus", LogLevel.Error);
    b.AddFilter("Microsoft.EntityFrameworkCore", LogLevel.Error);
});
But I would like to know how I can set up this logging purely by updating the App Service's application settings.
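One approach worth trying (a sketch, not verified against every WebJobs SDK version): first make sure the host actually binds the "Logging" configuration section inside ConfigureLogging, e.g. with `b.AddConfiguration(context.Configuration.GetSection("Logging"));` — filters from appsettings.json are not applied unless that binding exists. Once the section is bound, the same keys can be overridden from App Service application settings in the portal, where nested configuration keys use `__` in place of `:`:

```
Logging__LogLevel__Azure.Messaging.ServiceBus = Warning
```

With the section bound, changing this app setting should adjust the filter without touching code or redeploying.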

How to configure multiple cassandra contact-points for lagom?

In Lagom it appears the contact points get loaded from the service locator, which accepts only a single URI. How can we specify multiple Cassandra contact points?
lagom.services {
cas_native = "tcp://10.0.0.120:9042"
}
I have tried setting just the contact points in the Akka Persistence config, but that doesn't seem to override the service locator config.
All that I was missing was the session provider to override the service lookup:
session-provider = akka.persistence.cassandra.ConfigSessionProvider
contact-points = ["10.0.0.120", "10.0.3.114", "10.0.4.168"]
was needed in the Lagom Cassandra config.
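For context, these keys typically live under the Akka Persistence Cassandra sections of application.conf; a sketch (the section names assume the akka-persistence-cassandra version Lagom was using at the time, so check them against your dependency):

```
cassandra-journal {
  session-provider = akka.persistence.cassandra.ConfigSessionProvider
  contact-points = ["10.0.0.120", "10.0.3.114", "10.0.4.168"]
}

cassandra-snapshot-store {
  session-provider = akka.persistence.cassandra.ConfigSessionProvider
  contact-points = ["10.0.0.120", "10.0.3.114", "10.0.4.168"]
}
```

Setting the session provider is what stops the single-URI service-locator lookup from winning over the explicit contact-point list.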

HikariCP poolName configuration for Slick 3.0

I am using the following Typesafe configuration in application.conf for Slick 3.0, where HikariCP is the default connection pool. I set the poolName to "primaryPool":
slick.dbs.primary = {
  driver = "com.typesafe.slick.driver.ms.SQLServerDriver$"
  db {
    url = "DB URL"
    driver = com.microsoft.sqlserver.jdbc.SQLServerDriver
    user = "myUser"
    password = "myPassword"
    poolName = "primaryPool"
  }
}
From the HikariCP log, I saw
Before cleanup pool stats db (total=21, inUse=0, avail=21, waiting=0)
The default connection pool name "db" is used rather than the "primaryPool" I expected. I suspect the configuration format is not correct.
So my question is how to configure poolName in application.conf using Typesafe configuration?
Note: because my application will have several connection pools, I want each pool's name to appear in the logs so I can tell the pools apart.
I found a workaround by setting poolName in my own code:
val dbConfig = dbConfigProvider.get[JdbcProfile]
val poolName = dbConfig.config.getConfig("db").getString("poolName")
dbConfig.db.source.asInstanceOf[HikariCPJdbcDataSource].ds.setPoolName(poolName)
It is not a good solution since it hard-codes HikariCPJdbcDataSource, but it meets my requirement for now.
I would still like to know how to configure poolName correctly in application.conf.

Log4net.Azure configuration

Recently, we moved our solution (ASP.NET MVC4) to Windows Azure and so far it is working fine. Our only concern is that we are not able to locate our log files, no matter which method we implement.
Our existing application uses the log4net framework for logging. Now that we have moved our solution to Windows Azure, we still want to use log4net with minimal changes to our existing code.
We have followed many blogs and tutorials in order to implement the following methods:
Synchronizing a log file to blob storage using Windows Azure Diagnostics module.
Using a custom log4net appender to write directly to table storage.
Logging to the Trace log and synchronizing to table storage.
Unfortunately, none of the above has delivered the desired result. We are still not able to access our logs. Is there any official source on how to use log4net with Windows Azure?
Step 1: I imported Log4net.Azure as a reference in my MVC4 WebRole application.
Step 2: I added configuration lines in the OnStart method of the WebRole class:
public class WebRole : RoleEntryPoint
{
    private static readonly ILog _logger = LogManager.GetLogger(typeof(WebRole));

    public override void Run()
    {
        _logger.InfoFormat("{0}'s entry point called", typeof(WebRole).Name);
        while (true)
        {
            Thread.Sleep(10000);
            _logger.Debug("Working...");
        }
    }

    public override bool OnStart()
    {
        BasicConfigurator.Configure(AzureAppender.New(conf =>
        {
            conf.Level = "Debug";
            conf.ConfigureRepository((repo, mapper) =>
            {
                repo.Threshold = mapper("Debug"); // root
            });
            conf.ConfigureAzureDiagnostics(dmc =>
            {
                dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Information;
            });
        }));
        return base.OnStart();
    }
}
Step 3: I create an instance of ILog whenever I need to log; here is an example:
public class TestController : ApiController
{
    private static readonly ILog _logger = LogManager.GetLogger(typeof(WebRole));

    [HttpGet]
    public String Get()
    {
        _logger.InfoFormat("{0}'s entry point called", typeof(WebRole).Name);
        _logger.Debug("<<<<<<<<<< WS just invoked >>>>>>>>>>>>...");
        return "hello world logs on Azure :)";
    }
}
You can use AdoNetAppender with an Azure SQL database, configured like this example: http://logging.apache.org/log4net/release/config-examples.html
Note: create the Log table using this statement:
CREATE TABLE [dbo].[Log] (
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [Date] [datetime] NOT NULL,
    [Thread] [varchar](255) NOT NULL,
    [Level] [varchar](50) NOT NULL,
    [Logger] [varchar](255) NOT NULL,
    [Message] [varchar](4000) NOT NULL,
    [Exception] [varchar](2000) NULL,
    CONSTRAINT [PK_Log] PRIMARY KEY CLUSTERED (
        [Id] ASC
    )
)
Re: log4net.Azure
The logs won't be visible because the implementation uses the BufferingAppenderSkeleton base class, which has a buffer size of 512 by default. The application has to accumulate 513 log entries in RAM before they are flushed. I did it this way to make it more performant.
You have 3 options to make it work per your expectations in an MVC/ASP.NET environment:
Change the buffer size in the config file
Call flush (but only in debug mode, this is a performance killer)
Call flush when your application shuts down so that it does an immediate write
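As a sketch of the first option, assuming an XML-based log4net configuration (the appender name and type below are placeholders — the question configures the appender in code, where the same property would be set on the appender object): bufferSize is the standard BufferingAppenderSkeleton property controlling when the buffer flushes.

```xml
<appender name="AzureAppender" type="...">
  <!-- flush after every 10 entries instead of the 512 default;
       smaller values trade write throughput for log freshness -->
  <bufferSize value="10" />
</appender>
```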
If you are using full IIS in your web role (which is the default configuration), the website and the web role run in separate processes.
Because of this you'll have to set up the logging twice: once in the OnStart() of your WebRole, and once in the Application_Start() of your Global.asax.

Reconnect to DB within log4j

If I have a JDBCAppender configured to send log messages to MySQL, and I restart the database while my system is up, will it reconnect to the DB?
I have had this use case occur over this past weekend. My database is hosted by Amazon AWS, which rolled over my log database, and all of the instances logging to that database via the log4j JDBCAppender stopped logging. I bounced one of the apps and it resumed logging.
So the answer to this question, through experience, appears to be No.
If the database goes down and comes back online, the JDBC appender does not reconnect automatically.
Edit: JDBCAppender's getConnection might be overridden to fix this. JDBCAppender in log4j 1.2.15 has the following code:
protected Connection getConnection() throws SQLException {
    if (!DriverManager.getDrivers().hasMoreElements())
        setDriver("sun.jdbc.odbc.JdbcOdbcDriver");
    if (connection == null) {
        connection = DriverManager.getConnection(databaseURL, databaseUser,
                databasePassword);
    }
    return connection;
}
So if the connection is not null but broken (in need of a reconnect), log4j will hand the broken connection back to its logic, and executing the statement that logs to the DB will fail.
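The override suggested in the edit can be sketched in a self-contained way. In the snippet below, the `DbConnection` interface stands in for `java.sql.Connection` (exposing just `isValid`/`close`) and the supplier stands in for `DriverManager.getConnection`, so the reconnect logic can be shown and exercised without a real database or the log4j jar; none of these names come from log4j itself.

```java
import java.util.function.Supplier;

// Stand-in for java.sql.Connection: only the two calls the fix needs.
interface DbConnection {
    boolean isValid();
    void close();
}

// Mirrors what an overridden JDBCAppender.getConnection() would do: a cached
// connection that no longer validates is closed and dropped, forcing a fresh
// one to be opened instead of handing the dead connection back to the appender.
class ReconnectingHolder {
    private final Supplier<DbConnection> factory; // stand-in for DriverManager.getConnection
    private DbConnection connection;

    ReconnectingHolder(Supplier<DbConnection> factory) {
        this.factory = factory;
    }

    DbConnection getConnection() {
        if (connection != null && !connection.isValid()) {
            connection.close();
            connection = null; // discard the dead connection
        }
        if (connection == null) {
            connection = factory.get(); // (re)connect
        }
        return connection;
    }
}
```

In a real JDBCAppender subclass the same shape would go into `getConnection()`, with `connection.isValid(timeoutSeconds)` (JDBC 4+) as the health check before the existing null check.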
Not a workaround, but a proper solution is to replace log4j with logback: see related answer: Log to a database using log4j
