log4net to SQLServer : what happens if database is unavailable? - log4net

I have a log4net ADO.NET appender (AdoNetAppender) writing to a SQL Server database. I like it, I think it's neat. Before I send it into production, I want to know what the behaviour will be if the database goes down.
I don't want the application to stop because the logging database is unavailable. I am presuming that log4net will just silently fail and not do anything, at least that's what I'm hoping. Can anyone confirm this or (better) point me to some documentation that will confirm this?

The appender (like all log4net appenders that I am aware of) will fail silently and thus stop logging. You can configure the appender to try to reconnect though:
<reconnectonerror value="True" />
In that case you probably want to specify a connection timeout in your database connection string:
Connect Timeout=1
If you do not, your application will be very slow whenever the database is offline.

On .NET 4.5.1 or later you will also have to set ConnectRetryCount=0 in your connection string.
Appender Configuration:
<ReconnectOnError value="true" />
<connectionString value="...Connect Timeout=1;ConnectRetryCount=0;" />
If you do async logging, for example with Log4Net.Async, then Connect Timeout may be left at its default of 15 seconds.
Details
.NET 4.5.1 adds ADO.NET connection resiliency, which reports the connection as still open even when it is not and tries to reconnect on its own, controlled by ConnectRetryCount. Since we want log4net to do the reconnecting instead (and ConnectRetryCount allows at most 255 retries anyway), we should set ConnectRetryCount to 0 to disable it. Sources: https://issues.apache.org/jira/browse/LOG4NET-442 and https://blogs.msdn.microsoft.com/dotnet/2013/06/26/announcing-the-net-framework-4-5-1-preview/
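Taken together, the relevant parts of an AdoNetAppender configuration might look like the following (a sketch: the server name, table, and command text are placeholders for your own schema, and the parameter declarations are only indicated by a comment):

```xml
<appender name="AdoNetAppender" type="log4net.Appender.AdoNetAppender">
  <!-- retry the connection after a failed write instead of staying silently dead -->
  <reconnectonerror value="true" />
  <!-- short Connect Timeout so logging fails fast while the DB is down;
       ConnectRetryCount=0 disables ADO.NET connection resiliency (.NET 4.5.1+) -->
  <connectionString value="Data Source=MYSERVER;Initial Catalog=Logs;Integrated Security=True;Connect Timeout=1;ConnectRetryCount=0;" />
  <commandText value="INSERT INTO LogEntries ([Date],[Level],[Message]) VALUES (@log_date, @log_level, @message)" />
  <!-- the <parameter> elements for @log_date, @log_level, @message go here -->
</appender>
```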

Using reconnectonerror will certainly let you start logging again once the DB server is available, but the log messages emitted while the server was down will be lost. As far as I know, log4net has no provision to persist messages until the DB server is available again.
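If losing those messages matters, one common mitigation (a sketch; log4net will not backfill the database for you, and the appender names here are placeholders) is to attach a file appender alongside the ADO appender so a local copy survives the outage:

```xml
<root>
  <level value="DEBUG" />
  <appender-ref ref="AdoNetAppender" />
  <!-- local fallback copy; keeps the messages the DB appender drops during an outage -->
  <appender-ref ref="RollingFileAppender" />
</root>
```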

Related

Warnings on startup with atomikos starter dependency

I have a spring boot application with postgresql and rabbitmq. I wanted to use a best-effort JTA transaction that contains both a postgres and rabbitmq transaction.
I have added the spring-boot-starter-jta-atomikos dependency. When I start my application I receive this warning multiple times:
atomikos connection proxy for Pooled connection wrapping physical connection org.postgresql.jdbc.PgConnection#99c4993: WARNING: transaction manager not running?
Do I need any additional configuration?
I also get this warning at startup:
AtomikosDataSourceBean 'dataSource': poolSize equals default - this may cause performance problems!
I run with the following settings, but setMinPoolSize is never called:
spring.jta.atomikos.connectionfactory.max-pool-size: 10
spring.jta.atomikos.connectionfactory.min-pool-size: 5
The documentation at:
https://docs.spring.io/spring-boot/docs/current/reference/html/features.html#features.jta.atomikos
https://www.atomikos.com/Documentation/SpringBootIntegration
just say that I can add the starter dependency, but it seems like Spring Boot doesn't properly auto-configure some things.
The spring.jta.atomikos.connectionfactory properties control a JMS ConnectionFactory. You should use the spring.jta.atomikos.datasource properties to control the JDBC DataSource configuration.
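For example, the pool sizes for the JDBC side would be set under the datasource namespace (a sketch using property names from Spring Boot's Atomikos support):

```properties
# JDBC DataSource pool (AtomikosDataSourceBean)
spring.jta.atomikos.datasource.min-pool-size=5
spring.jta.atomikos.datasource.max-pool-size=10

# the connectionfactory namespace configures the JMS AtomikosConnectionFactoryBean instead
spring.jta.atomikos.connectionfactory.min-pool-size=5
spring.jta.atomikos.connectionfactory.max-pool-size=10
```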

JMS connections exhausted using WebSphere MQ

I have configured a CachingConnectionFactory that wraps an MQTopicConnectionFactory and an MQQueueConnectionFactory, with the cache size set to 10 for each.
These are then used in several jms:outbound-channel-adapter or jms:message-driven-channel-adapter elements as part of the various Spring Integration workflows in my application.
Once in a while the connection count on the MQ channel reaches the maximum allowed (about 1000), at which point the process stops functioning. This is a serious problem for a production application.
Bringing the application down does not reduce the connection count, so it looks like orphaned connections on the MQ side? I am not sure if I am missing anything in my Spring JMS / SI configuration that could resolve this issue; any help would be highly appreciated.
Also, I would like to log connection open and close from the application, but I don't see a way to do that.
<bean id="mqQcf" class="com.ibm.mq.jms.MQQueueConnectionFactory">
    <!-- host / port / queue manager / channel properties as required -->
</bean>
<bean id="qcf" class="org.springframework.jms.connection.CachingConnectionFactory">
    <property name="targetConnectionFactory" ref="mqQcf"/>
    <property name="sessionCacheSize" value="10"/>
</bean>
<bean id="mqTcf" class="com.ibm.mq.jms.MQTopicConnectionFactory">
    <!-- host / port / queue manager / channel properties as required -->
</bean>
<bean id="tcf" class="org.springframework.jms.connection.CachingConnectionFactory">
    <property name="targetConnectionFactory" ref="mqTcf"/>
    <property name="sessionCacheSize" value="10"/>
</bean>
<!-- qcf and tcf are then used in the Spring Integration configuration as required -->
You really need to show your configuration, but the Spring CachingConnectionFactory only creates a single connection that is shared by all sessions. Turning on INFO logging for the CCF category emits this log when a new connection is created...
if (logger.isInfoEnabled()) {
logger.info("Established shared JMS Connection: " + this.target);
}
EDIT:
There's nothing in your config that stands out. As I said, each CCF will have at most one connection open at a time.
One possibility, if you have idle times, is that the network (a switch or firewall) might be silently dropping connections without telling the client or server. The next time the client tries to use its connection it will fail and create a new one but the server may never find out that the old one is dead.
Typically, for such situations, enabling heartbeats or keepalives would keep the connection active (or at least allow the server to know it's dead).
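On the queue manager side, heartbeats are controlled by the HBINT attribute of the server-connection channel; an MQSC sketch (the channel name is a placeholder, and the interval is negotiated with the client):

```
ALTER CHANNEL('APP.SVRCONN') CHLTYPE(SVRCONN) HBINT(30)
```

A non-zero HBINT lets the queue manager notice dead clients during idle periods and release their handles instead of leaving them orphaned.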
I was debugging a similar issue in my application: a high number of open output counts in MQ even though only one connection was opened by the connection factory.
The output count shown in MQ Explorer is the number of connection handles created by the IBM MQ classes. Per IBM documentation, a Session object encapsulates an IBM MQ connection handle, which therefore defines the transactional scope of the session.
Since the session cache size was 10 in my application, 10 IBM MQ connection handles were created (one for each session) and stayed open for days with the handle state inactive.
More info can be found at:
https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.dev.doc/q031960_.htm
As Gary Russell mentioned, Spring doesn't provide a way to configure timeouts for these idle connections. IBM has built-in properties on MQConnectionFactory that can be configured to set up reconnect timeouts.
More info can be found at:
https://www.ibm.com/developerworks/community/blogs/messaging/entry/simplify_your_wmq_jms_client_with_automatic_client_reconnection19?lang=en
reconnectOnException is true by default for the CCF, so care should be taken if IBM MQ throws an exception after the timeout interval. I am not sure whether there is a maximum number of reconnect attempts before the CCF gives up.

Unavailable exception from the CassandraLog4net appender

I want to set up logging using the CassandraLog4net appender, but I am getting an UnavailableException.
Can you tell me whether I have to create a keyspace or database before running this code?
Also, I am not able to use nodetool; when I click on it, it disappears again.
What changes should I make?
Please find the details of the CassandraLog4netAppender configuration below.
<KeyspaceName value="Logging" />
<ColumnFamily value="LogEntries" />
<PlacementStrategy value="org.apache.cassandra.locator.NetworkTopologyStrategy" />
<StrategyOptions value="Datacentre1:1" />
<ReplicationFactor value="1" />
<ConsistencyLevel value="QUORUM" />
<MaxBufferedRows value="1" />
UnavailableException means there aren't enough replicas available to satisfy your query. From your configuration I see a lot of inconsistency in your cluster config. Your log4net appender strategy options point to "Datacentre1"; your topology file lists a bunch of machines in "DC1", "DC2", and "DC3" with multiple racks; your keyspace is set up with only one DC called "DC1"; nodetool shows a single node listening on 127.0.0.1 (which doesn't correlate to any of your configured machines). So you're getting UnavailableException because you're asking for something that doesn't exist. You need to have a consistent configuration across the various pieces.
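For instance, a keyspace consistent with the appender settings above could be created like this (a CQL sketch that assumes the data center really is named Datacentre1 in the cluster's topology configuration):

```sql
CREATE KEYSPACE "Logging"
  WITH replication = {'class': 'NetworkTopologyStrategy', 'Datacentre1': 1};
```

Note also that with a replication factor of 1, QUORUM needs that single replica to be up, so any node outage will surface as an UnavailableException.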

Log4Net stops logging following db backup

On some nights our site stops logging at the time the db backup happens - in the last week, we've had 5 days with no issues and 2 days where we had to recycle the IIS app pool to get logging going again. We are logging at DEBUG level. The last item before it stops is a DEBUG-level log.
Our theory is that it only breaks when a request happens at the time of the db backup.
Any ideas as to the potential cause, or a reliable solution?
log4net stops logging to the database if there is a connection problem. You can set the ReconnectOnError flag on your appender (make sure that you also use a very short connection timeout, or else your application may hang while the database is unavailable).

What is the relationship between Web.Config connection strings and ServiceConfiguration connection strings in Azure?

I'm relatively new to Windows Azure and I need to get a better appreciation of how the Azure platform handles connection string configuration settings.
Assume I have an ASP.Net web project and this has a Web.Config connection string setting like the following:
<add name="MyDb" connectionString="Data Source=NzSqlServer01;Initial Catalog=MyAzureDb;User ID=joe;Password=bloggs;"
providerName="System.Data.SqlClient" />
I use this connection string for local testing and such. Let's assume I have a ServiceConfiguration.Local.cscfg file that holds that same connection information.
Now I'm ready to deploy out to my Azure instance. My ServiceConfiguration.Cloud.cscfg file looks like this:
<Setting name="MyDb"
value="Data Source=tcp:e54wn1clij.database.windows.net;Database=MyAzureDb{0};User ID=joe.bloggs#e54wn1clij;Password=reallysecure;Trusted_Connection=False;Encrypt=True;" />
What I'm trying to get my head around is this: if I have code in my web application that looks for a connection string called "MyDb" (for example, by calling ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString), does Azure automagically know to look for a database called MyAzureDb1 or MyAzureDb2 based on the ServiceConfiguration file's connection string, or will the web application's code simply look for whatever's in Web.Config and fail to correctly load-balance the database connections?
You'd need to call RoleEnvironment.GetConfigurationSettingValue(...) to read the one in ServiceConfiguration.Cloud.cscfg.
The advantage to using .cscfg to store settings is that you can change them at runtime without having to deploy new code. Web.config is just another file that's part of your app, so you have to deploy a new package to update it, but the settings in .cscfg can be modified in the portal or by uploading a new .cscfg file without deploying and disturbing the app itself.
Intrinsically, unless you code otherwise, all Azure instances are created equal. In your case, this means that the configuration for two or more instances of the same Web Role will be the same.
So, if you've sharded your database and want different instances to read different databases, you'll need to get clever in your startup code and create something that allocates a shard to an instance. You've access to System.Environment.MachineName which can distinguish in code between instances once they're started.
There's a few ways to do this. One might be to have a central registry in (say) table storage that keeps a log of the last-seen-time of an instance for a shard. A background process on the server periodically writes out to this log. Then, on instance start, check the last-seen-time for each shard -- if any are "stale" (significantly older than the current time less the write interval) then the instance knows it can claim that shard for itself as the old instance has died.
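That claim-if-stale check can be sketched as follows (plain Python, with an in-memory dict standing in for the table-storage registry; the names and the 2x-write-interval staleness rule are illustrative, not part of any Azure API):

```python
import time

WRITE_INTERVAL = 30  # seconds between heartbeat writes by each instance

def claim_stale_shard(registry, instance_id, now=None):
    """Claim the first shard whose last-seen heartbeat is stale.

    registry maps shard_id -> (owner_instance, last_seen_timestamp);
    in a real deployment this would live in Azure table storage.
    """
    now = time.time() if now is None else now
    for shard_id, (owner, last_seen) in registry.items():
        # significantly older than the write interval => owner presumed dead
        if now - last_seen > 2 * WRITE_INTERVAL:
            registry[shard_id] = (instance_id, now)  # take over the shard
            return shard_id
    return None  # every shard has a live owner
```

The same registry doubles as the heartbeat target: the background process on each instance simply rewrites its own `(instance_id, now)` entry every `WRITE_INTERVAL` seconds.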
(There are better ways to shard, however, generally around the data your system uses -- e.g. by the largest table in your system.)
