Unable to get log4net to use WebServiceAppender from CRM 2011 - log4net

I have attempted to use a log4net WebServiceAppender from within a CRM 2011 plugin (sandboxed). log4net appears to be installed along with the plugin correctly (it throws an exception if the log4net config file is malformed), but the appender apparently never gets called. I can call the web service directly from within the plugin, so that part works; I just can't figure out what might be wrong with log4net.
Does anyone know of a step-by-step guide for using log4net with CRM, and/or have a good idea as to why the WebServiceAppender doesn't get called?
Thanks
EDIT: Including the log4net.config file, as requested.
<!-- WebService parameters. -->
<param name="Url" value="http://my-internal-server/errorlog/ErrorHandler.asmx" />
<param name="TimeoutSeconds" value="60" />
<!-- Proxy parameters. -->
<param name="UseProxy" value="false" />
<param name="ProxyUrl" value="http://myproxy:3128" />
<param name="ProxyBypassOnLocal" value="true" />
<param name="ProxyUseDefaultCredentials" value="true" />
<param name="ProxyCredentialsDomain" value="OFFICE" />
<param name="ProxyCredentialsUserName" value="MyUser" />
<param name="ProxyCredentialsPassword" value="MyPassword" />
</appender>
<root>
<level value="Info" />
<appender-ref ref="WebServiceAppender" />
</root>

It looks like you have deployed the configuration file on disk. This is not ideal, because you then have to deal with the different requirements of the different modules.
To simplify the deployment of plugins that need additional configuration, you can pass configuration values to the plugin constructor. Pass the configuration there and configure log4net at runtime (a sketch follows below). See how to write the plugin constructor.
Another option is to use the web resources of Dynamics CRM 2011. See this blog article, which describes all the available options.
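To illustrate the constructor approach: a minimal sketch, assuming the log4net XML is passed in the plugin step's unsecure configuration string (the class name LoggingPlugin and the exact wiring are hypothetical, not code from the linked article):
using System;
using System.Xml;
using log4net;
using log4net.Config;
using Microsoft.Xrm.Sdk;

public class LoggingPlugin : IPlugin
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(LoggingPlugin));

    // CRM passes the unsecure/secure configuration entered at step registration
    // to this constructor, so the log4net XML can travel with the registration
    // instead of living in a file on disk.
    public LoggingPlugin(string unsecureConfig, string secureConfig)
    {
        if (!string.IsNullOrEmpty(unsecureConfig))
        {
            var doc = new XmlDocument();
            doc.LoadXml(unsecureConfig);                 // expects a <log4net>...</log4net> element
            XmlConfigurator.Configure(doc.DocumentElement);
        }
    }

    public void Execute(IServiceProvider serviceProvider)
    {
        Log.Info("Plugin executed.");
    }
}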

Related

Log4Net on cloud solution?

I would like to use Log4Net in a Dynamics 365 solution (plugins, etc.).
Is this possible somehow? I guess I cannot just deploy a config file, but is there a feasible way to do it anyway?
My dream is to unify logging in one place, no matter whether it comes from plug-ins, integrations, or other functionality...
Besides using a configuration file, Log4net can accept its configuration from other sources, e.g. a Stream or an XmlElement.
To do so, you have to use the Log4net API via XmlConfigurator.Configure.
It is important not to apply the XmlConfiguratorAttribute as well; it's either one or the other.
The example below shows how to apply a configuration via an XmlElement.
The content of this XmlElement can be retrieved from anywhere: hardcoded, an embedded resource file, a record from a database, etc.
In the XML configuration, you declare any loggers that can be used within your Dynamics 365 environment.
// Needs: using System.Reflection; using System.Xml; using log4net; using log4net.Config; using log4net.Repository;
string xml = @"
<log4net>
<appender name='consoleAppender' type='log4net.Appender.ConsoleAppender' >
<layout type='log4net.Layout.PatternLayout'>
<conversionPattern value='%date | %logger | %level | %message | %exception%n' />
</layout>
</appender>
<root>
<level value='ALL' />
<appender-ref ref='consoleAppender' />
</root>
</log4net>";
XmlDocument doc = new XmlDocument();
doc.LoadXml(xml);
XmlElement config = doc.DocumentElement;
ILoggerRepository repository = log4net.LogManager.GetRepository(Assembly.GetCallingAssembly());
XmlConfigurator.Configure(repository, config);
ILog logger = LogManager.GetLogger("somelog");
logger.Debug("foobar");

Log4net Json Result into NoSQL Database

I am trying out a scenario in which I generate JSON logs and store them in a database.
I have to use log4net as the logging mechanism. So far I have been able to produce JSON output from log4net using the JSON formatter as shown below.
<appender name="FileAppender" type="log4net.Appender.RollingFileAppender">
<param name="File" value="C:\\TestProj\\jsonlog.txt" />
<param name="AppendToFile" value="true" />
<param name="DatePattern" value="_yyyyMMddHH&quot;.log&quot;" />
<param name="RollingStyle" value="Date" />
<param name="StaticLogFileName" value="false" />
<layout type="log4net.Layout.SerializedLayout, log4net.Ext.Json">
</layout>
</appender>
<root>
<level value="ALL" />
<appender-ref ref="FileAppender" />
</root>
I also know how to insert the log4net logs into SQL using the AdoNetAppender.
However, I am not able to figure out these two things:
How to insert the JSON logs into a SQL Server database.
How to insert the JSON logs into a NoSQL database.
I think I got it. I use the regular AdoNetAppender and then use the JSON layout for one of the parameters.
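A rough sketch of that idea in code (not a verified configuration): the AdoNetAppender is built programmatically against a hypothetical LOGS table with a JsonEntry column, and a PatternLayout stands in where a JSON layout such as log4net.Ext.Json's SerializedLayout would be wired into the parameter:
using System.Data;
using log4net.Appender;
using log4net.Config;
using log4net.Layout;

var adoAppender = new AdoNetAppender
{
    // Adjust the assembly-qualified provider name for your framework version.
    ConnectionType = "System.Data.SqlClient.SqlConnection, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089",
    ConnectionString = "Data Source=.;Initial Catalog=LogDb;Integrated Security=True",
    CommandText = "INSERT INTO LOGS (JsonEntry) VALUES (@jsonEntry)",
    BufferSize = 1   // write each event immediately; raise this to batch writes
};

adoAppender.AddParameter(new AdoNetAppenderParameter
{
    ParameterName = "@jsonEntry",
    DbType = DbType.String,
    Size = 4000,
    // Swap in the JSON layout of your choice here, wrapped the same way.
    Layout = new Layout2RawLayoutAdapter(new PatternLayout("%message"))
});

adoAppender.ActivateOptions();
BasicConfigurator.Configure(adoAppender);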
I think you are not storing your logs in an actual JSON file; you are using jsonlog.txt, which is a text file. If you want to store the JSON logs in a database, you can configure log4net to do it for you. Take a look at this:
https://logging.apache.org/log4net/release/config-examples.html
Inserting into NoSQL was the primary reason SerializedLayout came about. There are multiple ways this can be achieved, but you'll most likely want some intermediate processor for the logs. I can recommend Logstash and NXLog. Logstash can then easily store your logs in Elasticsearch.
They can both retrieve logs from:
Files
Network (TCP, UDP, Syslog)
Other options would be queues, like RabbitMQ or AMQP; I haven't played with those. It's up to your needs and resources with regard to availability and resilience.
With regard to formatting the JSON itself, see the answer to another question of yours - https://stackoverflow.com/a/36169213/481812

Mule log files into DB

In Mule CE version 3.3.0 I have a Mule project, and the URL for calling it is http://localhost:8086/mule?msg=Hello-World!!!.
Every time I call it, some lines are added by default to a log file on the Mule server.
Now I want to change how Mule logs: instead of the file that Mule creates for logs by default, I want Mule to create a table in a database and save the important information into it.
In other words, I want a log table for my projects, and in this table I want to have customer information such as IP and ...
Is it possible?
How can I do it?
You can use the Log4j database (JDBC) appender to insert Mule ESB logs into a database. The code snippet below does exactly that.
<appender name="DB" class="org.apache.log4j.jdbc.JDBCAppender">
<param name="url" value="jdbc:mysql://localhost/DBNAME"/>
<param name="driver" value="com.mysql.jdbc.Driver"/>
<param name="user" value="user_id"/>
<param name="password" value="password"/>
<param name="sql" value="INSERT INTO LOGS VALUES('%level','%message','%X{muleMessage}','%X{payload}')"/>
<layout class="org.apache.log4j.PatternLayout">
</layout>
</appender>
<root>
<level value="DEBUG"/>
<appender-ref ref="DB"/>
</root>

Setting up Jackrabbit in a cluster environment

I want to set up Jackrabbit in a cluster (I am setting it up with Liferay).
I read this document - http://wiki.apache.org/jackrabbit/Clustering - but unfortunately it's very short, so I don't understand some of the concepts and best practices. Let me first explain my setup:
We have 2 WebLogic servers that share the same filesystem, and we deploy the same WAR to both. I use Oracle as the DB (I have a connection pool configured in WebLogic and want to connect using JNDI).
As I understand from the docs, each node has to have a separate configuration with its own repository directory, workspace filesystem, and search index.
Both nodes share the PersistenceManager, repository filesystem, and datastore (if I have one).
Here are my questions:
What is the workspace filesystem and how is it different from the repository filesystem? And what is a workspace? As I understand it, it's part of a repository, and a repository can have multiple workspaces, but what a workspace actually is isn't described in the docs.
I want performance to be as good as possible. I won't have too much content or too many users (tens of simultaneous users), so I want to optimize page load time for faster rendering of the pages. What would be the best practice - should I configure the PersistenceManager to go to the DB?
Where should the repository filesystem point on each node?
Where should the workspaces point to on each node?
Where should the workspace filesystem point to?
I tried to point all of them to my DB, but I seem to get deadlocks (or the DB works too slowly).
I also enabled logging, and I see a lot of unnecessary reads; it looks like for each file upload Jackrabbit opens a connection, pre-caches all the files, closes it, and repeats this several times (it takes about a minute to upload a very small file), so most likely something is wrong with my config.
Here is my config file:
<?xml version="1.0"?>
<Repository>
<FileSystem class="org.apache.jackrabbit.core.fs.db.OracleFileSystem">
<param name="driver" value="javax.naming.InitialContext"/>
<param name="url" value="ISG" />
<param name="schema" value="oracle"/>
<param name="schemaObjectPrefix" value="J_R_FS_"/>
</FileSystem>
<Security appName="Jackrabbit">
<AccessManager class="org.apache.jackrabbit.core.security.SimpleAccessManager" />
<LoginModule class="org.apache.jackrabbit.core.security.SimpleLoginModule">
<param name="anonymousId" value="anonymous" />
</LoginModule>
</Security>
<Workspaces rootPath="${rep.home}/workspaces" defaultWorkspace="liferay" />
<Workspace name="${wsp.name}">
<PersistenceManager class="org.apache.jackrabbit.core.persistence.db.OraclePersistenceManager">
<param name="driver" value="javax.naming.InitialContext"/>
<param name="url" value="ISG" />
<param name="tableSpace" value="" />
<param name="schema" value="oracle" />
<param name="schemaObjectPrefix" value="J_PM_${wsp.name}_" />
<param name="externalBLOBs" value="false" />
</PersistenceManager>
<FileSystem class="org.apache.jackrabbit.core.fs.db.OracleFileSystem">
<param name="driver" value="javax.naming.InitialContext"/>
<param name="url" value="ISG" />
<param name="tableSpace" value="" />
<param name="schema" value="oracle"/>
<param name="schemaObjectPrefix" value="J_FS_${wsp.name}_"/>
</FileSystem>
</Workspace>
<Versioning rootPath="${rep.home}/version">
<FileSystem class="org.apache.jackrabbit.core.fs.db.OracleFileSystem">
<param name="driver" value="javax.naming.InitialContext"/>
<param name="url" value="ISG" />
<param name="schema" value="oracle"/>
<param name="schemaObjectPrefix" value="J_V_FS_"/>
</FileSystem>
<PersistenceManager class="org.apache.jackrabbit.core.persistence.db.OraclePersistenceManager">
<param name="driver" value="javax.naming.InitialContext"/>
<param name="url" value="ISG" />
<param name="tableSpace" value="" />
<param name="schema" value="oracle" />
<param name="schemaObjectPrefix" value="J_V_PM_" />
<param name="externalBLOBs" value="false" />
</PersistenceManager>
</Versioning>
<Cluster id="node_1" syncDelay="2000">
<Journal class="org.apache.jackrabbit.core.journal.OracleDatabaseJournal">
<param name="revision" value="${rep.home}/revision.log"/>
<param name="driver" value="javax.naming.InitialContext"/>
<param name="url" value="ISG" />
<param name="tableSpace" value="" />
<param name="schema" value="oracle"/>
<param name="schemaObjectPrefix" value="J_C_"/>
</Journal>
</Cluster>
</Repository>
Liferay's official documentation recommends sharing Jackrabbit data using a database in a clustered scenario, not the file system.
Let's say you're using the file system on each of your Liferay nodes (which is the out-of-the-box Liferay configuration). Node A would not be able to access the Jackrabbit data on Node B and vice versa. As time goes by, the nodes become more and more out of sync. To get around this, you could create a network share and configure each node to point to it. The problem with doing that is that it could result in file corruption if the Liferay nodes write at the same time.
This leaves you with two options: keep independent file systems and integrate a synchronization utility, or put the data in the database. Since file system synchronization is hokey at best, your best option is putting the Jackrabbit data in a database.
There are some pros and cons to using the database. It could decrease performance, true. At the same time, the data is now part of the regular disaster recovery strategy, and some could argue it's more portable.
Edit - Addition: An AdvancedFileSystemHook was added at some point in version 5.2 which resolves the file corruption and locking concerns when using a shared network file system. To implement this, change your portal-ext.properties file to use the AdvancedFileSystemHook, migrate your data to the shared location, and point your horizontal nodes to that location.
Is Jackrabbit mandatory? Liferay uses the storage engine to store "just" binary data, all the meta data is in Liferay's database, so you don't gain a lot from the JCR repository. This is unfortunate, but the way the current implementation works.
Next: Are you setting up a Jackrabbit cluster or a Liferay cluster? For a Jackrabbit cluster (in a single-Liferay-node environment) I can't really help. If you cluster Liferay, you'll find some information in the administration guide (click the PDF link - sadly the direct link to the clustering chapter in HTML is broken, but you'll find the chapter in the PDF, where the link works).
Some details on Liferay clustering:
Liferay expects the document library to be "atomic" - that is, a document written on one of Liferay's nodes should be immediately readable on every other node in a Liferay cluster. The Jackrabbit solution you find in the administration guide makes Jackrabbit use a database to share the data. But you'll see that the recommended solution is not to use Jackrabbit but the AdvancedFileSystemHook - unlike the default FileSystemHook, it stores the documents in multiple subdirectories (works on network shares, SAN recommended). The default FileSystemHook is limited by the number of files allowed (by the OS) in a single directory; AdvancedFileSystemHook circumvents this by creating multiple subdirectories (like a Unix mailspool directory). If it's just for "a few" documents - not reaching any OS limit - I'd expect FileSystemHook to work as well on a shared directory, but I'm not really sure about file-locking issues there.
As you say you have tens of users, caring about maximum performance seems to be over the top. I wouldn't expect any difference between the possible solutions. Clustering at this order of magnitude is more about failover (i.e. high availability) than performance - at least from Liferay's point of view.
If you're setting up a Liferay cluster make sure you also follow all the other topics named in that chapter - especially cache synchronization. Otherwise you might be fooled to believe that your document library cluster does not work when it's only a cache that's out of sync.

What happens if Log4Net doesn't have permission to access the file system

I'm implementing a logging solution with Log4Net for a Windows NT Service. I can flip a switch in a configuration file to start logging information to the file system. So far I've been able to accomplish this by having a rolling file appender and having log4net "Watch" the configuration file.
What I've noticed is that Log4Net will create empty log files as soon as the service starts, even if all logging is turned off and I don't intend to log.
I can't find much information on this topic other than this post:
How to disable creation of empty log file on app start?
My concern is that someone could set up the service with a very limited set of permissions that wouldn't even allow it to create the log files on the file system. I don't want to incur the performance penalty of having exceptions thrown every time I hit a logging statement, even when I don't intend to log.
I've wrapped each logging statement with a check to see if the logging level is enabled before attempting to log, but I'm still not sure whether, in the inner workings of log4net, exceptions will still be thrown if the file configured in the appender couldn't be created.
if (logger.IsDebugEnabled)
{
logger.DebugFormat(format, traceMessage);
}
Does anyone know what log4net will do if it can't create the log files initially?
Here is a little information about my configuration. Logging works fine when it is turned on; I'm just worried about permission issues.
Attribute on my logging class to watch for config changes:
[assembly: log4net.Config.XmlConfigurator(ConfigFile = "Logging.config", Watch = true)]
Appender:
<appender name="Test" type="log4net.Appender.RollingFileAppender">
<file type="log4net.Util.PatternString" value=".\\AppLogs\\_test.%appdomain.log" />
<filter type="log4net.Filter.LevelRangeFilter">
<levelMin value="ALL" />
<levelMax value="FATAL" />
<acceptOnMatch value="false"/>
</filter>
<filter type="log4net.Filter.LoggerMatchFilter">
<loggerToMatch value="Test" />
</filter>
<appendToFile value="true" />
<maxSizeRollBackups value="10" />
<maximumFileSize value="6MB" />
<rollingStyle value="Size" />
<staticLogFileName value="true" />
<filter type="log4net.Filter.DenyAllFilter" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date [%thread] %-5level - %message%newline" />
</layout>
</appender>
"My concern is that someone could set up the service with a very low level set of permissions that wouldn't even have access to create the log files on the file system." - on service start-up, detect whether the minimum set of permissions you require is present, and throw an exception (or otherwise handle it) if it is not.
I don't think this is a logging issue per se; it's more an issue of whether or not your service has its required permissions.
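As a minimal sketch of that start-up check (plain BCL calls; the ".\AppLogs" path simply mirrors the appender's file setting above):
using System;
using System.IO;

static void EnsureLogDirectoryWritable(string logDirectory)
{
    // Create the directory if it is missing, then prove write access by
    // creating and deleting a probe file. Any failure (for example an
    // UnauthorizedAccessException) surfaces at service start-up instead of
    // on the first logging call.
    Directory.CreateDirectory(logDirectory);

    string probe = Path.Combine(logDirectory, Path.GetRandomFileName());
    using (File.Create(probe)) { }
    File.Delete(probe);
}

// e.g. call EnsureLogDirectoryWritable(@".\AppLogs") from OnStart() and decide
// there whether to throw, disable logging, or fall back to another location.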
