NLog.Targets.Splunk - Possible to get rid of the "Properties" wrap?

In: NLog.Targets.Splunk
https://github.com/AlanBarber/NLog.Targets.Splunk
When I use the NLog configuration with:
includeEventProperties="true"
or if I have:
includeEventProperties="false" and use:
<contextproperty name="host" layout="${machinename}" />
<contextproperty name="threadid" layout="${threadid}" />
<contextproperty name="logger" layout="${logger}" />
I get the logs in the following format (properties wrapped in "Properties"):
{"Level":"Info","MessageTemplate":"ApiRequest","RenderedMessage":"ApiRequest","Properties":{"httpMethod":"GET","statusCode":200}, ...}
Is it possible to get rid of the "Properties" wrapper and have a flatter structure, like this?
{ "Level": "Info", "httpMethod": "GET", "statusCode":200, ... }
Many thanks! :-)
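For context, a complete target section along the lines described above would look roughly like this. This is only a sketch: the target name, serverUrl and token values are placeholders, and the xsi:type and attribute names are recalled from the NLog.Targets.Splunk README rather than verified against the installed version.
<target name="splunk" xsi:type="SplunkHttpEventCollector"
        serverUrl="https://splunk.example.com:8088"
        token="00000000-0000-0000-0000-000000000000"
        includeEventProperties="false">
  <!-- serverUrl and token above are placeholders; the contextproperty entries are the ones from the question -->
  <contextproperty name="host" layout="${machinename}" />
  <contextproperty name="threadid" layout="${threadid}" />
  <contextproperty name="logger" layout="${logger}" />
</target>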

Related

appcmd.exe to list IIS config section attribute is returning an ERROR message: Unknown attribute

I am running this command as admin in an elevated prompt:
%systemroot%\system32\inetsrv\appcmd list config "website" /section:requestFiltering /text:AllowDoubleEscaping
It returns the error message: ERROR (message:Unknown attribute ""AllowDoubleEscaping"". Replace with -? for help.)
So I next ran the following command:
%systemroot%\system32\inetsrv\appcmd set config -section:requestFiltering -?
It returned the following output, and yes, I can see that allowDoubleEscaping is missing from the list:
ERROR ( message:-allowHighBitCharacters
-unescapeQueryString
-removeServerHeader
-fileExtensions.allowUnlisted
-fileExtensions.applyToWebDAV
-fileExtensions.[fileExtension='string'].fileExtension
-fileExtensions.[fileExtension='string'].allowed
-requestLimits.maxAllowedContentLength
-requestLimits.maxUrl
-requestLimits.maxQueryString
-requestLimits.headerLimits.[header='string'].header
-requestLimits.headerLimits.[header='string'].sizeLimit
-verbs.allowUnlisted
-verbs.applyToWebDAV
-verbs.[verb='string'].verb
-verbs.[verb='string'].allowed
-hiddenSegments.applyToWebDAV
-hiddenSegments.[segment='string'].segment
-alwaysAllowedUrls.[url='string'].url
-alwaysAllowedQueryStrings.[queryString='string'].queryString
-denyUrlSequences.[sequence='string'].sequence
-denyQueryStringSequences.[sequence='string'].sequence
-filteringRules.[name='string'].name
-filteringRules.[name='string'].scanUrl
-filteringRules.[name='string'].scanQueryString
-filteringRules.[name='string'].scanAllRaw
-filteringRules.[name='string'].denyUnescapedPercent
-filteringRules.[name='string'].scanHeaders.[requestHeader='string'].requestHeader
-filteringRules.[name='string'].appliesTo.[fileExtension='string'].fileExtension
-filteringRules.[name='string'].denyStrings.[string='string'].string
So which files is appcmd actually checking for these? I checked the C:\Windows\System32\inetsrv\config\schema\IIS_schema.xml file, and it does have this attribute defined. This seems to be the only place it is defined, so I am confused where else appcmd is failing to find this attribute and throwing the error.
<sectionSchema name="system.webServer/security/requestFiltering">
<attribute name="allowDoubleEscaping" type="bool" defaultValue="false" />
<attribute name="allowHighBitCharacters" type="bool" defaultValue="true" />
<attribute name="unescapeQueryString" type="bool" defaultValue="true" />

JsonLayout with non-formatted message

I have a little problem with JsonLayout.
NLog version: 4.7.10
Platform: netcoreapp 3.1
Current NLog config:
<target name="jsonFileMw" xsi:type="File" fileName="logs\mw.log"
archiveAboveSize="10240"
maxArchiveDays="5"
archiveNumbering="DateAndSequence"
archiveEvery="Day"
enableArchiveFileCompression="true">
<layout xsi:type="JsonLayout" includeAllProperties="true">
<attribute name="time" layout="${longdate}" />
<attribute name="level" layout="${level:upperCase=true}"/>
<attribute name="message" layout="${message}" />
</layout>
</target>
My logging code:
_logger.LogInformation("request received. {RequestUrl} {RequestBody}", "some url", "some body");
This logging code produces the following log line:
{ "time": "2021-08-02 15:07:30.8198", "level": "INFO", "message": "request received. some url some body", "RequestUrl": "some url", "RequestBody": "some body" }
As you can see, the properties are also rendered into the message, which means the same information is logged twice and the log file grows. I just want to keep the message simple. The desired output is below:
{ "time": "2021-08-02 15:07:30.8198", "level": "INFO", "message": "request received. {RequestUrl} {RequestBody}", "RequestUrl": "some url", "RequestBody": "some body" }
How can I achieve this?
You can do this:
<attribute name="messagetemplate" layout="${message:raw=true}" />
See also: https://github.com/NLog/NLog/wiki/Message-Layout-Renderer
See also: https://github.com/NLog/NLog/wiki/How-to-use-structured-logging#output-captured-properties

Importing data into a GI via web service

I have a requirement to import custom data into Acumatica using web services.
I have created a custom table with two string fields and one ntext field that will hold XML data.
I created a GI for it and exposed it in a web service endpoint.
The import JSON data format is like this:
[
{
"OrderNbr": "1",
"CommandValue": "8",
"Xmldata": "<?xml version=\"1.0\" encoding=\"utf-8\"?><MLW Cmd=\"8\" TStamp=\"2018-12-21T11:38:25\" Id=\"dsgx1\" OrgId=\"157035408\" DevId=\"b9d863ca-REG-4825e4aa-566b5fc7\" RouteId=\"Resource-879-1\" StopId=\"Location230\" LocationKey=\"Location230\" StopType=\"67\"> <GPS Altitude=\"278.46383285522461\" Latitude=\"34.0487467032243\" Longitude=\"-84.673757432107507\" NoOfSat=\"7\" Speed=\"1.3679999828338623\" SatTStamp=\"2018-12-21T11:37:26\" Direction=\"0\" FixQuality=\"A\" /> <FieldData LCode=\"1\" OwnerId=\"Location230\"> <Field FId=\"89815\" Value=\"No\" /> <Field FId=\"89817\" Value=\"No\" /> <Field FId=\"89816\" Value=\"Patrick N\" /> </FieldData> <Job Id=\"Order-878-4\" Status=\"4\"> <Item Status=\"4\" Id=\"TIFTUF\" Mode=\"Manual\" /> </Job></MLW>"
}
]
I have tried it in Postman using basic authentication.
I am getting the following errors:
PUT: 400 Bad request
GET: 500 Internal server error.
UPDATE: I have created a custom list page and configured it in the endpoint.
I have tested it in Postman.
Following are the endpoint and the JSON string used to create records:
http://localhost/XYZ/(W(3))/entity/XYZ/17.200.001.001/MyResposeImport
{
"OrderNbr": {"value": "b"},
"CommandValue": {"value": "8"},
"Xmldata": {"value": "<?xml version='1.0' encoding='utf-8'?><MLW Cmd='8' TStamp='2018-12-21T11:38:25' Id='dsgx1' OrgId='157035408' DevId='b9d863ca-REG-4825e4aa-566b5fc7' RouteId='Resource-879-1' StopId='Location230' LocationKey='Location230' StopType='67'> <GPS Altitude='278.46383285522461' Latitude='34.0487467032243' Longitude='-84.673757432107507' NoOfSat='7' Speed='1.3679999828338623' SatTStamp='2018-12-21T11:37:26' Direction='0' FixQuality='A' /> <FieldData LCode='1' OwnerId='Location230'> <Field FId='89815' Value='No' /> <Field FId='89817' Value='No' /> <Field FId='89816' Value='Patrick N' /> </FieldData> <Job Id='Order-878-4' Status='4'> <Item Status='4' Id='TIFTUF' Mode='Manual' /> </Job></MLW>"}
}
PUT returns back OK, with the response given below:
{
"id": "94a00013-37bf-4077-bfb6-2e8662988547",
"rowNumber": 1,
"note": null,
"OrderNbr": {
"value": "b"
},
"ShippingStatus": {},
"XMLData": {},
"custom": {},
"files": []
}
I have checked in the back end and no record was added to the table.
I created the sitemap entry under the hidden section, since the screen is only for API calls.
What could be the reason the record is not added to the table?
I have solved the issue. The JSON field names were different from the DAC field labels, and the API looks at the DAC labels, not the field names.
I changed the JSON data to the following and it works fine:
{
"OrderNbr": {"value": "b"},
"ShippingStatus": {"value": "8"},
"XMLData": {"value": "<?xml version='1.0' encoding='utf-8'?><MLW Cmd='8' TStamp='2018-12-21T11:38:25' Id='dsgx1' OrgId='157035408' DevId='b9d863ca-REG-4825e4aa-566b5fc7' RouteId='Resource-879-1' StopId='Location230' LocationKey='Location230' StopType='67'> <GPS Altitude='278.46383285522461' Latitude='34.0487467032243' Longitude='-84.673757432107507' NoOfSat='7' Speed='1.3679999828338623' SatTStamp='2018-12-21T11:37:26' Direction='0' FixQuality='A' /> <FieldData LCode='1' OwnerId='Location230'> <Field FId='89815' Value='No' /> <Field FId='89817' Value='No' /> <Field FId='89816' Value='Patrick N' /> </FieldData> <Job Id='Order-878-4' Status='4'> <Item Status='4' Id='TIFTUF' Mode='Manual' /> </Job></MLW>"}
}

Set log file limit with log4net and azure file appender

I'm currently using log4net and Azure Files to store my logs, and it works ace.
I've been searching and can't find any configuration to make the logger create files no bigger than a given KB size.
This is the configuration I have:
<rollingStyle value="Size" />
<MaxSizeRollBackups value="10" />
<MaximumFileSize value="10KB" />
<AzureStorageConnectionString value="connectiondatahere" />
<ShareName value="filelog" />
<Path value="processor" />
<File value="processor_{yyyy-MM-dd}.txt" />
<layout type="log4net.Layout.PatternLayout">
<ConversionPattern value="%date %-5level %logger %message%newline"/>
</layout>
</appender>
<root>
<level value="ALL" />
<appender-ref ref="AzureFileAppender"/>
</root>
I've tried a few variations of this configuration but no luck.
After reviewing the source code of log4net-appender-azurefilestorage, I found that a log file size limit is not currently supported by the Azure file appender. I suggest you extend the Azure file appender yourself and add the size limit feature.
Below are the steps to do it.
Step 1: Add a property named MaximumFileSize to the AzureFileAppender class.
public int MaximumFileSize { get; set; }
Step 2: Add the size limit check when appending a log event to the file.
protected override void Append(LoggingEvent loggingEvent)
{
    Initialise(loggingEvent);
    // render the event and check whether appending it would exceed the configured limit
    var buffer = Encoding.UTF8.GetBytes(RenderLoggingEvent(loggingEvent));
    if ((_file.Properties.Length + buffer.Length) > MaximumFileSize)
    {
        // the file would grow past the limit: handle rollover here,
        // e.g. switch to a new file before writing the event
    }
    else
    {
        // grow the file and append the rendered event at the end
        _file.Resize(_file.Properties.Length + buffer.Length);
        using (var fileStream = _file.OpenWrite(null))
        {
            fileStream.Seek(buffer.Length * -1, SeekOrigin.End);
            fileStream.Write(buffer, 0, buffer.Length);
        }
    }
}
Step 3: You can then add the size limit (in bytes) to the configuration file:
<MaximumFileSize value="10240" />

logstash: in log4j-input, the "path" is not correct

In my config file, I use
input { log4j {} }
and:
output { stdout { codec => rubydebug } }
I've attached my log4j to logstash using SocketListener. When my app writes something to the log, I see this in logstash:
{
"message" => "<the message>",
"#version" => "1",
"#timestamp" => "2015-06-05T20:28:23.312Z",
"type" => "log4j",
"host" => "127.0.0.1:52083",
"path" => "com.ohadr.logs_provider.MyServlet",
"priority" => "INFO",
"logger_name" => "com.ohadr.logs_provider.MyServlet",
"thread" => "http-apr-8080-exec-3",
"class" => "?",
"file" => "?:?",
"method" => "?",
}
The issue is that the "path" field is wrong: as far as I understand, it should be the path of the log file; instead, I get the same value as "logger_name".
I have several apps on my Tomcat that I want to collect logs from. I need "path" to be the file path (including the file name), so I can distinguish between logs from different apps (each app logs to a different file).
How can it be done?
thanks!
The log4j input is a listener on a TCP socket. There is no file path.
To solve your challenge, you can either configure multiple TCP ports, so that every application logs to a different TCP port, or you can use GELF. GELF is a UDP-based protocol, but you need additional jars. Logstash also supports GELF as a native input. Many GELF appenders let you specify static fields, so you can distinguish at the application level which application is currently logging.
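For the first option (one TCP port per application), each application would point its own log4j SocketAppender at a dedicated logstash log4j input port, roughly like this; host and port are placeholders, and a second application would use a different port with a matching second log4j input on the logstash side:
<appender name="logstash" class="org.apache.log4j.net.SocketAppender">
  <param name="RemoteHost" value="localhost" />
  <!-- use a different port per application, matching a separate log4j input -->
  <param name="Port" value="4560" />
  <param name="ReconnectionDelay" value="10000" />
</appender>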
Here is an example of a GELF appender configuration:
<appender name="gelf" class="biz.paluch.logging.gelf.log4j.GelfLogAppender">
<param name="Threshold" value="INFO" />
<param name="Host" value="udp:localhost" />
<param name="Port" value="12201" />
<param name="Version" value="1.1" />
<param name="Facility" value="java-test" />
<param name="ExtractStackTrace" value="true" />
<param name="FilterStackTrace" value="true" />
<param name="MdcProfiling" value="true" />
<param name="TimestampPattern" value="yyyy-MM-dd HH:mm:ss,SSSS" />
<param name="MaximumMessageSize" value="8192" />
<!-- These are static fields -->
<param name="AdditionalFields" value="fieldName1=fieldValue1,fieldName2=fieldValue2" />
<!-- These are fields using MDC -->
<param name="MdcFields" value="mdcField1,mdcField2" />
<param name="DynamicMdcFields" value="mdc.*,(mdc|MDC)fields" />
<param name="IncludeFullMdc" value="true" />
</appender>
HTH, Mark
