Is there a configuration setting that makes Apache Traffic Server log the X-Forwarded-For (XFF) header? I have gone through the ATS documentation and found no mention of this; I see various logging formats explained, but nothing about logging arbitrary headers.
Using the custom log field 'cqh' (client request header) you can log the value of any request header. So in your logging.config, you'd use the field '%<{X-Forwarded-For}cqh>'.
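For example, a minimal sketch for the Lua-based logging.config used by ATS 6.x/7.x (the format name and output filename here are just illustrative):

xff_log = format {
  Format = '%<chi> - [%<cqtn>] "%<cqtx>" %<pssc> xff=%<{X-Forwarded-For}cqh>'
}

log.ascii {
  Format = xff_log,
  Filename = 'xff'
}

If you're on ATS 8 or later, the same %<{X-Forwarded-For}cqh> field goes into logging.yaml instead.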
I once enabled the default fields in IIS logging.
Later I needed to change them and add other fields.
How do I fully disable logging in IIS, and how do I then set it up fresh with new fields?
Do you mean you want IIS to rewrite the previous logs with the new field settings?
In my opinion, this is impossible. IIS only logs the new fields for entries written after you modify the field selection; it does not go back and fill in field values for the previous logs.
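If the goal is simply to change which W3C fields are logged from now on, you can do that in IIS Manager (Logging > Select Fields) or from the command line. A hedged sketch (run from %windir%\system32\inetsrv; the field list is just an example, so adjust it to taste):

appcmd set config -section:system.applicationHost/sites /siteDefaults.logFile.logExtFileFlags:"Date, Time, ClientIP, Method, UriStem, UriQuery, HttpStatus, TimeTaken" /commit:apphost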
I'm building an ELK stack (for the first time) to track end-user REST API usage for a CloudFront distribution (in front of an S3 origin). Users pass a refresh token as part of their requests, and I was hoping to use this token to identify which users were making which requests. Unfortunately, it looks like CloudFront access logs are missing some header information (particularly Authorization/Accept in my use case). This leaves me with three questions:
Is there a way to tell CloudFront to log additional items? It appears the answer is no.
As an alternative strategy, I tried modifying the request object with lambda#edge (in Viewer Request) to move the header information into the query string (so that it would get logged), but the manipulation done in the Viewer Request function does not seem to be reflected in the log (though it is reflected in the Origin Request function). Should this be possible?
If doing what I want is impossible, I think the alternative approach is to forgo CloudFront logs completely and just fire an HTTP request to logstash with every user request, but I feel like this could easily overwhelm the stack.
Thanks
After a few days of research and reaching out to Amazon, I was finally able to answer my own questions:
CloudFront logs can't be customized; they are what they are.
See 1.
It turns out that customization is the wrong approach. What I really need to do is aggregate two separate logs that, between them, contain the information I need into a single logstash entry. The Viewer Response lambda#edge event contains a requestId property (actually event.Records[0].cf.config.requestId) which matches the CloudFront log's x-edge-request-id column. So while I haven't finished implementing it yet, these two columns can be used in the logstash config for aggregation. I just need to make sure I set up a Viewer Response function that logs a consistent format that I can then parse with logstash. I'm using the logstash-input-cloudwatch_logs plugin to retrieve the CloudWatch logs.
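In case it helps, a rough sketch of the aggregation half of the logstash config (field names like request_id and token are illustrative, and both inputs are assumed to have already been parsed so that the CloudFront x-edge-request-id and the Lambda@Edge requestId land in the same request_id field):

filter {
  aggregate {
    task_id => "%{request_id}"
    code => "map['token'] ||= event.get('token'); event.set('token', map['token'])"
    map_action => "create_or_update"
    timeout => 120
  }
}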
I'm using log4net.Appender.AzureAppendBlobAppender to log my web app's info & errors. Sometimes I get a "BlockCountExceedsLimit" exception. This is because an append blob accepts only 50,000 block commits, after which it throws a Conflict (409) error. I have checked the code and found that it buffers 512 log events but then flushes each log entry separately to the append blob, so every entry costs one block. Since the appender writes one blob per day, we can log only 50,000 log entries in a day.
Can anyone please help me with this? Does anyone know an alternative?
Thanks,
Karthik
According to your description, I assume you are using the log4net.Appender.Azure NuGet package. As you can see in AzureAppendBlobAppender.cs:
private static string Filename(string directoryName)
{
    return string.Format("{0}/{1}.entry.log.xml",
        directoryName,
        DateTime.Today.ToString("yyyy_MM_dd",
            DateTimeFormatInfo.InvariantInfo));
}
Per my understanding, you could follow AzureAppendBlobAppender.cs to write your own custom appender and adjust the Filename and SendBuffer methods to meet your requirements.
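For instance, a minimal (untested) sketch of the Filename change: rolling to a new blob every hour keeps each blob far below the 50,000-block limit.

// Sketch only: roll the append blob hourly instead of daily.
private static string Filename(string directoryName)
{
    return string.Format("{0}/{1}.entry.log.xml",
        directoryName,
        DateTime.UtcNow.ToString("yyyy_MM_dd_HH",
            DateTimeFormatInfo.InvariantInfo));
}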
I'm using log4net.Appender.AzureAppendBlobAppender to log my web app's info & errors.
Since you use an Azure Web App to host your application, you could use the built-in Application Logging (Blob), and Azure would generate the log blobs hourly for you. Log into the Azure Portal, choose your web app, enable Application Logging (Blob), and set the logging level to Information; for details, follow Enable diagnostics logging for web apps in Azure App Service.
For your application, you could use the following code to log info and errors.
System.Diagnostics.Trace.TraceError("xxxxx");
System.Diagnostics.Trace.TraceInformation("xxxxx");
I've changed the code a little bit: once the buffer reaches the threshold value (512 log entries), it now flushes all of them to the append blob in a single commit.
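For anyone hitting the same limit, the change was along these lines (a sketch rather than the exact code; _appendBlob stands in for the appender's CloudAppendBlob field, and the usings are System.IO, System.Text, log4net.Core, and Microsoft.WindowsAzure.Storage.Blob):

// Sketch: commit the whole buffer as one append block instead of one
// block per log entry, so 512 events consume a single block.
protected override void SendBuffer(LoggingEvent[] events)
{
    var sb = new StringBuilder();
    foreach (var loggingEvent in events)
    {
        sb.Append(RenderLoggingEvent(loggingEvent));
    }

    using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(sb.ToString())))
    {
        _appendBlob.AppendBlock(stream); // one block per flush instead of 512
    }
}

That stretches the daily budget from 50,000 entries to 50,000 flushes (roughly 25 million entries at 512 per flush).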
Spring Integration defines both <int:logging-channel-adapter/> and <int:message-history/> elements for logging. What is the default directory/folder where these files are placed? Also, is this location configurable?
Thanks
<int:message-history/> isn't for logging. It just stores the 'journey' of the message in its headers. True, this is done in a convenient form which is useful to log.
<int:logging-channel-adapter/> doesn't store anything to disk either. This component just calls log.debug(), log.info(), etc. Where the logs are stored is up to the logging system's configuration.
How your logging system works is outside the scope of Spring Integration: you can store logs in a file or in a DB, send them to JMS or AMQP, or just show them on the console. So please investigate how to solve your 'issue' in your logging system: log4j, commons-logging, slf4j, etc.
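For instance, with log4j 1.x you could give the adapter its own logger category and route that category to a file of your choosing (the logger name and paths below are illustrative):

<int:logging-channel-adapter channel="loggingChannel" level="INFO" logger-name="my.app.flow"/>

# log4j.properties: send the adapter's category to its own rolling file
log4j.logger.my.app.flow=INFO, flowFile
log4j.appender.flowFile=org.apache.log4j.RollingFileAppender
log4j.appender.flowFile.File=/var/log/myapp/flow.log
log4j.appender.flowFile.MaxFileSize=10MB
log4j.appender.flowFile.MaxBackupIndex=5
log4j.appender.flowFile.layout=org.apache.log4j.PatternLayout
log4j.appender.flowFile.layout.ConversionPattern=%d %-5p %c - %m%n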
I am able to find the GET parameters that are sent as part of a request, but I am not able to retrieve the POST parameters. Can you tell me what I should search for to find them?
Does IIS actually log this?
Thanks in advance.
IIS does not log POST parameters. POST is commonly used for large data sets and file uploads, which would take up a ton of space on your disk and could easily cause your server to run out of space.
You can set up some manual logging with something like log4net and log the POST parameters yourself. File growth will still be a problem, but log4net can be configured to limit growth and roll over at a certain size. You can then index your log4net logs using Splunk.
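A hedged sketch of that approach (the appender name, file path, and size limits are illustrative, and note that Request.Form only captures url-encoded form bodies, not JSON payloads or uploaded file contents):

<appender name="PostLog" type="log4net.Appender.RollingFileAppender">
  <file value="C:\logs\post-params.log" />
  <rollingStyle value="Size" />
  <maximumFileSize value="10MB" />
  <maxSizeRollBackups value="10" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date %message%newline" />
  </layout>
</appender>
<logger name="PostLog" additivity="false">
  <level value="INFO" />
  <appender-ref ref="PostLog" />
</logger>

// Global.asax.cs (requires "using log4net;")
private static readonly ILog log = LogManager.GetLogger("PostLog");

protected void Application_BeginRequest(object sender, EventArgs e)
{
    if (Request.HttpMethod == "POST" && Request.Form.Count > 0)
    {
        // Request.Form renders as url-encoded key=value pairs
        log.Info(Request.Path + " " + Request.Form);
    }
}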