Remove new line characters from logs - log4net

We are currently using log4net with the ConsoleAppender. The app runs inside a Docker container, and the console output is sent to our cloud provider (AWS CloudWatch) where we troubleshoot, look at errors, etc.
The problem arises when multiple lines are sent to the console (think exception stack traces). Each line reaches CloudWatch separately, and CloudWatch interprets each one as a separate entry, which looks messy. Ideally I want one CloudWatch entry per log4net LogEntry, i.e. one console line per LogEntry.
In order to do this I would have to remove all line breaks when formatting the LogEntry, maybe replacing them with an escaped "\r\n" or just a space.
What's the best way to get this behavior in log4net?
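One way to get there (a minimal sketch of my own, not an official log4net recipe; the class name, namespace, and the converter name singleLineMessage are hypothetical) is a custom PatternConverter that writes the rendered message plus any exception text with every line break collapsed to a space:

using System.IO;
using log4net.Core;
using log4net.Layout.Pattern;

public class SingleLineMessageConverter : PatternLayoutConverter
{
    protected override void Convert(TextWriter writer, LoggingEvent loggingEvent)
    {
        var text = loggingEvent.RenderedMessage ?? string.Empty;
        var exception = loggingEvent.GetExceptionString();
        if (!string.IsNullOrEmpty(exception))
            text += " " + exception;
        // Collapse every flavor of line break into a single space
        writer.Write(text.Replace("\r\n", " ").Replace("\n", " ").Replace("\r", " "));
    }
}

Registered in the PatternLayout config, with %newline kept only at the very end of the pattern, each LogEntry then renders as exactly one console line:

<layout type="log4net.Layout.PatternLayout">
  <converter>
    <name value="singleLineMessage" />
    <type value="MyApp.Logging.SingleLineMessageConverter" />
  </converter>
  <conversionPattern value="%date %-5level %logger - %singleLineMessage%newline" />
</layout>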

Related

Insert data into database using sqlldr in coldfusion

I have a CSV file that I got from a website. I need to upload that same CSV file into my database using SQLLDR in ColdFusion. For some reason I'm not able to insert the data into the database.
Below is my code. Using it, I was not able to insert the data into the database from the CSV file. It works fine from a batch file, but it does not work using cfexecute. By that I mean I get a blank screen: no errors, no exceptions, nothing. I checked the logs but did not find any errors there either. The only thing I can see is that the data is not inserted into the database.
FYI, we are using a Linux environment, so the path is slightly different.
<cfset CTLPATH="/home/mosuser/apps/nodal/ctl">
<cfset LOGPATH="/home/mosuser/apps/nodal/logs">
<cfexecute name="/opt/oracle/product/12.1.0/client_1/bin/sqlldr"
arguments="userid/password@#Sid# control=#CTLPATH#/mpimReport.ctl
log=#LOGPATH#/#PathfileName#_load.log data=#filelist##PathfileName#.csv
bad=#LOGPATH#/#PathfileName#_error.txt">
</cfexecute>
Update:
As suggested, dumping the error variable qryerr showed:
Message 2100 not found; No message file for product=RDBMS, facility=UL
Message 2100 not found; No message file for product=RDBMS, facility=UL
Add a few parameters to your <cfexecute> call.
timeout - roughly the number of seconds you expect the process to take
variable - the name of the variable to hold the STDOUT output of sqlldr
errorVariable - name of the variable to hold the STDERR output of sqlldr
After doing that you can get the output of the call and inspect it for any error messages and other info.
Adding the timeout is the crucial step - this makes cfexecute block until either the program terminates or the timeout is reached. Without a timeout, ColdFusion simply kicks off the process and immediately continues executing the rest of the current page.
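Putting that together, the call might look like this (a sketch; the timeout value is arbitrary, and qryout/qryerr are just variable names):

<cfexecute name="/opt/oracle/product/12.1.0/client_1/bin/sqlldr"
    arguments="userid/password@#Sid# control=#CTLPATH#/mpimReport.ctl log=#LOGPATH#/#PathfileName#_load.log data=#filelist##PathfileName#.csv bad=#LOGPATH#/#PathfileName#_error.txt"
    timeout="120"
    variable="qryout"
    errorVariable="qryerr">
</cfexecute>
<!--- dump what sqlldr wrote to STDOUT and STDERR --->
<cfdump var="#qryout#">
<cfdump var="#qryerr#">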

Generate auto increment sequence in logstash

I am pushing logs to Elasticsearch from Logstash, and then I need to get the logs back in the order they were written. Sorting by timestamp does not help because there can be multiple log statements at the same time. I followed the solution in Include monotonically increasing value in logstash field? and it worked perfectly on my Windows system.
But when the code was moved to the Linux production environment, Logstash would not start up, failing with the error below:
reason=>"Couldn't find any filter plugin named 'seq'. Are you sure
this is correct? Trying to load the seq filter plugin resulted in this
error: no such file to load -- logstash/filters/seq", :level=>:error}
Check if the seq.rb file is in the filter folder.
Also check whether the line endings of your seq.rb are Linux-style (LF). If you transferred the file from a Windows machine to Linux, the problem might come from there.
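To check or fix that from the shell (assuming the file and dos2unix utilities are installed):

file logstash/filters/seq.rb        # prints "with CRLF line terminators" if the endings are Windows-style
dos2unix logstash/filters/seq.rb    # converts CRLF to LF in place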

How to add a log file and logging to it

Okay, so I've followed so many online instructions on how to add a log file, but none of them seem to log anything when I use the command:
logger hello world
I added a log file, local3.log, at:
/var/log/local3.log
Now I want to log the entire local3 facility, with all severities, to it. Following what some sites told me, I went into /etc/rsyslog.conf and added the line:
local3.* /var/log/local3.log
But when anything boots up, or for any logger command I give, the file doesn't update with the time and date and all that. I've already set up my logrotate file properly with weekly rotation over 8 weeks, plus create and dateext. I still can't get it to work. Am I editing the wrong syslog file, or giving it the wrong command?
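One detail worth checking (my own suggestion, not from the original thread): a plain logger hello world goes to the default user facility, which a local3.* rule never matches, and rsyslog only reads its config at startup. Something like this exercises the new rule directly:

sudo systemctl restart rsyslog         # or: sudo service rsyslog restart on older init systems
logger -p local3.info "hello world"    # target the local3 facility explicitly
tail /var/log/local3.log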

Best way to manually periodically import log files into Graylog using logstash

I'm currently using logstash to import dozens of log files from different webapps into Graylog. It works great: the files are tagged so I know which webapp they originate from.
I can't change the webapps, thus I can't add a GELF appender to their log4j configs. The idea is to periodically retrieve the log files, parse them, and import them with logstash into Graylog.
My problem is how to make sure I don't import a log event I've already imported.
For example, I have a log file with a rolling pattern: log.1, log.2, etc. So a given log event could be in log.1 the first time, and two weeks later, when I reimport, it may be in log.3.
I'm afraid I can't handle that with logstash's file input "sincedb_path" and "start_position".
So here are a few options I've gathered, and I'd like your input on them if anyone has encountered the same issue:
Use a logstash filter dropping all events before a certain date; this requires keeping an index of the last log date of every file imported (potentially 50+) and a lot of configuration writing (a sketch of this appears below).
Use a Drools rule in Graylog to refuse logs with timestamps prior to the last log received for a given type.
Ask for the log pattern to be changed to something like log.date instead of a pattern that renames files (but I'd rather avoid this one).
Any other idea?
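For what it's worth, a minimal sketch of the first option (assuming the Logstash 5+ event API; older versions use event['@timestamp'] instead of event.get, and the cutoff date below is a hypothetical placeholder that would really come from the per-file index):

filter {
  ruby {
    # drop events older than the last one already imported for this file
    code => "event.cancel if event.get('@timestamp').to_f < Time.utc(2015, 6, 1).to_f"
  }
}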

Old logs are not imported into ES by logstash

When I start logstash, the old logs are not imported into ES.
Only the new request logs are recorded in ES.
Now, I've seen this in the doc.
Even if I set the start_position=>"beginning", old logs are not inserted.
This only happens when I run logstash on Linux.
If I run it on Windows with the same config, the old logs are imported.
I don't even need to set start_position=>"beginning" on Windows.
Any idea about this?
When Logstash reads an input log file, it keeps a record of the position it has read up to in that file; that record is called the sincedb. From the docs:
Where to write the sincedb database (keeps track of the current position of monitored log files).
The default will write sincedb files to some path matching "$HOME/.sincedb*"
So, if you want to import old log files, you must delete all the .sincedb* files in your $HOME.
Then you need to set
start_position => "beginning"
in your configuration file.
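Both steps as a sketch (the log path below is a hypothetical placeholder):

rm $HOME/.sincedb*    # step 1: forget the recorded read positions

input {
  file {
    path => "/var/log/myapp/*.log"    # hypothetical path
    start_position => "beginning"     # step 2: read from the start of files not seen before
  }
}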
Hope this can help you.
Please see this line also.
This option only modifies "first contact" situations where a file is new and not seen before. If a file has already been seen before, this option has no effect.
