I have a directory full of log files, one per day, named like "log.2016-09-26", and they go back a long way. I'm using Filebeat to collect logs from this directory, but I only want the past two weeks (14 days). Filebeat takes a regex to decide which files to exclude. What is the best way to filter these logs?
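If an exact "last 14 days" regex turns out to be awkward, Filebeat's ignore_older setting may be a simpler fit: it skips files whose modification time falls outside the given window. A minimal sketch, assuming a Filebeat 5.x-style config (the paths are placeholders):

filebeat.prospectors:
  - input_type: log
    paths:
      - /path/to/logs/log.*
    # 14 days * 24 h; files not modified within this window are skipped
    ignore_older: 336h

In newer Filebeat versions the section is filebeat.inputs with type: log, but ignore_older works the same way.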
I have a log file created by Tomcat, and I want to create two indexes from this file: one index containing all of the log file's information, and a second index containing only part of it.
At the moment I have saved different parts of the log in different indexes, but I also want all of the log file's information in a separate index.
Can I solve this without running two Filebeat instances on the server?
So my problem is quite stupid but I cannot find a way to resolve it. I have one 15 GB file on an external SFTP server that I need to copy to my data lake. The trouble is that the column delimiter is a comma, and some fields contain nested lists, so when I try to use an ADF Copy activity, the nested structures get cut at the first occurrence of a comma and most of my data is gone. So maybe I could ignore the delimiter? I tried setting a pipe as the delimiter just to pull the whole dataset in as one column, but that doesn't work either.
PowerShell? I have tried various scripts that used to work with smaller files, and I get an error every time.
I have even tried uploading the file manually via Azure Storage Explorer, but that also fails after some time. I am not really sure how to make this work at this point.
Thank you for any advice!
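Two hedged suggestions: within ADF, a Binary dataset on both source and sink copies the file byte-for-byte with no delimiter parsing at all, which sidesteps the nested-comma problem entirely. And if staging the file locally is an option, AzCopy handles large uploads with built-in retries; a sketch, with placeholder account, container, and SAS values:

azcopy copy "C:\staging\bigfile.csv" "https://<account>.blob.core.windows.net/<container>/bigfile.csv?<SAS>"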
I'm trying to use dupFinder to scan for duplicated code in a .NET codebase. I have certain files and folders that I want to exclude from the scan, but I'm struggling to get it working.
The command I'm running is:
dupfinder.exe --show-text --output="dupReport.xml" --exclude="Some.Folder.*;*Resource.designer.cs" MyCode.sln
So what I'm trying to do is:
Scan the MyCode.sln solution.
Ignore all folders matching the pattern Some.Folder.*, e.g. Some.Folder.Code and Some.Folder.Tests (these folders are in the root of the repository, alongside the solution file).
Ignore all files matching the pattern *Resource.designer.cs in any folder, e.g. MyCode.Resource.designer.cs.
I'm sure I'm just doing something wrong but the dupFinder documentation doesn't show an example of using the exclude option.
I eventually managed to get this working. The conclusion I've drawn is that you can only exclude files, not folders.
I think the whole thing wasn't working because my original exclude pattern was trying to ignore folders.
I know this is an old question, but I also searched for this.
To exclude complete folders you should use a double asterisk (**),
e.g.
--exclude="**\Tests\**;**\Resource.designer.cs"
This excludes all files in any Tests folder, and Resource.designer.cs in any folder.
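Applied to the command from the question, that might look like this (whether the folder wildcard Some.Folder.* combines with ** like this is an assumption worth testing):

dupfinder.exe --show-text --output="dupReport.xml" --exclude="**\Some.Folder.*\**;**\Resource.designer.cs" MyCode.sln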
Edit:
Tested and still working on JetBrains.ReSharper.CommandLineTools.2020.3.4, which was the current version when I wrote this answer.
The current version seems to have a bug again and does not exclude anything at all.
I'm working on a project and we want to handle our logging using log4j. I am running into some issues that I am not able to easily resolve by looking at the log4j docs or other documentation online.
I get the basic idea of putting logging code throughout the codebase and then having the properties file sort the logged data into a hierarchy of appenders, and how to write out to a file. That's fine. This basically lets me create greppable log files in one hard-coded folder, such as this:
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=example.log
But I have two basic questions. First, I want the log location to be dynamic, such as:
log4j.appender.R.File={$processDir}/example.log
Also, every time the user runs the app, a folder is created to hold the output files. I would like the log file to be placed there, and I'm not sure how to accomplish that.
The other issue (although I think it will be much easier to handle once the first one is addressed) is about creating a formatted log that does not necessarily reflect the order in which the app ran: for example, a title, followed by a list of all input files, a list of all output files, and any warnings encountered.
I think for that I would create an object that implements ObjectRenderer and write a doRender method that gives me the info I want.
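Roughly like this sketch, assuming log4j 1.x; the RunSummary class and its fields are hypothetical stand-ins for whatever the app collects:

import java.util.List;
import org.apache.log4j.or.ObjectRenderer;

// Hypothetical container for the report data
class RunSummary {
    String title;
    List<String> inputFiles;
    List<String> outputFiles;
    List<String> warnings;
}

class RunSummaryRenderer implements ObjectRenderer {
    public String doRender(Object o) {
        RunSummary s = (RunSummary) o;
        StringBuilder sb = new StringBuilder(s.title).append('\n');
        sb.append("Input files:\n");
        for (String f : s.inputFiles) sb.append("  ").append(f).append('\n');
        sb.append("Output files:\n");
        for (String f : s.outputFiles) sb.append("  ").append(f).append('\n');
        sb.append("Warnings:\n");
        for (String w : s.warnings) sb.append("  ").append(w).append('\n');
        return sb.toString();
    }
}

The renderer would then be registered in the properties file with log4j.renderer.RunSummary=RunSummaryRenderer (fully qualified class names in practice).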
Does that sound correct?
Thanks!
You can use a variable with this syntax:
log4j.appender.R.File=${processDir}/example.log
You must define the variable either as a system property (e.g. -DprocessDir=...) or programmatically (after creating the folder) with
System.setProperty("processDir",logDir);
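For the per-run folder, a minimal sketch of that idea (the folder-naming scheme is an assumption) is to create the directory and set the property before log4j loads its configuration:

import java.io.File;
import org.apache.log4j.PropertyConfigurator;

public class LoggingSetup {
    public static void main(String[] args) {
        // Hypothetical per-run output folder
        String processDir = "output/run-" + System.currentTimeMillis();
        new File(processDir).mkdirs();
        // Must happen before the configuration is read,
        // so ${processDir} resolves in log4j.properties
        System.setProperty("processDir", processDir);
        PropertyConfigurator.configure("log4j.properties");
    }
}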
I have successfully configured an application that uses log4j for its logging to log into a MySQL database (using org.apache.log4j.jdbc.JDBCAppender).
I also have some Perl applications that log to the same database. My Perl apps are set up so that the name of the database table changes every month (log_2010_11, log_2010_10, etc.). At the end of each month, I run reporting scripts on the month just completed, dump the table to an external file (which gets compressed and archived), and then drop the table. This way the total size of the logging database stays within sensible limits.
I would like to do the same with log4j, but there does not appear to be a log4j appender suitable for this purpose.
Is it possible to do something like this:
log4j.appender.SQ=org.apache.log4j.jdbc.JDBCRollingAppender
log4j.appender.SQ.Driver=com.mysql.jdbc.Driver
log4j.appender.SQ.URL=jdbc:mysql://localhost:3306/logs_{%year}_{%month}
Thank you.
I figured out how to do this:
log4j.appender.SQ=org.apache.log4j.jdbc.JDBCAppender
log4j.appender.SQ.Driver=com.mysql.jdbc.Driver
log4j.appender.SQ.URL=jdbc:mysql://localhost:3306/logs
log4j.appender.SQ.sql=INSERT INTO accesslog_%d{yyyy_MM} (date, time, tz, ...
It appears you can just put date format patterns into the SQL statement, and JDBCAppender will expand them and log into the corresponding table.
However, it will not create new tables at the start of a new month, so currently I have to manually create the tables beforehand, which is far from ideal.
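One way to take the manual step out would be a small job, run from cron or a scheduler at the start of each month, that clones a template table. A sketch; the template table name and connection details are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.text.SimpleDateFormat;
import java.util.Date;

public class CreateMonthlyLogTable {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details
        String url = "jdbc:mysql://localhost:3306/logs";
        // Matches the %d{yyyy_MM} pattern used in the appender's SQL
        String table = "accesslog_" + new SimpleDateFormat("yyyy_MM").format(new Date());
        try (Connection c = DriverManager.getConnection(url, "loguser", "password");
             Statement s = c.createStatement()) {
            // MySQL's CREATE TABLE ... LIKE copies the template's structure
            s.executeUpdate("CREATE TABLE IF NOT EXISTS " + table + " LIKE accesslog_template");
        }
    }
}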
You'd have to write your own appender to do this.
Another option would be to stay with the existing appender and do this:
Keep a table in your database named log, and write a Perl script that, at the end of every month, creates a new table (say log_12 for December), copies everything from log into log_12, and then deletes everything from log. That way you don't have to mess around with writing another appender.
How about a script that runs monthly and dumps that particular table to a backup file, then zips it for archiving? Once complete, truncate the table or delete the rows within the date range.
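As a sketch of that approach (database, table, and file names are placeholders):

mysqldump logs accesslog_2016_09 | gzip > accesslog_2016_09.sql.gz
mysql -e "DROP TABLE logs.accesslog_2016_09"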