I am using Tomcat 6 to deploy Solr on CentOS. Where can I find the log files to figure out the error Solr is giving?
You need to copy the logging config from solr/ into a location on the classpath.
cp SOLR_HOME/example/resources/log4j.properties TOMCAT_HOME/conf/
Then restart your Tomcat instance.
I think you can find it in the directory apache-tomcat-x.x\logs.
After that, look for a text file named "catalina"; the date is appended to that file name with a dot. For example, if we searched on 2012-11-05, the file will look like catalina.2012-11-05.
The error given by Solr can be seen in catalina.out, but if you want it in a separate file, this is how you can configure it.
Add the following entries to logging.properties, which is in the apache-tomcat/conf folder:
6localhost.org.apache.juli.FileHandler.level = FINE
6localhost.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
6localhost.org.apache.juli.FileHandler.prefix = solr.
Register the handler at the top of the file, on the line where all the other handlers are listed:
6localhost.org.apache.juli.FileHandler
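For illustration, a stock Tomcat 6 logging.properties has a handlers line roughly like the one below (your list may differ); just append the new handler to whatever is already there:
handlers = 1catalina.org.apache.juli.FileHandler, 2localhost.org.apache.juli.FileHandler, 3manager.org.apache.juli.FileHandler, 4host-manager.org.apache.juli.FileHandler, java.util.logging.ConsoleHandler, 6localhost.org.apache.juli.FileHandler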
Add the entries below at the end of the file:
org.apache.solr.level=INFO
org.apache.solr.handlers=6localhost.org.apache.juli.FileHandler
You will get a separate solr.log file containing all of the Solr logs.
Related
I am working on a Spark Java wrapper that uses third-party libraries, which read files from a hard-coded directory name, say "resdata", relative to where the job executes. I know this is convoluted, but I will try to explain.
When I execute the job, it tries to find the required files in a path something like the one below:
/data/Hadoop/yarn/local//appcache/application_xxxxx_xxx/container_00_xxxxx_xxx/resdata
I assume it is looking for the files in the current working directory and, under that, for a directory named "resdata". At this point I don't know how to point the current directory to any path on HDFS or local.
So I am looking for options to create a directory structure similar to what the third-party libraries expect and to copy the required files there. I need to do this on each node. I am working on Spark 2.2.0.
Please help me achieve this.
I just found the answer: I need to put all the files under a resdata directory and zip it, say restdata.zip, then pass the file using the "--archives" option. Each node will then have the directory restdata.zip/resdata/file1, etc.
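For reference, a rough spark-submit sketch of that (the master, class name and JAR are placeholders):
spark-submit --master yarn --deploy-mode cluster --class com.example.MyWrapper --archives restdata.zip my-wrapper.jar
On YARN the archive is unpacked into the container's working directory under a folder named after the archive file, which is why the files show up under restdata.zip/resdata/... If the library insists on a literal ./resdata, the alias syntax --archives restdata.zip#resdata should make the unpacked directory appear as resdata instead.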
I wonder if you can configure logstash in the following way:
Background Info:
Every day an XML file is pushed to my server, and it should be parsed.
To indicate a complete file transfer, an empty .ctl file (a custom file) is afterwards transferred to the same folder.
Both files have the following name schema 'feedback_{year}{yearday}_UTC{hoursminutesseconds}_51.{extension}' (e.g. feedback_16002_UTC235953_51.xml). So they have the same file name, but one is the .xml and the other is the .ctl file.
Question:
Is there a way to configure Logstash to wait to parse the XML file until the corresponding .ctl file is present?
EDIT:
Is there maybe a way to achieve that with Filebeat?
EDIT2:
It would also be enough to be able to configure Logstash so that it waits x minutes before starting to process a new file, if that is easier.
Thanks for any help in advance
Your problem is that you don't want to start the parser before the file transfer has completed. So why not push the data to a new file (file-complete.xml) when you find your flag file (empty.ctl)?
Here is the possible logic for a script that runs via crontab:
if empty.ctl exists:
    clear file-complete.xml
    append the content of file.xml to file-complete.xml
    remove empty.ctl
This way, you only need to parse the data from file-complete.xml. I think it is simpler to debug and configure.
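A minimal shell sketch of that logic, assuming the example file names above and a hypothetical /path/to/incoming drop folder (adjust both to your feedback_* naming scheme):
#!/bin/sh
# Run from cron: for every .ctl flag file, publish the matching .xml as a
# *-complete.xml that the parser watches, then remove the flag file.
cd /path/to/incoming || exit 1
for ctl in *.ctl; do
  [ -e "$ctl" ] || continue               # glob did not match: no flag files yet
  xml="${ctl%.ctl}.xml"                   # e.g. feedback_16002_UTC235953_51.xml
  [ -e "$xml" ] || continue               # .ctl present but .xml missing: skip
  cp "$xml" "${ctl%.ctl}-complete.xml"    # the file that actually gets parsed
  rm "$ctl"
done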
Hope it helps,
How can I change the name of log4j.properties, and its location as well?
You can change its location like so:
java -Dlog4j.configuration=file:/path_to_file_here/log4j.properties YourApplication
You should also read the manual.
Regarding changing the name, this is how you can achieve this:
First, you must add the following line to your java runtime command:
-Dlog4j.configuration=test.properties
For example, let's assume you are using log4j in your web application deployed on Tomcat.
Add the above-mentioned line to the java runtime command used to start up Tomcat:
C:\Tools\java\j2sdk1.4.2_01\bin\java.exe -jar
-Duser.dir="C:\Tools\Tomcat 4.1"
-Dlog4j.configuration=test.properties
-Djava.endorsed.dirs="C:\Tools\Tomcat 4.1\common\endorsed"
"C:\Tools\Tomcat 4.1\bin\bootstrap.jar" start
You will also possibly want to read this.
I know this is a really old post, but it was the first thread that came up when I searched for this question. The solution I found is:
System.setProperty("log4j.configurationFile", "theNameIWant.properties");
I searched the Logstash docs but I could not find out how Logstash executes the filters.
I will explain by example:
Multiple config files: apache.conf, nginx.conf, logic.conf.
Both the nginx and apache config files contain a filter that triggers if their type matches and adds a tag called "please_do_logic".
logic.conf contains several grok filters that extract the request part from the previously grokked log lines in nginx.conf and apache.conf.
I have 2 questions:
How does logstash decide which config file will be executed first?
How can you ensure the logic.conf part will be executed after apache.conf and nginx.conf have been executed?
I know you can put everything in a single file/filter and go on from there, but that would create messy config files, and this would be a last-resort measure.
Thanks
Logstash can't operate on more than one .conf simultaneously, or create some sort of workflow of configuration files; it's just not supported/implemented that way. The .conf file tells Logstash what inputs to read, what filters to apply, and how to output the events. You'll have to put everything in one configuration file. Of course, you can create as many .conf files as you like and run multiple instances of Logstash, each with a different configuration.
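If you do fold everything into one file, a rough sketch of the tag-based hand-off described in the question might look like this (the types and grok patterns are placeholders; filters in a single config run top to bottom, so the second block sees the tag added by the first):
filter {
  # first stage (what apache.conf / nginx.conf did): grok, then mark the event
  if [type] == "apache" or [type] == "nginx" {
    grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
    mutate { add_tag => [ "please_do_logic" ] }
  }
  # second stage (what logic.conf did): only runs on events tagged above
  if "please_do_logic" in [tags] {
    grok { match => { "request" => "%{URIPATHPARAM:uri}" } }
  }
}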
When I start logstash, the old logs are not imported into ES.
Only the new request logs are recorded in ES.
Now, I've seen this in the docs.
Even if I set start_position=>"beginning", old logs are not inserted.
This only happens when I run Logstash on Linux.
If I run it on Windows with the same config, old logs are imported.
I don't even need to set start_position=>"beginning" on Windows.
Any idea about this?
When you feed an input log to Logstash, Logstash keeps a record of the position it has read up to in that file; that's called the sincedb.
Where to write the sincedb database (keeps track of the current position of monitored log files).
The default will write sincedb files to some path matching "$HOME/.sincedb*"
So, if you want to import old log files, you must delete all the .sincedb* files in your $HOME.
Then, you need to set
start_position=>"beginning"
in your configuration file.
Hope this can help you.
Please also note this line from the docs:
This option only modifies "first contact" situations where a file is new and not seen before. If a file has already been seen before, this option has no effect.
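For completeness, a minimal file input sketch along these lines (the path is a placeholder; sincedb_path => "/dev/null" makes Logstash forget positions between runs, so use it only for testing or a one-off import):
input {
  file {
    path => "/var/log/myapp/*.log"       # placeholder path
    start_position => "beginning"        # only affects files not yet in the sincedb
    sincedb_path => "/dev/null"          # do not remember positions (testing / one-off import)
  }
}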