How to force Logstash 7 to reparse a file? - logstash

There is a similar question and solution here.
However, in 7.11 if I set sincedb_path => "/dev/null" and start logstash I get the following error:
Error: Permission denied – Permission denied
Exception: Errno::EACCES
which turned out to be quite difficult to find the cause of. In other words, the solution of setting sincedb_path => "/dev/null" doesn't work.
My OS is macOS and I installed Logstash via brew. Is there a better way than stopping Logstash each time, removing the libexec/data/plugins/inputs/file/.sincedb_XXXX files, and restarting Logstash?

sincedb_path => "NULL"
seems to work.
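For context, a minimal file input using that setting might look like this (the log path below is just a placeholder):

```conf
input {
  file {
    path => "/tmp/example.log"      # placeholder path
    start_position => "beginning"
    sincedb_path => "NULL"          # Logstash writes its sincedb state to a file named NULL
  }
}
```

Note that "NULL" is an ordinary relative filename, not a null device, so the state file is simply written somewhere harmless instead of being discarded.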

Related

Installing Logstash on windows - An unexpected error occurred! :error=>bad URI (is not URI?)

I am trying to install the latest version of Logstash, i.e. 5.1.1, on Windows (Windows 7 Professional).
I unzipped the Logstash installation file to the path C:\Program Files\logstash-5.1.1. Now when I try to test the Logstash installation with this command: logstash -e 'input { stdin { } } output { stdout {} }'
the following error is shown:
Following is mentioned on their site in the Installation guide section:
Do not install Logstash into a directory path that contains colon (:) characters
Is this the reason I am getting this error?
If yes, it seems to me that there is no way to avoid a directory path with a colon in a Windows environment. How do I get around this problem?
If no, what might be the reason, and how do I fix it?
I had posted the same question on the Logstash forum
and got confirmation that Windows 7 is not supported!
I have installed it on Windows 10 and am able to run this:
logstash -e 'input { stdin { } } output { stdout {} }'
Try this; it may solve your issue, and let me know if it does:
add an environment variable, say 'LS_SETTING_DIR', and browse to the path of the Logstash config directory; in my case it is C:\Users\mrizwan\Downloads\ELK\logstash-5.1.1\config
I don't know if you've solved this problem yet.
I experienced the same problem.
The cause is a space in the Logstash installation path,
which leads to a URL decoding error.
Find where the space is in your Logstash path and move the installation to a path without spaces.

how to configure logstash with elasticsearch in window8?

I'm currently trying to install and run Logstash on Windows 7 using the guidelines on the Logstash website. I am struggling to configure and use Logstash with Elasticsearch. I created logstash-simple.conf with the content below:
input { stdin { } }
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
When I execute the command below:
D:\logstash-2.4.0\bin>logstash agent -f logstash-simple.conf
I get the following error; I have tried many things but keep getting the same error:
No config files found: D:/logstash-2.4.0/bin/logstash-simple.conf
Can you make sure this path is a logstash config file? {:level=>:error}
The signal HUP is in use by the JVM and will not work correctly on this platform
D:\logstash-2.4.0\bin>
Read "No config files found" in the error: Logstash can't find the logstash-simple.conf file.
So try
D:\logstash-2.4.0\bin>logstash agent -f [direct path to the folder containing logstash-simple.conf]\logstash-simple.conf
Also verify that the extension is .conf and not something else like .txt (logstash-simple.conf.txt).

puppet: Could not back up <file>: Got passed new contents for sum

I had a question I was hoping someone might have an answer to. Essentially, I'm trying to ensure I'm always using a fixed, slightly older version of phpunit, which I've placed in my module's file resources.
The manifest:
file { "/usr/bin/phpunit":
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => 0755,
  source => "puppet:///modules/php/phpunit",
}
Preparation: I download the current ('wrong') version of phpunit and place it in /usr/bin.
So the first run puppet succeeds:
Notice: Compiled catalog for <hostname> in environment production in 3.06 seconds
Notice: /Stage[main]/Php/File[/usr/bin/phpunit]/content: content changed '{md5}9f61f732829f4f9e3d31e56613f1a93a' to '{md5}38789acbf53196e20e9b89e065cbed94'
Notice: /Stage[main]/Httpd/Service[httpd]: Triggered 'refresh' from 1 events
Notice: Finished catalog run in 15.86 seconds
Then I download the current (still 'wrong') version of phpunit and place it in /usr/bin again.
This time the puppet run fails.
Notice: Compiled catalog for <hostname> in environment production in 2.96 seconds
Error: Could not back up /usr/bin/phpunit: Got passed new contents for sum {md5}9f61f732829f4f9e3d31e56613f1a93a
Error: Could not back up /usr/bin/phpunit: Got passed new contents for sum {md5}9f61f732829f4f9e3d31e56613f1a93a
Error: /Stage[main]/Php/File[/usr/bin/phpunit]/content: change from {md5}9f61f732829f4f9e3d31e56613f1a93a to {md5}38789acbf53196e20e9b89e065cbed94 failed: Could not back up /usr/bin/phpunit: Got passed new contents for sum {md5}9f61f732829f4f9e3d31e56613f1a93a
What gives? If I delete the file ( /var/lib/puppet/clientbucket/9/f/6/1/f/7/3/2/9f61f732829f4f9e3d31e56613f1a93a/ ) from my filebucket it will work again... for the next run, but not the one after that.
What am I doing wrong?
I'd appreciate any input and thanks in advance.
Been having this error as well. I solved it with a combination of two previous answers.
Firstly I had to delete /var/lib/puppet/clientbucket on the client node by running:
sudo rm -r /var/lib/puppet/clientbucket
Just doing this will only let it run once more.
Then I had to set backup => false to stop it recreating the file; skipping either step failed to solve it for me. The accepted answer is incorrect in saying there is
"no solution other than upgrading".
I was able to fix the same problem by removing /var/lib/puppet/clientbucket on the client node.
This node has been running out of disk space, so puppet has probably incorrectly stored empty files there.
As a workaround, you can set backup => false in the file resource. This is a little unsafe, of course.
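A minimal sketch of that workaround, with the other attributes carried over from the manifest in the question:

```puppet
file { '/usr/bin/phpunit':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0755',
  source => 'puppet:///modules/php/phpunit',
  backup => false,  # skip the filebucket backup step that triggers the error
}
```

With backup => false Puppet never touches the clientbucket for this file, which is why it sidesteps the "Got passed new contents for sum" failure, at the cost of not keeping a recoverable copy of the replaced file.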
This has no solution other than to upgrade, since there's a bug in certain versions of Puppet where files containing both UTF-8 and binary characters are handled wrongly, resulting in this error message.
https://tickets.puppetlabs.com/browse/PUP-1038
The ridiculously overcomplicated solution I used as a workaround is to have a .tar file in the file resource which notifies an exec that untars it and places the actual executable in the correct directory, making sure the timestamp of the latter is newer than that of the former.
It's far from ideal but it works in cases like mine where upgrading puppet to the most current version isn't an attractive option.
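A rough sketch of that tar-based workaround; every path and resource name below is hypothetical:

```puppet
# Ship the tarball instead of the bare binary, and unpack it whenever it changes.
file { '/opt/staging/phpunit.tar':
  ensure => file,
  source => 'puppet:///modules/php/phpunit.tar',  # hypothetical module path
  notify => Exec['untar-phpunit'],
}

exec { 'untar-phpunit':
  command     => 'tar -xf /opt/staging/phpunit.tar -C /usr/bin && touch /usr/bin/phpunit',
  path        => '/usr/bin:/bin',
  refreshonly => true,  # run only when notified by the tarball changing
}
```

The trailing touch keeps the unpacked executable newer than the tarball, as described above, so the exec is not re-triggered spuriously.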

Puppet refuses to unzip archive

I want to download several libraries (guzzle, pimple) and unzip them immediately after.
For guzzle it works without any problems; however, it refuses to unzip pimple and returns the following error:
Exec[unflate-pimple]/returns: change from notrun to 0 failed: tar
-zvxf pimple-v1.1.1-0.tar.gz returned 2 instead of one of [0]
My exec:
exec { "unflate-$lib_name":
  cwd     => "/var/www/lib/$lib_name",
  command => "tar -zvxf $lib_name-$lib_version_prefix$lib_version.tar.gz",
  path    => "/usr/bin:/usr/sbin:/bin",
  require => Exec["download-$lib_name"],
}
Where
$lib_name = "pimple"
$lib_version_prefix = "v"
$lib_version = "1.1.1-0"
Unzipping it manually in the terminal when connecting through SSH works fine.
I already tried unzipping and zipping it again.
I feel completely lost, where is the problem?
To debug this kind of misbehavior, add the logoutput => true parameter to the exec resource.
exec { "unflate-$lib_name":
  cwd       => "/var/www/lib/$lib_name",
  command   => "tar -zvxf $lib_name-$lib_version_prefix$lib_version.tar.gz",
  path      => "/usr/bin:/usr/sbin:/bin",
  require   => Exec["download-$lib_name"],
  logoutput => true,
}
Newer versions of Puppet default logoutput to on_failure, which would be fine for your case, too.
The agent will then add the output of tar to the log. I cannot debug this for you further without seeing that output, but I suspect you will be able to solve the issue on your own once you see it.

Logstash - Failed to open <file_path> Permission denied

I am using Logstash to push all the text logs from storage to Elasticsearch.
My storage size is about 1 TB. To start with, I have begun pushing 368 GB of data (perhaps a few hundred thousand files) to Elasticsearch, but Logstash is failing with the following error:
{:timestamp=>"2014-05-15T00:41:12.436000-0700", :message=>"/root/share/archive_data/sessionLogs/965c6f46-1a5e-4820-a68d-7c32886972fc/Log.txt: file grew, old size 0, new size 1557420", :level=>:debug, :file=>"filewatch/watch.rb", :line=>"81"}
{:timestamp=>"2014-05-15T00:41:12.437000-0700", :message=>":modify for /root/share/archive_data/sessionLogs/965c6f46-1a5e-4820-a68d-7c32886972fc/Log.txt, does not exist in #files", :level=>:debug, :file=>"filewatch/tail.rb", :line=>"77"}
{:timestamp=>"2014-05-15T00:41:12.441000-0700", :message=>"_open_file: /root/share/archive_data/sessionLogs/965c6f46-1a5e-4820-a68d-7c32886972fc/Log.txt: opening", :level=>:debug, :file=>"filewatch/tail.rb", :line=>"98"}
{:timestamp=>"2014-05-15T00:41:12.441000-0700", :message=>"(warn supressed) failed to open /root/share/archive_data/sessionLogs/965c6f46-1a5e-4820-a68d-7c32886972fc/Log.txt: Permission denied - /root/share/archive_data/sessionLogs/965c6f46-1a5e-4820-a68d-7c32886972fc/Log.txt", :level=>:debug, :file=>"filewatch/tail.rb", :line=>"110"}
share is network mounted. I am using the root user to start Logstash. The user should have all the access needed on the mount.
share directory has following access
drwxr-xr-x 44 root root 0 May 13 08:36 share
Now, my log files are static; they don't change.
So my question is: is there any way to tell Logstash not to keep file handles open once it has processed a log file? I think the above error occurs because the number of log files is huge.
I have already filed a bug, and there is an existing bug report saying that Logstash doesn't cope well when the number of log files is large.
I see some duplicate issues here, but I would like to know if anybody has experience with this kind of issue.
I think, for Logstash 1.4.2, the only answer is to:
move or delete the files from the monitored directory
restart Logstash
I don't think there's any other way to have Logstash release file handles for logs that have been processed and won't be appended to any more.
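That procedure can be sketched as a couple of shell commands; the directory names are stand-ins (created here as temp dirs so the sketch is self-contained), and the restart command depends on how Logstash was installed:

```shell
WATCHED=$(mktemp -d)    # stand-in for the directory logstash is monitoring
ARCHIVE=$(mktemp -d)    # where already-processed logs are moved
touch "$WATCHED/app1.log" "$WATCHED/app2.log"

# 1. move (or delete) processed logs out of the monitored directory
mv "$WATCHED"/*.log "$ARCHIVE"/

# 2. restart logstash so it releases the old file handles,
#    e.g. `service logstash restart` (service name varies by install)
```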
