Logstash log ftp input - logstash

Hi there,
My log files are stored on a remote server where the directory is only accessible via a browser.
Each day, if new log files are uploaded to the server, they are stored like this:
ftp://serverip.com/logs/2014/10/08/log.txt
ftp://serverip.com/logs/2014/10/08/log2.txt
ftp://serverip.com/logs/2014/10/08/log.xml
ftp://serverip.com/logs/2014/10/08/log.xlx
The timestamp would be the time each file was uploaded to the server (I can use curl to see its timestamp).
input {
  exec {
    codec => plain { }
    command => "curl ftp://serverip.com/logs/2014/10/08/"   # this lists the dir
    interval => 3000
  }
}
output {
  stdout { codec => rubydebug }
  # elasticsearch { embedded => true }
}
The problem is: how can I combine/link the listing's timestamp with the events from the files in those directories, since there is no timestamp inside the log files themselves?
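One approach (my sketch, not from the thread): the FTP listing that curl prints includes a modification time for each file, so you can split the listing into one event per line, pull the time columns out with grok, and feed them into the date filter so they become the event's @timestamp. This assumes a typical Unix-style listing line such as "-rw-r--r-- 1 ftp ftp 1024 Oct 08 12:34 log.txt":

input {
  exec {
    command => "curl -s ftp://serverip.com/logs/2014/10/08/"
    interval => 3000
  }
}
filter {
  # the exec input emits the whole curl output as one event;
  # split it into one event per listing line
  split { }
  grok {
    # capture the month/day/time columns and the filename from the listing line
    match => { "message" => "%{MONTH:month} +%{MONTHDAY:day} +%{HOUR:hour}:%{MINUTE:minute} +%{GREEDYDATA:filename}" }
  }
  mutate {
    # rebuild the time with single spaces so the date filter can parse it
    add_field => { "listed_time" => "%{month} %{day} %{hour}:%{minute}" }
  }
  date {
    # FTP listings omit the year, so the current year is assumed
    match => [ "listed_time", "MMM dd HH:mm", "MMM d HH:mm" ]
  }
}

You would still need a second step to fetch each file's contents and tie those events back to the matching filename field from the listing.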

Related

extract 7z file on S3 using node.js

Can someone suggest an npm package for extracting 7z files in Node.js?
I can see some npm packages for ZIP files, but those do not work with 7z.
I'm basically looking to extract a password-protected 7z file on S3 and read the data from it.
Give the node-7z package a try:
npm i node-7z
import Seven from 'node-7z'

// myStream is a Readable stream
const myStream = Seven.extractFull('./archive.7z', './output/dir/', {
  $progress: true
})

myStream.on('data', function (data) {
  doStuffWith(data) //? { status: 'extracted', file: 'extracted/file.txt' }
})

myStream.on('progress', function (progress) {
  doStuffWith(progress) //? { percent: 67, fileCount: 5, file: undefined }
})

myStream.on('end', function () {
  // end of the operation; get the number of folders involved in the operation
  myStream.info.get('Folders') //? '4'
})

myStream.on('error', (err) => handleError(err))
It also supports the password feature you were asking about.
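For the password-protected part, node-7z takes a password option on the same call (it maps to 7-Zip's -p switch); a minimal sketch, with the password value as a placeholder:

import Seven from 'node-7z'

// extract a password-protected archive
const myStream = Seven.extractFull('./archive.7z', './output/dir/', {
  password: 'my-secret-password'
})

Note that 7-Zip operates on local files, so for an archive sitting on S3 you would first download the object (for example with the AWS SDK) to a temporary path and extract from there.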

How to get log filename in codec plugin inside of file input plugin logstash

Below is the configuration I want to ask about:
input {
  file {
    path => "directory/*.log"
    start_position => "beginning"
    codec => my_own_codec_plugin {
      ....
    }
    sincedb_path => "/dev/null"
  }
}
I have some log files in the same directory, and I can reach them all by using * in the path. I have created "my_own_codec_plugin" for the file input plugin.
I want to pass the log filename to "my_own_codec_plugin".
I mean, when the path reaches logfile1.log, send that name to the codec plugin; when it reaches logfile2.log, send that filename to the codec plugin again.
How can I do this? Thanks for answering.
In your custom codec you're receiving the event, and the event should have a path field with the actual path of the file that you can use.
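To illustrate what that path field gives you once events leave the input, here is a small sketch (not specific to your codec) that copies the source filename onto each event:

filter {
  ruby {
    # the file input sets "path" to the originating file; keep just the basename
    code => 'event.set("source_file", ::File.basename(event.get("path")))'
  }
}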

Logstash file input not reparsing file

I have the following problem: I need Logstash to reparse already-parsed files.
Scenario that doesn't work but should:
upload a file to the watched folder
Logstash processes it, saves it to Elastic, and removes it (file_completed_action => "log_and_delete"), great
I upload the same file again: same name, same content
Logstash doesn't do anything; I want it to process the file again
Here is my file input config:
file {
  mode => "read"
  exclude => "*.tif"
  path => ["/home/xmls/*.xml"]
  file_completed_action => "log_and_delete"
  file_completed_log_path => "/var/log/logstash/completed.log"
  sincedb_path => "/dev/null"
  start_position => "beginning"
  codec => multiline {
    pattern => ".*"
    what => "previous"
    max_lines => 100000
    max_bytes => "200 MiB"
  }
  type => "my-custom-type-1"
}
sincedb_path is set to /dev/null, so it should not remember processed files; I also tried setting ignore_older to 0, which didn't help.
I also tried messing with the queue settings in logstash.yml and changed the queue to persistent, but that didn't work either.
I'm using Logstash 7.5 with logstash-input-file (4.1.11), running on a Linux machine.
When I restart Logstash, the unprocessed files get processed and cleaned up.
I need it to work without restarting.
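For anyone hitting the same wall, one knob worth knowing about (my note, not a confirmed fix): the file input keeps sincedb state in memory even when sincedb_path is /dev/null, and that in-memory state is only dropped on restart. The sincedb_clean_after option controls how long entries are retained, so shortening it might let a re-uploaded file be picked up again; a hedged sketch:

file {
  mode => "read"
  path => ["/home/xmls/*.xml"]
  sincedb_path => "/dev/null"
  # assumption: expiring in-memory sincedb entries quickly so a re-uploaded
  # file with the same name is treated as new; the value is in days here
  sincedb_clean_after => 0.01
  file_completed_action => "log_and_delete"
  file_completed_log_path => "/var/log/logstash/completed.log"
}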

Index not creating from couchdb with logstash

I am using this config file to create an index and import data from CouchDB.
input {
  couchdb_changes {
    db => "roles"
    host => "localhost"
    port => 5984
  }
}
output {
  elasticsearch {
    document_id => "%{[@metadata][_id]}"
    document_type => "%{[@metadata][type]}"
    host => "localhost"
    index => "roles_index"
    protocol => "http"
    port => 9200
  }
}
I was able to run Logstash with this config file and import data once. I closed the command prompt to shut down Logstash, then re-opened it and ran Logstash with the same config file again, but now I cannot see any index being created. Is there anything I might be doing wrong here? I am using Ctrl+C to kill Logstash in the command prompt. I will appreciate any help.
Thanks.
In case someone comes here looking for the answer to the same thing: I set sequence_path => "my_couchdb_seq" in the couchdb_changes { } section of my config file and it worked. Each time I want to run Logstash to recreate the index, the value in this file should be reset to 0. See https://discuss.elastic.co/t/index-not-creating-from-couchdb-with-logstash/27848/9 for details.
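That is, something like this in the input section (a sketch based on the answer above; the sequence filename is whatever you choose):

input {
  couchdb_changes {
    db => "roles"
    host => "localhost"
    port => 5984
    # persists the last processed CouchDB update sequence between runs;
    # reset the file's contents to 0 to force a full re-import
    sequence_path => "my_couchdb_seq"
  }
}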

Run custom services on startup with puppet

I'm migrating our old process for doing our Linux configurations to be managed by Puppet, but I'm having issues figuring out how to do this. We add some custom scripts to the init.d folder on our systems to manage some processes, and these then need the following command executed on them to launch on startup:
update-rc.d $file defaults
So what I'm doing with Puppet is that I have all these scripts residing in a directory, and I copy them over to init.d. I then want to call 'exec' on each of these files with the former command, using the file name as an argument. This is what I have so far:
# copy init files
file { '/etc/init.d/':
  ensure  => 'directory',
  recurse => 'remote',
  source  => ["puppet:///files/init_files/"],
  mode    => '0755',
  notify  => Exec[echo],
}
exec { "echo":
  command     => "update-rc.d $file defaults",
  cwd         => "/tmp", # directory to execute from
  path        => "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:",
  refreshonly => true
}
This copies all the files and calls exec when something is added or updated, but what I can't figure out is how to pass the name of the file as an argument into the exec command. It seems I'm really close, but I just can't find anything that helps with what I need to do. Is this the right way to achieve this?
Thanks.
You're probably not going to accomplish that if you're using ensure => 'directory'. You will want to declare a file resource for each individual init script. And exec isn't the way to go to enable a service; use the service resource.
file { '/etc/init.d':
  ensure => 'directory',
  mode   => '0755'
}
file { '/etc/init.d/init_script_1':
  ensure => 'present',
  owner  => 'root',
  group  => 'root',
  mode   => '0755',
  notify => Service['init_script_1']
}
file { '/etc/init.d/init_script_2':
  ensure => 'present',
  owner  => 'root',
  group  => 'root',
  mode   => '0755',
  notify => Service['init_script_2']
}
service { 'init_script_1':
  ensure => running,
  enable => true
}
service { 'init_script_2':
  ensure => running,
  enable => true
}
Hope this helps.
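If the list of scripts grows, Puppet 4+ iteration can generate the file/service pairs instead of declaring each one by hand; a sketch reusing the source path from the question (the script names here are placeholders):

['init_script_1', 'init_script_2'].each |String $script| {
  file { "/etc/init.d/${script}":
    ensure => 'file',
    owner  => 'root',
    group  => 'root',
    mode   => '0755',
    source => "puppet:///files/init_files/${script}",
    notify => Service[$script],
  }
  service { $script:
    ensure => running,
    enable => true,
  }
}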
