I tried to use Log4View for the first time, so my question is basic. Is it possible to point Log4View at a folder that contains zipped log files? And if yes, how can I configure Log4View correctly? I tried but can't find an example of how to do that.
c:\Folder1\zippedLogfiles001.zip
c:\Folder1\zippedLogfiles002.zip
c:\Folder1\zippedLogfiles003.zip (up to 300 logfiles)
...
I heard Log4View can read automatically from a folder, so that the log files don't need to be unzipped manually.
Log4View doesn't work with zipped log files.
To open a log file with Log4View, it has to be a regular, uncompressed text or XML file.
Ulrich
Related
Is it possible to download a file nested in a zip file, without downloading the entire zip archive?
For example from a url that could look like:
https://www.any.com/zipfile.zip?dir1\dir2\ZippedFileName.txt
Depending on whether you are asking for a simple way of implementing this on the server side, or for a way of using standard protocols so you can do it from the client side, there are different answers:
Doing it with the server's intentional support
Optimally, you implement a handler on the server that accepts a query string on any file download, similar to your suggestion (I would, however, include a variable name, for example ?download_partial=dir1/dir2/file). The server can then extract the file from the ZIP archive and serve just that (perhaps via a compressed stream if the file is large).
If this is the path you are going and you update the question with the technology used on the server, someone may be able to answer with suggested code.
But on with the slightly more fun way...
Doing it opportunistically if the server cooperates a little
There are two things that conspire to make this a bit feasible, but only worth it if the ZIP file is massive in comparison to the file you want from it.
ZIP files have a directory (the central directory) that says where in the archive each file is stored. This directory sits at the end of the archive.
HTTP servers optionally allow clients to download only a byte range of a response.
So, if we issue a HEAD request for the URL of the ZIP file (HEAD /path/file.zip), we may get back an Accept-Ranges: bytes header and a Content-Length header that tells us the length of the ZIP file. If we have those, we can issue a GET request with a header such as Range: bytes=1000000-1024000, which would give us just that part of the file.
The directory of files is towards the end of the archive, so if we request a reasonable block from the end of the file then we will likely get the central directory included. We then look up the file we want, and know where it is located in the large ZIP file.
We can then request just that range from the server, and decompress the result...
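As a sketch of the client-side trick described above: Python's zipfile module only needs a seekable file-like object, so we can feed it one that turns every read into an HTTP Range request. The class name and structure below are my own illustration, not a standard API; it assumes the server answers the HEAD request with Content-Length and honors Range.

```python
import io
import zipfile
import urllib.request


class RangeReader(io.RawIOBase):
    """Seekable file-like object over a fetch(start, end) callable that
    returns the bytes in that inclusive range. zipfile can use it to read
    the central directory and a single member without downloading the
    whole archive."""

    def __init__(self, fetch, size):
        self.fetch = fetch
        self.size = size
        self.pos = 0

    @classmethod
    def from_url(cls, url):
        # HEAD request: learn the total size (and that ranges are accepted).
        head = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(head) as resp:
            size = int(resp.headers["Content-Length"])

        def fetch(start, end):
            # GET with a Range header downloads only the requested bytes.
            req = urllib.request.Request(
                url, headers={"Range": "bytes=%d-%d" % (start, end)}
            )
            with urllib.request.urlopen(req) as resp:
                return resp.read()

        return cls(fetch, size)

    def readable(self):
        return True

    def seekable(self):
        return True

    def tell(self):
        return self.pos

    def seek(self, offset, whence=io.SEEK_SET):
        if whence == io.SEEK_SET:
            self.pos = offset
        elif whence == io.SEEK_CUR:
            self.pos += offset
        else:  # io.SEEK_END -- zipfile uses this to find the directory
            self.pos = self.size + offset
        return self.pos

    def read(self, n=-1):
        if n is None or n < 0:
            n = self.size - self.pos
        n = min(n, self.size - self.pos)
        if n <= 0:
            return b""
        data = self.fetch(self.pos, self.pos + n - 1)
        self.pos += len(data)
        return data


# Hypothetical usage against the URL from the question:
# remote = zipfile.ZipFile(RangeReader.from_url("https://www.any.com/zipfile.zip"))
# data = remote.read("dir1/dir2/ZippedFileName.txt")
```

Note that zipfile will issue several small range requests (end-of-central-directory, central directory, then the member itself), so this is only a win when the archive is much larger than the file you want.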
I have a set of old RRD files that I need to convert to XML files and then back to RRD files on a new server. To create XML files from the whole directory, I used the following simple bash script.
#!/bin/bash
cd /varcacti/rra
for i in ./traffic_in*; do
    /usr/local/rrdtool/bin/rrdtool dump "$i" "/home/newuser/rrd_dump_xml/$i.xml"
done
This produces a set of XML files, but the extension is not what I actually need. It produces:
traffic_in_1111.rrd>>>traffic_in_1111.rrd.xml
But I need traffic_in_1111.rrd>>>traffic_in_1111.xml. Can someone help me modify my code?
You want
/home/newuser/rrd_dump_xml/"${i%.rrd}".xml
#..........................^^^^^^^^^^^
which removes the .rrd extension from the end of the string.
You might want to be more specific with the files you're looping over:
for i in ./traffic_in*.rrd
I am writing a WinSCP script in VBA to synchronize certain files from remote to local.
The code I am using is
""synchronize -filemask=""""*.xlsx"""" local C:\Users\xx\Desktop /JrnlDetailSFTPDirect""
There are three xlsx files: 14.xlsx, 12.xlsx, 13.xlsx. However, it seems like the script runs through all the files even though it does not synchronize them. Besides, one folder under /JrnlDetailSFTPDirect is also downloaded from the remote, which is not expected.
Is it possible to avoid looping through all the files and instead select just those three files and download them?
Thanks
There are separate masks for files and folders.
To exclude all folders, use */ exclude mask:
synchronize -filemask="*.xlsx|*/" local C:\Users\xx\Desktop /JrnlDetailSFTPDirect
See How do I transfer (or synchronize) directory non-recursively?
I cannot tell anything regarding the other problem, as you didn't show us the names of the files. Ideally, append a session log file to your question. Use the /log switch like:
winscp.com /log=c:\writablepath\winscp.log /command ...
I wonder if you can configure logstash in the following way:
Background Info:
Every day an XML file is pushed to my server, which should be parsed.
To indicate a complete file transfer, an empty .ctl (custom) file is afterwards transferred to the same folder.
Both files have the following name schema: 'feedback_{year}{yearday}_UTC{hoursminutesseconds}_51.{extension}' (e.g. feedback_16002_UTC235953_51.xml). So they share the same base name, but one has the .xml extension and the other .ctl.
Question:
Is there a way to configure logstash to wait to parse the XML file until the corresponding .ctl file is present?
EDIT:
Is there maybe a way to achieve that with filebeat?
EDIT2:
It would also be enough to be able to configure logstash so that it waits x minutes before starting to process a new file, if that is easier.
Thanks for any help in advance
Your problem is that you don't want to start the parser before the file transfer has completed. So, why not push the data to a file (file-complete.xml) when you find your flag file (empty.ctl)?
Here is the possible logic for a script that runs via crontab:
if empty.ctl exists:
    Clear file-complete.xml
    Add the content of file.xml to file-complete.xml
    Remove empty.ctl
This way, you'd only need to parse the data from file-complete.xml. I think it is simpler to debug and configure.
Hope it helps,
I have zip files which contain log files. In my scenario, I need to extract the log files from the zip archives and then parse them with Logstash. Could anyone guide me? It would be very helpful. Thanks for your valuable time.
Logstash itself doesn't have any input plugins to process log files inside zip archives, but as long as you extract the files yourself to a directory that you've configured Logstash to look for log files in, you'll be fine. The standard file input plugin accepts wildcards, so you could say
input {
  file {
    path => ["/some/path/*.log"]
  }
}
to have Logstash process all files in the /some/path directory (you'd probably want to assign a type to the messages too).
If the names of the files inside the zip archives aren't unique you'll have to make sure you don't overwrite existing files when you unzip the archives. Or, probably better, extract each archive into a directory of its own and have Logstash process the whole tree:
input {
  file {
    path => ["/some/path/*/*.log"]
  }
}
Logstash isn't capable of deleting processed logs or moving them out of the way, so that's something you'll have to take care of too.