Linux Rsyslogd Configuration

I need to modify a couple of things in the configuration of rsyslogd, which brings me to this file: /etc/rsyslog.conf. I want to create a rule, more precisely a filter condition. I want to select all messages of facility mail with a priority of at least notice, but not err, and save them to a different file.
Therefore, it would go something like this:
mail.notice /var/log/myfile.log
mail.warning /var/log/myfile.log
mail.crit /var/log/myfile.log
# and so on
My question is: is there an easier way to filter those out than typing each one? The only documentation I found about this topic is here. I couldn't really tell from that doc whether you can do what I am asking, so I thought it would be a great question for Stack :D.
Also: I want to filter all messages of priority crit, but not of the facilities mail and news. Same question as above.

The way I do it in the code example above is wrong: mail.notice already selects notice and every higher priority.
A very good explanatory example can be found here.
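For the record, a minimal sketch of what single-line filters could look like, using the classic sysklogd-style selectors that rsyslog supports (the output paths are just examples):

# mail at notice and above, excluding exactly err
mail.notice;mail.!=err    /var/log/myfile.log
# exactly crit from every facility, except anything from mail and news
*.=crit;mail.none;news.none    /var/log/critical.log

Here = restricts a selector to a single priority, != excludes that priority, and facility.none drops all messages of that facility from the rule.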

Related

Approach to find duplicates - Kafka and queues

Question asked in an interview: suppose there are two Kafka topics, or let's say queues, Q1 & Q2.
Both have some messages, say 10 messages each.
The condition here is: if both queues have exactly the same messages, it's fine, but if there is even one odd or non-matching message, we need to error out or notify.
The approach I suggested for this problem:
1. Using a HashSet: we add the messages of the first queue to the set, and while adding the messages of the other queue, the add method will tell us whether a message was not already there.
2. We can use a HashMap and store the messages in key-value form; while adding, I check whether the key (the message) is already there.
But he was not satisfied, and he did not share the right answer or the problem with the above approach.
Let me know if a better solution exists, and what the problem with this approach is.
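To make the first approach concrete, a minimal Java sketch of the HashSet comparison described above (class and method names are hypothetical, and it assumes messages are unique within each queue):

import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class QueueComparator {
    // Returns true if both message lists contain exactly the same messages.
    // Assumes messages within a queue are unique; a count map (multiset)
    // would be needed if one queue could contain internal duplicates.
    static boolean sameMessages(List<String> q1, List<String> q2) {
        if (q1.size() != q2.size()) {
            return false; // different counts can never match
        }
        Set<String> seen = new HashSet<>(q1);
        for (String msg : q2) {
            if (!seen.contains(msg)) {
                return false; // q2 has a message q1 never had
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(sameMessages(List.of("a", "b"), List.of("b", "a"))); // true
        System.out.println(sameMessages(List.of("a", "b"), List.of("a", "c"))); // false
    }
}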
He may have been aiming to get into a discussion about the difficulties of balancing in a real-time streaming situation. Let's say there is a continuous stream of messages going through the two topics: how do you know if things are balanced?
There is no single answer; it depends on the situation, but it generally has to consider some kind of time window.
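As an illustration only, a tiny Java sketch of one windowing idea: bucket message timestamps into fixed windows and compare per-window counts between the two topics (all names hypothetical):

import java.util.HashMap;
import java.util.Map;

public class WindowedBalance {
    // Bucket message timestamps (epoch millis) into fixed windows
    // and count the messages that fall into each window.
    static Map<Long, Integer> countsPerWindow(long[] timestamps, long windowMillis) {
        Map<Long, Integer> counts = new HashMap<>();
        for (long ts : timestamps) {
            counts.merge(ts / windowMillis, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        long[] topicA = {1000, 1500, 2500};
        long[] topicB = {1100, 1600, 2600};
        // With 1-second windows both topics have the same per-window counts.
        System.out.println(countsPerWindow(topicA, 1000).equals(countsPerWindow(topicB, 1000)));
    }
}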
My guess is that the interviewer's dissatisfaction (if any) would be because he was looking to talk about options rather than going to one specific solution to a specific situation.
We can't know what he was thinking without asking (which I would always recommend) but when I'm interviewing I always look for candidates who can consider and discuss problems and trade-offs, not necessarily ones who have the 'right' solution.

Moving to next question when failing to receive a response

We had a developer come up with a prototype of a bot for bookkeeping questions, and we understand that the bot is not perfect. Our biggest challenge was to ensure that after 2-3 failed attempts to receive an appropriate response, the bot moves on to the next question, and that's it. Our previous developer claimed that it's not possible; is that true or not? Currently the bot just gives up after a couple of attempts and that's it.
I am not a tech person and I would really appreciate some help on this.
Hypothetical example of the ideal scenario:
Q: What accounting software do you use?
A: askdnjsajld
Q: Didn't get that. What accounting software do you use?
A: asdkjnajksdn
Q: I am sorry, didn't get that. Let's move on to the next question... When would you like to receive your financials?
A: month-end
Thank you for your help!
Yes, this is possible, although the exact details of how you do this depend on how you're handling responses from the user in the first place.
The most important thing to keep in mind to handle this, however, is to remember that Intents represent what the user says and not how you handle it or how you reply.
That last bit gives the simplest answer to your question - you can, of course, have the bot reply however you want after each round. It sounds like you were already able to implement logic that detects when the bot got a response it didn't want - you can extend that with a counter that simply uses the next question as the reply after a number of tries.
The more difficult part is to make sure that you know what question the user is replying to, and to make sure you capture the right value in case they try to go back and answer a previous question.
The general solution to this problem is twofold:
Have one Intent that accepts the direct answer to the question you're currently on, but have it triggerable only with certain Input Contexts being set. Then set that Context to be valid when you ask the question, and remove the Context when the question is answered.
Have other Intents that specifically respond to how the user might phrase the question if they were going back to answer it.
For example, if you have asked "What software do you use?" you might set the Context "ask-software-used". You would then have two Intents:
One with an Input Context of "ask-software-used" that accepts just valid software names.
Another with no Input Context, but which might have training phrases such as
"I'm using XXXX software"
"We are working with the XXXX package"
Once the user does answer the question, clear the "ask-software-used" Context, ask the next question, and set its Context.
You can also use this to keep track of how many times you've had to repeat the question, waiting for an answer. If that counter hits the limit, do the same thing: clear the Context, ask the next question, and set its Context.
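To illustrate the counter idea, here is a minimal, library-free Java sketch (all names and the attempt limit are hypothetical; in Dialogflow the counter would typically live in a context or session parameter, and onFallback would run in your fulfillment webhook):

public class RetryFallback {
    private static final int MAX_ATTEMPTS = 2; // hypothetical limit before moving on

    private int attempts = 0;
    private int questionIndex = 0;
    private final String[] questions = {
        "What accounting software do you use?",
        "When would you like to receive your financials?",
    };

    // Called when the user's reply could not be matched to the current question.
    String onFallback() {
        attempts++;
        if (attempts > MAX_ATTEMPTS) {
            return moveOn("I am sorry, didn't get that. Let's move on to the next question... ");
        }
        return "Didn't get that. " + questions[questionIndex];
    }

    // Called when the reply was understood and the answer captured.
    String onAnswered() {
        return moveOn("Got it. ");
    }

    private String moveOn(String prefix) {
        attempts = 0; // reset the counter for the new question
        questionIndex = Math.min(questionIndex + 1, questions.length - 1);
        return prefix + questions[questionIndex];
    }

    public static void main(String[] args) {
        RetryFallback bot = new RetryFallback();
        System.out.println(bot.onFallback()); // reprompt 1
        System.out.println(bot.onFallback()); // reprompt 2
        System.out.println(bot.onFallback()); // gives up and asks the next question
    }
}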

What is the best practice to create a Q&A Alexa app?

I want to make a simple Q&A Alexa app similar to Alexa's custom Q&A blueprint app. I don't want to use blueprints because I need additional functionality. What is the best practice for creating the Alexa app? Should I create a separate intent for each question or should I somehow use utterances?
The best way depends upon what the questions are and how they will be asked.
1. If the questions have a simple structure
Consider these examples:
what is a black hole
define supernova
tell me about milkyway
what is a dwarf star
then it can be configured like this in an intent:
what is a {space}
define {space}
tell me about {space}
and the slot {space} -> black hole, supernova, milkyway, dwarf star.
From the slot value, you can understand what the question is and respond. Since Alexa will also fill slots with values other than those configured, you will be able to accommodate more questions that follow this sentence structure.
2. If the question structure is a little more complex
what is the temperature of sun
temperature to boil water
number of eyes of a spider
what is the weight of an elephant
then it can be configured like this in an intent:
what is the {unit} of {item}
{unit} to boil {item}
{unit} of eyes of a {item}
what is the {unit} of an {item}
Here,
{unit} -> temperature, number, weight, height etc.
{item} -> sun, moon, water, spider etc
With proper validation of slots you will be able to provide the right answer to the user.
Also, you will be able to provide suggestions if the user asks a question partially.
Ex:
user: what is the temperature
[slots filled: "unit"="temperature","item":""]
Now, you know that the user asked about temperature but the item is missing, so you respond back with a suggestion like this
"Sorry I didn't understand. Do you want to know the temperature of the sun?"
3. If the questions have a totally different structure
How to deal with an annoying neighbor
What are the types of man made debris in space
Recommend few good Nickelback songs
Can I jump out of a running train
If your questions are like this, with a totally random structure, you can focus on certain keywords or the crux of the question and group them. Even if you can't group them, find out the required fields or mandatory words.
IntentA: How to deal with an annoying {person}
IntentB: What are the types of man made {item} in {place}
IntentC: Recommend few good {person} songs
IntentD: Can I {action} out of a running {vehicle}
The advantage of using slots here is that even if the user asks a partial question and an associated intent is triggered, you will be able to identify it and respond back with an answer/suggestion or error message.
Ex:
user: what are the types of man made mangoes in space
[IntentB will be triggered]
If you have configured this without a mandatory slot, your backend will only look at which intent was triggered and will respond with the "right" answer (man-made debris in space), which in this case won't make any sense to the user.
Now, with proper usage of slots and validation, you can see that instead of "debris" your backend received "mangoes", which is not valid. Therefore you can respond back with a suggestion or error message like
"Sorry, I don't know that. Do you want to know about the man made debris found in space"
Grouping questions will help you to add other similar questions later with ease. You can use one intent per question if it is too difficult to group. But remember to validate it with a slot if you want to avoid the situation mentioned right above.
While naming question-intents use a prefix. This might help you to group handlers in your backend code depending on your backend design. This is not mandatory, just a suggestion.
Summary:
Group questions with similar structure.
Use slots appropriately and validate them.
Use predefined slots wherever possible.
Don't depend on intents alone, because an intent can be triggered merely for being the closest match, while the question might be entirely different or might not make any sense. So use slots appropriately and validate them.
If possible provide suggestion for partial questions.
Test thoroughly and make sure it won't break your interaction model.
You should check the Alexa Dialog Interface, which allows you to build Q&A or quiz skills.
https://developer.amazon.com/fr/docs/custom-skills/dialog-interface-reference.html

Can you please explain the difference between the BLKSET and NOBLKSET sort options?

Recently I came across an abend in a SORT step in a mainframe job, where SORTOUT is a VSAM file and SORTIN is a sequential file.
The error is:
ICE077A 0 VSAM OUTPUT ERROR L(12) SORTOUT
One of my senior colleagues suggested that I check for duplicates, but I didn't find any duplicates in the input file.
After going through some manuals, I found that the OPTION NOBLKSET control card overrides the default BLOCKSET copy technique and can be used to bypass sort errors (provided all the possible effects of bypassing the sort error are analysed), so I used OPTION NOBLKSET. Now the step executes successfully.
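For context, a minimal sketch of where such a control card sits in the step (the dataset names and sort field here are purely illustrative):

//SORTSTEP EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DISP=SHR,DSN=MY.SEQ.INPUT     hypothetical sequential input
//SORTOUT  DD DISP=SHR,DSN=MY.VSAM.KSDS     hypothetical VSAM KSDS
//SYSIN    DD *
  OPTION NOBLKSET
  SORT FIELDS=(1,12,CH,A)
/*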
After analysing the SYSOUT I found that
ICE143I K PEERAGE SORT TECHNIQUE SELECTED
Can anyone explain how the BLOCKSET technique works and how the PEERAGE technique works?
SORT used in our system is DFSORT.
You can start here, which explains that of three techniques Blockset is DFSORT's preferred and most efficient technique for sorting, merging and copying datasets: http://pic.dhe.ibm.com/infocenter/zos/v1r12/index.jsp?topic=%2Fcom.ibm.zos.r12.icea100%2Fice1ca5028.htm
Peerage/Vale and Conventional are the other two techniques; whichever is thought to be the next best is selected when it is not possible to use Blockset.
You have misread the references to the use of NOBLKSET. In cases of what would effectively be "internal" errors encountered by DFSORT while Blockset is in use, turning off Blockset will cause another sort technique to be chosen, which may get your step to run and Production finished whilst the error in the Blockset step is investigated.
NOBLKSET is not a cure-all and should not be a regular part of your use of DFSORT. You should only use NOBLKSET in very limited circumstances which are suggested to you for very particular reasons. Blockset is significantly more efficient than Peerage/Vale or Conventional.
You should update your question with a sample of the input data and an IDCAMS LISTCAT of the KSDS.
You either had a duplicate key, or the insertions (the records being written) were not in key sequence. Remember you can get duplicates if you have a KSDS with data on it already.
If you want details about Blockset and Peerage/Vale, you'll have to hit technical journals and possibly patent listings. I don't know why you'd want to go that far. Perhaps knowing that, you now don't?

Is there a log file analyzer for log4j files? [closed]

I am looking for some kind of analyzer tool for log files generated by log4j. I am looking for something more advanced than grep. What are you using for log file analysis?
I am looking for the following kinds of features:
The tool should tell me how many times a given log statement or stack trace has occurred, preferably with support for some kind of pattern (e.g. the number of log statements matching 'User [a-z]* logged in').
Breakdowns by log level (how many INFO, DEBUG lines) and by the class that initiated the log message would be nice.
Breakdown by date (how many log statements in a given time period)
What log lines occur commonly together?
Support for several files, since I am using log rolling
Hot spot analysis: find whether there is some time period with an unusually high number of log statements
Either command-line or GUI are fine
Open Source is preferred but I am also interested in commercial offerings
My log4j configuration uses org.apache.log4j.PatternLayout with pattern %d %p %c - %m%n but that could be adapted for analyzer tool.
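For reference, a minimal Java sketch of the kind of counting meant above, assuming the %d %p %c - %m%n layout from the question (the regexes and file handling are illustrative only):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LogCounter {
    public static void main(String[] args) throws IOException {
        // Layout "%d %p %c - %m%n": date, time, level, logger, "-", message.
        Pattern line = Pattern.compile("^(\\S+ \\S+) (\\w+) (\\S+) - (.*)$");
        Pattern wanted = Pattern.compile("User [a-z]* logged in"); // example pattern from the question
        Map<String, Integer> byLevel = new HashMap<>();
        int matches = 0;
        for (String s : Files.readAllLines(Path.of(args[0]))) {
            Matcher m = line.matcher(s);
            if (!m.matches()) continue; // skip stack-trace continuation lines
            byLevel.merge(m.group(2), 1, Integer::sum); // count per log level
            if (wanted.matcher(m.group(4)).find()) matches++;
        }
        System.out.println("By level: " + byLevel);
        System.out.println("Pattern matches: " + matches);
    }
}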
(disclaimer: I'm one of the developers contributing to Chainsaw V2)
Chainsaw V2 can provide some of the functionality you're looking for through its support for custom expressions and the ability to use those expressions to colorize, search and filter events.
You can load multiple log files into Chainsaw (by default, all events for a log file are placed on a logfile-specific tab). You can also define a 'custom expression logpanel' which will aggregate events from all tabs into a new tab matching an expression you provide - similar to a database 'view', you could use the expression 'LEVEL >= WARN' to collect all warning, error & fatal messages from any log file into a single view.
Some example expressions which could be used to colorize, search or filter events:
msg like 'User [a-z]* logged in'
msg ~= login || msg ~= logout
level > INFO
exception exists
timestamp <= '2010/04/06 15:05:35'
The only way to get 'counts' currently is to define an expression in the 'refine focus' field (the count of events matching the expression will show in the status bar).
One of the useful features added to the upcoming release is a clickable bar to the right of the table (similar to Eclipse's or IDEA's bar showing syntax error indications) which will display color rule and search expression matches for the entire log file.
When the next version of Chainsaw V2 comes out, I hope you give it a spin - it's Open Source, free, and we're always interested in suggestions & feedback.
I'd suggest Splunk. It provides fast, Google-like searching across lots (terabytes) of logs, is easy to filter (e.g. by log level or date), makes it easy to correlate into transactions of multiple related log events, etc.
There's a downloadable version that's free as long as you're indexing less than 500MB of logs per day.
Take a look at Apache Chainsaw http://logging.apache.org/chainsaw/index.html for your needs
You can try LogSaw; it's open-source software based on Eclipse, and it is actively maintained right now...
This might come a bit late, but LogMX does all this stuff, and it has been actively developed for many years now. It is not open source, but it is more powerful than it might seem!
Mind Tree Insight is also a useful Open Source Log Analysis tool
http://sourceforge.net/projects/mindtreeinsight
I have created a custom tool for that: https://plus.google.com/u/0/102275357970232913798/posts/Fsu6qftH2ja
Alfa is a GUI tool for analyzing log files. Usually you are forced to search for data in them using editors: you open a log, press Ctrl-F and the "Next" button again and again, then reload the file as it is modified, and repeat the search. Alfa maps a log file to a database, allowing you to use standard SQL queries to get the data without any superfluous actions.
You can also try an online log file analysis tool:
http://www.sharontools.com/tools/LogAnalysis/Main.php
glogg is a simple but powerful tool. It lets you color lines by filter expressions and has breakpoint-style markers. A separate panel shows search results and/or markers.
