Write to rsyslog with custom app name - python-3.x

What I have
I have written a Python class that writes messages to a custom log via rsyslog. This builds on my previous question, which I have temporarily worked around by prepending my app's name to each message and configuring rsyslog to route every log containing my app's name to my custom log. However, I am concerned that if something else on the system writes to rsyslog and its message just so happens to contain my app's name, rsyslog will send that log entry to my log as well. I would also prefer to see my app's name instead of journal throughout my log.
Here is my code:
import logging
from logging import handlers
lumberjack = logging.getLogger("MyAppName")
lumberjack.setLevel(logging.INFO)
handler = handlers.SysLogHandler(address='/dev/log')
handler.setFormatter(logging.Formatter('%(name)s %(levelname)s: %(message)s'))
handler.ident = "MyAppName"
lumberjack.addHandler(handler)
lumberjack.critical("I'm okay")
Goal
The following two messages are examples. The first was written by my Python class. The second was written by me running logger -t MyAppName -s "Hey buddy, I think you’ve got the wrong app name"
Aug 22 15:49:53 melchior journal: MyAppNameMyAppName CRITICAL: I'm okay.
Aug 22 15:57:06 melchior MyAppName: Hey buddy, I think you’ve got the wrong app name
Question
What do I have to change in my Python code to get these lines to look the same, while keeping the levelname included as I have already done?

Change the following line
handler.setFormatter(logging.Formatter('%(name)s %(levelname)s: %(message)s'))
to look like this (basically just add a colon after %(name)s):
handler.setFormatter(logging.Formatter('%(name)s: %(levelname)s: %(message)s'))
then remove the following line to avoid app name duplication:
handler.ident = "MyAppName"
and now it does the trick:
Sep 10 06:52:33 hostname MyAppName: CRITICAL: I'm okay
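A minimal sketch of the corrected setup, applied to a synthetic record via logging.Formatter directly so it runs without a /dev/log socket, shows the resulting line:

```python
import logging

# Corrected formatter: the app name now comes from the logger name in
# the format string, and handler.ident is no longer set at all.
fmt = logging.Formatter('%(name)s: %(levelname)s: %(message)s')

# Build a synthetic record so the demo runs without a /dev/log socket.
record = logging.makeLogRecord({
    'name': 'MyAppName',
    'levelname': 'CRITICAL',
    'msg': "I'm okay",
})
print(fmt.format(record))  # MyAppName: CRITICAL: I'm okay
```

With SysLogHandler, rsyslog then takes the first word of the formatted message ("MyAppName:") as the tag, matching the output of logger -t MyAppName.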

Related

config from router shows as one line need multiline for ciscoconfparse

I'm a network guy who is new to Python and programming, and I found this ciscoconfparse library that looks to have some pretty useful features. I'm running into an issue that I'm sure is something basic, but I haven't figured it out. I'm trying to pull the SNMP config from a router to create a config set that removes the v2 configs. I'm using Netmiko to grab the output of "show run | include snmp" and then parse it. The config that comes back shows as one line. When I use ciscoconfparse statements to delete some lines, it deletes everything (presumably because it is only one line), so I have nothing left to build on.
In all the examples online, the sample config looks like this, and the functions work because it is multiple lines:
conf=[
'access-list dmz_inbound extended deny udp object training-network any4 eq snmp',
'snmp-server host inside 10.10.10.10 poll community ***** version 2c',
'snmp-server host inside 10.20.20.20 poll community ***** version 2c',
'no snmp-server location',
'no snmp-server contact',
'snmp-server community *****',
'!'
]
When I actually pull the config from a router, it has newline characters but gets parsed as one line:
'access-list testNada extended permit udp host 10.10.10.10 eq snmp host 10.20.10.10 eq snmp \nsnmp-server host inside 10.11.11.11 community ***** version 2c\nsnmp-server host inside 10.5.5.5 poll community ***** version 2c\nno snmp-server location\nno snmp-server contact\nsnmp-server community *****\n']
Here's a snippet of the code I'm running. The delete_lines statements delete the whole config snippet rather than just the lines matching the argument.
conf = [ssh.send_command("show run | include snmp")]
parse = CiscoConfParse(conf)
parse.delete_lines('no snmp-server')
parse.delete_lines('access-list')
newConf = (parse.replace_lines('snmp', 'no snmp',excludespec='v3'))
ssh.send_config_set(newConf)
How do I get the config pulled directly from the router to show as multiple lines so I can use the ciscoconfparse functions?
You are passing CiscoConfParse a one-element list containing a single multi-line string instead of a list of lines.
Try the below:
# send_command() returns one multi-line string
output = ssh.send_command("show run | include snmp")
# splitlines() turns that string into a list, one line per element
formatted_output = output.splitlines()
parse = CiscoConfParse(formatted_output)
parse.delete_lines('no snmp-server')
parse.delete_lines('access-list')
newConf = parse.replace_lines('snmp', 'no snmp', excludespec='v3')
Explanation: Netmiko returns a string (\n means a new line, but it's still a string). splitlines() turns that string into a list, with each line as a separate element.
I was able to get it to work by iterating over the list returned by Netmiko, which gave a 'list in a list': basically, it was a list whose single element was the list containing the config.
newConf=[]
for i in conf:
newConf.append(i.split("\n"))
which returned
[['access-list testNada extended permit udp host 10.10.3.10 eq snmp host 10.10.10.10 eq snmp ', 'snmp-server host inside 10.4.233.8 community ***** version 2c', ....]]
Then I ran index 0 through the parser:
parse = CiscoConfParse(newConf[0])
which resulted in multiple lines and I could use the delete_lines and replace_lines functions to produce the negated config I want to send back to my devices:
['no snmp-server host inside 10.4.233.8 community ***** version 2c', 'no snmp-server host inside 10.3.25.17 poll community ***** version 2c',...]
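For reference, a small stdlib-only sketch (with a made-up config string standing in for the Netmiko output) shows why splitlines() on the raw string gives a flat list, while looping over the one-element list and splitting yields the nested list that needed the [0] index:

```python
# Hypothetical multi-line output, standing in for ssh.send_command()
raw = ('access-list testNada extended permit udp any any eq snmp \n'
       'no snmp-server location\n'
       'no snmp-server contact\n')

# splitlines() on the string itself gives a flat list of lines,
# ready to hand straight to CiscoConfParse
flat = raw.splitlines()
print(flat[1])  # no snmp-server location

# split("\n") applied inside a loop over the one-element list produces
# a nested list instead, which is why indexing [0] was needed above
nested = []
for i in [raw]:
    nested.append(i.split("\n"))
print(nested[0][1])  # no snmp-server location
```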

How to remove/modify syslogd message's header?

I'm currently using the syslogd from BusyBox to log some information. However, I'm unable to modify the message's header.
I log the message like this:
syslog(LOG_INFO,"My message\n");
And I got this output:
Jul 4 15:00:11 halo user.info syslog: My message
I want to replace message's header with epoch time format like this:
1529293692,My message
Or is there any way to completely remove the message's header so I could manually add the epoch time in code?
I did some research and found that it is impossible to modify syslogd's output message format through any supported configuration. So I dug into BusyBox's source code and modified it. If you face the same issue, you can find the relevant code in the function:
static void timestamp_and_log(int pri, char *msg, int len)
I check the pri variable to see which log level it is and change the actual output message, which is msg.
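The "manually add the epoch time in code" fallback from the question can be sketched as a tiny helper that prefixes the message before it is handed to syslog. The original is C; this Python sketch uses a made-up helper name just to show the layout:

```python
import time

def epoch_message(msg):
    # Prefix the message with epoch seconds before it reaches syslog,
    # matching the desired "1529293692,My message" layout
    return "%d,%s" % (int(time.time()), msg)

print(epoch_message("My message"))
```

This avoids patching BusyBox, at the cost of the syslogd header still being present in front of the prefixed message.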

logstash custom log that has xml tags inside

I have a custom log file that has plain text as well as XML tags. How do I capture these in separate fields? Here is what it looks like:
1/10/2017 4:16:35 AM :
Error thrown is:
No Error
Request sent is:
SCEO415154712
Response received is:
SCEO4151547trueTBAfalse7169-1TBAfalse2389-1
1/10/2017 4:16:35 AM :
Error thrown is:
No Error
*************************************************************************
Request sent is:
<InventoryMgmtRequest xmlns="http://www.af.com/Ecommerce/Worldwide/AvailabilityService/Schemas/InventoryMgmtRequest"><ns0:MsgHeader MessageType="FIXORD" MsgDate="10.01.2017 04:16:32" SystemOfOrigin="ISCS_DE" CommunityID="SG888" xmlns:ns0="http://www.av.com/Ecommerce/Worldwide/AvailabilityService/Schemas/InventoryMgmtRequest"><ns0:OrderID>SCEO4151547</ns0:OrderID><ns0:ReservationID></ns0:ReservationID><ns0:CRD></ns0:CRD></ns0:MsgHeader><ns0:MsgBody xmlns:ns0="http://www.ab.com/Ecommerce/Worldwide/AvailabilityService/Schemas/InventoryMgmtRequest"><ns0:Product Sku="CH562EE" Qty="1" IsExpress="false" IsTangible="true" Region="EMEA" Country="DE"><ns0:ProdType></ns0:ProdType><ns0:LineItemNum>1</ns0:LineItemNum><ns0:JCID></ns0:JCID></ns0:Product><ns0:Product Sku="CH563EE" Qty="1" IsExpress="false" IsTangible="true" Region="EMEA" Country="DE"><ns0:ProdType></ns0:ProdType><ns0:LineItemNum>2</ns0:LineItemNum><ns0:JCID></ns0:JCID></ns0:Product></ns0:MsgBody></InventoryMgmtRequest>
*************************************************************************
Response received is:
<ns0:InventoryMgmtResponse xmlns:ns0="http://www.ad.com/Ecommerce/Worldwide/AvailabilityService/Schemas/InventoryMgmtResponse"><ns0:MsgHeader MsgDate="10.01.2017 04:16:32" MessageType="FIXORD"><ns0:OrderID>SCEO4151547</ns0:OrderID><ns0:ReservationID /><ns0:ReadyToRelease>true</ns0:ReadyToRelease></ns0:MsgHeader><ns0:MsgBody><ns0:Product SKU="CH562EE" LSPSKU="9432GFT" OutOfStock="false" FulfillmentSite="00ZF" SKUExist="true" Region="EMEA" Country="DE" IsTangible="true"><ns0:EDD>TBA</ns0:EDD><ns0:FutureUsed>false</ns0:FutureUsed><ns0:CurrentQty>7169</ns0:CurrentQty><ns0:FutureQty>-1</ns0:FutureQty></ns0:Product><ns0:Product SKU="CH563EE" LSPSKU="9432GFU" OutOfStock="false" FulfillmentSite="00ZF" SKUExist="true" Region="EMEA" Country="DE" IsTangible="true"><ns0:EDD>TBA</ns0:EDD><ns0:FutureUsed>false</ns0:FutureUsed><ns0:CurrentQty>2389</ns0:CurrentQty><ns0:FutureQty>-1</ns0:FutureQty></ns0:Product></ns0:MsgBody></ns0:InventoryMgmtResponse>
*************************************************************************
Also, I don't want to capture the line separators (the lines full of **** at the end) in my grok fields.
There is no simple answer here, I'm afraid. Logstash and other log processing tools work line by line: each line is an event. If your events span more than one line you can use the multiline codec, which is pretty powerful, but in my experience you are better off trying to get the logs onto single lines at the source; this makes it much easier to write a pattern and get the process working reliably.
The issues you have here are many, but if, for example, one of your messages is retransmitted for some reason (sent via TCP) or simply lost (sent via UDP), your pattern will break because part of the message that Logstash is expecting is not there.
The best thing you can do, in my opinion, is to change the logging process to save one line per event to the file. Most logging tools should allow this with the right config options. Ideally, get your application to log in JSON format (assuming you're processing logs to save them in Elasticsearch); this involves the lowest overhead on the Logstash server, as Elasticsearch stores documents in JSON anyway. All you would then need to do is pass each event/log line to the json filter, and the fields are generated from the names given to them by your application.
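As a sketch of the "log in JSON, one event per line" suggestion, here is a minimal formatter (the class name and field names are my own choices, not anything Logstash mandates):

```python
import json
import logging

class JsonLineFormatter(logging.Formatter):
    # Emit each event as a single JSON line, so Logstash's json filter
    # can parse it without any multiline handling
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

fmt = JsonLineFormatter()
record = logging.makeLogRecord({'name': 'app', 'levelname': 'INFO',
                                'msg': 'Request sent'})
print(fmt.format(record))
```

Attach such a formatter to a FileHandler and every event, XML payload included, lands on one line with named fields already in place.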

Is there a printk-style log parser?

The journald of systemd supports kernel-style logging: a service can write messages to stderr starting with "<6>", and they'll be parsed as info; "<4>" as warning; and so on.
But while developing, the service is launched outside of systemd. Are there any ready-to-use utilities to convert these numbers into readable colored strings? (It would be nice if that doesn't complicate the gdb workflow.) I don't want to roll my own.
There is no ready-made tool to convert the output, but a simple sed run would do the magic.
As you said, journald strips the <x> token from the beginning of your log message and converts it to a log level. What I would do is check for some environment variable in the code. For example:
if (COLOR_OUTPUT_SET)
    printf("[ WARNING ] - Oh, snap\n");
else
    printf("<4> Oh, snap\n");
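If you'd rather not touch the service's code at all, a small stdlib filter in the spirit of the sed pass mentioned above (function and table names are my own) can rewrite the <N> prefixes into readable labels while developing outside systemd:

```python
import re

# syslog/printk priority numbers mapped to readable labels
LEVELS = {0: "EMERG", 1: "ALERT", 2: "CRIT", 3: "ERR",
          4: "WARNING", 5: "NOTICE", 6: "INFO", 7: "DEBUG"}

def decorate(line):
    # Replace a leading "<N>" token with its label; pass other lines through
    m = re.match(r"<([0-7])>\s*(.*)", line)
    if not m:
        return line
    return "[%s] %s" % (LEVELS[int(m.group(1))], m.group(2))

print(decorate("<4> Oh, snap"))  # [WARNING] Oh, snap
```

Piping the service's stderr through such a filter leaves the binary itself unchanged, so it shouldn't interfere with attaching gdb to the process.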

How to see exactly what went wrong in Behave

We recently started using Behave (github link) for BDD of a new python web service.
Question
Is there any way we can get detailed info about the cause when a test fails? The tests throw AssertionError, but they never show what exactly went wrong, for example the expected value and the actual value that went into the assert.
We have been trying to find an existing feature like this, but I guess it does not exist. Naturally, a good answer to this question would be hints and tips on how to achieve this behavior by modifying the source code, and whether this feature exists in other, similar BDD frameworks like JBehave, NBehave, or Cucumber.
Example
Today, when a test fails, the output says:
Scenario: Logout when not logged in # features\logout.feature:6
Given I am not logged in # features\steps\logout.py:5
When I log out # features\steps\logout.py:12
Then the response status should be 401 # features\steps\login.py:18
Traceback (most recent call last):
  File "C:\pro\venv\lib\site-packages\behave\model.py", line 1037, in run
    match.run(runner.context)
  File "C:\pro\venv\lib\site-packages\behave\model.py", line 1430, in run
    self.func(context, *args, **kwargs)
  File "features\steps\login.py", line 20, in step_impl
    assert context.response.status == int(status)
AssertionError
Captured stdout:
api.new_session
api.delete_session
Captured logging:
INFO:urllib3.connectionpool:Starting new HTTP connection (1): localhost
...
I would like something more like:
Scenario: Logout when not logged in # features\logout.feature:6
Given I am not logged in # features\steps\logout.py:5
When I log out # features\steps\logout.py:12
Then the response status should be 401 # features\steps\login.py:18
ASSERTION ERROR
Expected: 401
But got: 200
As you can see, the assertion in our generic step clearly prints
`assert context.response.status == int(status)`
but I would rather have a function like
assert(behave.equals, context.response.status, int(status))
or anything else that makes it possible to generate dynamic messages from the failed assertion.
Instead of using "raw assert" statements like in your example above, you can use another assertion provider, like PyHamcrest, which will provide you with the desired details.
It will show you what went wrong, like:
# -- file:features/steps/my_steps.py
from hamcrest import assert_that, equal_to
...
assert_that(context.response.status, equal_to(int(status)))
See also:
http://jenisys.github.io/behave.example/intro.html#select-an-assertation-matcher-library
https://github.com/jenisys/behave.example
According to https://pythonhosted.org/behave/tutorial.html?highlight=debug, the following implementation works for me.
A “debug on error/failure” functionality can easily be provided, by using the after_step() hook. The debugger is started when a step fails.
It is in general a good idea to enable this functionality only when needed (in interactive mode). This is accomplished in this example by using an environment variable.
# -- FILE: features/environment.py
# USE: BEHAVE_DEBUG_ON_ERROR=yes (to enable debug-on-error)
from distutils.util import strtobool as _bool
import os

BEHAVE_DEBUG_ON_ERROR = _bool(os.environ.get("BEHAVE_DEBUG_ON_ERROR", "no"))

def after_step(context, step):
    if BEHAVE_DEBUG_ON_ERROR and step.status == "failed":
        # -- ENTER DEBUGGER: Zoom in on failure location.
        # NOTE: Use IPython debugger, same for pdb (basic python debugger).
        import ipdb
        ipdb.post_mortem(step.exc_traceback)
Don't forget you can always add an info message to an assert statement. For example:
assert output == expected, f'{output} is not {expected}'
I find that using PyHamcrest assertions yields much better error reporting than standard Python assertions.
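For the plain-assert route, a tiny helper (the name is my own invention) is enough to produce exactly the "Expected / But got" output the question asks for, with no extra libraries:

```python
def assert_equals(actual, expected):
    # Raise AssertionError carrying both values, so Behave's failure
    # output shows what was expected and what actually came back
    assert actual == expected, "Expected: %s\nBut got: %s" % (expected, actual)

try:
    assert_equals(200, 401)
except AssertionError as e:
    print(e)
```

Calling such a helper from each generic step gives the dynamic failure messages without changing Behave itself.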
