I have exposed a route in my Sanic app to set the log-level based on the client call. E.g.
from sanic.response import json
from sanic.log import logger, logging

# app = Sanic(...) is defined elsewhere
@app.route("/main")
async def sanic_main(request):
    logger.info("Info mesg")
    logger.debug("Debug mesg")
    return json("processed")

@app.route("/setlevel")
async def setlevel(request):
    level = request.json["level"]
    if level == "info":
        loglevel = logging.INFO
    elif level == "debug":
        loglevel = logging.DEBUG
    logger.setLevel(loglevel)
    return json("done")
When switching the log level between DEBUG and INFO, however, I am observing flaky behavior: the DEBUG messages (from "/main") get printed only some of the time, and vice versa.
NOTE: I am running multiple Sanic workers
How should I go about dynamically setting the log level?
I have never done anything like this, but the sanic.log.logger is just an instance of <class 'logging.Logger'>. So, using setLevel should be fine.
The question is how are you running your app, and how many workers are you using? If you are in a situation where you have multiple processes, then using /setlevel would only change the logger for that one process.
One way to do this is to use aioredis to broadcast the new level to every worker, as captured in this blog post:
https://medium.com/@sandeepvaday/route-triggered-state-update-across-python-sanic-workers-a0f7ab0f6e4
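A minimal sketch of that idea follows, assuming a Redis server at redis://localhost and a channel name of my choosing ("loglevel"); it is not the blog's exact code, just an illustration of broadcasting the level so every worker applies it to its own logger (aioredis 2.x API):

import aioredis
from sanic import Sanic
from sanic.log import logger, logging
from sanic.response import json

app = Sanic("app")
LEVELS = {"info": logging.INFO, "debug": logging.DEBUG}

@app.route("/setlevel", methods=["POST"])
async def setlevel(request):
    # Publish the requested level; every worker picks it up below.
    # (In real code you would reuse a single Redis connection.)
    redis = aioredis.from_url("redis://localhost")
    await redis.publish("loglevel", request.json["level"])
    return json("done")

@app.listener("after_server_start")
async def subscribe_loglevel(app, loop):
    # Each worker runs its own subscriber and applies the level locally.
    redis = aioredis.from_url("redis://localhost")
    pubsub = redis.pubsub()
    await pubsub.subscribe("loglevel")

    async def reader():
        async for msg in pubsub.listen():
            if msg["type"] == "message":
                level = LEVELS.get(msg["data"].decode())
                if level is not None:
                    logger.setLevel(level)

    app.add_task(reader())

Each worker holds its own logging.Logger instance, so the level has to be set inside every process; the pub/sub channel is what gets the change to all workers instead of just the one that happened to serve /setlevel.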
How to prevent apscheduler from printing job misfire (error) warnings to the console?
As you can see in the console output, the job misfire event is captured and handled in a proper way.
But the red message from apscheduler scares normal users; they think the program has crashed, while nothing is wrong at all.
Why print this to the console if an event listener is defined? After defining a scheduler (EVENT_JOB_MISSED) event listener, the programmer is responsible for the console output.
APScheduler is a great module, but this is a minor annoyance.
def SetScheduler():
    global shedul
    from apscheduler.schedulers.background import BackgroundScheduler
    from apscheduler.events import EVENT_JOB_ERROR, EVENT_JOB_MISSED

    shedul = BackgroundScheduler()
    # shed_listener (defined elsewhere) handles missed/errored jobs
    shedul.add_listener(shed_listener, EVENT_JOB_MISSED | EVENT_JOB_ERROR)
Console output:
You have to adjust your logging configuration to filter out these messages.
If you don't need to display any apscheduler logging, you could do:
import logging
logging.getLogger("apscheduler").propagate = False
If you want to display other messages but not these specific ones, you may need to add a filter to that logger; I have no experience with that, but you can check out the documentation yourself.
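For completeness, a filter along those lines could look like the sketch below. It assumes the misfire warning is emitted by the default executor's logger (apscheduler.executors.default) and contains the phrase "was missed by"; check your actual console output before relying on either assumption, since logger filters only apply to the logger a record is emitted on, not to its parents.

import logging

class SuppressMisfireFilter(logging.Filter):
    def filter(self, record):
        # Returning False drops the record; everything else passes through.
        return "was missed by" not in record.getMessage()

# Attach to the logger that actually emits the warning (assumed name),
# because a filter on the parent "apscheduler" logger would not catch it.
logging.getLogger("apscheduler.executors.default").addFilter(SuppressMisfireFilter())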
I'm forwarding alert messages from an AWS Lambda function to Sentry using the sentry_sdk in Python.
The problem is that even if I use scope.clear() before capture_message(), the events I receive in Sentry are enriched with information about the runtime environment the message is captured in (the AWS Lambda Python environment), which in this scenario is completely unrelated to the actual alert I'm forwarding.
My Code:
sentry_sdk.init(dsn, environment="name-of-stage")
with sentry_sdk.push_scope() as scope:
# Unfortunately this does not get rid of lambda specific context information.
scope.clear()
# here I set relevant information which works just fine.
scope.set_tag("priority", "high")
result = sentry_sdk.capture_message("mymessage")
The behaviour does not change if I pass scope as an argument to capture_message().
The tag I set manually is being transmitted just fine, but I also receive information about the Python runtime - so either scope.clear() does not behave like I expect it to, or capture_message() gathers additional information itself.
Can someone explain how to capture only the information I'm actively assigning to the scope with set_tag and similar functions, and suppress everything else?
Thank you very much
While I didn't find an explanation for the behaviour, I was able to solve my problem (even though it's a little bit hacky).
The solution was to use Sentry's before_send hook in the init step, like so:
sentry_sdk.init(dsn, environment="test", before_send=cleanup_event)
with sentry_sdk.push_scope() as scope:
sentry_sdk.capture_message(message, state, scope)
# when using sentry from lambda don't forget to flush otherwise messages can get lost.
sentry_sdk.flush()
Then it gets a little bit ugly in the cleanup_event function. I basically iterate over the keys of the event and remove the ones I do not want to show up. Since some keys hold objects and some (like "tags") are a list of [key, value] entries, this was quite a hassle.
from contextlib import suppress

KEYS_TO_REMOVE = {
    "platform": [],
    "modules": [],
    "extra": ["sys.argv"],
    "contexts": ["runtime"],
}
TAGS_TO_REMOVE = ["runtime", "runtime.name"]

def cleanup_event(event, hint):
    for (k, v) in KEYS_TO_REMOVE.items():
        with suppress(KeyError):
            if v:
                for i in v:
                    del event[k][i]
            else:
                del event[k]
    # Rebuild the list instead of removing entries while iterating over it.
    event["tags"] = [t for t in event["tags"] if t[0] not in TAGS_TO_REMOVE]
    return event
I'm building an application which is intended to do a bulk job, processing data within another piece of software. To control the other software automatically I'm using pyautoit, and everything works fine, except for application errors caused by the external software, which occur from time to time.
To handle those cases, I built a watchdog:
It starts the script with the bulk job within a subprocess
process = subprocess.Popen(['python', job_script, src_path],
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE, shell=True)
It listens to the system event using winevt.EventLog module
EventLog.Subscribe('System', 'Event/System[Level<=2]', handle_event)
If an error occurs, it shuts everything down and restarts the script.
OK, if a system error event occurs, this event should be handled in a way that notifies the subprocess. This notification should then lead to the following action within the subprocess:
Within the subprocess there's an object controlling everything and continuously collecting generated data. In order not to have to start the whole job from the beginning after restarting the script, this object has to be dumped using pickle (which isn't the problem here!).
Listening to the system event from inside the subprocess didn't work; it results in a continuous loop when calling subprocess.Popen().
So, my question is how I can either subscribe to system events from inside a child process, or communicate between the parent and child process - that is, send a message like "hey, an error occurred", listen for it within the subprocess and then create the dump?
I'm really sorry that I'm not allowed to post any code in this case, but I hope (and actually think) that my description is understandable. My question is just about which module to use to accomplish this in the best way.
I would be really happy if somebody could point me in the right direction...
Br,
Mic
I believe the best answer may lie here: https://docs.python.org/3/library/subprocess.html#subprocess.Popen.stdin
These attributes should allow for proper communication between the different processes fairly easily, and without any other dependencies.
Note that Popen.communicate() may suit you better if other processes might cause issues.
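For a one-shot exchange (send the input once, collect all output, wait for exit), a minimal communicate() sketch could look like this; child.py is just a hypothetical script name:

import subprocess, sys

# Start the child, hand it one message on stdin; communicate() then collects
# everything from stdout/stderr and waits for the process to exit.
p = subprocess.Popen([sys.executable, 'child.py'],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, text=True)
out, err = p.communicate(input='dump\n', timeout=30)
print(out)

For the long-running, back-and-forth case described in the question, the readline/write approach in the scripts below fits better, since communicate() can only be used once per process.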
EDIT to add example scripts:
main.py
from subprocess import *
import sys

def check_output(p):
    out = p.stdout.readline()
    return out

def send_data(p, data):
    p.stdin.write(bytes(f'{data}\r\n', 'utf8'))  # auto newline
    p.stdin.flush()

def initiate(p):
    #p.stdin.write(bytes('init\r\n', 'utf8'))  # function to send first communication
    #p.stdin.flush()
    send_data(p, 'init')
    return check_output(p)

def test(p, data):
    send_data(p, data)
    return check_output(p)

def main():
    exe_name = 'Doc2.py'
    p = Popen([sys.executable, exe_name], stdout=PIPE, stderr=STDOUT, stdin=PIPE)
    print(initiate(p))
    print(test(p, 'test'))
    print(test(p, 'test2'))  # testing responses
    print(test(p, 'test3'))

if __name__ == '__main__':
    main()
Doc2.py
import sys, time, random

def recv_data():
    return sys.stdin.readline()

def send_data(data):
    # flush so the parent's readline() sees the reply immediately
    print(data, flush=True)

while 1:
    d = recv_data()
    #print(f'd: {d}')
    if d.strip() == 'test':
        send_data('return')
    elif d.strip() == 'init':
        send_data('Acknowledge')
    else:
        send_data('Failed')
This is the best method I could come up with for cross-process communication. Also make sure all requests and responses don't contain newlines, or the code will break.
I've tried to disable the View Results Tree listener with Groovy code. The code runs, and correctly shows and changes the name and enabled property (as reported by the log), but the listener keeps showing info in the GUI and keeps writing to file (in both GUI and non-GUI mode). Listeners are processed at the end, so IMHO the code executed in the setUp thread should have an effect on the logging of the other threads. What does the enabled property do?
I've seen a workaround that edits the JMeter plan file in place (JMeter: how can I disable a View Results Tree element from the command line?), but I'd like an internal JMeter/Groovy solution.
The code (interestingly, each listener is processed twice; the first pass prints View Results Tree, the next already prints foo):
import org.apache.jmeter.engine.StandardJMeterEngine
import org.apache.jorphan.collections.HashTree
import org.apache.jorphan.collections.SearchByClass
import org.apache.jmeter.reporters.ResultCollector

def engine = ctx.getEngine()
def test = engine.getClass().getDeclaredField("test")
test.setAccessible(true)
def testPlanTreeRC = (HashTree) test.get(engine)
def rcSearch = new SearchByClass<>(ResultCollector.class)
testPlanTreeRC.traverse(rcSearch)
def rcTrees = rcSearch.getSearchResults()

for (rc in rcTrees) {
    log.error(rc.getName())
    if (rc.isEnabled()) { log.error("yes") } else { log.error("no") }
    rc.setEnabled(false)
    if (rc.isEnabled()) { log.error("yes") } else { log.error("no") }
    if (rc.getName() == "View Results Tree") { rc.setName("foo") }
}
ADDED: when the listener is disabled in the test plan in the GUI, it's not found by the traverse code above.
The enabled/disabled property is used/checked by JMeter on startup, so this would require a change in the JMeter code.
I opened an enhancement request, Add option to disable View Results Tree/Listeners in non GUI, which you can vote on.
There are other options: execute JMeter externally using the Taurus tool, or execute JMeter from Java and disable the listeners there:
HashTree testPlanTree = SaveService.loadTree(new File("/path/to/your/testplan"));

SearchByClass<ResultCollector> listenersSearch = new SearchByClass<>(ResultCollector.class);
testPlanTree.traverse(listenersSearch);
Collection<ResultCollector> listeners = listenersSearch.getSearchResults();
listeners.forEach(listener -> listener.setProperty(TestElement.ENABLED, false));

StandardJMeterEngine jmeter = new StandardJMeterEngine();
jmeter.configure(testPlanTree);
jmeter.run();
I am developing an Express project which will have multiple modules/services in it. The folder structure looks mostly like this:
-- app.js
-- payment_service
   -- routes.js
   -- index.js
   -- models
      -- model_1.js
      -- model_2.js
The APIs in index.js are the only exposed APIs, and they act as a gateway for all requests coming to this module/service.
Most of the services can throw operational errors under many circumstances, so manual intervention may be needed to fix things. So I need to:
Log errors properly, with proper context, so that a person/script can take the necessary action.
Figure out the reason for the failure.
There will be dedicated teams owning each service, so I should be able to differentiate between the error logs of each service so that they can be aggregated and forwarded to the concerned person.
I decided to go with the ELK stack so that I can generate reports by script.
The main problem I am facing is that I can't maintain correlation between logs. For example, if a request comes in and travels through five functions, and each function logs something, then I can't relate those logs.
One way is to create a child logger for each request and pass it to all the functions, but passing a logger instance to every function seems like extra overhead.
Another option is to use something like verror and do the logging only at the entry point of the service/module, so that the whole context can be contained in the log. This approach looks OK for logging errors, however it can't help with info and debug logs - they help me a lot in the development and testing phase.
For the sake of differentiating between error logs, I am going to create:
A dedicated logger for each service with log level error.
An application-wide generic logger for info and debug purposes.
Is this the correct approach?
What will be the best way so that I can achieve all the requirements in simplest way?
I'd recommend you use a logger and you don't need anything too complex. For example:
npm install 12factor-log
Then create a file in your root folder near app.js (or in a /lib folder, which is where I'd place libraries):
logger.js
const Log = require('12factor-log');

module.exports = (params) => {
    return new Log(params);
}
Then in your modules, import your logger and pass in the module name when you instantiate it so you can track where statements come from...
model_1.js
var log = require('./logger')({name: 'model_1'});

// ...
log.info("Something happened here");
// ...

try {
    // ...
} catch (error) {
    const message = `Error doing x, y, z with value ${val}`;
    log.error(message);
    throw new Error(message);
}
Then handle the error gracefully at your controller -> view layer for a user-friendly experience.
Your logs would print something like this:
{"ts":"2018-04-27T16:37:24.914Z","msg":"Something happened here","name":"model_1","type":"info","level":3,"hostname":"localhost","pid":18}
As far as correlation of logs goes, the output above includes the hostname of the machine it's running on, as well as the module name and the severity level. You can import this JSON into Logstash and load it into Elasticsearch, which stores the JSON for easy search and indexing.
See: https://www.elastic.co/guide/en/logstash/current/plugins-filters-json.html
Logging is complex and many people have worked on it. I would suggest not doing so yourself.
So, not following my own advice, I created my own logging package:
https://www.npmjs.com/package/woveon-logger
npm install woveon-logger
This prints file and line numbers of errors and messages, has logging levels and aspect-oriented logging, and can dump a stack trace in one call. It even has color coding options. If you get stuck and need some feature in logging, let me know.
let log1 = new Logger('log1', {level: 'info', debug: true, showname: true});
let log2 = new Logger('log2', {level: 'verbose', debug: true, showname: true});
...
log1.info('Here is a log message, that is on line 23.');
log1.verbose('Does not show');
log2.verbose('Shows because log2 is verbose logging');
log2.setAspect('IO', true);
log2.aspect('IO', 'Showing aspect IO logging, for logs for IO related operations');
[2018-06-10T10:43:20.692Z] [INFO--] [log1 ] [path/to/myfile:23] Here is a log message, that is on line 23.
[2018-06-10T10:43:20.792Z] [VERBOS] [log2 ] [path/to/myfile:25] Shows because log2 is verbose logging
[2018-06-10T10:43:20.892Z] [IO----] [log2 ] [path/to/myfile:27] Showing aspect IO logging, for logs for IO related operations
Also, some other features like:
log1.throwError('Logs this as both a line of logging, and throws the error with the same message');
log1.printStack('Prints this label next to the stack trace.');
Hope it helps!
You can use the grackle_tracking library: https://www.getgrackle.com/analytics_and_tracking
It logs errors and traffic to your DB.