Logging in gunicorn log file is not detailed - python-3.x

I'm setting the log level to 'debug', which I recall is the most verbose, but I'm only getting lines like this, even when an exception is thrown:
[2018-04-18 22:08:21 +0000] [23394] [DEBUG] POST /json
My startup command is this:
gunicorn --log-level debug --error-logfile gunicorn_error.log -D -b 0.0.0.0:5000 forward_to_es:app
Thanks for any suggestions.

I recommend that you create a gunicorn config file. For example:
# /path-to-your-project/gunicorn_conf.py
bind = '0.0.0.0:8811'
worker_class = 'sync'
loglevel = 'debug'
accesslog = '/var/log/gunicorn/access_log_yourapp'
access_log_format = "%(h)s %(l)s %(u)s %(t)s %(r)s %(s)s %(b)s %(f)s %(a)s"
errorlog = '/var/log/gunicorn/error_log_yourapp'
In the Gunicorn documentation you can find all of the available identifiers for the access log format.
Then just do
/path-to-your-project/gunicorn -c gunicorn_conf.py forward_to_es:app
This way you can keep one or more configurations, or even create separate logs depending on the configuration you are trying out.
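If the missing detail is application exceptions, another option (a minimal sketch, not part of the answer above, and assuming forward_to_es is a Flask app, which the question does not actually say) is to have the app log through gunicorn's own error logger, which gunicorn exposes under the logger name "gunicorn.error":
# forward_to_es.py (sketch)
import logging
from flask import Flask, request

app = Flask(__name__)

# Reuse gunicorn's error-log handlers so app.logger writes to --error-logfile
gunicorn_logger = logging.getLogger("gunicorn.error")
app.logger.handlers = gunicorn_logger.handlers
app.logger.setLevel(gunicorn_logger.level)

@app.route("/json", methods=["POST"])
def forward():
    try:
        payload = request.get_json()
        # ... forward payload to Elasticsearch ...
        return "ok"
    except Exception:
        # logs the full traceback at ERROR level
        app.logger.exception("failed to forward request")
        raise
With the handlers shared like this, the traceback ends up in gunicorn_error.log next to the DEBUG request lines.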

Related

Apache module is not enabled

Attempting to use directives for an Apache module that is not enabled will result in apachectl configtest messages like the following.
Example error output:
(13)Permission denied: AH00957: HTTP: attempt to connect to 127.0.0.1:9090 (127.0.0.1) failed
The Apache error log may have more information.
I tried checking the logs and searched for the error online, but I was unable to find anything. Can someone please help with this?
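As a first check, you can confirm whether the module whose directives you are using is actually loaded (a sketch assuming a Debian/Ubuntu-style install, with mod_proxy_http only as an example module, since the post does not say which one is involved):
apachectl -M | grep proxy        # list loaded modules, filter for the proxy ones
sudo a2enmod proxy proxy_http    # enable them on Debian/Ubuntu
sudo systemctl restart apache2   # restart Apache so the change takes effect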

Integration of pubsubbeat with Elasticsearch

I am learning how to integrate Pub/Sub with Elasticsearch. There are various options, such as pubsubbeat, the Google_pubsub input plugin, and the Google Cloud Pub/Sub Output Plugin.
I am currently trying to use pubsubbeat and got stuck after running the command ./pubsubbeat -c pubsubbeat.yml -e -d "*" as suggested. The console log is as follows:
2019-05-23T14:42:19.949+0100 INFO instance/beat.go:468 Home path: [/home/amishra/pubsubbeat-linux-amd64] Config path: [/home/amishra/pubsubbeat-linux-amd64] Data path: [/home/amishra/pubsubbeat-linux-amd64/data] Logs path: [/home/amishra/pubsubbeat-linux-amd64/logs]
2019-05-23T14:42:19.949+0100 DEBUG [beat] instance/beat.go:495 Beat metadata path: /home/amishra/pubsubbeat-linux-amd64/data/meta.json
2019-05-23T14:42:19.949+0100 INFO instance/beat.go:475 Beat UUID: 4bd6119e-603a-426c-9d5b-6ac588bb000e
2019-05-23T14:42:19.949+0100 INFO instance/beat.go:213 Setup Beat: pubsubbeat; Version: 6.2.2
2019-05-23T14:42:19.949+0100 DEBUG [beat] instance/beat.go:230 Initializing output plugins
2019-05-23T14:42:19.949+0100 DEBUG [processors] processors/processor.go:49 Processors:
2019-05-23T14:42:19.952+0100 INFO pipeline/module.go:76 Beat name: allspark
2019-05-23T14:42:19.952+0100 INFO [PubSub: dev/elk-logstash-poc/logstash-poc] beater/pubsubbeat.go:54 config retrieved: &{Project:dev Topic:elk-logstash-poc CredentialsFile:/home/amishra/key/key.json Subscription:{Name:logstash-poc RetainAckedMessages:false RetentionDuration:5h0m0s} Json:{Enabled:false AddErrorKey:false}}
On second thought, I tried option 2 but was getting the error below and haven't been able to resolve it yet:
io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl onError
WARNING: [io.grpc.internal.ManagedChannelImpl-1] Failed to resolve name. status=Status{code=UNAVAILABLE, description=Unable to resolve host pubsub.googleapis.com, cause=java.net.UnknownHostException: pubsub.googleapis.com
Any lead on how to get this working would be a great help.
The issue got resolved in an unexpected way, i.e. by installing and deploying on a fresh machine. The root cause is still unknown.

systemd: how to set default log level for messages on stdout/stderr?

I am running an application as a systemd service. The application logs its output to stdout following systemd logging rules, prepending each log message with <x>, where x is the priority (log level):
<6> this is info
<7> this is debug
<4> this is warning
What I want is to store only messages with priority <= 6 in the journal, because I run from a flash disk. I don't want to store debug messages, nor the "trash" messages that are not marked with <>.
That seems easy enough: MaxLevelStore=info.
BUT the problem is that the "trash" written to stdout is marked as priority=6 (info) by default and is therefore also stored in the journal database. What I want is for it to be marked as debug (7) by default, so that from the following output:
<6> this is info
<7> this is debug
this is some trash
<4> this is warning
... will only ...
<6> this is info
<4> this is warning
... be stored to journal.
I can't find in any of the docs whether or how this is possible. Anybody?
Thank you
You want to use SyslogLevel=debug in the [Service] section of your service unit. This will cause all messages that aren't prefixed with a priority to default to a level of debug (7).
Documentation:
https://www.freedesktop.org/software/systemd/man/systemd.exec.html#SyslogLevel=
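Putting the answer together with the MaxLevelStore=info idea from the question, a minimal sketch could look like this (the unit file path and ExecStart line are placeholders):
# /etc/systemd/system/myapp.service (placeholder name)
[Service]
ExecStart=/usr/local/bin/myapp
# Unprefixed stdout/stderr lines now default to debug (7) instead of info (6)
SyslogLevel=debug

# /etc/systemd/journald.conf
[Journal]
# Keep info (6) and above; debug (7) messages are not stored
MaxLevelStore=info
After editing, run systemctl daemon-reload and restart both the service and systemd-journald for the changes to take effect.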

How to disable log4j default logging?

I am using the Restlet API for my web service implementation. I see that the following lines are printed to the console for every call to my web service:
Aug 5, 2016 12:30:09 PM org.restlet.engine.log.LogFilter afterHandle
INFO: 2016-08-05 12:30:09 172.23.4.200 - 172.23.7.44 8080 GET /abcservice/xyz - 200 86 0 21 http://localhost
There are so many calls to my web service that, as a result, my logs (Tomcat's catalina.out) are going crazy. I want to disable this logging.
I have configured the log4j settings in log4j.xml. How can I disable this logging?
You can set the root logger level to OFF (instead of WARN, DEBUG, INFO, etc.), e.g.:
log4j.rootLogger = OFF
However, I would recommend keeping the level at WARN. You will not get the INFO logs, but it will still tell you about warnings, errors, and fatal events in your application.
Priority of the logging levels:
ALL < DEBUG < INFO < WARN < ERROR < FATAL < OFF
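If you only want to silence the Restlet request lines rather than everything, a narrower option is to turn off just that logger category (a sketch in log4j.properties syntax; note that Restlet's LogFilter may actually log through java.util.logging rather than log4j, in which case a java.util.logging configuration would be needed instead):
# log4j.properties (sketch)
log4j.rootLogger=WARN, CONSOLE
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d %-5p %c - %m%n
# Silence only the Restlet engine loggers; application logging is unaffected
log4j.logger.org.restlet=OFF
If those lines do go through log4j, this drops the LogFilter output shown above while your own loggers keep WARN and above.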

Spark duplicated workers instantiated

On the spark master machine, I have the following config in my conf/slaves:
spark-slave1.com
spark-slave2.com
localhost
In conf/spark-env.sh, I have
export SPARK_WORKER_INSTANCES=1
I intended to have 1 worker on each of the host machines, 3 workers in total, when the Spark master is started.
Then I start the cluster with ./sbin/start-all.sh, yielding:
starting org.apache.spark.deploy.master.Master, logging to ...
spark-slave1.com: starting org.apache.spark.deploy.worker.Worker, logging to ...
localhost: starting org.apache.spark.deploy.worker.Worker, logging to ...
spark-slave2.com: starting org.apache.spark.deploy.worker.Worker, logging to ...
Visiting the Spark monitoring web interface at localhost:8080 shows 5 workers registered:
1 from localhost
2 from spark-slave1.com
2 from spark-slave2.com
All of them have status ALIVE.
What have I done wrong?
Let me know if any additional information is needed. I changed the hostnames for illustration purposes; they are actually local IPs.
Edit 1 - Added screen capture for reference
I have experienced the same issue; it happens because multiple worker instances are set in your spark-env.sh configuration file.
Modify it to export SPARK_WORKER_INSTANCES=1 and your problem will be solved.
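One detail worth spelling out (an assumption about the setup, not something stated in the question): conf/spark-env.sh is sourced on each host when start-all.sh launches the workers over SSH, so the value on spark-slave1.com and spark-slave2.com matters too, not just the one on the master. A sketch of the relevant line on every host:
# conf/spark-env.sh on the master and on each slave
export SPARK_WORKER_INSTANCES=1   # one worker per host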
