I'm facing a problem with the GET request logs of Schema Registry. When I check the log4j properties, I see it is configured as log4j.appender.file.File=${schema-registry.log.dir}/schema-registry.log, which works as intended (log files are located under /confluent-7.0.1/logs/).
My problem is that there are also files under /var/log/. They seem to be rotated into separate files from week to week.
-rw------- 1 root root 160273230 Jan 2 12:02 messages
-rw------- 1 root root 1831024355 Dec 18 03:10 messages-20221218
-rw------- 1 root root 706439179 Dec 25 03:07 messages-20221225
-rw------- 1 root root 1158507310 Jan 1 03:06 messages-20230101
The content of these files looks like this:
Dec 25 03:15:09 server_name bash: [2022-12-25 03:15:09,995] INFO 192.168.181.21 - kafkauser [25/Dec/2022:00:15:09 +0000] "GET /subjects/TOPIC_NAME-key/versions/latest HTTP/1.1" 200 178 "-" "-" GETsT (io.confluent.rest-utils.requests:62)
Dec 25 03:15:10 server_name bash: [2022-12-25 03:15:10,018] INFO 192.168.181.21 - kafkauser [25/Dec/2022:00:15:10 +0000] "GET /subjects/TOPIC_NAME-value/versions/latest HTTP/1.1" 200 2197 "-" "-" GETsT (io.confluent.rest-utils.requests:62)
Dec 25 03:15:10 server_name bash: [2022-12-25 03:15:10,078] INFO 192.168.181.20 - kafkauser [25/Dec/2022:00:15:10 +0000] "GET /subjects/TOPIC_NAME-key/versions/latest HTTP/1.1" 200 178 "-" "-" GETsT (io.confluent.rest-utils.requests:62)
Dec 25 03:15:10 server_name bash: [2022-12-25 03:15:10,098] INFO 192.168.181.20 - kafkauser [25/Dec/2022:00:15:10 +0000] "GET /subjects/TOPIC_NAME-value/versions/latest HTTP/1.1" 200 2197 "-" "-" GETsT (io.confluent.rest-utils.requests:62)
Is this logging coming from Schema Registry, or is it just part of the Linux system? In other words, is it the result of network logging or of Schema Registry logging? Either way, how can I stop it, or configure it to be written somewhere else? Thanks in advance.
I assume you have installed Confluent Platform in a way that uses systemctl? If so, then yes: systemd's journal captures the process's stdout/stderr, and the system logger forwards it to /var/log/messages.
You need to disable the ConsoleAppender in the log4j file to stop this.
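A minimal sketch, assuming the log4j.properties that ships with Confluent Platform, where the root logger lists both a stdout (console) appender and the file appender — keep only the file appender:

# /etc/schema-registry/log4j.properties (path varies by installation)
# Before (typical default): log4j.rootLogger=INFO, stdout, file
# After — nothing goes to stdout, so nothing reaches journald/syslog:
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=${schema-registry.log.dir}/schema-registry.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=[%d] %p %m (%c:%L)%n

Then restart the service (systemctl restart confluent-schema-registry, or whatever your unit is called).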
Related
I have an nginx cache server with a basic config, nothing special.
nginx version: openresty/1.13.6.1
built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.4)
built with OpenSSL 1.0.2k 26 Jan 2017
TLS SNI support enabled
log:
172.18.0.1 HIT -- - - 0.000 - - [06/Apr/2022:17:33:00 +0000] "HEAD /custom/xxxx/image/product/12.jpg HTTP/2.0" 200 0 "-" "curl/7.68.0" "-"
The cached version of this file is stored under the custom folder /tmp/cache1.
What I want is to display the cache hash in the log file.
E.g., the cache hash for this file is:
/tmp/cache1/a/0/e6920f37af6df573815a02933c7f480a
And what I want to see:
172.18.0.1 HIT -- - - 0.000 - - [06/Apr/2022:17:33:00 +0000] "HEAD /custom/xxxx/image/product/12.jpg HTTP/2.0" | e6920f37af6df573815a02933c7f480a | 200 0 "-" "curl/7.68.0" "-"
Is this possible? Is there any built-in variable to display this hash in the log files?
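(For reference: nginx has no built-in variable for the on-disk hash, but the cache file name is simply the MD5 of proxy_cache_key, which OpenResty can compute per request. A sketch, assuming the default key $scheme$proxy_host$request_uri — the set line must mirror your actual proxy_cache_key, and the log format only approximates the columns above:)

# In the http {} block: a log format with the hash spliced in
log_format cache_hash_log '$remote_addr $upstream_cache_status -- - - $request_time - - '
                          '[$time_local] "$request" | $cache_hash | $status '
                          '$body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"';

# In the server/location that serves from the cache:
set $cache_key $scheme$proxy_host$request_uri;   # must mirror proxy_cache_key
set_by_lua_block $cache_hash { return ngx.md5(ngx.var.cache_key) }
access_log /var/log/nginx/access.log cache_hash_log;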
I am using the Confluent Kafka platform. I have a topic with 4 partitions and a replication factor of 2, with a single ZooKeeper, three brokers and a Kafka REST proxy server. I am load testing the system with Siege, running 1000 users against a list of APIs which in turn hit the Kafka producer. My producer and consumer both use the REST proxy (kafka-rest). I am getting the following issue:
{ [Error: getaddrinfo EMFILE] code: 'EMFILE', errno: 'EMFILE', syscall: 'getaddrinfo' }
In the kafka-rest log I can see:
[2016-02-23 07:13:51,972] INFO 127.0.0.1 - - [23/Feb/2016:07:13:51 +0000] "POST /topics/endsession HTTP/1.1" 200 120 14 (io.confluent.rest-utils.requests:77)
[2016-02-23 07:13:51,973] INFO 127.0.0.1 - - [23/Feb/2016:07:13:51 +0000] "POST /topics/endsession HTTP/1.1" 200 120 15 (io.confluent.rest-utils.requests:77)
[2016-02-23 07:13:51,974] INFO 127.0.0.1 - - [23/Feb/2016:07:13:51 +0000] "POST /topics/endsession HTTP/1.1" 200 120 12 (io.confluent.rest-utils.requests:77)
[2016-02-23 07:13:51,978] INFO 127.0.0.1 - - [23/Feb/2016:07:13:51 +0000] "POST /topics/endsession HTTP/1.1" 200 120 6 (io.confluent.rest-utils.requests:77)
[2016-02-23 07:13:51,983] INFO 127.0.0.1 - - [23/Feb/2016:07:13:51 +0000] "POST /topics/endsession HTTP/1.1" 200 120 6 (io.confluent.rest-utils.requests:77)
[2016-02-23 07:13:51,984] INFO 127.0.0.1 - - [23/Feb/2016:07:13:51 +0000] "POST /topics/endsession HTTP/1.1" 200 120 4 (io.confluent.rest-utils.requests:77)
[2016-02-23 07:13:51,985] INFO 127.0.0.1 - - [23/Feb/2016:07:13:51 +0000] "POST /topics/endsession HTTP/1.1" 200 120 7 (io.confluent.rest-utils.requests:77)
[2016-02-23 07:13:51,993] INFO 127.0.0.1 - - [23/Feb/2016:07:13:51 +0000] "POST /topics/endsession HTTP/1.1" 200 120 3 (io.confluent.rest-utils.requests:77)
[2016-02-23 07:13:51,994] INFO 127.0.0.1 - - [23/Feb/2016:07:13:51 +0000] "POST /topics/endsession HTTP/1.1" 200 120 4 (io.confluent.rest-utils.requests:77)
[2016-02-23 07:13:51,999] WARN Accept failed for channel java.nio.channels.SocketChannel[closed] (org.eclipse.jetty.io.SelectorManager:714)
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
at org.eclipse.jetty.io.SelectorManager$ManagedSelector.processAccept(SelectorManager.java:706)
at org.eclipse.jetty.io.SelectorManager$ManagedSelector.processKey(SelectorManager.java:648)
at org.eclipse.jetty.io.SelectorManager$ManagedSelector.select(SelectorManager.java:611)
at org.eclipse.jetty.io.SelectorManager$ManagedSelector.run(SelectorManager.java:549)
at org.eclipse.jetty.util.thread.NonBlockingThread.run(NonBlockingThread.java:52)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
So I went through a lot of questions related to this and set my EC2 machine parameters so that I don't get the "too many open files" error, but it's not solved. I have reduced TIME_WAIT to 30 seconds, and ulimit -n is 80000.
I have collected some stats, and it looks like the Kafka REST proxy running on localhost:8082 is causing too many connections.
How do I solve this issue? Also, sometimes when the error appears I stop my Siege test, and once the TIME_WAIT connections have drained I restart the load test with only 1 user, yet I still see the same issue. Could it be a problem in the REST proxy wrapper for Node.js?
You need to increase the ulimit for that process. To check the ulimit for a particular process, run:
sudo cat /proc/<process_id>/limits
To increase the ulimit for a process running via supervisord, you can raise minfds in supervisord.conf.
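A minimal sketch, assuming supervisord manages the kafka-rest process (the value is illustrative; minfds applies to supervisord itself and is inherited by its child processes):

; supervisord.conf
[supervisord]
minfds=100000

If the process runs under systemd instead, the equivalent knob is LimitNOFILE= in the unit file. Either way, verify the new limit with the /proc check above.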
I have noticed numerous entries in Tomcat's local_access_log for various resources coming from IP address 127.0.0.1. These are clearly attempts to hack in. For example, here is a request to get access to the "manager" app:
127.0.0.1 - - [30/Apr/2015:13:35:13 +0000] "GET /manager/html HTTP/1.1" 401 2474
here is another one:
127.0.0.1 - - [30/Apr/2015:21:23:37 +0000] "POST /cgi-bin/php?%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%75%64%65%3D%6F%6E+%2D%64+%73%61%66%65%5F%6D%6F%64%65%3D%6F%66%66+%2D%64+%73%75%68%6F%73%69%6E%2E%73%69%6D%75%6C%61%74%69%6F%6E%3D%6F%6E+%2D%64+%64%69%73%61%62%6C%65%5F%66%75%6E%63%74%69%6F%6E%73%3D%22%22+%2D%64+%6F%70%65%6E%5F%62%61%73%65%64%69%72%3D%6E%6F%6E%65+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A%2F%2F%69%6E%70%75%74+%2D%64+%63%67%69%2E%66%6F%72%63%65%5F%72%65%64%69%72%65%63%74%3D%30+%2D%64+%63%67%69%2E%72%65%64%69%72%65%63%74%5F%73%74%61%74%75%73%5F%65%6E%76%3D%22%79%65%73%22+%2D%64+%63%67%69%2E%66%69%78%5F%70%61%74%68%69%6E%66%6F%3D%31+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A%2F%2F%69%6E%70%75%74+%2D%6E HTTP/1.1" 404 1016
When decoded, the URL is this:
127.0.0.1 - - [30/Apr/2015:21:23:37 +0000] "POST /cgi-bin/php?-d allow_url_include=on -d safe_mode=off -d suhosin.simulation=on -d disable_functions="" -d open_basedir=none -d auto_prepend_file=php://input -d cgi.force_redirect=0 -d cgi.redirect_status_env="yes" -d cgi.fix_pathinfo=1 -d auto_prepend_file=php://input -n HTTP/1.1" 404 1016
There are lots of such entries, all from IP address 127.0.0.1. Obviously, since this is the address of localhost, I can't block it. Moreover, I am not sure there is anything I can do about it. Is there possibly an exploit that should be patched? For instance, is there a version of Tomcat with a related vulnerability? I am running Tomcat 8.
Much thanks for any advice!
UPDATE: thanks for the suggestion about a proxy. It turned out that httpd was indeed installed and, not surprisingly, there are suspicious requests. For example:
[Sat Mar 30 17:26:49 2013] [error] [client 5.34.247.59] Invalid URI in request GET /_mem_bin/../../../../winnt/system32/cmd.exe?/c+dir HTTP/1.0
[Sat Mar 30 17:26:49 2013] [error] [client 5.34.247.59] Invalid URI in request GET /_mem_bin/../../../../winnt/system32/cmd.exe?/c+dir%20c:\\ HTTP/1.0
[Sat Mar 30 17:26:49 2013] [error] [client 5.34.247.59] Invalid URI in request GET /_mem_bin/../../../../winnt/system32/cmd.exe?/c+dir%20c:\\ HTTP/1.0
This is not a Windows system, so cmd.exe has no place here...
If you have a proxy server running on the same machine, it will often receive requests and then call the backend server over the localhost (127.0.0.1) interface.
This could explain why you're logging these requests.
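If that is the case, you can make Tomcat log the original client address instead of 127.0.0.1 by enabling the RemoteIpValve — a sketch, assuming the proxy sets an X-Forwarded-For header (goes inside the <Host> element of conf/server.xml; the log file prefix below just mirrors your existing one):

<!-- Trust X-Forwarded-For from the local proxy so the access log
     records the real client IP rather than 127.0.0.1 -->
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       internalProxies="127\.0\.0\.1"
       remoteIpHeader="X-Forwarded-For" />
<!-- requestAttributesEnabled makes the access log pick up the
     address rewritten by RemoteIpValve -->
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="logs" prefix="local_access_log" suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %b"
       requestAttributesEnabled="true" />

That lets you identify (and block at the proxy or firewall level) the real source of these requests.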
Yesterday I tried to set up a virtual host.
I did a few steps to make it work:
I removed the hash (#) in /opt/lampp/etc/extra/; now it's:
# Virtual hosts
Include etc/extra/httpd-vhosts.conf
I edited /opt/lampp/etc/extra/httpd-vhosts.conf to this:
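(The vhost snippet itself is missing from the post; a hypothetical reconstruction of the shape implied by the rest of the question:)

<VirtualHost *:80>
    ServerName pic.localhost
    DocumentRoot "/opt/lampp/pictures"
    <Directory "/opt/lampp/pictures">
        # Apache 2.4 denies access by default; without a line like this,
        # requests fail with the AH01630 "client denied by server
        # configuration" error shown below.
        Require all granted
    </Directory>
</VirtualHost>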
I created a directory for pic.localhost with 3 commands as root:
mkdir /opt/lampp/pictures/
chown -R daemon:daemon /opt/lampp/pictures/
chmod -R 770 /opt/lampp/pictures/
I added the following line to the /etc/hosts file:
127.0.0.1 pic.localhost
I restarted XAMPP (version 5.6.8) and it's not working. What did I do wrong?
The log file picture-access_log shows:
5.29.203.187 - - [01/Jul/2015:18:04:07 +0300] "GET / HTTP/1.1" 403 1036
The log file picture-error_log shows:
[Wed Jul 01 18:04:07.173810 2015] [authz_core:error] [pid 24261] [client 5.29.203.187:57710] AH01630: client denied by server configuration: /opt/lampp/pictures/
I have a setup with MCollective 1.2.0, Puppet 2.6.4 and a provisioning agent. Most of the time this works great, but sometimes (every 10th node or so) the signing requests of Puppet agents do not get signed on the master.
In this case every request from the Puppet agent to "/production/certificate/..." fails with an HTTP 404 error.
The problem is also hard to analyze because the log output is not very detailed.
Puppet-Agent:
Jun 18 16:10:38 ip-10-242-62-183 puppet-agent[1001]: Creating a new SSL key for ...
Jun 18 16:10:38 ip-10-242-62-183 puppet-agent[1001]: Caching certificate for ca
Jun 18 16:10:41 ip-10-242-62-183 puppet-agent[1001]: Creating a new SSL certificate request for ...
Jun 18 16:10:41 ip-10-242-62-183 puppet-agent[1001]: Certificate Request fingerprint (md5): 6A:3F:63:8A:59:2C:F6:C9:5E:56:5F:39:16:FF:19:BE
Puppet-Master:
"GET /production/certificate/a.b.c.d HTTP/1.1" 404
"GET /production/certificate_request/a.b.c.d HTTP/1.1" 404
"GET /production/certificate/a.b.c.d HTTP/1.1" 404
"GET /production/certificate/a.b.c.d HTTP/1.1" 404
"GET /production/certificate/a.b.c.d HTTP/1.1" 404
"GET /production/certificate/a.b.c.d HTTP/1.1" 404
... last message repeats endlessly
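(For anyone debugging the same sparse log output: a hedged sketch of how to get more detail, assuming Puppet 2.6's CLI:)

# On a failing node: single foreground run with full debug output
puppet agent --test --debug --verbose

# On the master: stop the service, then run it in the foreground
puppet master --no-daemonize --debug --verbose

# List pending certificate signing requests on the master
puppet cert --list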
Does anyone have a clue about this?
Markus