Graylog2 sidecar collector logs not showing up on the dashboard

I installed the sidecar collector and configured it with the Filebeat backend; it's running successfully. I created an output and attached some inputs to it for general log files, but no logs are showing up on the dashboard yet. Here's my debug output, which gives me nothing useful:
sudo service collector-sidecar stop
graylog-collector-sidecar -c /etc/graylog/collector-sidecar/collector_sidecar.yml
INFO[0000] Using collector-id: xxx
INFO[0000] Fetching configurations tagged by: [syslog linux]
INFO[0000] Starting collector supervisor
INFO[0000] [filebeat] Starting
INFO[0010] [filebeat] Configuration change detected, rewriting configuration file.
INFO[0010] [filebeat] Stopping
INFO[0014] [filebeat] Starting
Should I also create an input under System -> Inputs? How can I debug why the logs are not showing up? What am I missing here?

I'll give this a shot...
Recently I had a similar problem, which in the end was caused by running the graylog2 Docker container without opening port 5044. That seems to be the port on which the sidecar delivers content, whereas
its heartbeat seems to go over port 9000, which I had open, so the Graylog web interface told me the collector was running OK.
Since you configure an input in the collector configuration, you should not have to do so in the inputs section as well.
When I look into my inputs section, the collector-configured input shows up as a local input.
After adjusting the Docker container to open up port 5044, everything went all right.
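For reference, a minimal sketch of starting the container with both ports published (the image name is an assumption, and the MongoDB/Elasticsearch options Graylog also needs are omitted):
# Publish the web interface/heartbeat port (9000) and the Beats port (5044)
# that the sidecar's filebeat backend ships logs to; image name assumed
docker run -d -p 9000:9000 -p 5044:5044 graylog/graylog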
This refers to Graylog running on Docker; I don't know if that's your environment, but it might help anyway ;-)
cheers,
Arend

Related

Running history server behind reverse proxy

My use case
Write a Docker container to run the history server on port 18080
Pull the container and run it on a Jupyter notebook instance
Verify that the history server is successfully running at https://{my-instance-domain-name}/proxy/18080/applications by setting spark.ui.proxyBase to /proxy/18080 (it's running behind a proxy)
(Screenshot: History Server landing page)
When I click one of the application IDs, the link is https://{my-instance-domain-name}/proxy/18080/history/application_1592874010090_0001/1/jobs/, and it never works; the page loads forever.
I found the option spark.ui.proxyRedirectUri, which might be useful, but I'm not sure about it. Does anyone know what is happening here?
I used to solve it with Nginx and a sub_filter config: https://github.com/jahstreet/spark-on-kubernetes-helm/blob/master/charts/spark-cluster/values.yaml#L91-L135 . Please let me know if additional description is required.
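For illustration, a rough sketch of that sub_filter approach (the upstream host history-server and the config file path are assumptions, not taken from the linked chart):
# Proxy the history server under /proxy/18080/ and rewrite root-relative
# links in its HTML so they stay under that prefix (needs ngx_http_sub_module)
cat > /etc/nginx/conf.d/spark-history.conf <<'EOF'
server {
    listen 80;  # TLS termination omitted for brevity
    location /proxy/18080/ {
        proxy_pass http://history-server:18080/;  # assumed upstream host
        proxy_set_header Accept-Encoding "";      # let sub_filter see uncompressed HTML
        sub_filter_once off;
        sub_filter 'href="/' 'href="/proxy/18080/';
        sub_filter 'src="/'  'src="/proxy/18080/';
    }
}
EOF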

Watch logs from NodeJS on EC2

I have a single EC2 instance on AWS, running an HTTPS server with NodeJS.
I'm starting my NodeJS server from /etc/rc.local, so it will start automatically on every boot.
I have 2 questions:
Is there a better way to start an https server listening on port 443 without using sudo path/to/node myScript.js? What risks do I have if I run this process as root?
Where do I see my logs? When running the script from the shell, I see the logs of the process, but now that it runs from rc.local, how do I access the output of the server?
Thanks!
Starting the application using sudo definitely is not good practice. You should not run a publicly accessible service with root credentials. If there is a flaw in your application and someone finds it, there is a danger that they could access more services on the machine.
Your application should listen on a non-privileged port (e.g. 5000), with nginx or Apache acting as a reverse proxy that forwards the traffic internally to your application running on port 5000. pm2 suggests a setup like that as well: http://pm2.keymetrics.io/docs/tutorials/pm2-nginx-production-setup. Searching online you will find tutorials on how to configure nginx to run on HTTPS and how to forward all traffic from HTTP to HTTPS. Your application should not be aware of SSL certificates etc.
Remember that the pm2 module should be installed locally within your project, and you should take advantage of package.json: there you can define a task that boots your application in production using the local pm2 module. The advantage is that you don't have to install the pm2 module globally, and you won't mess things up again with permissions and superusers.
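As a sketch of the reverse-proxy part (the domain name and certificate paths are placeholders), the nginx side could look something like this:
# Terminate TLS on 443 and forward to the app on its non-privileged port
cat > /etc/nginx/conf.d/node-app.conf <<'EOF'
server {
    listen 443 ssl;
    server_name example.com;                             # placeholder domain
    ssl_certificate     /etc/ssl/certs/example.com.crt;  # placeholder cert paths
    ssl_certificate_key /etc/ssl/private/example.com.key;
    location / {
        proxy_pass http://127.0.0.1:5000;  # the app listening on port 5000
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF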
I don't think the log is saved anywhere unless you tell rc.local to make that happen. How do you spawn the process in there? Something like this should redirect stdout and stderr to a file:
path/to/node myScript.js >> /var/log/my-app.rc.local.log 2>&1 &  # log both streams; & lets rc.local finish
Don't you use a logger in your application, though? I would suggest picking one (there are a lot available, like bunyan, winston, etc.) and substituting all of your console.logs with the logger. Then you can define explicitly in your application where the logs will be saved, and you get different log levels and more features in general.
Not a direct answer, more of a small return of experience here.
We have a heavily used Node.js app in production on AWS, on a non-Docker setup (for now ;) ).
We have a dedicated user to run the node app; if you start your node process as root, it has root access, and that's not a safe thing to do.
To run the app we use pm2 as a process manager: it restarts the node process when it fails (and it will), and scales the number of workers to match the number of cores on your EC2 instance. You also have access to the logs of all the workers using ./node_modules/.bin/pm2 logs, and can send them wherever you want (from ELK to Slack).
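As a sketch of that workflow (the script name is a placeholder), a locally installed pm2 can start the app in cluster mode and tail its logs:
./node_modules/.bin/pm2 start myScript.js -i max  # one worker per CPU core
./node_modules/.bin/pm2 logs                      # tail stdout/stderr of all workers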
My2cents.

Running Cassandra on a Mesos cluster

I'm trying to deploy Cassandra on a small (test) Mesos cluster. I have one master node (say 10.10.10.1) and three worker nodes: 10.10.10.2-4.
On the official Apache Mesos site there is a link to a Cassandra framework developed for Mesos (it is here: https://github.com/mesosphere/cassandra-mesos).
I'm following the tutorial they provide there. In step 3 they say I should edit the conf/mesos.yaml file, specifically that I should set mesos.master.url so that it points to the master node (on which I also have the conf file).
The first thing I tried was just to replace localhost with the master node IP, so I had
mesos.master.url: 'zk://10.10.10.1:2181/mesos'
but when I then started the deployment script (by running bin/cassandra-mesos as they say in point 5 I should) I get the following error:
2015-02-24 09:18:24,262:12041(0x7fad617fa700):ZOO_ERROR#handle_socket_error_msg#1697: Socket [10.10.10.1:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
It keeps retrying and displays the same error until I terminate it.
I tried removing 'zk' or replacing it with 'mesos' in the URL, changing (or removing altogether) the port, and removing the 'mesos' word from the URL, but I keep getting the same error.
I also tried looking at how other frameworks do it (specifically Spark, which I am hoping to deploy next) but didn't find anything helpful. Any ideas how to run it? Thanks!
The URL provided to mesos.master.url is passed directly to the underlying Mesos Native Java Library. The format listed in your example looks correct.
Next steps in debugging the connection issue would be to verify the IP address the ZooKeeper server has bound to. You can find out by running sudo netstat -ntplv | grep 2181 on the server that is running ZooKeeper.
I would expect to see something like the following:
tcp 0 0 0.0.0.0:2181 0.0.0.0:* LISTEN 3957/java
Another possibility could be that ZooKeeper is binding specifically to localhost:
tcp 0 0 127.0.0.1:2181 0.0.0.0:* LISTEN 3957/java
If ZooKeeper has bound to localhost, a client will only be able to connect to it with the URL zk://127.0.0.1:2181/mesos.
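As an additional quick check (using the master IP from the question), ZooKeeper's four-letter-word commands can confirm it is reachable from the client machine:
echo ruok | nc 10.10.10.1 2181  # a healthy ZooKeeper answers "imok"
echo srvr | nc 10.10.10.1 2181  # prints version, mode and connection stats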
A note about the future of the Cassandra Mesos Framework.
I am one of the developers working on rewriting the cassandra-mesos project to be more robust, stable and easier to run. The code in the current master (6aa82acfac) is end-of-life and will be replaced within the next couple of weeks with the code that is in the rewrite branch.
If you would like to try out the latest build of the rewrite branch, a marathon.json for running the framework can be found here. After downloading the marathon.json, update the values for MESOS_ZK and CASSANDRA_ZK (and any resource values you want to update), then POST the JSON to Marathon at /v2/apps.
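For example (the Marathon host is an assumption; 8080 is Marathon's default port), the POST could look like:
curl -X POST -H 'Content-Type: application/json' \
     -d @marathon.json \
     http://10.10.10.1:8080/v2/apps  # assumed Marathon endpoint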
If you have one master and no zk, how about setting the mesos.master.url to 10.10.10.1:5050 (where 5050 is the default mesos-master port)?
If Ben is right and the Cassandra framework otherwise requires ZK for its own persistence/HA, then try disabling that feature if possible. Otherwise you may have to rip the ZK code out yourself and recompile, if you really want a ZK-free setup (consequently without any HA features).

Logstash Windows logs

I am having trouble finding the logs for Logstash. I had configured Windows servers to forward logs via nxlog, using rsyslog on my Linux machine, and now I don't know where the logs are stored. I have looked in the /var/log/ directory but nothing is there.
Although I am receiving the logs from my Windows hosts in Kibana, can anyone please help me? Also, my hosts are showing up as both FQDN and NetBIOS names. I cannot attach the image as I do not have enough reputation; can someone please assist me?
Thanks
When you started Logstash, what config file did you use (it is the file specified after the -f flag)?
In that .conf file, there is an input {} section that shows you the file path (path => file/path/for/logs) that Logstash is using to look for logs.
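For illustration, a minimal sketch of such a config (the file path, port, and Elasticsearch host are hypothetical):
# Hypothetical Logstash pipeline: read files written by rsyslog and/or
# accept forwarded events over TCP, then index them into Elasticsearch
cat > /etc/logstash/conf.d/windows-logs.conf <<'EOF'
input {
  file {
    path => "/var/log/remote/*.log"  # hypothetical path rsyslog writes to
  }
  tcp {
    port => 5140                     # hypothetical port the Windows hosts forward to
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
EOF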
Alternatively, you may be sending the data received over TCP directly to Elasticsearch. You can query this using curl (or a web browser). Should be something like:
curl -XGET 'http://localhost:9200/_search?pretty'

TeamCity build agent in disconnected state

I am running TeamCity on a Linux server, and it was working completely fine. Then I rebooted the server machine and it stopped working. I managed to start the TeamCity server using the runAll.sh command, but the build agent stays in the "disconnected" state. The inactivity reason is shown as 'server shutdown'. I tried to restart the agent using 'agent.sh stop' and 'agent.sh start', but it does not seem to work. I could not get anything meaningful from the logs.
Kindly help.
Thanks
In case you modified the TeamCity port, you'll need to change the build agent configuration files to reflect the new serverUrl value. You can find this setting in the C:\TeamCity\buildAgent\conf\buildAgent.properties file.
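For example, the relevant line in buildAgent.properties looks like this (the host is a placeholder; 8111 is TeamCity's default port):
serverUrl=http://teamcity.example.com:8111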
On the machine that restarted, make sure the firewall didn't come back up in a state that blocks access to/from the agent.
When you restart the agent, the teamcity-agent.log file should have a line saying something like, "buildServer.AGENT.registration - Registering on server". If it succeeds, it should say something like "buildServer.AGENT.registration - Registered: id:.., authorizationToken:..".
Just found this while looking through my unanswered questions. It was actually a permission issue: I wasn't running the commands as the root user. Once I ran 'agent.sh stop' and 'agent.sh start' as root, it worked okay.
