Collectd not pushing data to Logstash

My collectd setup is not pushing logs to Logstash, and I'm not sure what the problem is.
I ran tcpdump on my collectd server, and it isn't sending any traffic at all, so I think the problem is on the collectd side. Does anyone have an idea of what is wrong here?
Note: there is no block in the server firewall.

I don't see anything immediately wrong with your collectd config, but try setting your Server tag to "localhost". That way you can be sure it is sending to your local interface.
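For reference, a minimal sketch of the two sides, assuming Logstash runs on the same host and listens with its collectd codec on the network plugin's default port 25826 (the port and the codec block are assumptions about your setup):

# collectd.conf -- send metrics to the local Logstash UDP listener
LoadPlugin network
<Plugin network>
  Server "localhost" "25826"
</Plugin>

# logstash config -- receive and decode the collectd traffic
input {
  udp {
    port  => 25826
    codec => collectd { }
  }
}

Once collectd flushes its first interval, tcpdump on UDP port 25826 should show traffic.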

Change the "collectd" (with quotes) to collectd (without quotes) on the sixth-to-last line of your config.

Related

Graylog2 sidecar collector logs not showing up on the dashboard

I installed the sidecar collector and configured it with the filebeat backend, and it's running successfully. I made an output and attached some inputs to it with general log files, but no logs are showing up on the dashboard yet. Here's my debug output, which gives me nothing useful:
sudo service collector-sidecar stop
graylog-collector-sidecar -c /etc/graylog/collector-sidecar/collector_sidecar.yml
INFO[0000] Using collector-id: xxx
INFO[0000] Fetching configurations tagged by: [syslog linux]
INFO[0000] Starting collector supervisor
INFO[0000] [filebeat] Starting
INFO[0010] [filebeat] Configuration change detected, rewriting configuration file.
INFO[0010] [filebeat] Stopping
INFO[0014] [filebeat] Starting
Should I also create an input under System -> Inputs? How can I debug why logs are not showing up? What am I missing here?
I'll give this a shot...
Recently I had a similar problem, which in the end was triggered by the fact that I ran the graylog2 Docker container without opening port 5044. That seems to be the port on which the sidecar delivers content, whereas its heartbeat appears to go over port 9000, which I had open, so the Graylog web interface told me the collector was running OK.
Since you configure an input in the collector configuration, you should not have to create one in the inputs section as well.
When I look at my inputs section, the collector-configured input shows up as a local input.
After adjusting the Docker container to open up port 5044, everything went all right.
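For illustration, a sketch of the docker run invocation with both ports published (the image name here is just an example; use whatever image you actually deploy):

# 9000 = web interface/API (heartbeat), 5044 = the port the sidecar ships logs to
docker run -d --name graylog -p 9000:9000 -p 5044:5044 graylog/graylog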
This refers to Graylog running on Docker; I don't know if that's your environment, but it might help anyway ;-)
cheers,
Arend

Logstash Windows logs

I am having trouble finding the logs for Logstash. I had configured my Windows servers to forward logs via nxlog to rsyslog on my Linux machine, and now I don't know where the logs are stored. I have looked in the /var/log/ directory but nothing is there.
I am receiving the logs from my Windows hosts in Kibana, though. Also, my hosts are showing up as both their FQDN and their NetBIOS name. I cannot attach an image as I do not have enough reputation; can someone please assist me?
Thanks
When you started Logstash, what config file did you use (the file specified after the -f flag)?
In that .conf file there is an input {} section that shows the file path (path => file/path/for/logs) Logstash is using to look for logs.
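As a rough sketch of what to look for (the path here is a placeholder; yours will be whatever your .conf actually sets, and the output shown uses the 1.x-era syntax):

input {
  file {
    # Logstash tails whatever matches this path; placeholder value
    path => "/var/log/remote/*.log"
  }
}
output {
  # 1.x-era syntax; newer versions use hosts => [...]
  elasticsearch {
    host => "localhost"
  }
}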
Alternatively, you may be sending the data received over TCP directly to Elasticsearch. You can query this using curl (or a web browser). It should be something like:
curl -XGET http://localhost:9200/_search?pretty

Outputting UDP From Logstash

I have some logs that I want to use Logstash to collate. The setup will be the Logstash agent as a shipper on the source servers, sending to a central logging server. I want to use UDP from the shipper to the central logger so I am totally isolated should the logger fail (I don't want the 70-odd production servers affected in any way). The only way I can see of using UDP for the transport is to output in syslog format. Does anyone know of a way I can output UDP natively from Logstash? (The docs hint at it, but I must be missing something on the TCP side.)
My understanding is that the typical approach would be to use something a bit more lightweight (rsyslog, or even Lumberjack) running on the remote machines, sending data to your central Logstash server.
It sounds like you want Logstash agents (forwarders) running on each server and sending the data to the broker. This blog post (also linked as an example below) gives a nice explanation of why they chose not to use Logstash forwarders on their remote servers: they eat too much RAM.
I use a setup where rsyslog sends UDP data to logstash directly. I've also used a setup where rsyslog was the receiving log server, and aggregated the logs into separate logfiles (based on server hostname) and then logstash read from those files.
For some example configs see the following:
Logstash with Lumberjack and ES
Logstash with rsyslog and ES
I suggest the rsyslog approach, as the chances that rsyslog is already running on your server are higher than for Lumberjack. Also, there is quite a bit more documentation for rsyslog than for Lumberjack. Start off simple, and add more complexity later on if you feel the need.
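As a minimal example, a single rsyslog forwarding rule on each shipper could look like this (the hostname and port are placeholders); a single @ means UDP, @@ would be TCP:

# /etc/rsyslog.d/50-forward.conf
# forward all facilities and priorities over UDP to the central server
*.* @logstash.example.com:5514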
You can output UDP natively using the UDP output plugin:
http://logstash.net/docs/1.2.1/outputs/udp
By setting the codec field, you can choose whatever output format you like (e.g. JSON).
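A minimal sketch of that output section (the host and port are placeholders for your central logger):

output {
  udp {
    host  => "central-logger.example.com"
    port  => 5514
    # any codec works here; json keeps the event structure
    codec => json
  }
}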

Can't get DNSMASQ DHCP to configure multiple name servers

Perhaps someone can help? I am running dnsmasq on Ubuntu 12.04 LTS. This server's address is 192.168.15.3. My gateway's DNS is 192.168.1.254, which takes me out to the Internet. I also have a special-purpose DNS server at 192.168.15.2. So I wanted to give those three name servers to DHCP clients. This is the server= section of my dnsmasq.conf:
server=/localnet/192.168.15.3
server=/localnet/192.168.15.2
server=/15.168.192.in-addr.arpa/192.168.1.254
However, when I look at the DHCP allocation on a client, I see only 192.168.15.3 as the sole DNS server, and clients cannot access the Internet (the NIC on .3 is configured correctly and can access the Internet from its console login).
Granted, the first server= line is probably not necessary, but I added it thinking it might help; it didn't.
What am I doing wrong? Thanks for your help!
OK, after MUCH experimentation, I found I had to push the option manually. In the dnsmasq.conf file, I added the following line:
dhcp-option=6,
like so:
dhcp-option=6,192.168.15.3,192.168.15.2,192.168.1.254
This served the correct list of name servers to DHCP clients.
The server lines are configuration for dnsmasq's own DNS server: they tell it where to forward the DNS requests it receives so that it can resolve (and cache) them.
Only the dhcp-option lines are part of the DHCP configuration that gets passed to DHCP clients. So the accepted answer is correct, but I wanted to share why.
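Putting the two halves side by side with the addresses from the question makes the split clear:

# resolver config: where dnsmasq itself forwards queries it can't answer
server=/localnet/192.168.15.2
server=/15.168.192.in-addr.arpa/192.168.1.254
# DHCP config: what clients are told to use as DNS servers (option 6)
dhcp-option=6,192.168.15.3,192.168.15.2,192.168.1.254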

TeamCity build agent in disconnected state

I am running TeamCity on a Linux server, and it was working completely fine until I rebooted the server machine, after which it stopped working. I managed to start the TeamCity server using the runAll.sh command, but the build agent stays in the "disconnected" state, with the inactivity reason shown as 'server shutdown'. I tried to restart the agent using 'agent.sh stop' and 'agent.sh start', but it does not seem to work, and I could not get anything meaningful from the logs.
Kindly help.
Thanks
In case you modified the TeamCity port, you'll need to change the build agent configuration files to reflect the new serverUrl value. You can find this setting in the C:\TeamCity\buildAgent\conf\buildAgent.properties file.
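The relevant property looks like this (the host and port below are placeholders for wherever your TeamCity server now listens; 8111 is the default):

# C:\TeamCity\buildAgent\conf\buildAgent.properties
serverUrl=http://teamcity.example.com:8111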
On the machine that restarted, make sure the firewall didn't come back up in a state that blocks access to/from the agent.
When you restart the agent, the teamcity-agent.log file should have a line saying something like, "buildServer.AGENT.registration - Registering on server". If it succeeds, it should say something like "buildServer.AGENT.registration - Registered: id:.., authorizationToken:..".
Just found this while looking through my unanswered questions: it was actually a permissions issue. I wasn't running the commands as the root user. Once I ran 'agent.sh stop' and 'agent.sh start' as root, it worked okay.
