How to run logstash-forwarder from a client machine

I am new to Logstash. I installed Elasticsearch, Kibana, Logstash and logstash-forwarder on Ubuntu following a tutorial, and everything works fine on the local machine.
Now I want to include a log file from another system, so I installed logstash-forwarder on the client machine, but it fails to run and I can't figure out the mistake. I didn't install Logstash on the client machine, since Logstash is already running on the server. If I have misunderstood anything, please let me know. What should the server and client configuration look like?
logstash-forwarder config on the client:
{
  "network": {
    "servers": [ "server_ip_addr:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/syslog",
        "/var/log/auth.log"
      ],
      "fields": { "type": "syslog" }
    }
  ]
}
Thanks

Installing logstash-forwarder on the client is enough.
Here are some ways to troubleshoot your problem.
Check whether Logstash is listening on port 5000: log in to the Logstash server and run:
telnet localhost 5000
If you can't telnet, make sure the Logstash service is running properly.
Check for firewall issues between the clients and the Logstash server: log in to the client (where you installed the forwarder) and run:
telnet server_ip_addr 5000
If you can't connect, you need to open firewall port 5000 between the client and the Logstash server.
The files part of your config file is fine, but could you update the network part as below and let me know the result?
"network": {
  "servers": [ "server_ip_addr:5000" ],
  "timeout": 15,
  "ssl certificate": "/etc/pki/tls/certs/logstash-forwarder.crt",
  "ssl key": "/etc/pki/tls/certs/logstash-forwarder.key",
  "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
},
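The two telnet checks above can also be sketched as a small script. server_ip_addr is the same placeholder as in the config, and the ufw command assumes an Ubuntu server using ufw (an assumption; adjust for your firewall). The real probe commands are shown as comments since they depend on your environment:

```shell
# Placeholder host/port; replace server_ip_addr with your Logstash server.
LOGSTASH_HOST=${LOGSTASH_HOST:-server_ip_addr}
LOGSTASH_PORT=5000

# On the Logstash server: confirm something is listening on port 5000:
#   ss -tlnp | grep ":$LOGSTASH_PORT"

# From the client: probe the port without telnet, using bash's /dev/tcp
# (nc -zv "$LOGSTASH_HOST" "$LOGSTASH_PORT" works too):
#   timeout 3 bash -c "echo > /dev/tcp/$LOGSTASH_HOST/$LOGSTASH_PORT" \
#     && echo open || echo "closed or filtered"

# If the port is blocked and the server uses ufw, open it:
#   sudo ufw allow "$LOGSTASH_PORT/tcp"

echo "probe $LOGSTASH_HOST:$LOGSTASH_PORT"
```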


What is the proper configuration to persist edgeAgent storage?

I am working on an Azure IoT Edge project. Currently I am going through the production readiness checklist. I followed the documentation to use storage on the host filesystem for the edgeAgent and edgeHub modules.
When I run sudo iotedge check, edgeHub is OK but edgeAgent raises a warning:
‼ production readiness: Edge Agent's storage directory is persisted on the host filesystem - Warning
The edgeAgent module is not configured to persist its /tmp/edgeAgent directory on the host filesystem.
Data might be lost if the module is deleted or updated.
Please see https://aka.ms/iotedge-storage-host for best practices.
√ production readiness: Edge Hub's storage directory is persisted on the host filesystem - OK
Here is a snippet from the deployment template:
"systemModules": {
  "edgeAgent": {
    "type": "docker",
    "settings": {
      "image": "mcr.microsoft.com/azureiotedge-agent:1.0",
      "createOptions": {
        "HostConfig": {
          "Binds": [
            "/home/pi/iotedge/edgeAgent/storage/:/iotedge/storage/"
          ]
        }
      }
    },
    "env": {
      "storageFolder": {
        "value": "/iotedge/storage/"
      }
    }
  },
  "edgeHub": {
    "type": "docker",
    "status": "running",
    "restartPolicy": "always",
    "settings": {
      "image": "mcr.microsoft.com/azureiotedge-hub:1.0",
      "createOptions": {
        "HostConfig": {
          "Binds": [
            "/home/pi/iotedge/edgeHub/storage:/iotedge/storage/"
          ],
          "PortBindings": {
            "5671/tcp": [
              {
                "HostPort": "5671"
              }
            ],
            "8883/tcp": [
              {
                "HostPort": "8883"
              }
            ],
            "443/tcp": [
              {
                "HostPort": "443"
              }
            ]
          }
        }
      }
    },
    "env": {
      "storageFolder": {
        "value": "/iotedge/storage/"
      }
    }
  }
},
As of release 1.0.9, there's an issue where edgeAgent's configuration doesn't update unless its image tag is updated. Two options from your current state:
Use a specific tag in the image settings (always recommended). E.g. mcr.microsoft.com/azureiotedge-agent:1.0.9
Delete the edgeAgent container on the device: docker rm -f edgeAgent. It will get restarted in under 30 secs and the new storageFolder env var will be picked up.
Run iotedge check again after the container is updated, and this warning should go away.
I have followed the same documentation and was able to avoid the production readiness checklist warnings on my Raspberry Pi 3.
1) I configured the "Binds" as per the documentation: Link module storage to device storage
"Binds":["/etc/iotedge/storage/:/iotedge/storage/"]
2) I granted user access on the HostStoragePath from the SSH terminal.
sudo chown 1000 /etc/iotedge/storage/
sudo chmod 700 /etc/iotedge/storage/
3) Restarted the Raspberry Pi 3 to make sure the access change takes effect.
Make sure the host storage folders exist on your edge device.
Make sure to give the user full access to those folders.
Try the deployment with your updated manifest; it should work.
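The three steps above can be sketched as shell commands. /tmp/iotedge is used here only so the sketch is self-contained; on a real device you would use the host path from your "Binds" entry, e.g. /etc/iotedge/storage/:

```shell
# Illustrative path; on a real device use the host path from your "Binds"
# entry (e.g. /etc/iotedge/storage/).
STORAGE_DIR=/tmp/iotedge/storage

# 1) Create the host folder the bind mount points at.
mkdir -p "$STORAGE_DIR"

# 2) Give UID 1000 (the user the edge modules run as) ownership and
#    exclusive access. chown needs root on a real device:
#   sudo chown 1000 "$STORAGE_DIR"
chmod 700 "$STORAGE_DIR"

# 3) Reboot the device (or restart the daemon) so the change takes effect:
#   sudo systemctl restart iotedge

stat -c '%a' "$STORAGE_DIR"
```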

Can't connect to json-server from another device

I installed json-server from https://github.com/typicode/json-server and it works fine: I can execute GET, POST, etc., but only locally. When I try to connect from outside, it doesn't work.
I tried turning off the firewall, changing ports, and different startup settings (e.g. json-server --host 192.168.0.21 db.json), and nothing helped. Here's my database.
[
  {
    "id": 2,
    "login": "admin3",
    "haslo": "haslo3"
  },
  {
    "id": 3,
    "login": "admin2",
    "haslo": "haslo2"
  },
  {
    "login": "admin1",
    "haslo": "haslo1",
    "id": 7
  }
]
I would like to be able to connect to my server from anywhere in the world, but only local addresses work (e.g. http://192.168.0.21:3000/user or http://localhost:3000/user). What's wrong with my approach?
Make sure you are starting the server bound to your network adapter (e.g. --host 0.0.0.0).
Then you need to set up port forwarding on your router.
https://deaddesk.top/exposing-local-server-to-the-public-internet/
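A minimal sketch of those two steps, assuming the host's LAN IP is 192.168.0.21 as in the question. The actual commands are shown as comments, since they only make sense on the respective machines:

```shell
# 1) Start json-server bound to all interfaces, not just localhost:
#   json-server --host 0.0.0.0 --port 3000 db.json

# 2) From another device on the same LAN, verify it responds:
#   curl http://192.168.0.21:3000/user

# 3) For access from outside your network, add a port-forwarding rule on
#    the router (external 3000 -> 192.168.0.21:3000), then use your public
#    IP instead of the LAN address.

MSG="bind 0.0.0.0:3000, forward router:3000 -> 192.168.0.21:3000"
echo "$MSG"
```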

Azure: Unable to connect to cluster (Aks-engine) using kubectl

SOLUTION
I appended feature-gates in kube-apiserver.yaml on the master node. This broke the apiserver, so kubectl couldn't connect to the nodes. After removing them, it worked fine.
PROBLEM
I deployed a Kubernetes cluster using aks-engine, but I get the error Unable to connect to the server: dial tcp 13.66.162.75:443: i/o timeout when I try to use kubectl. I'm able to access the master node with the serial console, but not through SSH (the same error occurs there).
$ KUBECONFIG=_output/kubeconfig/kubeconfig.westus2.json kubectl get node
Unable to connect to the server: dial tcp 13.66.162.75:443: i/o timeout
$ KUBECONFIG=_output/kubeconfig/kubeconfig.westus2.json kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:34:11Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Aks-Engine version - v0.28.1-linux-amd64
Kubernetes version - 1.10.12
Here is the kubeconfig.westus2.json file -
{
  "apiVersion": "v1",
  "clusters": [
    {
      "cluster": {
        "certificate-authority-data": "*****",
        "server": "https://masquerade-az.westus2.cloudapp.azure.com"
      },
      "name": "masquerade-az"
    }
  ],
  "contexts": [
    {
      "context": {
        "cluster": "masquerade-az",
        "user": "masquerade-az-admin"
      },
      "name": "masquerade-az"
    }
  ],
  "current-context": "masquerade-az",
  "kind": "Config",
  "users": [
    {
      "name": "masquerade-az-admin",
      "user": {"client-certificate-data": "****", "client-key-data": "*****"}
    }
  ]
}
Here is a screenshot of the inbound ports.
Here is a screenshot of the outbound ports.
As shared by the original poster, the solution is:
I appended feature-gates in kube-apiserver.yaml on the master node. This broke the apiserver, so kubectl couldn't connect to the nodes. After removing them, it worked fine.
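To illustrate where the bad flag lives, here is a hypothetical sketch. The real static-pod manifest sits at /etc/kubernetes/manifests/kube-apiserver.yaml on the master node; a throwaway copy under /tmp is used here so the sketch is self-contained, and the feature-gate name is invented for the example:

```shell
# Throwaway copy for illustration; the real file on the master node is
# /etc/kubernetes/manifests/kube-apiserver.yaml.
MANIFEST=/tmp/kube-apiserver.yaml
cat > "$MANIFEST" <<'EOF'
    command:
    - /usr/local/bin/kube-apiserver
    - --feature-gates=SomeAlphaFeature=true
    - --secure-port=443
EOF

# Find the offending flag. Deleting that line and saving the manifest makes
# the kubelet recreate the apiserver pod with the clean flag set.
grep -n 'feature-gates' "$MANIFEST"
```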

Strongloop deploy on AWS EC2

I have installed Node.js and npm on EC2 through the command line and uploaded all the files for my StrongLoop project.
When I run the server locally it works fine, but when I run node server.js on the EC2 command line, it runs but only says:
Web server listening at: http://0.0.0.0:3001/
ENVIRONMENT : development
How can I start my server on AWS EC2? I can't figure it out.
My config.json file:
{
  "restApiRoot": "/api",
  "host": "0.0.0.0",
  "port": 3001,
  "remoting": {
    "context": {
      "enableHttpContext": false
    },
    "rest": {
      "normalizeHttpPath": false,
      "xml": false
    },
    "json": {
      "strict": false,
      "limit": "100kb"
    },
    "urlencoded": {
      "extended": true,
      "limit": "100kb"
    },
    "cors": false,
    "errorHandler": {
      "disableStackTrace": true
    }
  },
  "legacyExplorer": false
}
A few possible things:
Make sure the port you are using is in your AWS Security Inbound Rules. Go to 'Security Groups', then select the group associated with your instance then click "Inbound". In your case, you have to add the port 3001 for HTTP protocol and the source 0.0.0.0/0.
You need to keep the service running even after you close the terminal window. To make the server start automatically and run in the background, use something like systemd on Ubuntu. Here is a step-by-step guide: https://www.digitalocean.com/community/tutorials/how-to-use-systemctl-to-manage-systemd-services-and-units
Now try visiting your IPv4 public IP or your associated domain name with the port number, e.g. 1.2.3.4:3001 or mywebsite.com:3001. The IP is shown in the same row as your instance in the AWS console.
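A sketch of the systemd approach from the second point. The unit name, working directory, and node path are assumptions you would adjust; the file is written to /tmp here so the sketch is self-contained, but on the instance it belongs in /etc/systemd/system/:

```shell
# Hypothetical unit file; adjust WorkingDirectory, paths, and the name.
cat > /tmp/strongloop-app.service <<'EOF'
[Unit]
Description=StrongLoop API server
After=network.target

[Service]
WorkingDirectory=/home/ubuntu/my-app
ExecStart=/usr/bin/node server.js
Restart=always
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target
EOF

# Install and start it (requires root):
#   sudo mv /tmp/strongloop-app.service /etc/systemd/system/
#   sudo systemctl daemon-reload
#   sudo systemctl enable --now strongloop-app

grep -c '^ExecStart' /tmp/strongloop-app.service
```

With Restart=always, systemd also restarts the server if it crashes, which the bare node server.js invocation does not.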

How to attach to node with VSCode and grunt-contrib-connect

I'm unable to use Visual Studio Code to debug my node application.
My .vscode/launch.json looks like this:
{
  "version": "0.2.0",
  "configurations": [{
    "name": "Attach",
    "type": "node",
    "request": "attach",
    "port": 9001
  }]
}
I use grunt-contrib-connect to start my web server. My connect task is defined in my gruntfile.js like this:
connect: {
  server: {
    options: {
      debug: true,
      port: 9001,
      base: '.',
      livereload: true,
    },
  },
},
After successfully starting the web server with the above task, I try to attach from VSCode, but apart from some UI flashes, nothing seems to happen. No breakpoints are hit.
I read the VS Code Debugging Documentation, especially the Attaching VS Code to Node section, which is why I added the debug:true to my connect task. However this did not seem to fix anything.
If I understand the "port" in grunt-contrib-connect correctly, it is the port on which the web server responds, not the debugging port.
So in your .vscode/launch.json you must specify the debug port, not the web-server port. Since grunt runs on node, I assume the debug port is node's default, 5858.
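Assuming node's legacy debug protocol, whose default debug port is 5858, the attach configuration would look like this (a sketch; verify the actual debug port the grunt node process prints on startup):

```json
{
  "version": "0.2.0",
  "configurations": [{
    "name": "Attach",
    "type": "node",
    "request": "attach",
    "port": 5858
  }]
}
```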
