Sending logs to a remote Elastic Stack instance - node.js

I've recently configured a standalone environment to host my elastic stack as described here
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elastic-stack-on-ubuntu-18-04
Overview
The setup is as follows:
Nginx (:80) <- Kibana (:5601) <- Elasticsearch (:9200) <- Logstash
So in order to access my logs I simply go to <machine-ip>:80 in the browser and log in with the Kibana credentials I set up in the guide.
My logging server is set up correctly to use Filebeat to send system logs to Logstash. What I'm not sure about is the correct way to replicate this behaviour from a remote machine.
Question
I would now like to post logs over to my log server from another machine, but I'm a little unsure of the best way to approach this. Here is my understanding:
1) Install Logstash + Filebeat on the machine I want to send logs from
2) Read STDOUT from the Docker container(s) using Filebeat and format the events in Logstash
3) Send the Logstash output to my logging server (see the sketch after this list)
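For reference, a minimal Filebeat sketch of what I have in mind for steps 2 and 3 (the paths and host are assumptions; 5044 is the conventional Beats port):

# filebeat.yml on the application machine
filebeat.inputs:
  - type: container
    paths:
      - /var/lib/docker/containers/*/*.log

output.logstash:
  hosts: ["logstash.example.com:5044"]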
The final point is the part I'm not sure about (or maybe the other parts are not the best way to do it either!).
My questions are
Q1) Where should I post my logs to? Should I hit <machine-ip>:80 and talk through Kibana, or should I open port 9200 and talk to Elasticsearch directly? (And if so, how should I authenticate that communication, the way Kibana is protected with credentials?)
Q2) What are the best practices for logging from a Docker container (Node.js in my case)? Should I be set up as in points 1 and 2 above, running Logstash/Filebeat on that machine, or is there a better way?
Any help is much appreciated!
Edit: Solution for Q1
I've come up with a solution to Q1 for anyone looking in the future:
1) Set up an Nginx proxy listening on port 8080 on the Elastic Stack logging server
- Only traffic coming from my application servers is allowed to talk to this
2) Forward that traffic to the Elasticsearch instance running on port 9200
The Nginx config looks like this:
server {
    listen 8080;

    # only the application servers may talk to this proxy
    allow xxx.xxx.xxx.xx;
    deny all;

    location / {
        proxy_pass http://localhost:9200;
    }
}
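With that in place, an application server can post directly to the proxy. A minimal Node.js sketch (the host, index name, and payload are assumptions):

// post a single log document to Elasticsearch via the :8080 proxy
const http = require('http');

const payload = JSON.stringify({
  '@timestamp': new Date().toISOString(),
  level: 'info',
  message: 'hello from the app server',
});

const req = http.request(
  {
    host: 'logs.example.com', // the logging server behind the nginx proxy
    port: 8080,
    path: '/app-logs/_doc', // <index>/_doc indexes one document
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
  },
  (res) => console.log('Elasticsearch responded:', res.statusCode)
);

req.on('error', console.error);
req.write(payload);
req.end();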

I created this npm package to send data to a UDP Logstash endpoint, if you want to try it: https://www.npmjs.com/package/winston-transport-udp-logstash

Related

High Availability Multi Backup Server

I have a project which needs multiple backup servers. The topology is as follows:
We will have 4 remote site servers which act as backup servers in case the main server goes down. Under normal conditions, devices connect to a remote site server's IP, and it passes the traffic through to the main server. If the main server is down, or the link from a remote site server to the main server fails, that remote site server acts as the main server and starts serving the services itself.
I know I can do this using Nginx with proxy_pass for TCP, but the catch is that we have dynamic ports. For example, a user can add port 4500 to a virtual server, and later add another port 45001 to be accessed by clients.
I'm not sure how to do that with Nginx.
I have also been looking at other solutions like keepalived and pacemaker, but they seem to offer only a master-backup mechanism, not master-backup-backup-backup.
Any advice on how to get this done?
Appreciate your ideas!
As far as I understand, you need a way to change servers dynamically (server:3000, server:3001, and so on). If so, you can keep the server list in a text file and have the reverse proxy use the servers from that file.
After that, you can simply update the server list file from your code.
For example:
Create a file called servers.txt in a directory of your choice and list all the backend servers you want to use. Note that for Nginx's include to work, each line must be a complete server directive, in the following format:
server server1.example.com;
server server2.example.com;
server server3.example.com;
In the Nginx configuration file, define an upstream block that includes the servers.txt file, and proxy to it, like this:
http {
    upstream backend {
        # the server directives are read from the external file
        include /path/to/servers.txt;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://backend;
        }
    }
}
You can specify additional parameters (weight, max_fails, and so on) on each server line as needed.
Finally, change servers.txt whenever you need to update the pool. Note that Nginx only reads included files when the configuration is loaded, so you do need to reload Nginx (nginx -s reload) every time you update servers.txt.
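As a small sketch (the file path and server name are assumptions), adding a backend from a script could look like this:

# append a new backend, test the config, then reload so nginx re-reads the file
echo 'server server4.example.com;' | sudo tee -a /path/to/servers.txt
sudo nginx -t && sudo nginx -s reload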

Is there a way to "host" an existing web service on port X as a network path of another web service on port 80?

What I'm trying to do is create an access website for my own services that run on my linux server at home.
The services I'm using are accessible through <my_domain>:<respective_port_num>.
For example, there's a Plex instance listening on port X, transmission-remote (a torrenting client) listening on port Y, and another custom processing service on port Z.
I've created a simple website using Python Flask, which I can access remotely, that redirects paths to ports (so <my_domain>/plex turns into <my_domain>:X). Is there a way to serve these services on the network paths I've assigned to them, so I don't need to open a port for each service? I want to be able to channel an existing service on :X through <my_domain>/plex without having to modify it; I'm sure it's possible.
I have a bit of a hard time understanding your question.
You certainly can use e.g. nginx as a reverse proxy in front of your web application: listen on any port, and then redirect requests to the upstream application on any other port - e.g. your Flask application.
Let's say my domain is example.com.
I can then configure e.g. nginx to listen on port 80 (and 443 for SSL), and proxy all requests to e.g. port 8000, where Flask is running locally.
Yes, this is called using nginx as a reverse proxy. It is well documented on the internet and even in the official docs. Your nginx.conf would have something like:
location /my/flask/app/ {
    # Assuming your flask app is at localhost:8000
    proxy_pass http://localhost:8000;
}
From the user's perspective, they will be connecting to your.nginx.server.com/my/flask/app/. But behind the scenes nginx will actually forward the request to your app, and serve its response back to the user.
You can deploy nginx as a Docker container, I recommend doing this as it will keep the local files and configs separate from your own work and make it easier for you to fiddle with it as you learn. Keep in mind that nginx is only HTTP though. You can't use it to proxy things like SSH or arbitrary protocols (not without a lot of hassle anyway). If the services generate their own URLs, you might also need to configure them to anticipate the nginx redirects.
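For example, a minimal sketch mapping each service to a path (the ports are assumptions based on common defaults; check your own setup):

server {
    listen 80;
    server_name my_domain;

    # Plex usually listens on :32400
    location /plex/ {
        proxy_pass http://localhost:32400/;
    }

    # transmission-remote's web UI usually listens on :9091
    location /transmission/ {
        proxy_pass http://localhost:9091/;
    }
}

The trailing slash in each proxy_pass makes nginx strip the location prefix before forwarding; services that generate absolute URLs may still need a base-path setting of their own.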
BTW, usually Flask is not served directly to the internet; instead, nginx talks to something like Gunicorn, which handles various network-related concerns: https://vsupalov.com/what-is-gunicorn/

How to configure Port Forwarding with Google Cloud Compute Engine for a Node.JS application

I'm trying to configure port forwarding (port 80 to port 8080) for a Node.js application hosted on Google Cloud Compute Engine (Ubuntu and Nginx).
My ultimate goal is to have a URL like "api.domain.com" showing exactly the same thing as "api.domain.com:8080" (:8080 is currently working).
But because it's a virtual server on Google platform, I'm not sure what kind of configuration I can do.
I tried these solutions without success (probably because it's a Google Cloud environment):
Forwarding port 80 to 8080 using NGINX
Best practices when running Node.js with port 80 (Ubuntu / Linode)
So, two questions here:
1. Where do I need to configure the port forwarding?
Directly in my Ubuntu instance, with Nginx or Linux config files?
With a gcloud command?
In a secret place in the UI of console.cloud.google.com?
2. What settings or configuration do I need to save?
One possibility is to use Google Cloud Load Balancing:
https://cloud.google.com/load-balancing/docs/
1) Create a backend service that listens on port 8080
2) Create a frontend service that listens on port 80
3) Forward frontend traffic to the backend service
4) Bonus: you can create an SSL certificate auto-managed by GCP: https://cloud.google.com/load-balancing/docs/ssl-certificates#managed-certs
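A rough sketch of those steps with the gcloud CLI (all names, the zone, and the instance group are assumptions; see the docs above for the full flow):

# backend: health check + backend service reaching the app's port 8080
gcloud compute health-checks create http api-hc --port=8080
gcloud compute backend-services create api-backend \
    --protocol=HTTP --port-name=http --health-checks=api-hc --global
gcloud compute instance-groups set-named-ports api-ig \
    --named-ports=http:8080 --zone=us-central1-a
gcloud compute backend-services add-backend api-backend \
    --instance-group=api-ig --instance-group-zone=us-central1-a --global

# frontend: URL map -> HTTP proxy -> global forwarding rule on port 80
gcloud compute url-maps create api-map --default-service=api-backend
gcloud compute target-http-proxies create api-proxy --url-map=api-map
gcloud compute forwarding-rules create api-rule \
    --global --target-http-proxy=api-proxy --ports=80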
For the benefit of future readers, here is how I figured out how to configure the port forwarding.
You will need to make sure that your firewall on Google Cloud Platform is configured correctly. Follow the process well described here: Google Cloud - Configuring Firewall Rules. Port 80 (or 443 for HTTPS) and your Node.js port (e.g. 8080 in my case) must be open.
You will need to configure the port forwarding directly on the server. As far as I know, unlike the firewall rules, this is not something you can configure in the Google Cloud Platform UI. In my case, I needed to edit the Nginx config file located at /etc/nginx/sites-available/default.
Use this example as a reference when editing your Nginx config file: nginx config for http/https proxy to localhost:3000. A sketch also follows below.
Once edited, restart the Nginx service with this command: sudo systemctl restart nginx
Verify the state of the Nginx service with this command: sudo systemctl status nginx
Your port should now be redirected correctly to your Node.js application.
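For reference, a minimal sketch of the relevant server block (domain and port taken from the question; adjust to your setup):

server {
    listen 80;
    server_name api.domain.com;

    location / {
        # forward everything on :80 to the Node.js app on :8080
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}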
Thanks to @John Hanley and @howie for pointing me in the right direction on the Nginx configuration.
EDIT: This solution is still working but the accepted answer is easier.

Do I need a different server to run node.js

Sorry if this is the wrong forum for this question, but I am simply stuck and need some advice. I have a shared hosting service and a cloud-based hosting server with Node.js installed. I want to host my website as normal, but I also want to add real-time chat and location tracking using Node.js. I am confused by what I am reading in several places, because Node.js is itself a server but is not designed to host websites? So I have to run 2 different servers? One for the website and one to run Node.js? When I set up the cloud server with a Node.js script running, I can no longer access the webpages.
What's the best way for me to achieve this? I am just going round in circles. Also, is there a way I can set up a server on my PC and run and test both of these together beforehand? That would let me see what is needed, get it working, and stop me ordering servers I don't need.
Many thanks for any help or advice.
Node can serve webpages using a framework like Express, but it can cause conflicts if run on the same port as another webserver program (Apache, etc.). One solution is to serve your webpages through your webserver on port 80 (or 443 for HTTPS) and run your Node server on a different port, with the two exchanging information as needed.
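For example, a minimal sketch of a Node service kept off port 80 (the port and response are placeholders):

// app.js - answers API calls on 127.0.0.1:8000 while Apache keeps port 80
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ ok: true, time: Date.now() }));
}).listen(8000, '127.0.0.1', () => {
  console.log('Node service listening on 127.0.0.1:8000');
});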
There are a number of ways you can achieve this but here is one popular approach.
You can use NGINX as your front facing web server and proxy the requests to your backend Node service.
In NGINX, for example, you will configure your upstream service as follows:
upstream lucyservice {
    server 127.0.0.1:8000;
    keepalive 64;
}
The 8000 you see above is just an example; you may be running your Node service on a different port.
Further down in your config (in the server section) you will proxy requests to your service as follows:
location / {
    proxy_pass http://lucyservice;
}
Your Node service can run under a process manager like forever or pm2. You can have multiple Node processes running as a cluster, depending on how many processors your machine has.
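For example, with pm2 (assuming your entry point is app.js):

pm2 start app.js -i max   # one worker per CPU core; pm2 restarts them on crash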
So to recap: your front-facing web server will handle all traffic on port 80 (HTTP) and/or 443 (HTTPS), and will proxy the requests to your Node service running on whatever port(s) you define. All of this can happen on one single server, or on multiple servers if you need or desire.

Solr with Jetty on LAMP server - Admin page access issue

I have Solr, with the default Jetty that ships in its example directory, installed on a Linux server which runs apache2 as its web server.
Now, within the same private LAN, when I open a browser and type in http://<ip-address>:8983/solr, it works ONLY when I do port forwarding; otherwise it doesn't. I am not sure what the problem could be. Please note this installation was done on a remote server in a hosting environment for production deployment, and I am a beginner when it comes to deployment.
You can use the jetty.host parameter during startup to allow direct access to Jetty.
The -D option of the java command can be used with the following syntax:
java -Djetty.host=0.0.0.0 -jar start.jar
This way, Jetty can be reached from all hosts.
However, this is not the ideal setup IMHO. I prefer to set up Jetty to listen only on localhost, and implement the frontend with another server which listens on port 80. If you want to put the frontend on another machine, you can use iptables to limit incoming connections, dropping everything on port 8983 if the source IP is different from your frontend server's.
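A hedged sketch of the iptables approach (the frontend IP is a placeholder):

# allow only the frontend server to reach Jetty on 8983, drop everyone else
iptables -A INPUT -p tcp --dport 8983 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 8983 -j DROP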
[Diagram: my preferred setup for a LAMP stack including Solr]
