How to connect JHipster on AWS to ELK Cloud - jhipster

I need to connect my JHipster app, which runs on AWS Elastic Beanstalk, to ELK Cloud.
It does not work with this:
jhipster:
  logging:
    logstash:
      enabled: true
      host: localhost # If using a Virtual Machine on Mac OS X or Windows with docker-machine, use the Docker's host IP here
      port: 5000
      queueSize: 512
Reference: https://www.jhipster.tech/monitoring/

The answer is that you have to connect to Logstash. JHipster ships its logs in a format meant for a Logstash service, which in turn forwards them to Elasticsearch and Kibana.
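For example, a sketch of the same block pointed at a remotely hosted Logstash (the host and port below are placeholders, not values from any ELK Cloud documentation; use whatever endpoint your hosted Logstash actually exposes):
jhipster:
  logging:
    logstash:
      enabled: true
      host: my-logstash.example.com # placeholder: your cloud-hosted Logstash endpoint
      port: 5000 # placeholder: the port of Logstash's TCP input
      queueSize: 512
The Logstash pipeline on that host then needs a matching input on that port and an elasticsearch output pointing at your cluster.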

Related

Azure VM not connecting to Azure Redis Cache but local is connecting to Azure Redis Cache

The same Azure Redis Cache connects fine from my local machine.
Port 6380, on which the cache is running, is open in the VM's firewall for both inbound and outbound traffic.
I tried both Node.js and Java. Both connect to the remote Azure Redis Cache from my local machine, but the exact same code (Node.js and Java) does not connect to the Azure Redis Cache from the VM.
Java config:
spring.redis.host=my-cache.redis.cache.windows.net
spring.redis.password=<password>
spring.redis.port=6380
spring.redis.ssl=true
NodeJS config:
const client = redis.createClient(6380,
  'my-cache.redis.cache.windows.net',
  {
    auth_pass: <password>,
    tls: { servername: 'my-cache.redis.cache.windows.net' }
  });
Well, the other end must accept the connection as well, so you must allow connections from the VM if you have any firewall rules configured on the cache:
https://learn.microsoft.com/en-us/azure/azure-cache-for-redis/cache-configure#firewall
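If you manage the cache with the Azure CLI, allowing the VM's outbound IP looks roughly like this (the resource names and IP are placeholders, and I'm quoting the CLI from memory, so double-check it against the linked docs):
az redis firewall-rules create \
--resource-group my-resource-group \
--name my-cache \
--rule-name AllowMyVm \
--start-ip <vm-outbound-ip> \
--end-ip <vm-outbound-ip>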
I solved this with the following on my Windows desktop VM:
Click Start.
Enter cmd in the Start menu search text box.
Right-click Command Prompt and select Run as Administrator.
Run the following command: ipconfig /flushdns
Run the following command: ipconfig /registerdns
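In general, a quick way to check from the VM whether the endpoint is reachable at all (PowerShell, assuming a reasonably recent Windows):
Test-NetConnection my-cache.redis.cache.windows.net -Port 6380
If TcpTestSucceeded comes back false, it is a network/firewall problem rather than a client configuration problem.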

Google App Engine two ports on same application

I want to run a Node.js application on Google App Engine, but my application currently runs two protocols on two different ports. One of them is port 8080 (which is required on Google App Engine) and the other one is 1883, an MQTT server. Looking through the documentation, I couldn't find anything to make this work. The server actually starts without errors, but of course I can't connect to port 1883, only to HTTP and HTTPS.
I need to know if this is possible at all and, if it is, how I forward or proxy that port.
My app.yaml:
runtime: nodejs
env: flex
service: comms-server

network:
  name: default
  subnetwork_name: default
  forwarded_ports:
    - 1883/tcp
Also, my VPC configuration is:
Try the following:
network:
  forwarded_ports:
    - 1883
  instance_tag: comms-server
And:
gcloud compute firewall-rules create default-allow-comms-server \
--allow tcp:1883 \
--target-tags comms-server \
--description "Allow traffic on port 1883"
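If it helps to see both listeners in one process, here is a minimal Node.js sketch (a raw TCP placeholder stands in for the real MQTT broker; only the port layout matters here):
// HTTP listener on 8080, which the App Engine flexible environment expects.
const http = require('http');
// Raw TCP listener on 1883, standing in for the MQTT broker.
const net = require('net');

http.createServer((req, res) => {
  res.end('ok');
}).listen(8080);

net.createServer((socket) => {
  // hand the connection over to your MQTT broker implementation here
  socket.on('data', () => {});
}).listen(1883);
Also note that, as far as I remember, forwarded ports on the flexible environment are reached through the instance's external IP, not through the appspot.com URL that serves port 8080.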

Elasticsearch: connect to a remote server via SSH (Node.js)

I'm currently using Node's elasticsearch package. Until now, I connected to the ES instance in the following way.
let esClient = new elasticsearch.Client({
  host: '127.0.0.1:9200',
  log: 'trace'
});
Now, I've installed ES on a remote Amazon EC2 Linux server by tunneling through SSH using a key file.
I've done the basic ES installation and setup on that server. Tested it as well, and it runs properly.
I've now deployed my Node project on Server X (an EC2 Ubuntu server),
and Elasticsearch is on Server Y (an EC2 Amazon Linux server).
Apart from specifying the IP in the host parameter, what else do I need to do to connect to ES running on Server Y from Server X?
You have to make sure you have the port (9200) open in Amazon's Security Group settings.
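On Server X that usually comes down to something like this (a sketch; the address is a placeholder for Server Y's private IP, and it assumes Elasticsearch on Server Y is bound to an interface reachable from Server X, e.g. via network.host in elasticsearch.yml, rather than only to 127.0.0.1):
let esClient = new elasticsearch.Client({
  host: 'http://<server-y-private-ip>:9200', // placeholder for Server Y's reachable address
  log: 'trace'
});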

FileBeat not load balancing to multiple logstash (indexer) servers

I tried load balancing with 2 different Logstash indexer servers, but when I add, say, 1000 lines to my log, Filebeat sends the logs exclusively to only one server (I enabled stdout and can visually check the output to see which Logstash server is receiving the log events).
My Filebeat config:
filebeat:
  prospectors:
    -
      paths:
        - "D:/ApacheLogs/Test/access.*.log"
      input_type: log
      document_type: my_test_log
      scan_frequency: 1s
  registry_file: "C:/ProgramData/filebeat/registry"

output:
  logstash:
    hosts: ["10.231.2.223:5044","10.231.4.143:5044"]
    loadbalance: true

shipper:

logging:
  files:
Will there be support added for disabling the persistent TCP connection in Filebeat? I currently cannot use AWS ELB because, due to the sticky connection, it always sends to one Logstash server until the connection gets reset. Is this not the right architecture for this? Should I be sending to a Redis queue instead? I have no idea how, nor could I find any documentation on sending from Filebeat to a Redis queue.
Something like the following did not work, and I can't even find a way to debug it because Filebeat leaves no logs:
filebeat:
  prospectors:
    -
      paths:
        - "D:/ApacheLogs/Test/access.*.log"
      input_type: log
      document_type: my_test_log
      scan_frequency: 1s
  registry_file: "C:/ProgramData/filebeat/registry"

output:
  redis:
    # Set the host and port where to find Redis.
    host: "logstash-redis.abcde.0001.usw2.cache.amazonaws.com"
    port: 6379

shipper:

logging:
  level: warning
  # enable file rotation with default configuration
  to_files: true
  files:
    path: C:\temp\filebeat.log
Version:
On the Windows server: Filebeat (Windows - filebeat version 1.2.2 (386))
On the Logstash indexer server: logstash 2.3.2
Operating System:
Windows server: Microsoft Windows NT 6.0.6002 Service Pack 2
Logstash indexer server: RHEL Linux 4.1.13-19.30.amzn1.x86_64
Filebeat should really solve this, but since they advertise it as being as lightweight as possible, don't hold your breath.
I don't know how easy it is to get HAProxy running on Windows, but it should solve your problem if you can get it installed:
https://serverfault.com/questions/95427/windows-replacement-for-haproxy
Use layer-4 round-robin load balancing. You'll probably want to install HAProxy on every machine running Filebeat: one HAProxy frontend listens on localhost:5044 and maps to multiple Logstash backends, as in the sketch below.
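A minimal, untested sketch of that HAProxy setup, reusing the Logstash addresses from your Filebeat config:
frontend filebeat_in
    bind 127.0.0.1:5044
    mode tcp
    default_backend logstash_out

backend logstash_out
    mode tcp
    balance roundrobin
    server logstash1 10.231.2.223:5044 check
    server logstash2 10.231.4.143:5044 check
Filebeat's logstash output would then point only at 127.0.0.1:5044.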
You can send your Filebeat output to Redis with the config below:
output:
  redis:
    host: "host"
    port: <port>
    save_topology: true
    index: "index-name"
    db: 0
    db_topology: 1
    timeout: 5
    reconnect_interval: 1
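On the Logstash side you would then consume from that queue with a redis input; a minimal sketch, assuming the key matches the index value used above:
input {
  redis {
    host => "host"
    port => 6379
    data_type => "list"
    key => "index-name"
  }
}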

Strange behaviour of Mean.io on Azure VM

I created an Azure virtual machine with Ubuntu 14.04 LTS OS.
I installed a mean.io application (version 0.3.3) on this virtual machine, with nginx proxying requests from the app's HTTP port 3000 over port 80.
I opened one endpoint in the Azure portal for the TCP protocol, with private port 3000 and public port 80.
I installed the latest version of Node on the Azure VM.
The database (MongoDB) is hosted on compose.io.
With pm2 (https://www.npmjs.org/package/pm2) I created a daemon that runs the application.
Everything apparently works fine: the CPU has no load and memory usage is low (only 100 MB).
But after a while, Node.js stops processing requests.
I tried a curl against localhost:3000 but got no response.
The problem occurs only on the Azure VM: I tried the same application, with the same configuration, on my dev machine (Ubuntu 14.04 desktop) and on DigitalOcean (another Ubuntu 14.04 server), and everything works fine.
Can you help me find the problem?
I tried to dockerize the whole infrastructure on the same machine (a CoreOS VM on Azure):
1 container with the mean app,
1 container with MongoDB,
and the problem still persisted!
Finally, I found the solution: keep the connection to MongoDB alive.
I modified the server.js file of the mean app like this:
var options = {
  server: {
    socketOptions: { keepAlive: 1 }
  }
};
var db = mongoose.connect(config.db, options);
This way the connection stays alive and the problem was solved.
