Hosting Neo4j Browser Separately From Server - Azure

I want to have three different Neo4j instances running (each with a different database), and then three different Neo4j Browser visualizers (think project1.domainname.com / project2.domainname.com / project3.domainname.com), each one mapping to a specific database instance.
I've managed to get the three database instances running on a single Azure VM - so far so good.
But I'm not sure how to create and map those browsers. I'd like to run each of them as an Azure website, as that would help with some other problems I've foreseen.
1) Where is this browser HTML etc. so I can load it onto the Azure website?
2) Where in that code would I specify the IP address and port that the browser should be talking to?
I've also heard some people talking about an Azure for Neo4j project, but that is nearly five years old, and the Neo4j guys said to put the database instances on VMs. Were they right?

The Neo4j Browser is an Angular app that you can find in the Neo4j source code at https://github.com/neo4j/neo4j/tree/2.3/community/browser
You can set the host that the browser app talks to:
:config host:"http://host:port"
This is an undocumented feature and might be removed.
For 3.0 the intent is to decouple the browser from Neo4j anyway.
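Each hosted copy of the browser then only needs the :config command above pointed at its own instance. Before wiring up the browsers, you can confirm that each instance is reachable from outside the VM by hitting its 2.x REST service root; a minimal TypeScript sketch, where the hostnames and ports are assumptions for illustration:

```typescript
// check-instances.ts - verify that each Neo4j 2.x instance answers on its
// REST service root. Hostnames and ports are placeholders for illustration.
import * as http from "http";

const instances = [
  "http://project1.domainname.com:7474/db/data/",
  "http://project2.domainname.com:7475/db/data/",
  "http://project3.domainname.com:7476/db/data/",
];

for (const url of instances) {
  http
    .get(url, (res) => {
      console.log(`${url} -> HTTP ${res.statusCode}`);
      res.resume(); // discard the body; we only care about reachability
    })
    .on("error", (err) => console.error(`${url} -> ${err.message}`));
}
```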

Related

Node-RED - Accessing dashboard from remote server

I have a question regarding the Node-RED dashboard. I've got my dashboard all set up and working. Now I want to be able to access the dashboard from outside my local network. Right now I do this through a VNC server. What needs to happen next is that clients need to be able to access the dashboard, but of course they should not get access to my VNC server. I have done my fair amount of Google work. I (somewhat) understand that a service like ngrok (ngrok.com) or dataplicity (dataplicity.com) is what I am looking for. What would be the best way of setting this up safely?
Might be useful to clarify: I'm using a Raspberry Pi!
Thanks in advance!
If you want to give the outside world access to your dashboard, you could also consider hosting your Node-RED application in the cloud. See the links at the bottom-left of https://nodered.org/docs/getting-started/
Most of those services have a free tier - so it might cost you nothing.
If you cannot deploy your complete Node-RED application in the cloud (e.g. because it reads local sensors), you can split it into two Node-RED applications: one running locally and one (with the dashboard) running in the cloud. Of course, the two applications then need to exchange messages: the cloud services mentioned on that page also provide a secure way to send and receive events from the Node-RED cloud application.
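As one illustration of that split, the two Node-RED applications could exchange messages over MQTT (Node-RED ships mqtt in/out nodes). The broker URL, credentials, and topic name below are assumptions, not something from the docs page; this is roughly what the local side would do:

```typescript
// Local-side sketch: forward a sensor reading to the cloud instance over MQTT.
// Assumes an MQTT broker reachable by both instances; broker URL, credentials
// and topic name are placeholders.
import * as mqtt from "mqtt";

const client = mqtt.connect("mqtts://broker.example.com:8883", {
  username: "pi-gateway",
  password: process.env.MQTT_PASSWORD,
});

client.on("connect", () => {
  // In Node-RED itself this would be an "mqtt out" node wired to the sensor flow.
  client.publish("home/sensors/temperature", JSON.stringify({ celsius: 21.5 }));
});
```

On the cloud side, an "mqtt in" node subscribed to the same topic feeds the dashboard.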

MongoDB in Docker drops databases automatically and periodically

I have a web page project developed on the MEAN stack, deployed on a server with Docker.
I have 2 containers running: one has the API and the service, the other runs the mongodb service. Everything seems right, but I have one big problem: every so often, the system drops my databases.
The last issue was on 11/03/2018 and the previous one two weeks ago.
These are the mongodb container's last logs:
Is there any configuration that I can edit, or have you run into this problem?
If you need more information please do ask. Thanks.
Your database is being accessed from an external (public) IP: your MongoDB server is publicly reachable and you have no authentication enabled.
This official doc describes how you can enable authentication: https://docs.mongodb.com/manual/tutorial/enable-authentication/
Furthermore, you can restrict access to the server. If your app server and your mongodb server are in the same network, you don't need to expose your mongodb to the public at all.
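Once authentication is enabled and a user has been created (per the tutorial above), the API container connects with credentials instead of anonymously. A minimal sketch with the Node.js driver; the user name, database name, and password variable are placeholders:

```typescript
// Connect to MongoDB with authentication enabled.
// "appUser" and the database name are placeholders; the password comes from
// the environment so it never lands in the image or the compose file.
import { MongoClient } from "mongodb";

const uri =
  `mongodb://appUser:${process.env.MONGO_PASSWORD}@mongo:27017/mydb` +
  `?authSource=admin`;

async function main() {
  const client = await MongoClient.connect(uri);
  const docs = await client.db("mydb").collection("items").find().toArray();
  console.log(docs.length, "documents");
  await client.close();
}

main().catch(console.error);
```

Note that the connection string uses the container name `mongo` on a shared Docker network rather than a host-published port; not publishing port 27017 on the host keeps the database off the public interface entirely.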

mesos-dns, best practice for working with ports

I am quite new to Service Discovery and clustered systems. I started experimenting with Mesos and Marathon for the deployment of Docker containers, the Marathon REST API and UI seem to do a good job.
My problem is the actual discovery of deployed services. For testing purposes I deployed a Kafka cluster scaled to 3 instances via Marathon, and did the same with a MongoDB test cluster. Mesos-DNS gives me records like kafka.marathon.mesos and mongo.marathon.mesos, which hide the dynamically mapped host-to-container port. The problem is that my client explicitly needs to know the target port. Is there a general way to get that port information from the service automatically and dynamically? And what about apps exposing multiple ports?
My thoughts so far:
- Doing a REST call to get the status of the deployed app and somehow extracting the relevant data
- Doing a DNS SRV lookup and somehow extracting the relevant data
- Having some kind of "master", statically bound to a port, with dynamic "clients".
I searched a lot for this information, but in the end most of the tutorials ended with a manual lookup, which is not what I'm aiming for.
You're spot on. I recently gave a presentation at XebiCon on this topic and plan to publish a blog post with details about the setup, incl. a GitHub repo. For starters you could have a look at a Python implementation for the HTTP API consumption part.
UPDATE: the blog post is now available here.
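For the SRV option specifically (the second idea in the question): Mesos-DNS publishes SRV records named _<task>._<protocol>.<framework>.mesos that carry the dynamically mapped port, so a plain SRV lookup is enough, and a task exposing several ports yields several records. A sketch with Node's dns module, where the record name assumes the default Mesos-DNS naming scheme:

```typescript
// Resolve host:port pairs for a Marathon task via Mesos-DNS SRV records.
// "_kafka._tcp.marathon.mesos" assumes the default Mesos-DNS naming scheme;
// adjust the task and framework names to match your deployment.
import * as dns from "dns";

dns.resolveSrv("_kafka._tcp.marathon.mesos", (err, records) => {
  if (err) throw err;
  // Each record carries the target host and the dynamically mapped port.
  for (const { name, port } of records) {
    console.log(`broker at ${name}:${port}`);
  }
});
```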

Is it possible to include remote JMX values on a dashboard?

I'm looking at using hawtio for our app as a support console. We're not currently using Camel or the like, but I am impressed by the ability to connect to remote JVMs via Jolokia/JMX and by the logging features, and was wondering:
Our use case would be that we have a WebLogic server hosting our web app, and my thought would be to include hawtio as a WAR alongside it. In addition to monitoring the web app, we have a number of other JVMs running on different servers.
Is it possible to create a dashboard using values from the local JVM, as well as some of the remote JVMs?
Or must one always manually connect to the instance to see the dashboard for that particular JVM?
The current dashboard and JMX plugin do not support that.
Though there is work planned to support gathering statistics from remote JVMs etc. There is also work on Elasticsearch with a Kibana web UI.
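Until such support lands, one workaround is to poll each remote JVM's Jolokia endpoint yourself, since Jolokia exposes JMX attributes as plain HTTP/JSON. A sketch where the host, the port (8778 is the Jolokia JVM-agent default), and the chosen MBean are assumptions:

```typescript
// Read a JMX attribute from a remote JVM through its Jolokia agent.
// Host and port are placeholders; java.lang:type=Memory is a standard MBean.
import * as http from "http";

const url =
  "http://remote-host:8778/jolokia/read/java.lang:type=Memory/HeapMemoryUsage";

http.get(url, (res) => {
  let body = "";
  res.on("data", (chunk) => (body += chunk));
  res.on("end", () => {
    // Jolokia wraps the attribute in a JSON envelope under "value".
    const { value } = JSON.parse(body);
    console.log(`heap used: ${value.used} / ${value.max} bytes`);
  });
});
```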

Separate service on NodeJS server

I want to know how to structure my NodeJS server.
I want to separate the services offered on my website so that I can build a cluster in the future and have many servers (each dedicated to one particular task).
Example:
The 'main' server, which has one project: ExpressJS and the database
The 'communication' server, which has one project: chat + forum
Other projects: for complex computing (generating charts / stats / emailing)
Could you explain the different approaches for this type of complex website?
As Benjamin Gruenbaym said, the architecture question belongs somewhere else.
If you are wondering how to set up the applications on an individual server, there are a few things to keep in mind.
NodeJS runs in a single process, so it should ideally take up 1 core of the CPU. If you run a database on the same server, that is another core. So it may be fine to host all node applications on the same server, if it has a sufficient number of cores.
To run two different Node processes on the same machine, you simply start them one after another, but make sure that they listen on different ports.
To make sure that you can scale out your application later, it is important that you use domain names instead of IP addresses when your services identify each other. So the NodeJS app should know the database as mydatabase.mycompany.com, not as 192.168.1.10 or any other IP address. This will allow you to later move the database to another network address or to put it behind a load balancer.
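Putting those points together, here is a minimal sketch of two services on one machine, each on its own port, with the database located by name from the environment. The service names, ports, and hostname are placeholders for illustration:

```typescript
// Two independent Node services on one machine, listening on different ports.
// The database is addressed by hostname (from the environment), never by IP,
// so it can be moved or load-balanced later without code changes.
import * as http from "http";

const DB_HOST = process.env.DB_HOST ?? "mydatabase.mycompany.com";

// "main" service
http
  .createServer((_req, res) => res.end(`main app, using db at ${DB_HOST}\n`))
  .listen(3000, () => console.log("main service on :3000"));

// "communication" service (chat + forum) - run as a separate process in
// production; shown in one file here only to keep the sketch self-contained.
http
  .createServer((_req, res) => res.end("chat/forum service\n"))
  .listen(3001, () => console.log("communication service on :3001"));
```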
