I wish there were a trick for developing JHipster microservices and a gateway with Codenvy, despite the issues described in the closed issue below.
This closed issue comes from the JHipster project, which is why I am asking here; please read it carefully if you are interested in the topic of this question.
https://github.com/jhipster/generator-jhipster/issues/6922
Here is a screenshot from my Codenvy machine:
Please help me out here, as I keep thinking that with an advanced use of BrowserSync we could finally manage this.
I already have a 7 GB Codenvy workspace, which lets me run all the stack tiers simultaneously. I can test the apps, but not perfectly, as you can see if you follow the related ticket. I can get through the entity creation forms, but as soon as I call the CRUD REST services they fail because the listen ports are not reachable: because of redirects, my local browser tries to reach the ports opened inside the workspace, while Codenvy only exposes a single virtual URL for one test port.
What I'd like to do with BrowserSync is drive a headless browser running on the Codenvy machine itself, remotely through my Codenvy test URL. Then all the problems would be solved, no? Is this possible?
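To make the idea concrete, here is a minimal sketch of what I have in mind: BrowserSync running inside the workspace in proxy mode, so the whole app is funnelled through the one port Codenvy can expose. The port numbers are assumptions on my part.

```
// sketch.js - run inside the Codenvy workspace (hypothetical setup)
// BrowserSync proxies the JHipster gateway so that everything is
// reachable through a single port, which Codenvy can then expose.
const browserSync = require('browser-sync').create();

browserSync.init({
  proxy: 'http://localhost:8080', // assumed gateway port
  port: 3000,                     // the single port Codenvy would expose
  open: false,                    // no local browser on a headless machine
  ui: false                       // skip the BrowserSync admin UI
});
```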
I'm currently running two servers, both on my local machine. One is Spring, the other is Flask. The Spring server handles the major business logic of my application, whereas the Flask one handles light database-access operations. Additionally, I have a React (Node.js) dev server that interacts with both of these. I am considering merging the Flask server into my Spring server to solve this problem, but I am curious to know the correct way of connecting to two servers.
I am using the React proxy to reach the port of the Spring server, and I make my calls through that, whereas I connect to my Flask server at 'http://127.0.0.1:5000/' via fetch. This has been working great when accessing the site from the hosting PC, but when trying to access it remotely via port forwarding, the Flask server cannot be reached. That makes sense, because the fetch to 127.0.0.1 resolves to localhost on the visitor's machine, not on the host.
My question is: other than merging the two servers, how can I connect to both of them remotely like this? Is there something big I am missing? I am of course happy to provide any files or information necessary, and as this is my first time doing a project like this, I hope what I've explained is sufficient.
EDIT: Just as an aside, I think some of this is due to Flask's default behavior of not allowing outside connections. I will update this more if anything comes of my tests.
EDIT 2: I tested this further and found that access to the Flask server works great when the React proxy points at it, while the Spring server then fails, the opposite of the behavior I had before. So it isn't about Flask's default behavior; the issue is that the React proxy can only point at one server at a time.
I have got this working, and a good answer had already been posted on SO, just under different phrasing, which is why I didn't find it. I will link the post here:
Multi server solution
Specifically, I just wasn't familiar with the setupProxy.js file, which is the solution to this problem. A minimal sketch of the idea is below.
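For anyone landing here, this is a minimal sketch of the setupProxy.js approach using the http-proxy-middleware package, which Create React App picks up automatically; the route prefixes and ports are assumptions, so adjust them to your own Spring and Flask servers.

```
// src/setupProxy.js - a minimal sketch; paths and ports are assumptions
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function (app) {
  // Requests under /api go to the Spring server.
  app.use('/api', createProxyMiddleware({
    target: 'http://localhost:8080', // assumed Spring port
    changeOrigin: true,
  }));

  // Requests under /flask go to the Flask server.
  app.use('/flask', createProxyMiddleware({
    target: 'http://127.0.0.1:5000', // Flask's default port
    changeOrigin: true,
    pathRewrite: { '^/flask': '' }, // strip the prefix before forwarding
  }));
};
```

With this in place the front end only ever talks to its own origin and the dev server forwards to both back ends, so remote access through a single forwarded port works as well.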
I have two somewhat theoretical questions. I searched the Internet but didn't get a clear answer.
My first question:
I would like to develop an app on the MEAN stack (Mongo, Angular 8, NodeJS server), but I don't want a central server connected to the database somewhere, with all the clients connecting to it remotely. I want to deploy the whole app (Mongo database, server back end, Angular front end) locally on a standalone PC. Yes, the user would have to start both the database and the server services, and yes, he would have to use the app through the browser at the localhost address, but I don't want him to be able to see the code. Is this possible, or do you have any trick in mind to achieve it?
My second question:
Can I connect the Mongo database directly to the Angular 8 code without going through the NodeJS APIs?
I know that my questions are a bit generic, but I am not looking for long answers, just whether these things are possible and some tips on how to move on.
For the first question: your user will always be able to see the compiled code (through the developer console, for example) but not the source code of the Angular application. A sketch of such a local deployment is below.
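To make that concrete, here is a minimal sketch of serving the whole app from one local Node process, assuming the compiled Angular bundles live in a dist/ folder; the paths and port are assumptions.

```
// server.js - a minimal local-deployment sketch (paths/port are assumptions)
const express = require('express');
const path = require('path');

const app = express();

// Serve the compiled Angular bundles; the TypeScript sources never ship.
app.use(express.static(path.join(__dirname, 'dist')));

// REST routes talking to the local MongoDB instance would be mounted here.

// Fall back to index.html so Angular's client-side routing keeps working.
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'dist', 'index.html'));
});

// Bind to 127.0.0.1 so the app is only reachable from this PC.
app.listen(3000, '127.0.0.1', () => {
  console.log('App available at http://localhost:3000');
});
```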
I am new to Spring Boot (Cloud) and am going to work on a new project.
Our project architect has designed the new application like this:
One front-end Spring Boot application (which is also a microservice) with Angular 2.
One Eureka server, to which other microservices will connect.
A Zuul proxy server, which will connect to the front end and the microservices.
Now, the following are the things I am confused about, and I can't ask him because he is too senior to me:
Do I need a separate Zuul proxy server? I mean, what are the pros and cons of using the front-end application itself as the Zuul server?
How will MicroService-1 communicate with Node's MicroService-1? Some blogs suggest the Sidecar pattern. But again, why? I could directly invoke the REST API of NodeJS-1 from MicroService-1.
(I know this is very tough to guess, but still asking.) The NodeJS services (which are not legacy services) are supposed to call some third-party APIs or retrieve data from the DB.
Now, what I am not getting is why we need NodeJS code at all. Why can't we do the same in the microservices written in Java?
Can anyone who has worked with a similar scenario shed some light on my doubts?
I do not have the full context of the problem you are trying to solve, so the answers below are quite general, but they may still be useful:
Do I need a separate Zuul proxy server? I mean, what are the pros and cons of using the front-end application itself as the Zuul server?
Yes, you are going to need a separate API gateway service, which may be Zuul (or another gateway, e.g. tyk.io).
The main idea here is that you can have hundreds or even thousands of microservices (like Amazon, Netflix, etc.), and they can be scattered across different machines or data centres. It would be really silly to force your API consumers (in your case the Angular 2 front end) to memorise all the possible locations of each microservice. Better to have one API gateway that knows about all the services under it, so your clients can call the gateway and reach the underlying services through one single place. Having a gateway also decouples your clients from your services, so the two can evolve independently.
Another benefit is that you can have access control, logging, security, etc. in one single place. And, by the way, I think you are missing one more thing in your architecture: an Authorization Server. A common approach to securing microservices is OAuth 2.0.
How will MicroService-1 communicate with Node's MicroService-1? Some blogs suggest the Sidecar pattern. But again, why? I could directly invoke the REST API of NodeJS-1 from MicroService-1.
I think you could use Sidecar, but I have never used it. I suppose that the question 'why' is related to the Discovery Service (Eureka in your architecture).
You can't call microservice NodeJS-1 directly because there may be several instances of NodeJS-1, and which one would you call? Furthermore, you can't know whether a service is down or alive at any given point in time. That's why we use discovery services like Eureka: they handle all of this. When any given service starts, it must register itself with Eureka. So if you have started several instances of NodeJS-1, all of them will be registered in Eureka, and whenever MicroService-1 wants to call NodeJS-1 it asks Eureka for the locations of the live instances. The caller then chooses which one to call. (A sketch of the registration side is shown below.)
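Purely as an illustration of that registration step, here is a minimal sketch of a Node service registering itself with Eureka using the eureka-js-client npm package; all the names, hosts, and ports are assumptions.

```
// register.js - a sketch of NodeJS-1 registering with Eureka
// (eureka-js-client package; names, hosts, and ports are assumptions)
const { Eureka } = require('eureka-js-client');

const client = new Eureka({
  instance: {
    app: 'nodejs-1',                       // the name other services look up
    hostName: 'localhost',
    ipAddr: '127.0.0.1',
    port: { '$': 3000, '@enabled': true }, // where this instance listens
    vipAddress: 'nodejs-1',
    dataCenterInfo: {
      '@class': 'com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo',
      name: 'MyOwn',
    },
  },
  eureka: {
    host: 'localhost', // assumed Eureka server location
    port: 8761,
    servicePath: '/eureka/apps/',
  },
});

// Register with Eureka and start sending heartbeats.
client.start();
```

On the Java side, MicroService-1 would then ask its own Eureka client (for example, Spring Cloud's DiscoveryClient) for live instances of 'nodejs-1' instead of hard-coding a URL.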
(I know this is very tough to guess, but still asking.) The NodeJS services (which are not legacy services) are supposed to call some third-party APIs or retrieve data from the DB.
Now, what I am not getting is why we need NodeJS code at all. Why can't we do the same in the microservices written in Java?
I can only assume that NodeJS was chosen because it has outstanding performance for IO operations, including the HTTP requests that come into play when calling third-party services. I do not have any other rational explanation for this.
In general, microservices make it possible to write each service in a different language, which is indeed a strength, since each language solves some problems better than others. On the other hand, this decision should be made with caution, and it should answer the question: "Do we really need a new language in our stack to solve problem X?"
I am planning to create a web application using Node.js and the Meteor framework with MongoDB. This application will be critical to business operations, so ideally it should be able to handle network failures. Is this possible, or is my only option to create a stand-alone application? The application will probably run on either a PC or a tablet.
Are there any existing solutions for this?
One idea I have: is it possible to keep a local cache of the user's database on the machine? When the network is up, this cache might not be used but would be continually updated. When the network fails, the connection would be handed off to this local database so operation can continue as usual. When the network comes back up, this database would sync with our server and return to normal mode.
In the case of a PC, we might be able to run a local server manually to get the web page back up. I can't think of a solution for the tablet, though.
It sounds like you are looking for PouchDB. It works with CouchDB as a backend instead of Mongo, but I think the two are quite similar.
PouchDB is a local JavaScript-based DB on the client device. It syncs with the 'real' DB once the client is online again; a minimal sync sketch follows.
I am not affiliated with them, and I use Mongo daily as well; I have never actually tried CouchDB, but it might be worth having a look.
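For illustration, here is a minimal sketch of PouchDB's live two-way replication; the database name and remote URL are placeholders.

```
// offline-sync.js - a sketch of PouchDB two-way replication
// (the database name and remote URL are placeholders)
const PouchDB = require('pouchdb');

const localDb = new PouchDB('orders');                         // lives on the device
const remoteDb = new PouchDB('https://db.example.com/orders'); // central CouchDB

// Reads and writes always go to localDb, so they keep working offline.
// live + retry keeps replication running and reconnects after outages.
localDb.sync(remoteDb, { live: true, retry: true })
  .on('paused', () => console.log('caught up (or offline)'))
  .on('error', (err) => console.error('sync error', err));
```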
Meteor actually supports this out of the box. I guess I was searching with the wrong terms. Check out the link below for more information.
How can Meteor apps work offline?
I am completely new to Elasticsearch, but I like it very much. The only thing I can't figure out is how to secure Elasticsearch for production systems. I have read a lot about using nginx as a proxy in front of Elasticsearch, but I have never used nginx and have never worked with proxies.
Is this the typical way to secure Elasticsearch in production systems?
If so, are there any tutorials or good reads that could help me implement this? I really would like to use Elasticsearch in our production system instead of Solr and Tomcat.
There's an article about securing Elasticsearch which covers quite a few points to be aware of here: http://www.found.no/foundation/elasticsearch-security/ (Full disclosure: I wrote it and work for Found)
There are also some things you should know here: http://www.found.no/foundation/elasticsearch-in-production/
To summarize the summary:
At the moment, Elasticsearch does not consider security to be its job. Elasticsearch has no concept of a user. Essentially, anyone that can send arbitrary requests to your cluster is a “super user”.
Disable dynamic scripts. They are dangerous.
Understand that sometimes tricky configuration is required to limit access to indexes.
Consider the performance implications of multiple tenants: a weakness or a bad query in one can bring down an entire cluster!
Proxying ES traffic through nginx with, say, basic auth enabled is one way of handling this (but use HTTPS to protect the credentials). Beyond basic auth, your proxy rules might, for instance, restrict access to various endpoints to specific users or specific IP addresses. (A rough sketch of the proxying idea follows.)
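I mentioned nginx, but the pattern itself is generic; purely for illustration, here is the same reverse-proxy-with-basic-auth idea sketched as a small Node process using the http-proxy npm package. The credentials and ports are placeholders; a hardened nginx setup (plus HTTPS) would be the more usual choice.

```
// es-proxy.js - an illustration of the proxy-with-basic-auth pattern
// (http-proxy npm package; credentials and ports are placeholders)
const http = require('http');
const httpProxy = require('http-proxy');

const proxy = httpProxy.createProxyServer({ target: 'http://127.0.0.1:9200' });

// The exact header value a client sends for Basic auth with user:secret.
const EXPECTED = 'Basic ' + Buffer.from('user:secret').toString('base64');

http.createServer((req, res) => {
  if (req.headers.authorization !== EXPECTED) {
    res.writeHead(401, { 'WWW-Authenticate': 'Basic realm="es"' });
    return res.end();
  }
  // Additional rules could go here, e.g. rejecting admin endpoints by path.
  proxy.web(req, res);
}).listen(8080); // terminate TLS in front of this in any real deployment
```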
What we do in one of our environments is use Docker. Docker containers are only accessible to the outside world and/or to other Docker containers if you explicitly define them as such; by default, they are isolated.
In our docker-compose setup, we have the following containers defined:
nginx - Handles all web requests, serves up static files and proxies API queries to a container named 'middleware'
middleware - A Java server that handles and authenticates all API requests. It interacts with the following three containers, each of which is exposed only to middleware:
redis
mongodb
elasticsearch
The net effect of this arrangement is that Elasticsearch can only be reached through the middleware piece, which ensures that authentication, roles, and permissions are correctly handled before any queries are passed through.
A full Docker environment is more work to set up than a simple nginx proxy, but the end result is more flexible, scalable, and secure.
Here's a very important addition to the information presented in the answers above. I would have added it as a comment, but I don't yet have the reputation to do so.
While this thread is old(ish), people like me still end up here via Google.
Main point: this link is referenced in Alex Brasetvik's post:
https://www.elastic.co/blog/found-elasticsearch-security
He has since updated it with this passage:
Update April 7, 2015: Elastic has released Shield, a product which provides comprehensive security for Elasticsearch, including encrypted communications, role-based access control, AD/LDAP integration and Auditing. The following article was authored before Shield was available.
You can find a wealth of information about Shield here.
A key point to note is that Shield requires Elasticsearch version 1.5 or newer.
Yeah, I also had the same question, but I found a plugin provided by the Elasticsearch team: Shield. It is available only in a limited version for free; for production you need to buy a license. Please find the attached link for your perusal.
https://www.elastic.co/guide/en/shield/current/index.html