JHipster Registry - why is it mandatory to run in the cloud for production?

I am working on an app that needs to be self-hosted on a Windows 10 PC where all the clients are inside a company network. I am using docker-compose for the various microservices and I am considering JHipster for an API Gateway and Registry service.
As I am reading the documentation, there is a line in the JHipster docs (https://www.jhipster.tech/jhipster-registry/) that says "You can run a JHipster Registry instance in the cloud. This is mandatory in production, but this can also be useful in development". I am not a cloud expert, so I am not sure what is different about those environments that would cause the Registry app issues when running on a local PC. Or perhaps there is a way to give the local Windows PC a 'cloud' environment.
Thanks for any insight you have for me.

I think that this sentence does not make sense.
Of course, you must run a registry if you generated apps with this option, but that does not mean you must run it in the cloud. The same doc shows you many alternative ways to run it.
Running all your microservices on a single PC is a bit strange: it defeats the purpose of a microservice architecture, because you end up with a single point of failure that can't scale. Basically, you are paying for extra complexity without getting the benefits, where a monolith would be much simpler and more productive.
How many developers are working on your app?

What is the production deployment / runtime architecture of ResolveJS backend systems?

Does reSolveJS generally run as a single NodeJS application on the server for production deployment?
Of course, event store and read models may be separate applications (e.g. databases) but are the CQRS read-side and write-side handled in the same NodeJS application?
If so, can / could these be split to enable them to scale separately, given the premise of CQRS is that the read-side is usually much more active than the write-side?
The reSolve Cloud Platform may alleviate these concerns, given the use of Lambdas that can scale naturally. Perhaps this is the recommended production deployment option?
That is, develop and test as monolith (single NodeJS application) and deploy in production to reSolve Cloud Platform to allow scaling?
Thanks again for developing and sharing an innovative platform.
Cheers,
Ashley.
A reSolve app can be scaled like any other NodeJS app, using containers or any other scaling mechanism.
So several instances can work with the same event store, and it is possible to configure several instances to work with the same read database, or for every instance to have its own read database.
reSolve config logic is specified in the app's run.js code, so you can extend it to have different configurations for different instance types.
Or you can have the same code in all instances and just route commands and queries to the different instance pools.
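As an illustration of that routing idea, here is a minimal sketch of a path-based router in front of two instance pools, using only Node's built-in http module. The path prefix (/api/commands) and the pool host names are assumptions for illustration, not part of reSolve's API; adjust them to your app's actual routes.

    import http from "http";

    // Hypothetical pool addresses - in a real deployment these would be
    // load-balancer or service DNS names rather than hard-coded hosts.
    const WRITE_POOL = { host: "write-pool", port: 3000 };
    const READ_POOL = { host: "read-pool", port: 3000 };

    const router = http.createServer((req, res) => {
      // Assumed convention: command requests live under /api/commands;
      // everything else (queries, static assets) goes to the read-side pool.
      const target = req.url?.startsWith("/api/commands") ? WRITE_POOL : READ_POOL;

      const upstream = http.request(
        {
          host: target.host,
          port: target.port,
          path: req.url,
          method: req.method,
          headers: req.headers,
        },
        (upstreamRes) => {
          res.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
          upstreamRes.pipe(res);
        }
      );

      upstream.on("error", () => {
        res.writeHead(502);
        res.end("upstream unavailable");
      });

      req.pipe(upstream);
    });

    router.listen(8080);

In practice you would more likely do this with Nginx or a cloud load balancer, but the sketch shows the shape of the split: the write-side pool only ever sees commands, while the read-side pool serves queries.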
Of course, reSolve Cloud frees you from these worries; in that case you use local reSolve as a dev and test environment, and deploy there.
Please note that reSolve Cloud is not yet publicly released. Also, local reSolve does not have all the required database adapters at the moment, so those have yet to be written.

Docker For Development Only

I am a head IT supervisor and have very little development background, so I apologize for this naive question.
Currently, we are using Weblogic, running in Linux VMs, created by Oracle VM (OVM) to host our application for production.
The development environment also uses the same configuration.
Our developers are suggesting we use docker in the development environment and utilize DevOps to increase the agility of development.
This sounds like a good idea to me, but I still want our production to run on the same configuration running today (Weblogic in Linux VMs over Oracle VM Hypervisor); I do not want to use docker for production.
I have been searching to find out if that is possible with no luck.
I would really appreciate it if you can help.
I have three questions:
Is that possible?
Is it normal practice to run Docker for development only while using a traditional non-Docker setup for production?
If it is possible, what are the best ways to achieve that?
Thank You
Docker is Linux distro-agnostic, and Java development is JEE container-agnostic (if you follow the official Java specs defined in the JSRs).
These are two reasons why you should see the same behaviour between your development environment and your production environment. Of course, a pre-production environment is welcome to make sure this is true, and do not skip looking at memory and performance issues before going that route. Moreover, depending on the reason you are using WebLogic, ask yourself which JVM and JEE container you would run in your Docker containers.
Is that possible?
Yes, we do that in my organization for some applications, using Tomcat (instead of WebSphere for other applications).
Is it normal practice to run Docker for development only while using a traditional non-Docker setup for production?
There are many practices, depending on the organization's goals, strategy and level of agility. Using Docker for development but not in production is the most common use case for Docker containers nowadays, but the next level is to use a Docker engine in the production environment as well. See the next section:
If it is possible, what are the best practices to achieve that?
The difficulty is that in a production environment, you need a system for automating deployment, scaling, and management of containerized applications.
Developers do not need that. So it is really easy for them to migrate to Docker (and it lets them do things easier and faster than without Docker).
In production, you should really consider using Kubernetes or OpenShift instead of running a simple Docker engine like your developers do. But it is much more complicated than simply installing Docker on a single Windows or Linux host.

Deploy node.js in production

What are the best practices for deploying a Node.js application in production?
I would like to know how Node.js APIs are being deployed to production today; right now my application is in Docker and running locally.
I wonder if I should use Nginx inside the container and deploy my server behind it, or just push the Node image that is already running today.
*I need load balancing.
There are a few main types of deployment that are popular today.
Using platform as a service like Heroku
Using a VPS like AWS, Digital Ocean etc.
Using a dedicated server
This list is in order of growing difficulty and control. So it's easiest with a PaaS, but you get more control with a dedicated server - though it gets significantly more difficult, especially when you need to scale out and build clusters.
See this answer for more details on how to install Node on a VPS or a dedicated server:
how to run node js on dedicated server?
I can only add from experience on AWS, using a NAT gateway with a dedicated Node server and a MongoDB server behind the gateway. (Obviously this is a scalable system and project.)
With or without Docker, you need to control the production environment. This means clearly defining which NPM libraries you will need for production, how you handle environment variables and clusters for cores.
I would very strongly suggest using a tool like PM2 to handle clusters, server shutdowns and restarts, and logs (workers and slaves too, if you need them and write code for them).
This list can go on and on, but keep in mind this is only from an AWS perspective. Setting up a gateway correctly on AWS is also not an easy process. Be prepared for some gotchas along the way.
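To make the PM2 suggestion concrete, here is a minimal sketch of a PM2 ecosystem file; the app name and entry script are placeholders for illustration, not taken from any particular project.

    // ecosystem.config.js - minimal PM2 cluster setup
    module.exports = {
      apps: [
        {
          name: "api",                  // hypothetical app name
          script: "./server.js",        // hypothetical entry point
          exec_mode: "cluster",         // run workers via Node's cluster module
          instances: "max",             // one worker per CPU core
          max_memory_restart: "300M",   // restart a worker that exceeds 300 MB
          env: { NODE_ENV: "production" },
        },
      ],
    };

You would then start the app with pm2 start ecosystem.config.js, use pm2 reload api for zero-downtime restarts, and pm2 logs to tail the output.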

Car-lease-demo in local development environment

The Car-Lease-Demo seems to be a perfect demo for understanding Hyperledger Fabric. However, it seems to be configured to run in IBM Cloud; has anyone been successful in running it locally?
I presume that you are referring to this demo. I have not tried this, but it should be possible to run all of this on your laptop. First, follow the directions for running one or more peer instances (and a CA) here. Then, you should be able to run the demo server after a few tweaks.
Looking at the code, you'd have to set some environment variables (VCAP_APP_HOST and VCAP_APP_PORT) to run the node app locally, as these will not be provided unless running in a Cloud Foundry environment.
Further, you'll need to change Server_Side/configurations/configuration.js to provide appropriate values for config.api_* as those values are also specific to the IBM Blockchain service running in Bluemix.
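For the environment-variable part, a minimal sketch of the usual fallback pattern (Express is used here only for illustration; the demo's own server code differs, but the variable names are the ones mentioned above):

    import express from "express";

    const app = express();

    // Use the Cloud Foundry-provided host/port when present, otherwise fall
    // back to local defaults so the app can start outside Bluemix.
    const host = process.env.VCAP_APP_HOST || "localhost";
    const port = Number(process.env.VCAP_APP_PORT) || 3000;

    app.listen(port, host, () => {
      console.log(`listening on http://${host}:${port}`);
    });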

Best solution to host a (command line) Windows application?

I have a Windows application that does some calculations and is called from command line. On my Windows machine, I have a PHP script running under Apache that executes the application and shows the output.
Is there any hosting solution that I can use to do the same? I can't figure out if EC2 or Azure are the right solutions. Basically, I need a web server + ability to execute my application.
Suggestions? Thanks.
You can host your application on AppHarbor, the .NET Platform-as-a-Service. You can either port your web frontend to .NET or try to get your PHP stuff working with Phalanger. AppHarbor is working on Background Tasks, which might be a good match for your workload.
I would just run the PHP script you already have under IIS in a Windows Azure web role.
If it is a Windows application and you have the source code, I would go with an Azure Worker Role. The advantage of using a PaaS (such as Azure) instead of an IaaS (such as Amazon) is that you won't have to bother with keeping the server up to date.
The real investment in time will be rewriting your application to make it work as a Worker Role. The time needed for this depends on how your application works right now. If it uses a lot of disk access it might be difficult, and perhaps an Amazon server would be better. But if it only crunches numbers in memory, an Azure Worker Role is a very good candidate.
The real advantage of using an Amazon server is that you probably won't need to do any work at all, except maintaining the server.
As described in the question both Azure and EC2 will do the job very well. This is the kind of task both systems are designed for.
So the question becomes really: which is best? That depends on two things: what the application needs to do and your own experience and preference.
As it's a Windows application there should probably be a leaning towards Azure. While EC2 supports Windows, the tooling and support resources for Azure are probably deeper at this point.
If cost is a factor then a (somewhat outdated) resource is here: http://blog.mccrory.me/2010/10/30/public-cloud-hourly-cost-comparison/ -- the conclusion is that, by and large, Azure and Amazon are roughly similar for compute charges.
Steve Marx has a blog post that describes how to run another web server (i.e. not IIS) on Azure.
This potentially has everything you need - you can deploy Apache and your executable and run it in exactly the same way.
Alternatively, you can deploy your executable alongside a bit of code in a worker role that would run the application periodically, all depending on your exact requirements.
