I'm a front end developer about to join a project team working with Service Fabric to build a Web Front End to their microservice driven application.
One of the problems I've been having in my own research is that when working with local Service Fabric Clusters, I have to redeploy my Application to test if something does or doesn't work in my Web App. This slows down developer velocity massively, as the process will only take longer and longer as other Back End services are added. I largely work with the Web App communicating to an API Gateway Service (GraphQL.NET).
What I'd like to know is whether there's a way to run a local Web Application outside of a Service Fabric cluster, but still have it communicate with one. This would allow my front end developer tool chain to remain intact, and let me develop at a much faster pace with incremental building and live-reload tools.
Of course, if anyone's come up with any better solution to the problem, I'd love to hear about it! ;)
We have a JavaScript front end (so this may not be applicable to you), which means there's a ton of front end library files etc. This was a nightmare to copy across to the cluster for testing and took forever. There are a couple of ways I've been able to speed things up.
One is keeping all the front end files in a separate project and using a build step to copy them across into the ASP.NET Core project, so only the bundled/minified files are copied into the cluster when deploying.
Another option is to host these front-end files with a local node http-server which watches for changes etc., and keep a static environment file where you can set the IP/hostname of your local cluster that's running. I use Fiddler to redirect the hostname to the local IP; this way you can use the URLs that you will use in production, which is handy. You'll need to set up CORS though, which wasn't a problem for us.
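For illustration, a minimal sketch of what that static environment file could look like (the property names and hostname below are made up, not taken from the answer above):

    // env.js - served by the local http-server alongside the bundle
    window.ENV = {
      // Hostname that Fiddler redirects to the local cluster's IP;
      // swap this for the real gateway address in other environments.
      API_BASE_URL: 'http://api.myapp.local:8080/graphql'
    };

    // somewhere in the front-end code
    fetch(window.ENV.API_BASE_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query: '{ ping }' })
    });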
So yes, definitely possible.
I am working on an app that needs to be self-hosted on a Windows 10 PC where all the clients are inside a company network. I am using docker-compose for the various microservices and I am considering JHipster for an API Gateway and Registry service.
As I am reading the documentation, there is a line in the JHipster docs (https://www.jhipster.tech/jhipster-registry/) that says "You can run a JHipster Registry instance in the cloud. This is mandatory in production, but this can also be useful in development". I am not a cloud expert, so I'm not sure what is different in those environments that would cause the Registry app issues when running on a local PC. Or perhaps there is a way to give the local Windows PC a 'cloud' environment.
Thanks for any insight you have for me.
I think that this sentence does not make sense.
Of course, you must run a registry if you generated apps with this option, but that does not mean you must run it in the cloud. The same doc shows you many alternative ways to run it.
Running all your microservices on a single PC is a bit strange; it defeats the purpose of a microservice architecture because you've got a single point of failure that can't scale. Basically, you are paying for extra complexity without getting the benefits, where a monolith would be much simpler and more productive.
How many developers are working on your app?
I'm moving my Three.js app and its customized node.js environment, which I've been running on my local machine to Google Cloud. I want to test things out there, and hopefully soon get some early alpha testing going with other people.
I'm not sure which is the wiser way to go... to upload the repo I've been running locally as-is onto a VM which users would then access via the VM's external IP until I get a good name to call this app... or merge my local node.js environment with what's available via the Google App Engine and run it on GAE.
Issues I'm running into with the Linux VM approach: I'm not sure how to do the equivalent on the VM of what I've been doing locally. In Windows PowerShell I cd into the app directory and then enter node index.js. I'm assuming by this method of deployment that I can get the app running as soon as the browser hits the external IP. I should mention too that the app will allow users to save content and upload images, and eventually 3D models as well as JSON datasets.
Issues I'm running into with the App Engine approach: it looks like I only have access to a Linux-based command line, and have to install all the node.js modules manually. Meanwhile I have a bunch of files to upload, both the server-side node files and all the frontend stuff. I don't see where to upload those files, and ultimately what I'd like is a visual, editable file-tree interface, as I have in Windows and FileZilla, so I can swap files in and out, etc. Alternatively I suppose I could import a repo from GitHub? GitHub would be fine as long as I can visually see what's happening. Is there a visual interface for file structure available in GAE somewhere? Am I missing something?
I went through the GAE "Hello World" tutorial and that worked fine, but was left scratching my head afterward regarding how to actually see and edit the guts of the tutorial app, or even where to look for the files.
So first off, I want to determine what's the better approach, and then if possible, determine how to make the experience of getting my app up there and running a more visual, user-friendly experience.
Thanks.
There are many things to consider when choosing how to run an app, but my instinct for your use case is to simply use a VM on GCE. The most compelling reason for this is that it's the most similar thing to what you have now. You can SSH into the machine and run nohup node index.js & (or node index.js inside tmux/screen if you prefer) and it will start the app and not stop it when you log out of SSH. You can use SCP / SFTP with whatever GUI client you want to upload files. You don't have to learn anything new! If you wanted to, you could even use a Windows VM (although I think you have to pay a little more than for a comparable Linux VM due to the licensing fees).
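One thing worth double-checking in that setup (this is a sketch, not from the original answer): index.js needs to listen on 0.0.0.0 and on a port the VM's firewall rules allow, otherwise the browser won't reach the app at the external IP. Something along these lines:

    // index.js - minimal sketch; the handler and port are placeholders
    const http = require('http');

    const port = process.env.PORT || 8080;   // port 80 needs root or a firewall/port-forward rule

    http.createServer((req, res) => {
      res.end('hello from the VM\n');
    }).listen(port, '0.0.0.0', () => {
      console.log('listening on port ' + port);
    });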
That said, the other way is arguably more "correct" by modern development standards, but it will involve a lot more learning that will prevent you from getting your app running somewhere other than your laptop in the short term:
First, you'll need to learn about Docker and stateless containers, which is basically what your app runs inside of on AppEngine.
Next, you'll need to learn how to hook up a separate stateful service (database, file server, ...) to your app's container so your files and other data live outside the container, and then probably rewrite your app somewhat to use it (see the sketch after this answer).
Next, you'll probably want some way to automatically deploy this from code instead of manually doing it, which gets you into build systems, package managers, artifact storage, continuous integration systems, and on and on and on.
This latter path is certainly what you should choose for a long-running production service if you work with a big team of developers -- but that doesn't mean that it's necessarily the right path for your project today. If you don't care about scaling up automatically, load balancing between nodes, redundant copies of your app running in different regions in case there's a natural disaster, etc., then go with the easy way for now, and you can learn new ways to improve the service when they're actually needed.
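As a rough illustration of that stateful-service step, here is roughly what moving file uploads off the container's disk and into a Cloud Storage bucket might look like. The bucket name and the helper function are made up for illustration; this assumes the @google-cloud/storage client:

    // Sketch only: store uploads in a bucket instead of the container's filesystem.
    const { Storage } = require('@google-cloud/storage');

    const storage = new Storage();                      // uses the instance's service account
    const bucket = storage.bucket('my-app-uploads');    // hypothetical bucket name

    async function saveUpload(localPath, destName) {
      // Copy the uploaded file into the bucket; the container stays stateless.
      await bucket.upload(localPath, { destination: destName });
      return 'gs://my-app-uploads/' + destName;
    }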
I have been playing around with Kudu on an IIS development server and succeeded in making it deploy a hello world site etc.
But I was wondering if there are any resources on how to deploy Kudu in larger environments, so that you can quickly add new (virtual) server nodes to balance load.
Is there some approach to manage multiple Kudu deployments from a centralized location? I know it has a REST API and we can probably use that, but that requires some development, so I was just looking to see if there is an existing solution out there.
So far I have either been searching for the wrong thing on Google, or there isn't much out there on this.
Does anyone have any experience in running it in larger environments with many servers?
There are a bunch of managed, cloud-based hosting services for Node.js out there which seem relatively new, and some are still in beta.
Yet another path to host a Node.js app is setting up a stack on a VPS like Linode.
I'm wondering what's the basic difference here between these two kinds of deployment.
Which factors should one consider in choosing one over another?
Which one is more suitable for production, considering how young these services are?
To be clear, I'm not asking how to choose a provider but whether to host on managed Node.js-specific hosting or on an old-fashioned self-managed VPS.
Using one of the services is for the most part hands-off: you write your code and let them worry about managing the box, keeping your process up, creating the publishing channel, patching the OS, etc...
In contrast having your own VM gives you more control but with more up front and ongoing time investment.
Another consideration is that some hosters and cloud providers offer proprietary or distinct variations on technologies. They have reasons for them and they offer value, but it does mean that if you want to switch cloud providers, you might have to rewrite code, deployment scripts, etc... On the other hand, using VMs with a standard OS as the baseline is pretty generic. If you automate/script/document the configuration of your VMs and your code stays generic, then your options stay open. If you do take a dependency on a proprietary cloud technology, then it would be good to abstract it away behind an interface so it's a decoupled component and not sprinkled throughout your code.
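As a tiny sketch of that "abstract it behind an interface" idea (all names here are made up), the rest of the app only ever codes against the generic wrapper, so swapping providers touches one file:

    const fs = require('fs');

    // Factory returning a generic storage interface; only this function knows
    // which provider is actually in use.
    function createStorage(provider) {
      if (provider === 'some-cloud') {
        // provider-specific SDK calls would live here
        return { save: (key, data) => { throw new Error('not wired up in this sketch'); } };
      }
      // default: plain local filesystem
      return { save: (key, data) => fs.promises.writeFile(key, data) };
    }

    const storage = createStorage(process.env.STORAGE_PROVIDER || 'local');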
I've done both. I did the VM path recently mostly because I wanted the learning experience. I had to:
get the VM from the cloud provider
update and patch the OS
install and configure git as a publishing channel
write some scripts and use things like forever to keep it running
configure the reverse http-proxy to get it to run multiple sites
configure DNS with the cloud provider, open ports for git etc...
The list goes on. In the end, it cost me more up front time not coding but I learned about a lot more things. If those are important to you, then give it a shot. If you want to focus on writing your code, then a node hosting provider may be for you.
At the end of it, I also had more options. I wanted to add a second site: I added an entry to my reverse proxy, appended to my script to start up another app with forever, and voila, another site. More control. After that, I wanted to try out MongoDB. Simple: installed it.
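For anyone curious what that reverse-proxy setup looks like, here is a stripped-down sketch using node-http-proxy (the hostnames and ports are placeholders):

    const http = require('http');
    const httpProxy = require('http-proxy');

    const proxy = httpProxy.createProxyServer({});

    // Map incoming hostnames to the local ports the individual apps run on.
    const sites = {
      'site-one.example.com': 'http://127.0.0.1:3001',
      'site-two.example.com': 'http://127.0.0.1:3002'
    };

    http.createServer((req, res) => {
      const target = sites[req.headers.host];
      if (!target) {
        res.writeHead(404);
        return res.end('Unknown site');
      }
      proxy.web(req, res, { target: target });
    }).listen(80);   // port 80 typically needs elevated privileges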
Cost wise they're about the same but if you start hosting multiple sites with many other packages like databases etc..., then the VM can start getting cheaper.
Nodejitsu open-sourced their tools, which also makes it easier if you do it yourself.
If you do it yourself, here's some links that may help you:
Keeping the server up:
https://github.com/nodejitsu/forever/
http://blog.nodejitsu.com/keep-a-nodejs-server-up-with-forever
https://github.com/bryanmacfarlane/svchost
Upstart and Monit
generic auto start and restart through monitoring
http://howtonode.org/deploying-node-upstart-monit
Cluster Node
Runs one process per core (see the sketch after this list of links)
http://nodejs.org/docs/latest/api/cluster.html
Reverse Proxy
https://github.com/nodejitsu/node-http-proxy
https://github.com/nodejitsu/node-http-proxy/issues/232
http://blog.nodejitsu.com/http-proxy-middlewares
https://github.com/nodejitsu/node-http-proxy/issues/168#issuecomment-3289492
http://blog.argteam.com/coding/hardening-node-js-for-production-part-2-using-nginx-to-avoid-node-js-load/
Script the install
https://github.com/bryanmacfarlane/svcinstall
Exit Shell Script Based on Process Exit Code
Publish Site
Using git to publish to a website
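To give a feel for the cluster module linked above, here is a minimal sketch: one worker per CPU core, with the primary process replacing any worker that dies.

    const cluster = require('cluster');
    const http = require('http');
    const os = require('os');

    if (cluster.isMaster) {                       // called isPrimary on newer Node versions
      // Fork one worker per CPU core.
      for (let i = 0; i < os.cpus().length; i++) {
        cluster.fork();
      }
      // Replace any worker that dies.
      cluster.on('exit', function (worker) {
        console.log('worker ' + worker.process.pid + ' died, forking a new one');
        cluster.fork();
      });
    } else {
      http.createServer(function (req, res) {
        res.end('handled by worker ' + process.pid + '\n');
      }).listen(3000);
    }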
IMHO the biggest drawback of setting up your own stack is that you need to manage things like keeping Node.js running forever, starting it as a daemon, putting it behind a reverse proxy such as Nginx, and so on ... the great thing about Node.js - making firing up a web server a one-liner - is one of its biggest drawbacks when it comes to production-ready systems.
Plus, you've got all the issues of managing and updating and securing your server yourself.
This is so much easier with the hosters: Usually it's a git push and that's it. Scaling? Easy. Replication? Easy. ...? Easy. All within a few clicks.
The drawback with the hosters is that you cannot adjust the environment. Okay, you can probably choose which version of Node.js and / or npm to run, but that's it. You have no control over what 3rd party software is installed. You've got no control over the OS. You've got no control over where the servers are located. And so on ...
Of course, some hosters allow you access to some of these things, but there is rarely a hoster that supports all.
So, basically the question regarding Node.js is the same as with any other technology: it's a matter of weighing individualism, pricing, scalability, reliability, knowledge, ...
I personally chose to go with a hoster: the time and effort I save easily outweigh the disadvantages. Mind you: for me, personally.
This question needs to be answered individually.
Using Docker is another way to simplify the setup on a single Linux VPS. With Docker, both development and production setups are faster, more robust, and more secure.
The setup is faster and more robust because you deploy a ready-made Node.js image in one step, without running any installation scripts. And it is more secure because internal dependencies, such as the database, can be hidden from the outside world completely and made accessible only from the Docker internal network. On top of that, Docker significantly simplifies the upgrade process for the underlying OS and the Node.js runtime.
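As a small illustration of that last point: inside the Docker network the app reaches the database by its service name, and no database port needs to be published to the host. The service name "db" and the MongoDB driver below are just example choices:

    // Sketch: "db" resolves only on Docker's internal network, so the database
    // is unreachable from outside while the app connects to it by name.
    const { MongoClient } = require('mongodb');

    const client = new MongoClient('mongodb://db:27017');

    async function connect() {
      await client.connect();
      return client.db('myapp');                 // hypothetical database name
    }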
There are two ways to set up a Node.js Docker environment. The first one: follow the instructions published here on how to dockerize your application and deploy it with Docker, alongside databases when needed. The guide gives instructions for the development setup; the production setup will be similar.
Another way would be deploying the official Node.js Docker image and mounting the application code as a volume or folder into the Node.js image. That allows you to update the Node.js image going forward without re-building and re-deploying the application. This approach addresses the long-standing problem of security patching Docker images.
To help with the setup of Docker on a single machine, you can use Abberit Admin Panel. It will set up the Node.js environment for you with the click of a button, including databases if you need them. The tool is free, and you can turn it off after you have completed the initial setup. On the other hand, if you later decide to reduce the maintenance tax of running production yourself, you can migrate to a managed service without any changes to the app.
Disclaimer: I am one of the founders of Abberit.
This may be a basic question, but how do I go about efficiently deploying updates to currently running node.js code?
I'm coming from a PHP and client-side JavaScript background, where I can just overwrite files when they need updating and the changes are instantly available on the production site.
But in node.js I have to overwrite the existing files, then shut down and re-launch the application. Should I be worried about potential downtime from this? To me it seems like a riskier approach than the PHP (scripting) way. Unless I have a server cluster, where I can take down one server at a time for updates.
What kind of strategies are available for this?
In my case it's pretty much:
svn up; monit restart node
This Node server is acting as a comet server with long polling clients, so clients just reconnect like they normally would. The first thing the Node server does is grab the current state info from the database, so everything is running smoothly in no time.
I don't think this is really any riskier than doing an svn up to update a bunch of PHP files. If anything it's a little bit safer. When you're updating a big php project, there's a chance (if it's a high traffic site it's basically a 100% chance) that you could be getting requests over the web server while you're still updating. This means that you would be running updated and out-of-date code in the same request. At least with the Node approach, you can update everything and restart the Node server and know that all your code is up to date.
I wouldn't worry too much about downtime, you should be able to keep this so short that chances are no one will notice (kill the process and re-launch it in a bash script or something if you want to keep it to a fraction of a second).
Of more concern however is that many Node applications keep a lot of state information in memory which you're going to lose when you restart it. For example if you were running a chat application it might not remember who users were talking to or what channels/rooms they were in. Dealing with this is more of a design issue though, and very application specific.
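One common way to soften that design issue is to keep the volatile state in an external store and persist it on shutdown, so a restarted process can pick up where the old one left off. A rough sketch of the pattern (Redis and the key names here are just example choices):

    const { createClient } = require('redis');

    async function main() {
      const redis = createClient();                       // assumes a local Redis instance
      await redis.connect();

      // Restore whatever the previous process persisted before it exited.
      const rooms = JSON.parse((await redis.get('chat:rooms')) || '{}');

      process.on('SIGTERM', async () => {
        // Persist state, then exit so the supervisor can start the new version.
        await redis.set('chat:rooms', JSON.stringify(rooms));
        await redis.quit();
        process.exit(0);
      });
    }

    main();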
If your node.js application 'can't skip a beat', meaning it is under continuous bombardment of incoming requests, you simply can't afford the downtime of a quick restart (even with nodemon). I think in some cases you simply want a seamless restart of your node.js apps.
To do this I use naught: https://github.com/superjoe30/naught
Zero downtime deployment for your Node.js server using builtin cluster API
Some of the Node.js cloud hosting providers (like Nodejitsu or Windows Azure) keep both versions of your site on disk in separate directories and just redirect the traffic from one version to the new version once the new version has been fully deployed.
This is usually a built-in feature of Platform as a Service (PaaS) providers. However, if you are managing your servers you'll need to build something to allow for traffic to go from one version to the next once the new one has been fully deployed.
An advantage of this approach is that rollbacks are easy, since the previous version remains on disk intact.
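If you are rolling this yourself rather than relying on a PaaS, one small sketch of the idea is a front proxy whose target you flip once the new version is healthy; the ports and the promote helper below are made up for illustration:

    const http = require('http');
    const httpProxy = require('http-proxy');

    const proxy = httpProxy.createProxyServer({});
    let live = 'http://127.0.0.1:3001';            // currently deployed version

    http.createServer((req, res) => {
      proxy.web(req, res, { target: live });
    }).listen(80);

    // Call this once the new version (e.g. on port 3002) passes its health checks;
    // rolling back is just flipping the target back.
    function promote(newTarget) {
      live = newTarget;
    }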