I have a requirement. Is there a way to run Node.js apps inside Go? I need to wrap the Node.js app inside a Go application so that the end result is a single Go binary that starts the Node.js server and then lets me call the Node.js REST endpoints. I need to encapsulate the entire Node.js application in the Go binary, with node_modules and, if necessary, the Node.js runtime.
Well, you could make a Go program that includes, e.g., a zipped Node application that it extracts and starts, but it will be very hard to do well: you will have huge binaries, delays while extracting files, potential portability problems, etc. Usually, when you want to call REST endpoints, you host your Node app on some server and let the client app (the Go app in your example) connect to it. The advantages are that it is much faster, the app is much smaller, you don't have portability issues with Node binaries and addons, and you can quickly update your backend any time you want.
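If you do want to try it, here is a minimal sketch of the extract-and-start approach, assuming Go 1.18+ (for the all: embed prefix), a node binary already on the target machine's PATH, and a hypothetical app/ directory holding the Node project with its node_modules:

    // main.go: embed the Node app and run it with the system's node binary.
    package main

    import (
        "embed"
        "io/fs"
        "log"
        "os"
        "os/exec"
        "path/filepath"
    )

    // The whole Node project, node_modules included; the "all:" prefix also
    // picks up dotfiles such as node_modules/.bin (app/ is a placeholder name).
    //go:embed all:app
    var nodeApp embed.FS

    func main() {
        // Extract the embedded tree to a temporary directory at startup.
        tmp, err := os.MkdirTemp("", "nodeapp")
        if err != nil {
            log.Fatal(err)
        }
        defer os.RemoveAll(tmp)

        err = fs.WalkDir(nodeApp, "app", func(path string, d fs.DirEntry, err error) error {
            if err != nil {
                return err
            }
            dst := filepath.Join(tmp, path)
            if d.IsDir() {
                return os.MkdirAll(dst, 0o755)
            }
            data, err := nodeApp.ReadFile(path)
            if err != nil {
                return err
            }
            return os.WriteFile(dst, data, 0o644)
        })
        if err != nil {
            log.Fatal(err)
        }

        // Start the Node server; index.js is an assumed entry point.
        cmd := exec.Command("node", "index.js")
        cmd.Dir = filepath.Join(tmp, "app")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }

This keeps the single-binary property, but the tradeoffs above still apply: binary size, extraction delay at startup, and needing node installed on the target unless you also bundle the runtime itself.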
It would be a very bad idea to embed a Node.js app in your Go binary, for various reasons: size, pushing security updates, etc.
However, if you feel strongly that they should run together, you could easily create a Docker container with the two (a Go server + a Node app) and launch them via Docker. You can set the entrypoint to a supervisord daemon so that both the Node server and the Go server are brought up when the container runs.
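A hypothetical supervisord.conf for that entrypoint might look like this (binary paths and the entry point are assumptions), with the container's entrypoint set to something like supervisord -c /etc/supervisord.conf:

    ; run supervisord in the foreground as PID 1 of the container
    [supervisord]
    nodaemon=true

    ; the Go server binary (path is a placeholder)
    [program:goserver]
    command=/app/go-server
    autorestart=true

    ; the Node server (entry point is a placeholder)
    [program:nodeserver]
    command=node /app/node/index.js
    autorestart=true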
If you are planning to deploy via Kubernetes, you can create two individual Docker containers (one for the Go server, one for the Node server) but always deploy them together as a pod.
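A minimal sketch of such a pod, with hypothetical image names and ports:

    apiVersion: v1
    kind: Pod
    metadata:
      name: go-plus-node
    spec:
      containers:
      - name: go-server
        image: registry.example.com/go-server:latest   # placeholder image
        ports:
        - containerPort: 8080
      - name: node-server
        image: registry.example.com/node-server:latest # placeholder image
        ports:
        - containerPort: 3000

Containers in the same pod share a network namespace, so the two servers can reach each other on localhost.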
There are multiple projects to embed binary files and/or file system data into your Go application.
Look at the 'Alternatives' section of the 'vfsgen' project:
https://github.com/shurcooL/vfsgen#alternatives
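(Note that since Go 1.16 the standard library's embed package also covers this use case directly, as in the sketch above, so a third-party generator is often no longer needed.)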
I would like to containerize a React app and a Node.js backend API that are currently running on a GCE VM. The majority of the information available online leads me to believe that I need to deploy them as separate containers to Cloud Run. However, as Martin points out in the video https://www.youtube.com/watch?v=WHH7eQLbG_s (Simplify your web apps on Google Cloud Run), one could use Cloud Run itself for both.
Is this possible, and if so, is there any info available online relating to this?
Cloud Run can expose only one port. Therefore, if you want to expose two different things (i.e. the frontend and the backend), you need a single entrypoint.
You can wrap your static pages in your Node.js backend server, or package an NGINX server in your container that routes requests for static content to the React app and backend requests to the Node.js backend.
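A hypothetical nginx.conf fragment for that single-entrypoint setup (paths and ports are assumptions):

    server {
        listen 8080;                        # the one port Cloud Run exposes

        # backend requests are proxied to the Node.js API
        location /api/ {
            proxy_pass http://127.0.0.1:3000;
        }

        # everything else is the built React app
        location / {
            root /usr/share/nginx/html;
            try_files $uri /index.html;     # client-side routing fallback
        }
    }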
About Martin's video: you can also read my comments there. It's better to deploy two separate services and to put a load balancer in front of both (similar to your NGINX, in fact, but decoupled and therefore more scalable and easier to update). I personally recommend App Engine Standard for the static content (because serving that content uses no compute, and it's the cheapest option!) and Cloud Run for the backend.
I'm the backend half of a small team that primarily builds apps on a Postgres/Node.js/Apollo GraphQL/React stack.
In my hobby projects I use Go and have gotten decent at constructing CLI apps with cobra/viper. I'm starting to play with the idea of moving all the critical business logic and data access into small, reusable CLI apps built in Go and distributed as binaries. I envision these CLIs producing JSON output so it's machine readable.
The Node.js GraphQL servers would then become shallow wrappers around the CLI binaries, called with something like const { stdout, stderr } = await exec('<<MY CLI --here >>'); (where exec is child_process.exec wrapped with util.promisify).
Separating business logic and data access into a CLI is attractive to me for reusability in non-server scenarios. Also, I just really like writing Go more than Node. This seems like a decent idea, but perhaps I'm overlooking some pitfall to this approach? Has anyone taken an approach like this?
Use a CLI utility only when individual users run it from a terminal.
Launching that many CLI processes from a Node.js server would be neither efficient nor scalable once the server gets many concurrent requests: every request spawns a new process, which is slow and consumes system resources.
I would use an API instead: the Node server pipes requests through to a Go API server. As for the CLI, to keep a standalone mode for terminal users, put all your logic into a separate module (library). That library can then be hosted both in the Go HTTP API server and in the cmd utility; the two processes are just hosts, while the actual logic lives in the module.
Or, even better, give the command-line utility two modes, so it runs either as an HTTP server or as a standalone tool.
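A minimal sketch of that two-mode layout, with hypothetical names (doWork stands in for the shared business-logic module):

    package main

    import (
        "encoding/json"
        "flag"
        "log"
        "net/http"
        "os"
    )

    // doWork stands in for the real business logic / data access layer,
    // which in practice would live in its own package.
    func doWork(input string) map[string]string {
        return map[string]string{"input": input, "status": "ok"}
    }

    func main() {
        serve := flag.Bool("serve", false, "run as an HTTP server instead of a one-shot CLI")
        input := flag.String("input", "", "input for CLI mode")
        flag.Parse()

        if *serve {
            // HTTP mode: the Node server calls this endpoint instead of
            // spawning a new process per request.
            http.HandleFunc("/work", func(w http.ResponseWriter, r *http.Request) {
                w.Header().Set("Content-Type", "application/json")
                json.NewEncoder(w).Encode(doWork(r.URL.Query().Get("input")))
            })
            log.Fatal(http.ListenAndServe(":8080", nil))
        }

        // CLI mode: print JSON to stdout so callers can parse it.
        json.NewEncoder(os.Stdout).Encode(doWork(*input))
    }

The same binary then serves both audiences: terminal users run it directly, while the Node server talks to the long-lived HTTP instance.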
I apologize if this is a general question, but apparently what I'm googling doesn't make sense, and I don't have anyone at my work I can ask.
Here's my situation:
Simple Vue.js app created with the Vue CLI 3.5, with an <input type="file">. Nothing hooked up yet when the form submits.
Docker container with Node and NGINX that pulls in and builds my Vue app - this is working. I basically copied the Dockerfile right from the vue.js site.
Now I need to store the file from the web app (and eventually FTP it to another server).
It was suggested I use Node, but I'm new to all the server-side stuff, so I don't know how to do that. My container has both Node and NGINX in it running the app, so can I stick with one container? And how do I build locally for development and then transfer that to the Docker image?
I'm just looking for pointers/articles/tutorials to help me think about this the right way.
Opinion ahead:
In my experience, NGINX is more of a router managing traffic.
It's in the spirit of Node to run a single process per task, so I think it's useful to split those.
Node is similar to the frontend, except you have access to common system APIs, wrapped by Node.
Basically, just as you consume REST or GraphQL APIs from the frontend, you can do the same with Node.
Best is to simply peek into the basics you're most likely going to need:
file system, stream, ...
https://nodejs.org/api/
I'm moving my Three.js app and its customized node.js environment, which I've been running on my local machine to Google Cloud. I want to test things out there, and hopefully soon get some early alpha testing going with other people.
I'm not sure which is the wiser way to go... to upload the repo I've been running locally as-is onto a VM which users would then access via the VM's external IP until I get a good name to call this app... or merge my local node.js environment with what's available via the Google App Engine and run it on GAE.
Issues I'm running into with the Linux VM approach... I'm not sure how to do the equivalent on the VM of what I've been doing locally. In Windows PowerShell I cd into the app directory and then enter node index.js. I'm assuming that with this method of deployment, the app will be running as soon as a browser hits the external IP. I should mention too that the app will allow users to save content as well as upload images, and eventually 3D models and JSON datasets as well.
Issues I'm running into with the App Engine approach: it looks like I only have access to a Linux-based command line and have to install all the Node.js modules manually. Meanwhile I have a bunch of files to upload, both the server-side Node files and all the frontend stuff. I don't see where to upload those files, and ultimately what I'd like is access to a visual, editable file-tree interface, as I have in Windows and FileZilla, so I can swap files in and out, etc. Alternatively, I suppose I could import a repo from GitHub? GitHub would be fine as long as I can visually see what's happening. Is there a visual interface for file structure available in GAE somewhere? Am I missing something?
I went through the GAE "Hello World" tutorial and that worked fine, but was left scratching my head afterward regarding how to actually see and edit the guts of the tutorial app, or even where to look for the files.
So first off, I want to determine what's the better approach, and then if possible, determine how to make the experience of getting my app up there and running a more visual, user-friendly experience.
Thanks.
There are many things to consider when choosing how to run an app, but my instinct for your use case is to simply use a VM on GCE. The most compelling reason for this is that it's the most similar thing to what you have now. You can SSH into the machine and run nohup node index.js & (or node index.js inside tmux/screen if you prefer) and it will start the app and not stop it when you log out of SSH. You can use SCP / SFTP with whatever GUI client you want to upload files. You don't have to learn anything new! If you wanted to, you could even use a Windows VM (although I think you have to pay a little more than for a comparable Linux VM due to the licensing fees).
That said, the other way is arguably more "correct" by modern development standards, but it will involve a lot more learning that will prevent you from getting your app running somewhere other than your laptop in the short term:
First, you'll need to learn about Docker and stateless containers, which is basically what your app runs inside of on App Engine.
Next, you'll need to learn how to hook up a separate stateful service (database, file server, ...) to your app's container so you can store your files, etc. in it, and then probably rewrite your app somewhat to use it to store stuff.
Next, you'll probably want some way to automatically deploy this from code instead of manually doing it, which gets you into build systems, package managers, artifact storage, continuous integration systems, and on and on and on.
This latter path is certainly what you should choose for a long-running production service if you work with a big team of developers -- but that doesn't mean that it's necessarily the right path for your project today. If you don't care about scaling up automatically, load balancing between nodes, redundant copies of your app running in different regions in case there's a natural disaster, etc., then go with the easy way for now, and you can learn new ways to improve the service when they're actually needed.
What are the best practices for deploying a Node.js application in production?
I would like to know how Node.js APIs are deployed to production these days; right now my application is in Docker and running locally.
I wonder if I should run NGINX inside the container in front of my server, or just ship the Node image that is already running today.
*I need load balancing.
There are a few main types of deployment that are popular today.
Using platform as a service like Heroku
Using a VPS from a provider like AWS, DigitalOcean, etc.
Using a dedicated server
This list is in order of increasing difficulty and control. So it's easiest with a PaaS, but you get more control with a dedicated server, though it gets significantly more difficult, especially when you need to scale out and build clusters.
See this answer for more details on how to install Node on a VPS or a dedicated server:
how to run node js on dedicated server?
I can only add from experience on AWS: a dedicated Node server with a MongoDB server, both behind a NAT gateway. (Obviously this is for a scalable system and project.)
With or without Docker, you need to control the production environment. This means clearly defining which NPM libraries you need in production, how you handle environment variables, and how you run clusters across cores.
I would suggest, very strongly, using a tool like PM2 to handle clustering, server shutdowns and restarts, and logs (workers too, if you need them and write code for them).
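For example, a few PM2 commands that cover the basics (the app.js entry point and the api process name are placeholders):

    # start the app in cluster mode, one worker per CPU core
    pm2 start app.js -i max --name api

    # tail logs and restart after a deploy
    pm2 logs api
    pm2 restart api

    # persist the process list and generate an init script
    # so PM2 brings the app back up after a server reboot
    pm2 save
    pm2 startup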
These considerations can go on and on, but keep in mind this is only from an AWS perspective. Setting up a gateway correctly on AWS is also not an easy process. Be prepared for some gotchas along the way.