Repeatable installs of a Go application?

I am from the NodeJS/JavaScript world, where I have npm and the dependencies written down in package.json. When I deploy, I know I just need to run npm install and all the dependencies the app consumes will be installed.
How is this supposed to be done for a Go project? Suppose I have the source code of an app which I deploy remotely by, say, running git pull. Now, how do I make sure the dependencies are present? From what I can see, I need to manually install a package manager and then install the dependencies using it?
What's a standard way of deploying a Go app on a server?

First of all, you're indeed thinking like a JS developer. Go is compiled, so the proper way to deploy a Go app is not to use the source code at all - you build it on your build server and deploy a binary. At the server level you simply don't care anymore; the only place where you need the dependencies is the build system.
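For example, the build step can be as small as this (a sketch, assuming a Linux/amd64 server and SSH access; names are placeholders):

GOOS=linux GOARCH=amd64 go build -o myapp .
scp myapp deploy@yourserver:/opt/myapp/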
Now, the standard way to do this in Go is to vendor dependencies with your source, that is, make sure they are included in the git repo. Another approach is to express them in a manifest file and fetch them with an external tool. Both are more reliable than the naive approach of simply running go get at build time, which fetches whatever the current version of each dependency happens to be (and requires no manifest file).
There are many tools for vendoring management; to name two: Godep and gb.
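For illustration, the manifest Godep keeps (Godeps/Godeps.json) looks roughly like this - the import paths and revision here are placeholders:

{
  "ImportPath": "github.com/you/yourapp",
  "GoVersion": "go1.6",
  "Deps": [
    { "ImportPath": "github.com/some/dependency", "Rev": "<commit sha>" }
  ]
}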

Related

How can I deploy my application within a cloned repository on Google App Engine?

I'm using a node package to run a web server (among other benefits) for my project. The catch is, my project is only loaded on the server if it's within a directory of the node package. In other words, my directory structure looks like this:
<npm_pkg>/
  <npm_pkg_src>/
  clients/
    <my_project_name>/
      <my_project_src>
I would like to be able to use standard deployment processes for my project (e.g. gcloud app deploy, Travis continuous deployment, etc.), but I need to run my project from within a subdirectory of the larger package. Is there an easy way to force a git clone <pkg> during a build step and deploy my project in the target subdirectory?
I'm pretty new to CI/CD, but I tried to search around for similar examples and couldn't find any. Note: the parent project is not owned by me, so I can't just use submodules without forking it (and I have no intention of altering it in any way). I also strictly want to be able to trigger deploys from my actual project's repository, if possible, whereas submodules would involve maintaining two repositories and committing features twice (from what I understand).
Any help is much appreciated.
Edit: I forgot to mention that as part of this configuration I also need to run my server script from the root of the parent package. IOW, my package.json's start script will look like "start": "cd ../.. && npm start". Just in case it's relevant.
This might be what you’re looking for: CI/CD with App Engine
Clone from the repo and deploy from the subdirectory where it is located; Cloud Source Repositories can automate the whole process for you.
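A minimal sketch of such a Cloud Build config - the repository URLs and directory names are assumptions:

# cloudbuild.yaml (sketch)
steps:
# clone the parent npm package
- name: 'gcr.io/cloud-builders/git'
  args: ['clone', 'https://github.com/owner/parent-pkg.git']
# clone your project into the expected subdirectory
- name: 'gcr.io/cloud-builders/git'
  args: ['clone', 'https://github.com/you/my-project.git', 'parent-pkg/clients/my-project']
# deploy from within that subdirectory
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy']
  dir: 'parent-pkg/clients/my-project'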
I would also suggest you keep the services separate; this will make things clearer for you and for others who will or might work on the project with you.

Packaging Software Ideas

We have a migration tool to migrate customers' data between different applications. I am looking for ideas to make this tool very easy for the customer to use. Right now they invoke shell scripts with some options and get the data dump, but I want to make this even easier for the end customer. The tool is written in Node.js.
pkg could be what you're looking for.
From the package description:
This command line interface enables you to package your Node.js project into an executable that can be run even on devices without Node.js installed.
Use Cases
- Make a commercial version of your application without sources
- Make a demo/evaluation/trial version of your app without sources
- Instantly make executables for other platforms (cross-compilation)
- Make some kind of self-extracting archive or installer
- No need to install Node.js and npm to run the packaged application
- No need to download hundreds of files via npm install to deploy your application. Deploy it as a single file
- Put your assets inside the executable to make it even more portable
- Test your app against new Node.js version without installing it
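Usage is just a couple of commands - a sketch, assuming your entry point is index.js (adjust the targets to your Node version):

npm install -g pkg
pkg index.js --targets node14-linux-x64,node14-win-x64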

Including local dependencies in deployment to lambda

I have a repo which consists of several "micro-services" which I upload to AWS's Lambda. In addition I have a few shared libraries that I'd like to package up when sending to AWS.
Therefore my directory structure looks like:
/micro-service-1
  /dist
  package.json
  index.js
/micro-service-2
  /dist
  package.json
  index.js
/shared-component-1
  /dist
  package.json
  component-name-1.js
/shared-component-2
  /dist
  package.json
  component-name-2.js
The basic deployment leverages the handy node-lambda npm module but when I reference a local shared component with a statement like:
var sharedService = require('../../shared-component-1/dist/index');
This works just fine with the node-lambda run command, but node-lambda deploy drops this local dependency. That probably makes sense, since the dependency reaches below the "root" directory. So I thought maybe I'd leverage gulp to make this work, but I'm pretty darn new to it and may be doing something dumb. My strategy was to:
- Have gulp deploy depend on a local-deps task
- The local-deps task would:
  - npm build --production to a directory
  - then pipe this directory over to the micro-service under the /local directory
  - clean up the install in the shared component
I would then refer to all shared components like so:
var sharedService = require('local/component-name-1');
Hopefully this makes clear what I'm trying to achieve. Does this strategy make sense? Is there a simpler way I should be considering? Does anyone have any examples of anything like this in "gulp speak"?
I have an answer to this! :D
TL;DR - Use npm link to create a symbolic link between your common component and the dependent component.
So, I have a project with only two modules:
- main-module
- referenced-module
Each of these is a node module. If I cd into referenced-module and run npm link, then cd into main-module and run npm link referenced-module, npm will 'install' my referenced-module into my main-module, storing it in its node_modules folder. NOTE: when running the second npm link, the name to use is the one in the referenced module's package.json, not the name of the directory (see the npm link documentation, previously linked).
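Spelled out as commands, that is roughly:

cd referenced-module
npm link                     # registers a global symlink for this package
cd ../main-module
npm link referenced-module   # 'referenced-module' is the name from its package.json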
Now, in my main-module all I need to do is var test = require('referenced-module') and I can use it to my heart's content. Be sure to module.exports your code from your referenced-module!
Now, when you zip up main-module to deploy it to AWS Lambda, the links are resolved and the real modules are put in their place! I've tested this and it works, though not with node-lambda yet; I don't see why that should be a problem (unless it does something different with the package restores).
What's nice about this approach as well is that any changes I make to my referenced-module are automatically picked up by my main-module during development, so I don't have to run any gulp tasks or anything to sync them.
I find this is quite a nice, clean solution and I was able to get it working within a few minutes. If anything I've described above doesn't make any sense (as I've only just discovered this solution myself!), please leave a comment and I'll try and clarify for you.
UPDATE FEB 2016
Depending on your requirements and how large your application is, there may be an interesting alternative that solves this problem even more elegantly than using symlinking. Take a look at Serverless. It's quite a neat way of structuring serverless applications and includes useful features like being able to assign API Gateway endpoints that trigger the Lambda function you are writing. It even allows you to script CloudFormation configurations, so if you have other resources to deploy then you could do so here. Need a 'beta' or 'prod' stage? This can do it for you too. I've been using it for just over a week and while there is a bit of setup to do and things aren't always as clear as you'd like, it is quite flexible and the support community is good!
While using Serverless we faced a similar issue, needing to share code between AWS Lambdas. Initially we duplicated the code across each microservice, but as always, that became difficult to manage.
Since development was done in a Windows environment, using symbolic links was not an option for us.
We then came up with a solution: keep the local dependencies in a shared folder and use a custom-written gulp task to copy them into each of the microservices, so that each dependency can be required just like an npm package.
One decision we made was not to keep two places defining a microservice's dependencies, so we use the same package.json to define the local shared dependencies as well; the gulp task parses this file and copies the shared dependencies accordingly, installing the npm dependencies with the same single command.
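A stripped-down sketch of what such a gulp copy task can look like - the paths and task name here are made up for illustration:

var gulp = require('gulp');

// copy each shared component's build output into the microservice,
// so it can be require()'d like a normal local module
gulp.task('install-shared', function () {
  return gulp.src('../shared/*/dist/**/*', { base: '../shared' })
    .pipe(gulp.dest('local'));
});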
Later we open-sourced the code as the npm modules serverless-dependency-install and gulp-dependency-install.

OpenShift Online, NodeJS, Jenkins, and package dependencies - can someone explain?

I'm running a NodeJS app on Openshift using Jenkins for building deployments (and I'm pretty new to both Node and cloud-based servers). My app depends on a package that has a binary component, so I can't just check it into git - it fails when it's executed on the server. I'm wondering what's the best way to deploy these sorts of dependencies. I see that there is an $OPENSHIFT_DEPENDENCIES_DIR (as well as $OPENSHIFT_BUILD_DEPENDENCIES_DIR), but I can't find any information about how (or if) these can be utilized for node modules. It would be great if I could keep all my dependencies on the server and out of my source tree.
Thanks!
Update: I forgot to mention that I need to apply a patch to the package in question, which is why I can't just rely on it being auto-installed via package.json. Plus, it seems awfully redundant/slow to rebuild all your dependencies on every deployment.
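(To illustrate what I mean by "patch": the manual step I'm doing is roughly equivalent to a postinstall script like the one below - the package and patch paths are hypothetical - but even then, everything gets rebuilt on every deploy.)

"scripts": {
  "postinstall": "patch -d node_modules/the-binary-package -p1 < patches/fix.patch"
}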
I'm also new to Node.js - I've been playing with it for about 6 months now. In my personal experience, Nodejitsu is the best cloud-hosting service for Node.js, for the following reasons:
You can simply install the jitsu command line tool in your terminal
Your app can be deployed with all its dependencies and databases using the package.json file
They support all types of sockets as well
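For example, a deploy is roughly this (a sketch; check the jitsu docs for the exact commands):

npm install -g jitsu
jitsu deploy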
A very good alternative to jitsu is Heroku, but sometimes Heroku fails with Socket.IO and the like.

How to include Ember into an existing Node/Express.js App

I'm working on including Ember in an already deployed Node/Express/EJS application. I don't want to disrupt any of the existing application behavior; instead, I want to build out any additional features using Ember. The server-side code for these new features has already been built, and each endpoint returns the JSON format that Ember Data expects. I've been looking into Ember App Kit and ember-cli, but I'm not sure how to include these tools in my existing directory structure, and I'm not certain these are in fact the right tools for my use case. Does anyone have any experience with this particular use case?
For example, navigating to /foo returns the existing express route that renders an ejs template, but /bar would be an Ember route that hits the api endpoint of the same name.
Use ember-cli (ember-cli.org). It's perfect for this situation, as it allows you to rapidly prototype your ember app. It even comes with an expressJS-based testing suite and mock server.
Once you are ready to incorporate it into your NodeJS, Flask, or whatever other application, all static files will be available in the ember-cli dist directory.
Just don't forget to build the ember-cli project first, by means of ember build. After that it's just a simple matter of moving the files in the ember project's dist folder to wherever you need them in your environment.
Just to embellish a bit: ember-cli has a great workflow for use while building your ember app. Try ember serve for a quick example. I mention this because it speaks to your question of how to incorporate this into your existing project (by which I assume you may mean workflow). I typically build ember projects purely using ember-cli and consider the back-end (usually a REST API exposed via either Flask or NodeJS) a separate concern. When importing the app, all I have to concern myself with is making sure my server serves the correct static dist files.
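To make that concrete, serving the built files next to existing EJS routes can look roughly like this - the route and directory names are assumptions taken from the question:

var express = require('express');
var path = require('path');

var app = express();
app.set('view engine', 'ejs');

// existing server-rendered route stays as-is
app.get('/foo', function (req, res) {
  res.render('foo');
});

// the built Ember app is just static files out of ember-cli's dist folder
app.use('/bar', express.static(path.join(__dirname, 'ember-app', 'dist')));

app.listen(3000);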
I would not recommend using the Ember App Kit (EAK), as it has been deprecated in favor of ember-cli. It really is... much, much better.
Ok, so I'm going to try to be more complete in this answer. Let's start with the isolated question - ember-cli or EAK? Definitely ember-cli, but why?
EAK is officially heading for deprecation in favor of ember-cli.
Ember-cli produces more structured, cleaner, maintainable ember code.
Ember-cli integrates your entire ember-app workflow.
Managing all types of dependencies and assets is made simple via bower install --save and Brocfile.js edits. (See the ember-cli docs for an explanation.)
Now the more complicated part of the question: how do I integrate this with an existing workflow? I recently ran into this when building a webrtc-included ember app. It just so happened that this was my first real use of ember as well. So, not yet realizing the full potential of my new hammer, I wrote the REST API, backend ORM layer, signalling service and session cache, and built a complete CI workflow first. Then I was ready to build my ember app and ended up in your exact position.
To short circuit a long story - the lesson I learned was that I should treat my ember-cli app as a completely separate concern. What I mean here is: there's my backend (NodeJS, Apache, Nginx... whatever), and what I code there is built, tested and integrated separately. It normally even lives in its own git repository. It's a separate concern from my front-end equation, which typically consists of several components itself. My iPhone native app would have its own workflow from build-to-test and integrate with my backend via a REST API. My Android native app another. My web app another. For all intents and purposes, in my workflow these are entirely separate workflows that only tie together when we start talking Continuous Integration.
There are a lot of arguments for why you'd want to do this. Most importantly - it scales.
The beauty of ember-cli is that it makes it fairly trivial to get a workflow for your ember app going, and trivial to redeploy your app + workflow on a new box/instance. I would certainly recommend referring to the official ember-cli setup instructions, but I'm going to include them here in case the URL goes bad one day:
No really, refer to the link - my instructions will suck in comparison...
Deploying a new Ember App
Install NodeJS, NPM and Git (ember-cli will by default get git going for you on new apps) on your system via sudo apt-get install nodejs, sudo apt-get install npm, and sudo apt-get install git.
Note: on Ubuntu 14.04 and some other Debian systems, use sudo apt-get install nodejs-legacy instead. If in doubt, use legacy. If you experience problems using the node command after install, it definitely means you need nodejs-legacy. Don't bother trying to do the linking manually.
Install required node modules globally: sudo npm install -g ember-cli, sudo npm install -g bower, sudo npm install -g phantomjs
Create a new ember-cli app: cd <Desired Directory>, then ember new my-app-name.
Now you can look at ember help to begin learning how to use ember-cli. Hint: the --dry-run flag is your friend. You'll notice that when you installed ember-cli, all the scaffolding was taken care of for you. You'll see that you can add things with simple ember generate commands, and they will not only create the required objects, but the test files as well. Best of all, using ember serve you can start scaffolding your app, and via simple flags you can configure the test server to proxy your already-existing REST API (if you have one) or use the expressJS mock server to build a pseudo-API.
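For example (the port here is an assumption):

ember serve --proxy http://localhost:3000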
Integrating it with your larger workflow from here is a simple matter of configuring whatever tools you use (I use Jenkins and Ansible for this kind of stuff) to distribute the dist folder of ember-cli to where it should go to be served as static content (it is just a single page webapp in the end).
If you want to instead play with an existing ember-cli app that operates in an isolated workflow and already makes use of most of the goodies in order to get some familiarity - as I suspect you'll quickly realize how to fit this into whatever your current structure is - feel free to clone and play with this one here.
And so finally - to answer the more specific question of how this might fit into an existing directory structure - I would break this down into two categories. When we're talking src, I would keep it in its own structure, separated at least by being in a sub-directory of its own. When we're talking built and deliverable, I would include the contents of the /dist folder in whatever static web server directory you want to serve your ember app from.
EDIT: I added some more detail below the line break - hopefully useful. Let me know if you have more questions or if I can explain anything better.
I am facing a similar situation. I am planning to use EAK as a "prototyping tool" in a separate project folder, then build the distribution directory from EAK using grunt dist and insert that into the assets folder of my main Node.js project.
