How does the openshift build system work? - node.js

I'm struggling with how to set up an OpenShift build where I have two projects in one. Both (the frontend and the node backend) have their own package.json, but only the latter needs to be started.
I'm using the hooks in the .openshift directory, but there seems to be more 'magic' going on in the background. For example, nowhere in my build code is an npm start declared, yet OpenShift seems to run the command automatically. Is there maybe a way to deactivate this behaviour? What is going on in the background? Is the build process defined by the package I'm using? And if so, how do I change it?
If all else fails, I would really like to use an external build server. Is that possible?
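From what I can tell so far, the only start-related thing in my repo is the start script in the backend's package.json, roughly like this (the path is simplified for illustration):

{
  "scripts": {
    "start": "node server.js"
  }
}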

Related

How can I deploy my application within a cloned repository on Google App Engine?

I'm using a node package to run a web server (among other benefits) for my project. The catch is, my project is only loaded on the server if it's within a directory of the node package. In other words, my directory structure looks like this:
<npm_pkg>/
    <npm_pkg_src>/
    clients/
        <my_project_name>/
            <my_project_src>
I would like to be able to use standard deployment processes for my project (e.g. gcloud app deploy, Travis continuous deployment, etc.), but I need to run my project from within a subdirectory of the larger package. Is there an easy way to force a git clone <pkg> during a build step and deploy my project in the target subdirectory?
I'm pretty new to CI/CD, but I tried to search around for similar examples and couldn't find any. Note: the parent project is not owned by me, so I can't just use submodules without forking it (and I have no intention of altering it in any way). I also strictly want to be able to trigger deploys based on my actual project's repository, if possible, whereas submodules would involve maintaining two repositories and committing features twice (from what I understand).
Any help is much appreciated.
Edit: I forgot to mention that as part of this configuration I also need to run my server script from the root of the parent package. IOW, my package.json's start script will look like "start": "cd ../.. && npm start". Just in case it's relevant.
This might be what you’re looking for: CI/CD with App Engine
Clone from the repo and deploy from the subdirectory where your project is located; Cloud Source Repositories can automate the whole process for you.
I would also suggest keeping the services separate; this will make things clearer for you and for others who will or might be working on the project with you.
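As a rough sketch, the build step could do something like this (the repository URL and paths are just placeholders):

git clone https://github.com/example/parent-pkg.git /tmp/parent-pkg    # the larger npm package
cp -r . /tmp/parent-pkg/clients/my_project                             # drop your project into the expected subdirectory
cd /tmp/parent-pkg/clients/my_project
gcloud app deploy                                                      # deploy from within the subdirectory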

Repeatable installs of a Go application?

I am from the NodeJS/JavaScript world where I have npm and dependencies written down in the package.json. When I deploy it, I know that I just need to run npm install and all the dependencies consumed by the app will be installed.
How is this supposed to be done for a Go project? Suppose I have the source code of the app, which I deploy remotely by, say, running git pull. Now, how do I make sure the dependencies are present? As far as I can see, I would need to manually install a package manager and then install the dependencies with it?
What's a standard way of deploying a Go app on a server?
First of all, you're indeed thinking like a JS developer. Go is compiled, so the proper way to deploy a Go app is not to use the source code at all: you build it on your build server and deploy a binary. At the server level you simply don't care anymore; the only place where you need the dependencies is the build system.
Now, the standard way to do this in Go is to vendor dependencies with your source, that is, make sure they are included in the git repo. Another approach is to express them in a manifest file and fetch them with an external tool. Both are more reliable than the naive approach of simply running go get at build time, which fetches the current version of your dependencies (and requires no manifest file).
There are many tools for vendoring and dependency management; to name two: Godep and gb.
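For example, a typical Godep-based flow looks roughly like this (a hedged sketch; the exact directory layout depends on the godep version):

go get github.com/tools/godep        # install the tool
godep save ./...                     # record dependency versions and copy them into the repo
git add -A && git commit -m "vendor dependencies"
# on the build server:
godep go build ./...                 # or a plain go build with a Go version that reads vendor/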

Including local dependencies in deployment to lambda

I have a repo which consists of several "micro-services" that I upload to AWS Lambda. In addition, I have a few shared libraries that I'd like to package up when deploying to AWS.
Therefore my directory structure looks like:
/micro-service-1
    /dist
    package.json
    index.js
/micro-service-2
    /dist
    package.json
    index.js
/shared-component-1
    /dist
    package.json
    component-name-1.js
/shared-component-2
    /dist
    package.json
    component-name-2.js
The basic deployment leverages the handy node-lambda npm module but when I reference a local shared component with a statement like:
var sharedService = require('../../shared-component-1/dist/index');
This works just fine with the node-lambda run command, but node-lambda deploy drops this local dependency. That probably makes sense, because I'm reaching outside the "root" directory for my dependency, so I thought maybe I'd leverage gulp to make this work. I'm pretty darn new to gulp, though, so I may be doing something dumb. My strategy was to:
- Have the gulp deploy task depend on a local-deps task
- The local-deps task would:
  - npm build --production to a directory
  - then pipe this directory over to the micro-service under the /local directory
  - clean up the install in the shared component
I would then refer to all shared components like so:
var sharedService = require('local/component-name-1');
Hopefully this makes clear what I'm trying to achieve. Does this strategy make sense? Is there a simpler way I should be considering? Does anyone have any examples of anything like this in "gulp speak"?
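In "gulp speak", the local-deps step I have in mind would be roughly this (paths and task names are just placeholders):

var gulp = require('gulp');

// copy the built shared component into the micro-service's local/ folder
gulp.task('local-deps', function () {
  return gulp.src('../shared-component-1/dist/**/*')
    .pipe(gulp.dest('local/component-name-1'));
});

// make deploy depend on it
gulp.task('deploy', ['local-deps'], function (done) {
  // run node-lambda deploy here, e.g. via child_process.exec
  done();
});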
I have an answer to this! :D
TL;DR - Use npm link to create a symbolic link between your common component and the dependent component.
So, I have a project with only two modules:
- main-module
- referenced-module
Each of these is a node module. If I cd into referenced-module and run npm link, then cd into main-module and run npm link referenced-module, npm will 'install' my referenced-module into my main-module and store it in its node_modules folder. NOTE: When running the second npm link, the name you pass is the one in the referenced module's package.json, not the name of the directory (see the npm link documentation, previously linked).
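For reference, the two commands look like this (using the directory names from the example above):

cd referenced-module
npm link                        # registers a global symlink for this package
cd ../main-module
npm link referenced-module      # symlinks it into main-module/node_modules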
Now, in my main-module all I need to do is var test = require('referenced-module') and I can use that to my heart's content. Be sure to module.exports your code from your referenced-module!
Now, when you zip up main-module to deploy it to AWS Lambda, the links are resolved and the real modules are put in their place! I've tested this and it works, though not with node-lambda yet; I don't see why that should be a problem, though (unless it does something different with the package restores).
What's nice about this approach as well is that any changes I make to my referenced-module are automatically picked up by my main-module during development, so I don't have to run any gulp tasks or anything to sync them.
I find this is quite a nice, clean solution and I was able to get it working within a few minutes. If anything I've described above doesn't make any sense (as I've only just discovered this solution myself!), please leave a comment and I'll try and clarify for you.
UPDATE FEB 2016
Depending on your requirements and how large your application is, there may be an interesting alternative that solves this problem even more elegantly than using symlinking. Take a look at Serverless. It's quite a neat way of structuring serverless applications and includes useful features like being able to assign API Gateway endpoints that trigger the Lambda function you are writing. It even allows you to script CloudFormation configurations, so if you have other resources to deploy then you could do so here. Need a 'beta' or 'prod' stage? This can do it for you too. I've been using it for just over a week and while there is a bit of setup to do and things aren't always as clear as you'd like, it is quite flexible and the support community is good!
While using Serverless we faced a similar issue: the need to share code between AWS Lambdas. Initially we duplicated the code across each microservice, but as always, that became difficult to manage.
Since development was done in a Windows environment, using symbolic links was not an option for us.
We then came up with a solution: keep the local dependencies in a shared folder and use a custom-written gulp task to copy these dependencies into each of the microservice endpoints, so that a dependency can be required just like an npm package.
One decision we made was not to keep two places for defining a microservice's dependencies, so we used the same package.json to define the local shared dependencies; the gulp task parses this file and copies the shared dependencies accordingly, and it also installs the npm dependencies with a single command.
Later we made the code open source as the npm modules serverless-dependency-install and gulp-dependency-install.

OpenShift Online, NodeJS, Jenkins, and package dependencies - can someone explain?

I'm running a NodeJS app on OpenShift, using Jenkins for building deployments (and I'm pretty new to both Node and cloud-based servers). My app depends on a package that has a binary component, so I can't just check it into git - it fails when it's executed on the server. I'm wondering what's the best way to deploy these sorts of dependencies. I see that there is an $OPENSHIFT_DEPENDENCIES_DIR (as well as $OPENSHIFT_BUILD_DEPENDENCIES_DIR), but I can't find any information about how (or if) these can be utilized for node modules. It would be great if I could keep all my dependencies on the server and out of my source tree.
Thanks!
Update: I forgot to mention that I need to apply a patch to the package in question, which is why I can't just rely on it being auto-installed via package.json. Plus, it seems awfully redundant/slow to rebuild all your dependencies on every deployment.
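(For context, the patch step I need to apply after the package is installed is roughly this, with the package and patch names as placeholders:)

patch -p1 -d node_modules/some-package < patches/some-package.patch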
I'm also new to nodejs. I've been playing with nodejs for about 6 months now. In my personal experience, Nodejitsu is the best cloud-hosting service for nodejs, for the following reasons:
- You can simply install the jitsu command line tool in your terminal
- Your app can be deployed with all its dependencies and databases using the package.json file
- They support all types of sockets
A very good alternative to jitsu is Heroku, but sometimes Heroku has trouble with Socket.IO and the like.
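If I remember the commands correctly, getting started is roughly:

npm install -g jitsu
jitsu deploy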

Should node.js changes be instantaneous?

Seeing how node.js is ultimately javascript, shouldn't changes to any files be seen when trying to run the app command? I've forked the yuidocjs repo on github and am trying to work my way through my local build, but for some reason my minor changes do not get picked up. I'm new to node.js so I'm not really sure what the exact conventions are.
In node.js when you require a file the source code gets interpreted. It's considered good practice to require all code when you start the server so all the code gets interpreted once.
The code does not get re-interpreted whenever you run it though.
So changes are not instantaneous.
To help you out, try supervisor which does hot reloading of the server on code changes.
It is possible to make instant changes happen by re-interpreting source code but that is not the default action. Generally you should just re-start the server.
Also see nodemon, which will automagically reload changed files under its authority.
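Typical usage of either tool looks like this (the entry file name is just an assumption):

npm install -g nodemon       # or: npm install -g supervisor
nodemon app.js               # restarts the process whenever a watched file changes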
EDIT
Rereading your question, it appears you are asking about the following scenario:
1. Run app to test
2. Quit app to refactor js code
3. Restart app
And you're asking why your changes do not appear at step 3?
If this is the case, you are seeing something very strange which might be related to how and from where files are being required.
In node, run:
console.dir(require.paths);
To see where node is looking for any resources you are requiring. If there is a copy of the file you're changing in any of the paths listed which is not the file you're editing, this would explain your problem.
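You can also ask node directly which file a particular require call resolves to (the module name here is just a placeholder):

console.log(require.resolve('some-module'));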
