I have a Node.js app that compiles its runtime version into the dist folder, so the package.json start script is node dist/index.js.
I now want to containerize it, but the container doesn't need the distribution directory; those files really should live in the root of the app. My Dockerfile therefore contains the lines
COPY package.json /usr/src/app/
COPY dist/* /usr/src/app/
which place the runtime files in the image. The problem I have is that when the Dockerfile issues its last command:
CMD ["npm", "start"]
it fails, because index.js is no longer where npm start expects it (the script looks for it in the now non-existent dist directory). I could solve the problem by issuing:
CMD ["node", "index.js"]
instead, but that seems like the wrong thing to do. What is the correct approach?
Update 1:
I had modified index.js so it could run from the root of the project (i.e. it expects to find the resources it needs in the dist/ folder) by launching it with node dist/index.js, but of course this is now also a problem, since the image has no dist directory. How is this generally approached?
I would code all your JavaScript require calls relative to the current file, with no reference to the dist directory. Let's say you have index.js and routes.js in the project root, and index.js loads routes via var routes = require('./routes'). When you compile these, compile BOTH of them into the dist directory and all should be well.
If you want to run via npm start from the project root, you can configure that to run node dist/index.js.
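That is, a scripts entry along these lines in package.json (a minimal sketch; it simply encodes the start command the question already describes):
{
  "scripts": {
    "start": "node dist/index.js"
  }
}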
For Docker, there's no reason to use npm at all to launch your app. It's an unnecessary extra process for no benefit. Just launch via node index.js in your Dockerfile, with WORKDIR /usr/src/app.
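Putting it together, a minimal Dockerfile sketch (the base image and the --production flag are assumptions, not from the question):
# Any recent Node base image works
FROM node:carbon
WORKDIR /usr/src/app
# Install only production dependencies
COPY package.json .
RUN npm install --production
# Compiled output lands in the image root, as in the question's COPY lines
COPY dist/ .
# Launch node directly, no npm wrapper process
CMD ["node", "index.js"]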
Related
I'm unable to move my index.ts file inside a src folder to organize my project.
I'm using Prisma and following the 'start from scratch' tutorial (https://www.prisma.io/docs/getting-started/setup-prisma/start-from-scratch), which requires just an index.ts to query the database. When I run the script from the root folder using npx ts-node index.ts, it runs fine.
But when I put it inside a folder (in my case src/controller, two folders deep), it logs
error: Cannot find module './index.ts'
If I run npx ts-node src/controller/index.ts with the full path to index.ts, it does run fine, but is there a way to configure the path so that I can just type index.ts?
I have a Node.js app with the following structure
app
-dir1
-dir2
-dir3
app.js
package.json
dir1 is an app of its own; it also functions as a package of the main app when run from app/app.js.
If I copy the entire app dir to another folder and navigate to app/dir1, it will contain
dir1
-pack1
-pack2
-pack3
-pack4
app_sub.js
package.json
and I have to run this app_sub.js.
When I run this file, a module-not-found error occurs for the file
/home/mypath/app/dir2/somefile.js
i.e. the error comes from a file outside the sub-app's folder.
Can you guys help me with this?
Note:
In the directory where I copied the app, if I run npm install from the app directory and then go to app/dir1 and run node app_sub.js, it works fine.
I have a project written in TypeScript which uses jasmine-ts to run a series of tests.
I need to create a Docker container to run the tests, for a few reasons.
The project runs OK locally with npm test:
c:\github\gareththegeek\corewar>npm test
> corewar#0.0.26 test c:\github\gareththegeek\corewar
> nyc jasmine-ts
Started
.......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
503 specs, 0 failures
Finished in 0.893 seconds
When I containerise the same folder and run npm test from the Docker image, the TypeScript imports don't seem to be recognised.
c:\github\gareththegeek\corewar>docker run corewar
> corewar#0.0.26 test /usr/src/app
> nyc jasmine-ts
/usr/src/app/node_modules/ts-node/src/index.ts:307
throw new TSError(formatDiagnostics(diagnosticList, cwd, ts, lineOffset))
^
TSError: ⨯ Unable to compile TypeScript
parser/Expression.ts (1,29): Cannot find module './interface/IExpression'. (2307)
parser/Expression.ts (2,39): Cannot find module './interface/IToken'. (2307)
parser/Expression.ts (3,30): Cannot find module './interface/ITokenStream'. (2307)
My Dockerfile is as basic as they come:
FROM node:carbon
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
CMD [ "npm", "test" ]
I'm really unsure why the Docker container behaves differently from my local npm test command. I'm going to assume it's because locally I'm on Windows and the Docker container isn't, but I don't know how to debug this.
Can anyone give any pointers on why TypeScript imports aren't working within Docker the way I'm expecting them to?
I can paste some of the TypeScript code if it helps.
Sigh, ignore this question. I resolved the issue by realising that my imports referred to interface whereas, for some reason, the folder in GitHub was Interface. It worked locally because the Windows filesystem is case-insensitive, while the Linux filesystem inside the container is not.
I'd renamed all my folders locally through VS Code, but this one rename hadn't been picked up by git (even though the others were :/).
I did the following (the temporary name is needed because git can't do a case-only rename in a single step on a case-insensitive filesystem):
cd parser
git mv Interface ifacetemp
git mv ifacetemp interface
commit...
push...
This solved the issue, very annoying.
I'm trying to identify a good practice for the build process of a Node.js app using grunt/gulp, to be deployed inside a Docker container.
I'm pretty happy with the following sequence:
build using grunt (or gulp) outside container
add ./dist folder to container
run npm install (with --production flag) inside container
But in every example I find, I see a different approach:
add ./src folder to container
run npm install (with dev dependencies) inside container
run bower install (if required) inside container
run grunt (or gulp) inside container
IMO, the first approach generates a lighter and more efficient container, but all of the examples out there are using the second approach. Am I missing something?
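For reference, the first approach as a concrete Dockerfile might look roughly like this (a sketch; the base image, entry point name, and --production flag are assumptions):
# grunt/gulp has already produced ./dist on the host
FROM node
WORKDIR /usr/src/app
COPY dist ./dist
COPY package.json .
# the production flag skips devDependencies
RUN npm install --production
CMD ["node", "dist/index.js"]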
I'd like to suggest a third approach, which I have used for a statically generated site: the separate build image.
In this approach, your main Dockerfile (the one in the project root) becomes a build and development image, basically doing everything in the second approach. However, you override the CMD at run time to tar up the built dist folder into a dist.tar or similar.
Then, you have another folder (something like image) that has a Dockerfile. The role of this image is only to serve up the dist.tar contents. So we do a docker cp <container_id_from_tar_run>:/dist.tar ./image/ to fetch the artifact. Then the Dockerfile just installs our web server and has an ADD dist.tar /var/www (ADD auto-extracts tar archives).
In outline, it is something like:
Build the builder Docker image (which gets you a working environment without a webserver). At this point, the application is built. We could run the container in development with grunt serve or whatever the command is to start our built-in development server.
Instead of running the server, we override the default command to tar up our dist folder. Something like tar -cf /dist.tar /myapp/dist.
We now have a temporary container with a /dist.tar artifact. Copy it to your actual deployment Docker folder, which we called image, using docker cp <container_id_from_tar_run>:/dist.tar ./image/.
Now, we can build the small Docker image without all our development dependencies with docker build ./image.
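The same flow as concrete commands (the image and container names are illustrative assumptions):
# 1. Build the builder image from the project root
docker build -t myapp-builder .
# 2. Run it, overriding the default command to produce the artifact
docker run --name myapp-build myapp-builder tar -cf /dist.tar /myapp/dist
# 3. Copy the artifact out of the (now stopped) container
docker cp myapp-build:/dist.tar ./image/
# 4. Build the small deployment image
docker build -t myapp ./image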
I like this approach because it is still all Docker. All the commands in this approach are Docker commands and you can really slim down the actual image you end up deploying.
If you want to check out an image with this approach in action, check out https://github.com/gliderlabs/docker-alpine which uses a builder image (in the builder folder) to build tar.gz files that then get copied to their respective Dockerfile folder.
The only difference I see is that you can reproduce a full grunt installation in the second approach.
With the first one, you depend on a local action which might be done differently in different environments.
A container should be based on an image that can be reproduced easily, instead of depending on a host folder which contains "what is needed" (without knowing how that part was produced).
If the build-environment overhead that comes with the installation is too much for a grunt image, you can:
create an image (call it app.tar) dedicated to the installation (I did that for Apache, which I had to recompile, creating a deb package in a shared volume). In your case, you can create a tar archive of the installed app.
create a container from a base image, using the volume from that first container:
docker run -it --name=app.inst --volumes-from=app.tar ubuntu tar -xf /shared/path/app.tar
docker commit app.inst app
The end result is an image with the app present on its filesystem.
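A sketch of that flow end to end (image and path names are assumptions; the archive image must declare VOLUME /shared/path for --volumes-from to expose it):
# data container built from an image that carries /shared/path/app.tar in a volume
docker create --name app.tar app-archive-image
# unpack the archive into a fresh base container
docker run -it --name=app.inst --volumes-from=app.tar ubuntu tar -xf /shared/path/app.tar
# snapshot the result as a new image
docker commit app.inst app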
This is a mix between your approach 1 and 2.
A variation of solution 1 is to have a "parent -> child" image pair that makes the build of the project really fast.
I would have a Dockerfile like:
FROM node
RUN mkdir app
COPY dist/package.json app/package.json
WORKDIR app
RUN npm install
This handles the installation of the node dependencies; then have another Dockerfile that handles the application "installation", like:
FROM image-with-dependencies:v1
ENV NODE_ENV=prod
EXPOSE 9001
COPY dist .
ENTRYPOINT ["npm", "start"]
With this you can continue your development, and the build of the Docker image will be faster than it would be if you had to re-install the node dependencies every time. If you add new node dependencies, just re-build the dependencies image.
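For completeness, a sketch of the two build commands (the file name Dockerfile.deps and the myapp tag are assumptions; the dependencies tag must match the FROM line of the second Dockerfile):
# Build the dependencies image once, or whenever package.json changes
docker build -t image-with-dependencies:v1 -f Dockerfile.deps .
# Build the application image on every code change; this step is fast
docker build -t myapp .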
I hope this helps someone.
Regards
I'm using jasmine-node to test my Meteor application and I want to use the auto-test feature so I don't have to rerun the tests all the time by myself.
My meteor application folder structure is like this:
server
foo.coffee
tests
foo.spec.coffee
With the spec file I want to test the code located in foo.coffee. I start jasmine-node with these args:
jasmine-node ./ --autotest --coffee --test-dir tests
Now I would assume that the autotest feature reacts to all changes in the root folder, but it only reacts to changes in the tests folder. And I can't start it in the root folder, because I get an error from the .meteor files (and I don't want jasmine testing/including the Meteor code anyway).
So I want to have jasmine rerun the tests even if I change code in the server folder. How can I achieve that?
Use the --watch parameter along with --autotest and specify the directories that contain whatever files you want watched, as in the example below.
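Based on the invocation in the question, that would be something like (assuming --watch takes the directory to monitor; server is the folder from the question's layout):
jasmine-node ./ --autotest --coffee --test-dir tests --watch server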