I have an app that uses the swagger-express-mw library and I start my app the following way:
SwaggerExpress.create({ appRoot: __dirname }, (err, swaggerExpress) => {
  // initialize application, connect to db etc...
});
Everything works fine on my local OSX machine. But when I use boot2docker to build an image from my app and run it, I get the following error:
/usr/src/app/node_modules/swagger-node-runner/index.js:154
config.swagger = _.defaults(readEnvConfig(),
^
TypeError: Cannot assign to read only property 'swagger' of [object Object]
My Dockerfile looks like this (nothing special):
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm set progress=false
RUN npm install
COPY . /usr/src/app
EXPOSE 5000
CMD ["npm", "run", "dev"]
Has anyone else encountered similar situations where the local environment worked but the app threw an error inside the docker container?
Thanks!
Your issue doesn't appear to be something wrong with your Docker container or machine setup. The error is not a Docker error; it is a JavaScript error.
The Docker container appears to be running your JavaScript module in strict mode, in which you cannot assign to read-only object properties (https://msdn.microsoft.com/library/br230269%28v=vs.94%29.aspx). On your OSX host, from the limited information we have, it looks like the module is not running in strict mode.
There are several ways to specify the "strictness" of your scripts. You can start Node.js with the --use_strict flag, though I'm not sure how dependable that is. It could also be that npm installed different versions of your dependent modules on the two machines, and that those versions declare strict mode differently. Beyond that, strictness can be declared for a single function, a module, or the app as a whole.
You can test for strict mode by using suggestions here: Is there any way to check if strict mode is enforced?.
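As a quick sanity check, here is a minimal sketch of one of the tricks from that link, which you could log on both machines: a plain function call binds this to the global object in sloppy mode and to undefined in strict mode.
// `this` inside a function called without a receiver is undefined
// only when the surrounding code runs in strict mode.
var isStrict = (function () { return this === undefined; })();
console.log('strict mode:', isStrict);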
So, in summary: your issue is not inherently a Docker issue; it is an issue with the fact that your JavaScript environments are running with different strict-mode settings. How you fix that will depend on where strict mode is being defined.
Hope that helps!
Related
I have an application built with Next.js, and this application should work on a local server (Windows).
My customer told me that he needed this application to keep working in the background. After some searching I found that I needed to use a package called pm2, but when I used it, it gave me an error. I found that I needed to make some configuration for it, and I can't find any helpful resources. Please help 💔
I found that to run a Next.js application in the background you need a custom configuration:
install pm2 globally on your system (npm install -g pm2)
create a file named ecosystem.config.js in the root folder, next to the package.json file
put your config data in this file, which would look something like this:
module.exports = {
apps: [
{
name: "inventory_test",
script: "node_modules/next/dist/bin/next",
args: "start -p 3333", // running on port 3333
watch: false,
},
],
};
set name to whatever you want to see when you check the pm2 process list (pm2 list)
the key part is the script setting shown above: by default, pm2 goes to the Node.js folder on the system and tries to start the application with npm directly, and that is the problem; instead we point the script at the Next.js runner itself (node_modules/next/dist/bin/next) so pm2 runs Next.js directly
in args we set the arguments passed to that script: in my example that is start, plus -p to choose the port for our application
and with that, our config is done
NOTES
you should run next build before you start the application
to run the project, open the project folder in the terminal (cmd, cmder, etc.) and run the command pm2 start ecosystem.config.js
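Putting the steps together, a typical first run from the project root might look like this (assuming the app name and port from the config above):
npm install -g pm2
npx next build
pm2 start ecosystem.config.js
pm2 list    # "inventory_test" should show as online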
I have a node app running, and I need to access a command that lives in an alpine docker image.
Do I have to use exec inside of JavaScript?
How can I install latex on an alpine container and use it from a node app?
I pulled an alpine docker image, started it and installed latex.
Now I have a docker container running on my host. I want to access this latex compiler from inside my node app (dockerized or not) and be able to compile *.tex files into *.pdf
If I sh into the alpine image I can compile *.tex into *.pdf just fine, but how can I access this software from outside the container, e.g. from a node app?
If you just want to run the LaTeX engine over files that you have in your local container filesystem, you should install it directly in your image and run it as an ordinary subprocess.
For example, this JavaScript code will run in any environment that has LaTeX installed locally, Docker or otherwise:
const { execFileSync } = require('node:child_process');
const { mkdtemp, open } = require('node:fs/promises');

async function main() {
  // Write the .tex source into a fresh temporary directory.
  const tmpdir = await mkdtemp('/tmp/latex-');
  let input;
  try {
    input = await open(tmpdir + '/input.tex', 'w');
    await input.write('\\documentclass{article}\n\\begin{document}\n...\n\\end{document}\n');
  } finally {
    await input?.close();
  }
  // Run pdflatex as an ordinary subprocess in that directory.
  execFileSync('pdflatex', ['input'], { cwd: tmpdir, stdio: 'inherit' });
  // produces tmpdir + '/input.pdf'
}

main();
In a Docker context, you'd have to make sure LaTeX is installed in the same image as your Node application. You mention using an Alpine-based LaTeX setup, so you could:
FROM node:lts-alpine
RUN apk add texlive-full # or maybe a smaller subset
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY ./ ./
CMD ["node", "main.js"]
You should not try to directly run commands in other Docker containers. Several aspects of this are tricky, including security concerns and managing the input and output files. If a process is allowed to invoke a command in a new or existing container, it is also very straightforward for it to use that permission to compromise the entire host.
I'm running a next.js react app in a docker container. It's being composed with several other contains: one running Ghost (I'm using the API), one running mysql, and one running NGINX. I've got everything running in development mode.
It works perfectly when run using next dev. But when I run it by doing next build and next start, I start seeing errors like Error: getaddrinfo ENOTFOUND ghost-api when I try to make server-side HTTP requests to my Ghost API container. I'm not entirely sure what's wrong, but it seems like there's some problem with how Node is making requests after the build. I've been digging through a lot of Docker/Node questions trying to figure this one out but haven't had any luck.
The entire project can be found here: https://github.com/MichaelWashburnJr/react-cms
The problem may be in the environment variables you are using. Both the getGhostApi and getGhostApiKey functions read environment variables.
In Next.js you'll have to specify a next.config.js in which you define the variables that you need.
Ex. next.config.js
module.exports = {
serverRuntimeConfig: {
// Will only be available on the server side
mySecret: 'secret',
secondSecret: process.env.SECOND_SECRET, // Pass through env variables
},
publicRuntimeConfig: {
// Will be available on both server and client
staticFolder: '/static',
},
}
You can also refer to the Next.js documentation on runtime configuration:
https://nextjs.org/docs/api-reference/next.config.js/runtime-configuration
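Note that values defined this way are read with next/config rather than process.env. A minimal sketch using the example config above:
import getConfig from 'next/config';

const { serverRuntimeConfig, publicRuntimeConfig } = getConfig();
// serverRuntimeConfig is only filled in on the server;
// publicRuntimeConfig is available on both server and client.
console.log(serverRuntimeConfig.mySecret);
console.log(publicRuntimeConfig.staticFolder);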
I'm not able to reproduce the error. How are you starting the frontend container in prod mode?
From the error it appears you might be starting the frontend container (or the frontend app) as a separate process, without starting it as part of the Compose project. If that is the case, the name ghost-api won't be resolvable and you would get the Error: getaddrinfo ENOTFOUND ghost-api error.
I've changed the command key of the frontend container as follows:
command: [ "yarn", "start-prod" ]
Changed the "start-prod" script in frontend/package.json as follows:
"start-prod": "next build && NODE_ENV='production' next start"
and everything worked as it did in dev mode. I got an UNKNOWN_CONTENT_API_KEY error in both dev and prod mode, but there is definitely no ghost-api name resolution error.
After cloning your repos:
$ grep -R ST_API *
frontend/.env.development:GHOST_API_URL=http://ghost-api:2368
frontend/.env.production:GHOST_API_URL=http://ghost-api:2368
frontend/src/constants/Config.js:export const getGhostApi = () => process.env.GHOST_API_URL || 'http://localhost:8000';
ghost-api is not a real domain name: to make it work you need to edit your hosts file or (for a real production environment) replace http://ghost-api:2368 in the frontend/.env.production file with the real deployment domain name.
If you are asking why you can't rely on Docker Compose networking, the answer is: you can, but only inside the containers; the front end runs in your application client's browser, which is outside the containers.
Hope this helps.
It seems that Docker's hostname resolution does not work during build time. That is why ghost-api is not found.
Instead of referencing the other container by its name (ghost-api), on Mac you can try host.docker.internal. On Linux, using host networking during build worked for me:
nextjs-app:
  build:
    network: "host"
    # ...
  network_mode: "host"
This way, you can reference the other container using localhost.
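On Mac, the equivalent would be to point the URL at the host gateway instead of the container name, e.g. in frontend/.env.production (a hypothetical value, reusing the GHOST_API_URL variable found by grep above):
GHOST_API_URL=http://host.docker.internal:2368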
I got this strange issue when trying to run my Node.js app inside a Docker container.
All the paths are broken, e.g.:
const myLocalLib = require('./lib/locallib');
Results in Error:
Cannot find module './lib/locallib'
All the files are there (shown by the ls command inside the lib directory).
I'm new to Docker, so I may have missed something in my setup.
Here's my Dockerfile:
FROM node:latest
COPY out/* out/
COPY src/.env.example src/
COPY package.json .
RUN yarn
ENTRYPOINT yarn start
Per request: file structure
Thank you.
You are using the COPY command wrong. When the source of a COPY contains a glob like out/*, any directory that matches is copied by its contents rather than as a directory, so everything under out/lib lands directly in out/ and the lib folder disappears; that is why ./lib/locallib can't be resolved. It should be:
COPY out out
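With only that line changed, the Dockerfile from the question becomes:
FROM node:latest
COPY out out
COPY src/.env.example src/
COPY package.json .
RUN yarn
ENTRYPOINT yarn start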
My docker file is super simple:
FROM node:4-onbuild
RUN npm install gulp -g;
EXPOSE 8888
This image will automatically run the start script in package.json, which I have set simply to gulp.
If I run gulp on my host machine and make a change to a node file, it automatically restarts the server:
var gulp = require('gulp');
var nodemon = require('gulp-nodemon');
gulp.task('default', function() {
nodemon({
script: 'server.js', // starts up server on port 4000
env: { 'NODE_ENV': 'development' }
})
});
Figuring everything is okay, I run this: docker run -d -p 1234:4000 -v $(pwd):/usr/src/app my-image
Going to http://192.168.99.100:1234/ shows 'Hello World!' from my server.js file. Updating the file does NOT update what I see when I hit that URL again. If I exec into the container, I see the file is updated. Since the container started node via the same gulp command, I don't understand why the node server wouldn't have restarted and shown the update.
The TL;DR of this is that you need to set nodemon to poll the filesystem for changes as follows: https://github.com/remy/nodemon#application-isnt-restarting
In some networked environments (such as a container running nodemon reading across a mounted drive), you will need to use legacyWatch: true, which enables Chokidar's polling.
Via the CLI, use either --legacy-watch or -L
The longer version is this (with one key assumption: you're using Docker on Mac or similar):
On Mac or similar, Docker doesn't run natively; it runs inside a virtual machine (generally VirtualBox via docker-machine). Virtual machines generally don't propagate filesystem inotify events, which is what most watchers rely on to restart or perform an action when a file changes. Since the virtual machine doesn't propagate the events from the host, Docker never receives them. Your original Dockerfile would probably work on a native Linux machine.
There's an open issue and much more detailed discussion of this here.
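Applied to the gulpfile from the question, the fix could look like this (assuming gulp-nodemon passes the legacyWatch setting through to nodemon, as it does with its other options):
var gulp = require('gulp');
var nodemon = require('gulp-nodemon');

gulp.task('default', function() {
  nodemon({
    script: 'server.js',
    env: { 'NODE_ENV': 'development' },
    legacyWatch: true // poll for changes; inotify events don't cross the VM boundary
  })
});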