I am new to Docker. I have successfully installed Docker on Ubuntu, and I am testing a Node app in the folder /home/samir/Documents/docker-centos:
$ ls
Dockerfile Dockerfile~ index.js index.js~ package.json package.json~
I don't know what those duplicates ending in ~ are, as I didn't add them.
index.js:
var express = require('express');
// Constants
var PORT = 8080;
// App
var app = express();
app.get('/', function (req, res) {
// even if I change the output here, I still get Hello world only
res.send('Hello world, This was added\n');
});
app.listen(PORT);
console.log('Running on http://localhost:' + PORT);
I started the container using a data volume that points to the same app directory, like this:
docker run -p 49160:8080 -v /home/samir/Documents/docker-centos -d samir/centos-node-hello
but when I view the output:
curl -i localhost:49160
I get Hello world, even though I changed the file.
Am I missing something?
How should I run the container so that I can edit the files on the host? Why didn't it work?
EDIT
Dockerfile
FROM centos:centos6
# Enable Extra Packages for Enterprise Linux (EPEL) for CentOS
RUN yum install -y epel-release
# Install Node.js and npm
RUN yum install -y nodejs npm
# Install app dependencies
COPY package.json /src/package.json
RUN cd /src; npm install
# Bundle app source
COPY . /src
EXPOSE 8080
CMD ["node", "/src/index.js"]
The Dockerfile COPYs the source into the image, so the image holds a snapshot of your code; after editing files you need to build the image again using the docker build command.
If you're planning on using this setup not only for one-off testing of your app, but for development as well, you'd be better off mounting your application code as a volume.
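A sketch of both options, assuming the image tag samir/centos-node-hello from the question and that the app lives at /src inside the image (per COPY . /src in the Dockerfile):

```shell
# Option 1: rebuild after every source change, then rerun
docker build -t samir/centos-node-hello .
docker run -p 49160:8080 -d samir/centos-node-hello

# Option 2 (development): bind-mount the host directory over /src.
# Note that -v takes host-path:container-path; the command in the
# question passed only one path, which creates an anonymous volume
# instead of mounting the host directory.
docker run -p 49160:8080 \
  -v /home/samir/Documents/docker-centos:/src \
  -d samir/centos-node-hello
```

Even with the mount, the running node process serves the code it loaded at startup; restart the container (or use a file watcher such as nodemon) to pick up edits.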
Related
I have built a flutter web application (flutter stable#3.3.9) where I have set the url strategy to PathUrlStrategy().
Locally, this application builds and runs fine. I am hosting it in a Node.js application as follows:
import express, {Express, Request, Response} from "express";
import dotenv from "dotenv";
import cookieParser from "cookie-parser";
import https from "https";
import {ClientRequest, IncomingMessage} from "http";
import path from "path";
const app: Express = express();
const port = process.env.PORT;
app.use(express.json());
app.use(express.urlencoded({extended: false}));
app.use(cookieParser());
app.use(express.static(path.join(__dirname, "flutter")));
app.get("/api/page/:id", async (req, res) => {
getPageSessionHandler(req, res);
});
app.post("/api/payment", async (req, res) => {
console.log("handling payment request");
postPaymentHandler(req, res);
});
app.get("*", (_, res) => {
res.sendFile(path.resolve(__dirname, "flutter/index.html"));
});
app.listen(port, () => {
console.log(
`⚡️[server (nodejs)]: Server is running at http://localhost:${port}`,
);
});
var postPaymentHandler = (req: any, res: any) => {
//implementation removed
};
var getPageSessionHandler = (req: any, res: any) => {
//implementation removed
};
This runs fine locally, as follows:
flutter build web --release --web-renderer=html
Then I move the build/web* output to the proper folder in my Node.js server.
I can even containerize this application locally and run it from my Docker Desktop (Windows 11) using the following Dockerfile:
FROM debian:latest AS build-env
# Install flutter dependencies and nodejs dependencies
RUN apt-get update
RUN apt-get install -y curl git wget unzip gettext-base libgconf-2-4 gdb libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3
RUN apt-get clean
RUN curl -fsSL https://deb.nodesource.com/setup_12.x | bash -
RUN apt-get -y install nodejs
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
RUN git clone https://github.com/flutter/flutter.git /usr/local/flutter
ENV PATH="/usr/local/flutter/bin:/usr/local/flutter/bin/cache/dart-sdk/bin:${PATH}"
WORKDIR /app/flutter_src/payment
# Build the app
RUN touch .env
RUN flutter clean && flutter pub get
RUN flutter build web --release --web-renderer=html
# Build the final image
WORKDIR /app
RUN cp -R flutter_src/payment/build/web dist/flutter
CMD ["/bin/sh", "-c", "exec node dist/index.ts"]
Again, this works fine both in my Windows Node server environment and in my Docker Desktop container. But when I deploy it via my CI/CD pipeline to AWS ECR, I am unable to load the application. I can hit the associated API endpoints in the node's index.ts file above, and my Kubernetes pod is healthy, but I am not able to load routes with a second slash, i.e.:
https:///welcome <--- loads fine
https:///user/fun_username <-- does not load.
After a ton of debugging, I'm finding another oddity in how this behaves in this environment. See the Network log from a request to my application's deep-linked route in Chrome (and also Edge):
when requesting /page/{page-id-here}, the browser is requesting the main.dart.js at the subroute.
What's even more perplexing: if I request the same deep route in Firefox, not only does my application load and work as expected, the browser requests my main.dart.js at the root of my node server (where it should be), since the server serves the flutter directory statically. See this screenshot from Firefox:
here, the main.dart.js file is requested from the root of the node server as I'd expect.
I have tried all sorts of routing tricks in the node server, but that just feels wrong, especially since it seems to be specific to my environment. If I revert the path strategy back to client-only routing (/#/path/here), this works fine, but does not meet my requirements on this project.
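One hypothesis worth checking (my assumption, not something established in the question): with PathUrlStrategy, the browser resolves relative asset URLs like main.dart.js against the current deep route unless the served index.html carries an absolute `<base href="/">`. Flutter can emit that tag at build time:

```shell
# Sketch: force an absolute base href into build/web/index.html
flutter build web --release --web-renderer=html --base-href /
```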
Thank you all for your help. This is weeks of struggling I'm deferring on at this point.
So,
I am using NUXT
I am deploying to google cloud run
I am using dotenv package with a .env file on development and it works fine.
I use process.env.VARIABLE_NAME in my dev server on Nuxt and it works great; I make sure that the .env file is in .gitignore so that it doesn't get uploaded.
However, I then deploy my application using Google Cloud Run. I make sure I go to the Environment tab and add exactly the same variables that are in the .env file.
However, the variables come back as undefined.
I have tried all sorts of ways to fix this, but the only one that works is uploading my .env with the project - which I do not wish to do, as Nuxt exposes this file in the client-side JS.
Anyone come across this issue and know how to sort it out?
DOCKERFILE:
# base node image
FROM node:10
WORKDIR /user/src/app
ENV PORT 8080
ENV HOST 0.0.0.0
COPY package*.json ./
RUN npm install
# Copy local nuxt code to the container
COPY . .
# Build production app
RUN npm run build
# Start the service
CMD npm start
Kind Regards,
Josh
Finally I found a solution.
I was using Nuxt v2.11.x.
From version 2.13 onwards, Nuxt comes with Runtime Config, and this is what you need.
in your nuxt.config.js:
export default {
publicRuntimeConfig: {
BASE_URL: 'some'
},
privateRuntimeConfig: {
TOKEN: 'some'
}
}
Then you can access the values like:
this.$config.BASE_URL || context.$config.TOKEN
More details here
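Note that a runtime config only helps with Cloud Run variables if it reads them from process.env when the server starts; hard-coded values are baked in at build time. A sketch (BASE_URL and TOKEN are placeholder names):

```javascript
// nuxt.config.js -- sketch: resolved at server start, so values set in
// Cloud Run's environment tab are picked up without shipping a .env file
export default {
  publicRuntimeConfig: {
    BASE_URL: process.env.BASE_URL || 'http://localhost:3000'
  },
  privateRuntimeConfig: {
    TOKEN: process.env.TOKEN
  }
}
```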
You do not need to set environment variables in the Dockerfile; you can set them from the command line at deployment time.
For example here is the Dockerfile that I used.
FROM node:10
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm","start"]
This is the app.js file:
const express = require('express')
const app = express()
const port = 8080
app.get('/',(req,res) => {
const envtest = process.env.ENV_TEST;
res.json({message: 'Hello world',
envtest});
});
app.listen(port, () => console.log(`Example app listening on port ${port}`))
To deploy, use a command like this:
gcloud run deploy [SERVICE] --image gcr.io/[PROJECT-ID]/[IMAGE] --update-env-vars ENV_TEST=TESTVARIABLE
And the output will be like the following:
{"message":"Hello world","envtest":"TESTVARIABLE"}
You can check more detail on the official documentation:
https://cloud.google.com/run/docs/configuring/environment-variables#command-line
I am building a SPA in Angular 8. I have a multi-stage Docker image that runs ng build to build the distribution and then a simple express server is used to serve the application. (Note: The backend API is on an entirely separate express server.)
Requirements
I need to setup a login page "outside" of the SPA. The login page must be displayed if the user is not authenticated, that way the SPA is not bootstrapped until the authentication is successful (by checking a bearer token in the authorization header).
Questions
Do I need a separate Angular installation to load the login page separate from the rest of the app? Or, should I just skip Angular for the login page and build a simple express page with Pug that sends a POST to the API for authentication?
Note: I am seeking general advice on how to proceed and any examples would be very helpful as well.
Dockerfile
### Dev, QA, and Production Docker servers ###
### Stage 1: Build ###
# Base image
FROM node:12 as builder
# Set working directory
RUN mkdir -p /home/angular/app
WORKDIR /home/angular/app
# Add `/home/angular/app/node_modules/.bin` to $PATH
ENV PATH /home/angular/app/node_modules/.bin:$PATH
# Install and cache app dependencies
COPY angular/package.json /home/angular/app/package.json
RUN npm install -g @angular/cli@8 \
&& npm install
# Add app
COPY ./angular /home/angular/app
# Generate build
RUN ng build --output-path=dist
### Stage 2: Server ###
FROM node:12
USER node
# Create working directory
RUN mkdir /home/node/app
## From 'builder' stage copy over the artifacts in dist folder
COPY --from=builder --chown=node /home/angular/app/dist /home/node/app/dist
# Copy Express server code to container
COPY --chown=node ./express /home/node/app
WORKDIR /home/node/app
RUN npm install
# Expose ports
EXPOSE 4201
CMD ["npm", "start"]
Express server for Angular SPA
This server is run when the Dockerfile executes its command CMD ["npm", "start"]
const express = require('express');
const http = require('http');
const app = express();
// Set name of directory where angular distribution files are stored
const dist = 'dist';
// Set port
const port = process.env.PORT || 4201;
// Serve static assets
app.get('*.*', express.static(dist, { maxAge: '1y' }));
// Serve application paths
app.all('*', function (req, res) {
res.status(200).sendFile('index.html', { root: dist });
});
// Create server to listen for connections
const server = http.createServer(app);
server.listen(port, () => console.log("Node Express server for " + app.name + " listening on port " + port));
Angular supports multiple applications under the same project. You can create a separate login application using the following command:
ng generate application <you-login-app-name-here>
This way you can keep only login-related code in '' and the rest in your main app. You can build, test, or run this new app separately using the following commands:
ng build <you-login-app-name-here>
ng test <you-login-app-name-here>
ng serve <you-login-app-name-here>
Angular will generate the build output in the /dist/ folder, which can be mapped to an express route to serve the files.
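As a sketch of the serving side (the folder names 'login-app' and 'main-app' are my assumptions, mirroring the generated /dist/<app-name> layout): a small helper can decide which build's index.html a client-side route falls back to, before wiring it into the existing Express server.

```javascript
// Sketch: pick which app's index.html serves a given client-side route.
// 'dist/login-app' and 'dist/main-app' are hypothetical output folders.
function indexFileFor(urlPath) {
  // Routes under /login belong to the separate login application
  if (urlPath === '/login' || urlPath.startsWith('/login/')) {
    return 'dist/login-app/index.html';
  }
  // Everything else bootstraps the main SPA
  return 'dist/main-app/index.html';
}

// Wiring into the Express server above would look roughly like:
//   app.all('*', (req, res) =>
//     res.sendFile(indexFileFor(req.path), { root: __dirname }));
```

This keeps the login bundle out of the main app's build, so the SPA is never downloaded until authentication succeeds.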
I am trying to run a Node.js application that renders HTML pages with the ECT templating engine in Docker. It works just fine when I interact directly with the running container, using the command below and running node inside it:
docker run -p 80:5000 -it abdullahshahin/admin-panel bash
but when I run it as a daemon, express shows the error below:
Error: Failed to lookup view "index" in views directory "/views"
Below is the declaration of ect:
var ectRenderer = ECT({ watch: true, root:'/var/njs/html/views', ext : '.ect' });
I also tried root: __dirname + '/views' and root: './views'; neither helped.
Below is the main app file code; I am using MVC in this one:
// DECLARE VARIABLES
var express = require('express');
var instance = express();
var parser = require('body-parser');
instance.use(parser({strict:false}));
var commander = require('commander');
var ECT = require('ect');
var ectRenderer = ECT({ watch: true, root: __dirname + '/views', ext : '.ect' });
// PROMPT USER TO ENTER PORT
commander.option('-p, --port <n>', 'Port to run server on',parseInt).parse(process.argv);
if(!commander.port)
{
console.log("Please provide a port number");
process.exit(1);
}
// EXPRESS USES
instance.set('view engine', 'ect');
instance.engine('ect', ectRenderer.render);
// EXPRESS TO USE ROUTES
require("./routes/routes.js")(instance);
//instance.use(parser);
// EXPRESS TO USE PROMPTED PORT
instance.listen(commander.port);
Below is the Dockerfile content:
FROM ubuntu
RUN apt-get update
RUN apt-get --yes install software-properties-common
RUN apt-add-repository -y ppa:chris-lea/node.js
RUN apt-get update
RUN apt-get --yes install nodejs
COPY . /var/njs/html
RUN cd /var/njs/html; npm install
EXPOSE 1234
CMD node /var/njs/html/app.js -p 1234
Does anyone have any thoughts on this?
I just found it: I specified the working directory with the -w option when running the container as a daemon, as below:
docker run -p 80:1234 -w="/var/njs/html" -d abdullahshahin/admin-panel
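An equivalent fix can be baked into the Dockerfile instead, so the -w flag is no longer needed at run time (a sketch against the Dockerfile shown in the question):

```dockerfile
# With WORKDIR set, relative paths such as './views' resolve the same
# way whether the container runs interactively or as a daemon
WORKDIR /var/njs/html
CMD node app.js -p 1234
```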
https://docs.docker.com/reference/run/#env-environment-variables
I just started learning Node.js. First I installed node, npm, and express.
I want to work with npm, but I don't know how to start. The commands I ran in the terminal were:
sh-4.2$ cd new/
sh-4.2$ express new-project
sh-4.2$ node app
But I was not able to connect to localhost:3000.
I dislike automatic project generators, so here is how to create a new express project manually.
Create new folder:
mkdir myNewApp
cd myNewApp
Create a new package.json (it makes managing dependencies much easier) - just press enter for every question; you can change these things later:
npm init
Install express and save it in our package.json:
npm install express --save
Create our main server file:
touch server.js
And paste the following:
var express = require('express'),
server = express();
server.get('/', function (req, res) {
res.send('hello world');
});
server.listen(3000);
Now start it:
node server.js
And visit http://localhost:3000 in your browser.
Try this command in your project folder after you've generated the project:
node bin/www
The code to run the server is placed in this file.
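Putting the pieces together, the workflow the question seems to be missing is to enter the generated folder and install dependencies before starting the app; roughly (folder name taken from the question):

```shell
express new-project    # scaffold an app into ./new-project
cd new-project         # the next commands run inside the generated folder
npm install            # install the dependencies the generator declared
node bin/www           # start it, then open http://localhost:3000
```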