I am currently working with Docker, Django and Nuxt.js. I can fetch JSON data from https://jsonplaceholder.typicode.com/posts in asyncData or nuxtServerInit without any error. But when I try to fetch from my Django REST API service, I get an error (CORS headers are added). I tested fetching posts from the Django REST API in the created() hook and that works. I don't understand why it throws an error.
pages/index.vue
export default {
  asyncData(context) {
    return context.app.$axios.$get('http://127.0.0.1:8000') // this is throwing an error
      .then((res) => {
        let posts = [];
        posts = res;
        return { posts: res }
      })
  },
  created() {
    this.$axios.$get('http://127.0.0.1:8000') // this is not throwing any error
      .then(e => { console.log(e); })
  }
}
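For what it's worth, where each hook runs can be checked directly, since Nuxt exposes process.server and process.client flags. Below is a small debugging sketch of the same component; the console.log lines are additions for illustration, the requests themselves are unchanged.

export default {
  async asyncData(context) {
    // on the first page load this runs inside the web (Nuxt) container,
    // so process.server is true and 127.0.0.1 points at that container
    console.log('asyncData, server?', process.server)
    const posts = await context.app.$axios.$get('http://127.0.0.1:8000')
    return { posts }
  },
  created() {
    // created() runs during SSR and again in the browser; the log visible in
    // devtools comes from the client run, where 127.0.0.1 is the visitor's machine
    console.log('created, client?', process.client)
  }
}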
My docker-compose file:
version: '3'
networks:
  main:
    driver: bridge
services:
  api:
    container_name: blog_api
    build:
      context: ./backend
    ports:
      - "8000:8000"
    command: >
      sh -c "python manage.py runserver 0.0.0.0:8000"
    volumes:
      - ./backend:/app
    networks:
      - main
  web:
    container_name: blog_web
    build:
      context: ./frontend
    ports:
      - "8080:8080"
    volumes:
      - ./frontend:/code
    networks:
      - main
Backend Dockerfile:
FROM python:3.8-alpine
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
RUN mkdir /app
WORKDIR /app
COPY ./requirements.txt /app/
RUN pip install -r requirements.txt
EXPOSE 8000
ADD . /app
Frontend Dockerfile:
FROM node:13-alpine
WORKDIR /code/
COPY . .
EXPOSE 8080
ENV NUXT_HOST=0.0.0.0
ENV NUXT_PORT=8080
RUN npm install
CMD ["npm", "run", "dev"]
nuxt.config.js
server: {
  port: 8080,
  host: '0.0.0.0'
}
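For reference, the @nuxtjs/axios module allows different base URLs for server-side and browser-side requests (baseURL and browserBaseURL). A minimal sketch of what that could look like here, using the compose service name for the server side; the exact values are assumptions about this setup, not a verified fix:

// nuxt.config.js (sketch)
export default {
  modules: ['@nuxtjs/axios'],
  axios: {
    baseURL: 'http://api:8000',              // server-side requests, via the compose service name
    browserBaseURL: 'http://127.0.0.1:8000'  // browser requests, via the port published on the host
  },
  server: {
    port: 8080,
    host: '0.0.0.0'
  }
}

With a base URL configured, the component would request a relative path (for example context.app.$axios.$get('/')) instead of hard-coding the host.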
Related
I'm trying to run both the React and Nest apps with docker-compose up, but it fails while running the server.
However, there are no problems when starting the two containers separately.
The error is:
server | Unknown command: "start:dev"
server |
server | Did you mean one of these?
server | npm run start:dev # run the "start:dev" package script
docker-compose.yml:
version: "3"
services:
client:
container_name: client
build:
context: ./client
dockerfile: Dockerfile
image: client
ports:
- "3000:3000"
volumes:
- ./client:/app
server:
container_name: server
build:
context: ./server
dockerfile: Dockerfile
image: server
ports:
- "8080:8080"
volumes:
- ./server:/app
Server Dockerfile:
FROM node:16
WORKDIR ./app
COPY package.json ./
RUN npm install
COPY . .
ENV PORT=8080
EXPOSE 8080
CMD ["npm", "run", "start:dev"]
Client Dockerfile:
FROM node:16
WORKDIR ./app
COPY package.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
The issue seems to be with docker-compose.yml, but I can't tell whether it's related to Nest or to Docker.
Thank you for the help in advance.
I am creating a web architecture with Vue.js, a Node.js/Express backend and a MongoDB database. Everything is dockerized in separate containers, but on the same subnet.
Docker config files:
docker-compose.yml
version: "2"
services:
backend:
container_name: WESCANADMINBACK
image: "node:latest"
depends_on:
- db
working_dir: /home/node/app
volumes:
- /home/***/***/data/backend:/home/node/app
- /usr/app/node_modules
environment:
- MONGO_URL=mongodb://172.0.43.9:27017/wescan
- APP_PORT=8080
expose:
- "8080"
command: "node app.js"
networks:
adminNet:
ipv4_address: 172.0.43.8
db:
container_name: WESCANADMINDB
image: mongo:4.0
restart: always
networks:
adminNet:
ipv4_address: 172.0.43.9
frontend:
container_name: WESCANADMIN
build:
context: /home/***/***/data
volumes:
- /home/***/***/data:/app
- /app/node_modules
expose:
- "80"
environment:
- BACKEND_URL=http://172.0.43.8/wescan
networks:
adminNet:
ipv4_address: 172.0.43.10
networks:
adminNet:
driver: bridge
ipam:
config:
- subnet: 172.0.43.0/24
Dockerfile (frontend)
FROM node:latest as builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx:alpine as production-build
COPY ./.nginx/nginx.conf /etc/nginx/nginx.conf
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /app/dist /usr/share/nginx/html
ENV HOST 0.0.0.0
EXPOSE 80
ENTRYPOINT ["nginx", "-g", "daemon off;"]
I tried to send a request from my server to the IP address of the API and it works. However, when the request is sent from the web application with Axios, it is impossible to access the API.
import axios from 'axios'

const instance = axios.create({
  crossDomain: true,
  baseURL: 'http://172.0.43.8:8080/api'
})

export default instance
I tried a lot of things from the web, like ENV HOST 0.0.0.0 in the Dockerfile or crossDomain: true in the Axios parameters, but nothing worked.
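One detail that may be relevant (an observation, not a confirmed diagnosis): the Axios call runs in the visitor's browser, and the 172.0.43.x addresses only exist on the Docker bridge network, so a browser generally cannot reach them. A sketch of a browser-facing instance, assuming the backend port were published to the host with a ports: mapping; the host name and port below are assumptions, not part of the compose file above:

import axios from 'axios'

const instance = axios.create({
  // the browser talks to the Docker host, not to the container subnet
  baseURL: 'http://localhost:8080/api'
})

export default instance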
cya
I want to generate an .env and an .env.example file and add them to the Docker image during the docker build with a Dockerfile.
The problem is that the .env.example file is not copied into the Docker image. I think it works for the normal .env, because I copy .env.production and save it as .env.
However, I am using Node.js and TypeScript. Most people know that there is a way to use env variables in the code by generating an env.d.ts and an env.example.
When running docker compose, I unfortunately get this error:
no such file or directory, open '.env.example'
Apparently the .env.example is not copied to the dist folder. I need this file, though.
Dockerfile
# stage 1 building the code
FROM node as builder
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
COPY . .
COPY .env.production .env
RUN npm run build
COPY .env.example ./dist
# stage 2
FROM node
WORKDIR /usr/app
COPY package*.json ./
RUN npm install --production
COPY --from=builder /usr/app/dist ./dist
COPY ormconfig.docker.json ./ormconfig.json
EXPOSE 4000
CMD node dist/index.js
docker-compose.yaml
version: "3.7"
services:
db:
image: postgres
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
POSTGRES_DB: postgres
volumes:
- ./pgdata:/var/lib/postgresql/data
ports:
- "5432:5432"
web:
build:
context: ./
dockerfile: ./Dockerfile
depends_on:
- db
ports:
- "4000:4000"
volumes:
- ./src:/src
Example of how I am using env variables in my application:
env.d.ts
declare namespace NodeJS {
  interface ProcessEnv {
    PORT: string
    EMAIL: string
    EMAIL_PASSWORD: string
  }
}
env.example (must be there)
PORT=
EMAIL=
EMAIL_PASSWORD=
index.ts
import 'dotenv-safe/config'
import express from 'express'

const app = express()

app.listen(process.env.PORT, () => {
  console.log(`Server started on port ${process.env.PORT}`)
})
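For reference, dotenv-safe resolves .env and .env.example relative to the process's working directory by default, but its config() call also accepts explicit paths. A small sketch of the entry point doing that; the paths are assumptions about where the files end up inside the image:

// index.ts (sketch, paths are assumptions about the image layout)
import * as dotenvSafe from 'dotenv-safe'
import express from 'express'

dotenvSafe.config({
  path: '.env',                   // the file copied from .env.production
  example: './dist/.env.example'  // wherever .env.example actually lands
})

const app = express()

app.listen(process.env.PORT, () => {
  console.log(`Server started on port ${process.env.PORT}`)
})

Alternatively, copying .env and .env.example into /usr/app in the second build stage should keep the default lookup working.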
I'm a newbie with Docker, but I want to dockerize my Node.js + nginx + React.js app via docker compose, and I get this error when trying to create the nginx image (logging in via Docker Hub doesn't help):
ERROR [nodereactcrm_client] FROM docker.io/library/build:latest
failed to solve: rpc error: code = Unknown desc = failed to load cache key: pull access denied, repository
does not exist or may require authorization: server message: insufficient_scope: authorization failed
My React Dockerfile:
FROM node:alpine as builder
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm install
COPY . ./
FROM nginx
COPY --from=build /home/node/dist /usr/share/nginx/html
COPY ./default.conf /etc/nginx/conf.d
My docker-compose file:
version: '3'
services:
  db:
    container_name: db
    image: mysql
    ports:
      - '3306:3306'
    environment:
      MYSQL_ROOT_PASSWORD: root
  api:
    build:
      dockerfile: Dockerfile
      context: ./server
    volumes:
      - /app/node_modules
      - ./server:/app
    links:
      - db
    ports:
      - '5000:5000'
    depends_on:
      - db
  client:
    build:
      dockerfile: Dockerfile
      context: ./client
    volumes:
      - /app/node_modules
      - ./client:/app
    links:
      - api
    ports:
      - '80:80'
As #anemyte answered, there was a typo: I just needed to change COPY --from=build to COPY --from=builder.
I am trying to run a Node/React project inside Docker containers. I have a Node.js server for the APIs and the client app. I also have concurrently installed, and everything works fine when running npm run dev.
The issue is that when I run the server and the app via a docker-compose.yml file, I get the following error from the client:
client | [HPM] Error occurred while trying to proxy request /api/current_user from localhost:3000 to http://localhost:5000 (ECONNREFUSED) (https://nodejs.org/api/errors.html#errors_common_system_errors)
Here is the docker-compose.yml
version: "3"
services:
frontend:
container_name: client
build:
context: ./client
dockerfile: Dockerfile
image: client
ports:
- "3000:3000"
volumes:
- ./client:/usr/src/app
networks:
- local
backend:
container_name: server
build:
context: ./
dockerfile: Dockerfile
image: server
ports:
- "5000:5000"
depends_on:
- frontend
volumes:
- ./:/usr/src/app
networks:
- local
networks:
local:
driver: bridge
Server Dockerfile
FROM node:lts-slim
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
EXPOSE 5000
# You can change this
CMD [ "npm", "run", "dev" ]
Client Dockerfile
FROM node:lts-slim
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
EXPOSE 3000
CMD [ "npm", "start" ]
I am using "http-proxy-middleware": "^0.21.0" so my setupProxy.js is
const proxy = require('http-proxy-middleware');

module.exports = function (app) {
  app.use(proxy('/auth/google', { target: 'http://localhost:5000' }));
  app.use(proxy('/api/**', { target: 'http://localhost:5000' }));
};
You should use the container_name instead of localhost:
app.use(proxy('/auth/google', { target: 'http://server:5000' }));
app.use(proxy('/api/**', { target: 'http://server:5000' }));
You can also check these details by inspecting your network using the following command:
docker inspect <network_name>
It will show all the containers connected to the network, as well as the host names created for those containers.
Note: host names are created based on container_name when it is set, otherwise on the service name.
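If the same setupProxy.js also has to work outside of Docker, the target can be made configurable instead of hard-coded. A small sketch; the API_HOST variable name is a made-up example, not something the project already defines:

// setupProxy.js (sketch, API_HOST is a hypothetical variable)
const proxy = require('http-proxy-middleware');

// in docker-compose, API_HOST could be set to the compose hostname (e.g. "server");
// without it the proxy falls back to localhost for local development
const target = `http://${process.env.API_HOST || 'localhost'}:5000`;

module.exports = function (app) {
  app.use(proxy('/auth/google', { target }));
  app.use(proxy('/api/**', { target }));
};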