Cloudflare is blocking my request from local but not from Docker - Node.js

I have a nasty situation.
I use services from a provider that recently put Cloudflare security in front of its API. They explained that, to get past Cloudflare, I have to add a "User-Agent": "Cosmos-API-Request" header.
This works with curl, but it didn't work with axios.
To my surprise, I found that it just doesn't work from my machine: if I dockerize my POC and run the Docker container, it works!
My machine is a MacBook Pro 2019 (Intel), running macOS Ventura 13.1.
Node.js: v19.4.0
Axios: 1.2.3
I wrote a POC to send to them as a sandbox; that was when I found the problem only happens locally.
Here is the POC:
const axios = require("axios");

axios
  .get("https://cdn-cosmos.bluesoft.com.br/products/7891000041178", {
    headers: {
      "User-Agent": "Cosmos-API-Request"
    }
  })
  .then((response) => {
    console.log(response.status);
  })
  .catch((error) => {
    console.log(error);
  });
I'm expecting response.status to be 200.
Here is my POC's Dockerfile:
FROM node
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
CMD [ "npm", "start" ]

Related

How to communicate Express API and React in separate Docker containers

I have a simple application that grabs data from Express and displays it in React. It works as intended without Docker, but not when launching them as containers. Both React and Express launch and can be viewed in the browser at localhost:3000 and localhost:5000 after starting the containers with Docker.
How they are communicating
In the react-app package.json, I have
"proxy": "http://localhost:5000"
and a fetch to the express route.
React Dockerfile
FROM node:17 as build
WORKDIR /code
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx:1.12-alpine
COPY --from=build /code/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Express Dockerfile
FROM node:17
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5000
CMD ["npm", "start"]
docker-compose.yml
version: "3"
services:
react-app:
image: react
stdin_open: true
ports:
- "3000:80"
networks:
- react-express
api-server:
image: express
ports:
- "5000:5000"
networks:
- react-express
networks:
react-express:
driver: bridge
From your example I figure you are using react-scripts?
If so, the proxy parameter only works in development, with npm start:
"Keep in mind that proxy only has effect in development (with npm start), and it is up to you to ensure that URLs like /api/todos point to the right thing in production."
See: https://create-react-app.dev/docs/proxying-api-requests-in-development/
Since the proxy in package.json does not work here, you can instead put this in your React app (the same Dockerfile and docker-compose setup is used):
const api = axios.create({
  baseURL: "http://localhost:5000"
})
and make requests to Express like this:
api.post("/logs", {data:value})
.then(res => {
console.log(res)
})
This may raise a CORS error, so you can put this in your Express API, in the same file where you set the port and call listen:
import cors from 'cors'
const app = express();
app.use(cors({
  origin: 'http://localhost:3000'
}))
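A small refinement (a sketch, assuming create-react-app, where environment variables prefixed with REACT_APP_ are inlined at build time): instead of hard-coding http://localhost:5000, read the API origin from a build-time variable so it can differ per environment. REACT_APP_API_URL is a name chosen here for illustration; you would set it in the compose file or as a Docker build argument.

// api.js - the variable is baked in when `npm run build` runs in the React Dockerfile
import axios from "axios";

const api = axios.create({
  baseURL: process.env.REACT_APP_API_URL || "http://localhost:5000"
});

export default api;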

CORS "Access-Control-Allow-Credentials", "Access-Control-Allow-Origin" response headers appear for local but not docker container

Can anyone help me solve this issue?
I'm running a PHP Laravel backend with Fortify and Sanctum for auth. I also have a React frontend that makes API calls to this backend; both of these are dockerized. If I run the PHP server locally, there are no issues with CORS, but once the backend is running in a Docker container, the following issue pops up:
CORS Issue
Access to XMLHttpRequest at 'http://localhost:8000/api/tickets' from origin 'http://localhost' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
On the network tab, we see that the request is missing the following fields in the response headers:
Access-Control-Allow-Credentials and Access-Control-Allow-Origin.
Is there a way to have two Docker containers allow CORS?
What I have tried:
Setting up an NGINX container to act as a reverse proxy, including adding the CORS headers in:
location / {
    resolver 127.0.0.11;
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'Accept,Content-Type';
    set $nextjs nextjs_upstream;
    proxy_pass http://$nextjs;
}
Adding all the required CORS fields to the server's config, i.e. the cors.php file:
<?php

return [
    'paths' => ['*', 'api/*', 'login', 'logout', 'sanctum/csrf-cookie', 'api/tickets'],
    'allowed_methods' => ['*'],
    'allowed_origins' => ['http://localhost', '*'],
    'allowed_origins_patterns' => [],
    'allowed_headers' => ['X-Custom-Header', 'Upgrade-Insecure-Requests', '*'],
    'exposed_headers' => [],
    'max_age' => 0,
    'supports_credentials' => true,
];
Note: this attempt specifically might matter, since localhost in allowed_origins could be referring to localhost inside the container. But how would we change it to point outside the container? Would it be that container's IP address or something?
Adding extra installations to the PHP Dockerfile: "&& a2enmod headers \ && sed -ri -e 's/^([ \t]*)(<\/VirtualHost>)/\1\tHeader set Access-Control-Allow-Origin "*"\n\1\2/g' /etc/apache2/sites-available/*.conf"
Interestingly, my login/logout routes don't face the same CORS error that plagues my api/tickets route, but this is beside the point, since it seems more like an issue with Docker containers attempting to communicate.
Note: localhost has no port since NGINX serves the yarn/npm build.
Laravel Dockerfile:
# The base image
FROM php:8.0.2 as base
# Install system dependencies i.e. Zip , curl:
RUN apt-get update && apt-get install -y \
zip \
unzip
# Get extensions for PHP & Install Postgres
RUN apt-get install -y libpq-dev \
&& docker-php-ext-configure pgsql --with-pgsql=/usr/local/pgsql \
&& docker-php-ext-install pdo pdo_pgsql pgsql
# Get the latest composer:
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Copy everything to the working directory
COPY . /app
# Set the current working directory
WORKDIR /app
# Install dependencies for laravel
COPY composer.json ./
RUN composer install
# Move the exec file to our workdir and give permissions
COPY docker-init.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-init.sh
# Expose internal port which will be mapped
EXPOSE 8000
# Run our server
ENTRYPOINT [ "docker-init.sh" ]
Frontend Dockerfile:
FROM node:alpine as build
# Set the current working directory
WORKDIR /usr/app
# Install PM2 globally
RUN npm install --global pm2
# Get all dependencies
COPY package*.json ./
# Copy everything to the working directory
RUN npm install --force --production
RUN yarn add --dev typescript @types/react
COPY . ./
RUN npm run build
# Expose internal port which will be mapped
EXPOSE 3000
USER node
CMD [ "pm2-runtime", "npm", "--", "start" ]
Nginx Dockerfile:
# Prod environment
FROM nginx:alpine
# Remove any existing config files
RUN rm /etc/nginx/conf.d/*
# Copy config files
COPY ./default.conf /etc/nginx/conf.d/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
docker-init.sh
#!/bin/bash
# Exit immediately if a pipeline returns a non zero status
set -e
# Change to our working directory
cd /app
# php artisan migrate
# Serve for all IPv4 addresses:
php artisan serve --host=0.0.0.0 --port=8000
Thanks for the help lads
Solved it; I had to make a change to the app/Http/Kernel.php file. Basically, I had to give the 'api' group the same middleware as 'web':
protected $middlewareGroups = [
    'web' => [
        \App\Http\Middleware\EncryptCookies::class,
        \Illuminate\Cookie\Middleware\AddQueuedCookiesToResponse::class,
        \Illuminate\Session\Middleware\StartSession::class,
        // \Illuminate\Session\Middleware\AuthenticateSession::class,
        \Illuminate\View\Middleware\ShareErrorsFromSession::class,
        \App\Http\Middleware\VerifyCsrfToken::class,
        \Illuminate\Routing\Middleware\SubstituteBindings::class,
    ],
    'api' => [
        EnsureFrontendRequestsAreStateful::class,
        'throttle:api',
        \Illuminate\Routing\Middleware\SubstituteBindings::class,
    ],
];
Becomes:
protected $middlewareGroups = [
    'web' => [
        \App\Http\Middleware\EncryptCookies::class,
        \Illuminate\Cookie\Middleware\AddQueuedCookiesToResponse::class,
        \Illuminate\Session\Middleware\StartSession::class,
        // \Illuminate\Session\Middleware\AuthenticateSession::class,
        \Illuminate\View\Middleware\ShareErrorsFromSession::class,
        \App\Http\Middleware\VerifyCsrfToken::class,
        \Illuminate\Routing\Middleware\SubstituteBindings::class,
    ],
    'api' => [
        EnsureFrontendRequestsAreStateful::class,
        'throttle:api',
        \Illuminate\Routing\Middleware\SubstituteBindings::class,
        \App\Http\Middleware\EncryptCookies::class,
        \Illuminate\Cookie\Middleware\AddQueuedCookiesToResponse::class,
        \Illuminate\Session\Middleware\StartSession::class,
        \Illuminate\View\Middleware\ShareErrorsFromSession::class,
        \App\Http\Middleware\VerifyCsrfToken::class,
        \Illuminate\Routing\Middleware\SubstituteBindings::class,
    ],
];
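For completeness, a sketch of the matching client-side call, assuming axios in the React frontend: Sanctum's cookie-based SPA auth only works when the browser sends credentials, which is also why the Access-Control-Allow-Credentials response header matters here. The port and routes mirror the ones above.

import axios from "axios";

const api = axios.create({
  baseURL: "http://localhost:8000", // host port mapped to the Laravel container
  withCredentials: true             // send the session and XSRF-TOKEN cookies
});

async function loadTickets() {
  // Sanctum expects the CSRF cookie to be primed before stateful requests
  await api.get("/sanctum/csrf-cookie");
  const { data } = await api.get("/api/tickets");
  return data;
}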

Can't access Hapi server inside Docker Container

I built a simple Node.js server with Hapi and tried to run it inside a Docker container.
It runs nicely inside Docker, but I can't get access to it (even though I have done port mapping).
const hapi = require("@hapi/hapi");

const startServer = async () => {
  const server = hapi.Server({
    host: "localhost",
    port: 5000,
  });

  server.route({
    method: 'GET',
    path: '/sample',
    handler: (request, h) => {
      return 'Hello World!';
    }
  });

  await server.start();
  console.log(`Server running on port ${server.settings.port}`);
};

startServer();
The Dockerfile is as follows:
FROM node:alpine
WORKDIR /usr/app
COPY ./package.json ./
RUN npm install
COPY ./ ./
CMD [ "npm","run","dev" ]
To run it in Docker, I first build with:
docker build .
I then run the image from the above command with port mapping:
docker run -p 5000:5000 <image-name>
When I try to access it via Postman at http://localhost:5000/sample (or even localhost:5000/sample), it keeps saying it couldn't connect to the server, and when I open it in Chrome, it likewise says it can't display the page.
P.S. When I run the code as usual, without a Docker container, simply with npm run dev from my terminal, it runs just fine.
So I am confident the API code is fine.
Any suggestions?
As mentioned by @pzaenger: in your Hapi server configuration, change localhost to 0.0.0.0, i.e.
host: 'localhost' becomes host: '0.0.0.0'
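For reference, a sketch of that change in context (the rest of the server code stays the same):

const hapi = require("@hapi/hapi");

const startServer = async () => {
  const server = hapi.Server({
    // "localhost" only accepts connections that originate inside the container,
    // so the published port (-p 5000:5000) can never reach the server;
    // 0.0.0.0 listens on all interfaces.
    host: "0.0.0.0",
    port: 5000,
  });

  // routes unchanged ...

  await server.start();
  console.log(`Server running on port ${server.settings.port}`);
};

startServer();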

Dockerized NodeJS application is unable to invoke another dockerized SpringBoot API

I am running a Spring Boot application in one Docker container and a VueJS application in another Docker container, using the following docker-compose.yml:
version: '3'
services:
  backend:
    container_name: backend
    build: ./backend
    ports:
      - "28080:8080"
  frontend:
    container_name: frontend
    build: ./frontend
    ports:
      - "5000:80"
    depends_on:
      - backend
I am trying to invoke the Spring Boot REST API from my VueJS application using http://backend:8080/hello, and it fails with GET http://backend:8080/hello net::ERR_NAME_NOT_RESOLVED.
Interestingly, if I go into the frontend container and ping backend, it is able to resolve the hostname backend, and I can even get the response using wget http://backend:8080/hello.
Even more interestingly, I added another Spring Boot application to the docker-compose file, and from that application I am able to invoke http://backend:8080/hello using RestTemplate!
My frontend/Dockerfile:
FROM node:9.3.0-alpine
ADD package.json /tmp/package.json
RUN cd /tmp && yarn install
RUN mkdir -p /usr/src/app && cp -a /tmp/node_modules /usr/src/app
WORKDIR /usr/src/app
ADD . /usr/src/app
RUN npm run build
ENV PORT=80
EXPOSE 80
CMD [ "npm", "start" ]
In my package.json the start script is mapped to "node server.js", and my server.js is:
const express = require('express')
const app = express()
const port = process.env.PORT || 3003
const router = express.Router()
app.use(express.static(`${__dirname}/dist`)) // set the static files location for the static html
app.engine('.html', require('ejs').renderFile)
app.set('views', `${__dirname}/dist`)
router.get('/*', (req, res, next) => {
  res.sendFile(`${__dirname}/dist/index.html`)
})
app.use('/', router)
app.listen(port)
console.log('App running on port', port)
Why is the hostname not resolvable from the application, but resolvable from the terminal? Am I missing some Docker or Node.js configuration?
Finally figured it out. Actually, there is no issue: when I run my frontend VueJS application in a Docker container and access it from the browser, the HTML and JS files are downloaded to my browser's machine, which is my host, so the REST API call is made from the host machine. From my host, the Docker container hostname (backend) cannot be resolved.
The solution: instead of using the internal Docker hostname and port (backend:8080), I need to use my host's name and the mapped port (localhost:28080) when making REST calls.
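For example, a sketch of what the call from the Vue app could look like (VUE_APP_API_URL is a hypothetical build-time variable; hard-coding http://localhost:28080 works too, since that is the port published in docker-compose.yml):

import axios from "axios";

// The browser runs on the host, so it must use the host-mapped port (28080),
// not the internal service name "backend".
const api = axios.create({
  baseURL: process.env.VUE_APP_API_URL || "http://localhost:28080"
});

export function hello() {
  return api.get("/hello").then((res) => res.data);
}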
I would suggest:
docker ps to get the names/IDs of the running containers
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' BACKEND_CONTAINER_NAME to get the backend container's IP address from the host.
Now put this IP in the frontend app and it should be able to connect to your backend.

How can I run Ghost in Docker with the google/nodejs-runtime image?

I'm very new to Docker, Ghost and node really, so excuse any blatant ignorance here.
I'm trying to set up a Docker image/container for Ghost based on the google/nodejs-runtime image, but can't connect to the server when I run via Docker.
A few details: I'm on OS X, so I'm using boot2docker. I'm running Ghost as an npm module, configured to use port 8080 because that's what google/nodejs-runtime expects. This configuration runs fine outside of Docker when I use npm start. I also tried a simple "Hello, World" Express app on port 8080, which works from within Docker.
My directory structure looks like this:
my_app
  content/
  Dockerfile
  ghost_config.js
  package.json
  server.js
package.json
{
  "name": "my_app",
  "private": true,
  "dependencies": {
    "ghost": "0.5.2",
    "express": "3.x"
  }
}
Dockerfile
FROM google/nodejs-runtime
ghost_config.js
I changed all occurrences of port 2368 to 8080.
server.js
// This Ghost server works with npm start, but not with Docker
var ghost = require('ghost');
var path = require('path');
ghost({
  config: path.join(__dirname, 'ghost_config.js')
}).then(function (ghostServer) {
  ghostServer.start();
});
// This "Hello World" app works in Docker
// var express = require('express');
// var app = express();
// app.get('/', function(req, res) {
//   res.send('Hello World');
// });
// var server = app.listen(8080, function() {
//   console.log('Listening on port %d', server.address().port);
// });
I build my Docker image with docker build -t my_app ., then run it with docker run -p 8080 my_app, which prints this to the console:
> my_app# start /app
> node server.js
Migrations: Up to date at version 003
Ghost is running in development...
Listening on 127.0.0.1:8080
Url configured as: http://localhost:8080
Ctrl+C to shut down
docker ps outputs:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f4c7027f62f my_app:latest "/nodejs/bin/npm sta 23 hours ago Up About a minute 0.0.0.0:49165->8080/tcp pensive_lovelace
And boot2docker ip outputs:
The VM's Host only interface IP address is: 192.168.59.103
So I point my browser at 192.168.59.103:49165 and get nothing, and no output in the Docker logs. Like I said above, running the "Hello World" app in the same server.js works fine.
Everything looks correct to me. The only odd thing that I see is that sqlite3 fails in npm install during docker build:
[sqlite3] Command failed:
module.js:356
Module._extensions[extension](this, filename);
^
Error: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found
...
node-pre-gyp ERR! Testing pre-built binary failed, attempting to source compile
but the source compile appears to succeed just fine.
I hope I'm just doing something silly here.
In your ghost config, change the related server host to 0.0.0.0 instead of 127.0.0.1:
server: {
  host: '0.0.0.0',
  ...
}
PS: for the SQLite error, try this Dockerfile:
FROM phusion/baseimage:latest
# Set correct environment variables.
ENV HOME /root
# Regenerate SSH host keys. baseimage-docker does not contain any, so you
# have to do that yourself. You may also comment out this instruction; the
# init system will auto-generate one during boot.
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
# ...put your own build instructions here...
# Install Node.js and npm
ENV DEBIAN_FRONTEND noninteractive
RUN curl -sL https://deb.nodesource.com/setup | sudo bash -
RUN apt-get install -y nodejs
# Copy Project Files
RUN mkdir /root/webapp
WORKDIR /root/webapp
COPY app /root/webapp/app
COPY package.json /root/webapp/
RUN npm install
# Add runit service for Node.js app
RUN mkdir /etc/service/webapp
ADD deploy/runit/webapp.sh /etc/service/webapp/run
RUN chmod +x /etc/service/webapp/run
# Add syslog-ng Logentries config file
ADD deploy/syslog-ng/logentries.conf /etc/syslog-ng/conf.d/logentries.conf
# Expose Ghost port
EXPOSE 2368
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
Note I used phusion/baseimage instead of google/nodejs-runtime and installed node.js & npm with:
ENV DEBIAN_FRONTEND noninteractive
RUN curl -sL https://deb.nodesource.com/setup | sudo bash -
RUN apt-get install -y nodejs
In your Dockerfile, you need this command EXPOSE 8080.
But that alone does not make the port reachable from outside the Docker container; when you run a container from that image you still need to 'map' (publish) that port. For example:
$ docker run -d -t -p 80:8080 <imagename>
The -p 80:8080 maps port '8080' in the container to port '80' on the outside while it is running; the order is host:container.
The syntax always confuses me (I think it is backwards).
