I am using the latest version of Strapi (v3.x) with Node v10.15.2. I am trying to deploy to Azure Web App using this server.js configuration.
module.exports = ({ env }) => ({
  host: env('HOST', 'localhost'),
  port: env.int('PORT', 1337),
  url: 'https://clinicaback.azurewebsites.net',
  cron: {
    enabled: false
  },
  admin: {
    url: "/dashboard",
    autoOpen: false,
    build: {
      backend: "https://clinicaback.azurewebsites.net"
    }
  }
});
It builds successfully but seems to be running with the development configuration. Here is the output from Azure's Kudu service.
However, when I open the website, it does not load. I ran Diagnose and solve problems from Azure, and it shows this:
The webapp only supports port 80 and port 443. It is recommended to modify the relevant port settings in your code.
It is recommended to release the code after build, and add npx serve -s as the Startup Command under your App Service > General settings.
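For context, App Service injects the listening port through the `PORT` environment variable, which the `env.int('PORT', 1337)` line in the config above already reads. A minimal sketch of that fallback pattern (illustrative, not Strapi's actual code):

```javascript
// Sketch: read the port the platform injects, falling back to Strapi's default.
// This is the pattern env.int('PORT', 1337) implements.
const port = parseInt(process.env.PORT || '1337', 10);
console.log(port);
```

If Azure reports only ports 80/443 as supported, that refers to the external ports; internally the app must still bind to whatever `PORT` is injected.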
Bug report
Required System information
Node.js version: v16.17.1
NPM version: 8.15.0
Strapi version: 4
Database: mysql
Operating system: o2switch
Describe the bug
I'm trying to deploy a Strapi app on o2switch using Node.js (and MySQL) as the backend for a Next.js app.
Steps to reproduce the behavior
Each time I connect to my admin panel this error appears
Screenshots
Before log in
Code snippets
.env :
APP_HOST=127.0.0.1
PORT=1338
./config/server.js:
module.exports = ({ env }) => ({
  host: env('APP_HOST', '0.0.0.0'),
  port: env.int('PORT', 1337),
  app: {
    keys: env.array('APP_KEYS'),
  },
});
I've tried to configure server.js and .env, but nothing changed.
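To illustrate how the `.env` values above interact with the defaults in server.js, here is a minimal sketch of the fallback pattern Strapi's `env()` helper uses (an illustrative reimplementation, not Strapi's actual code):

```javascript
// Illustrative reimplementation of Strapi's env() helper:
// return the environment variable if set, otherwise the default.
const env = (key, defaultValue) => process.env[key] ?? defaultValue;
env.int = (key, defaultValue) =>
  process.env[key] !== undefined ? parseInt(process.env[key], 10) : defaultValue;

// With the .env values from the question:
process.env.APP_HOST = '127.0.0.1';
process.env.PORT = '1338';

console.log(env('APP_HOST', '0.0.0.0')); // '127.0.0.1'
console.log(env.int('PORT', 1337));      // 1338
```

So with the `.env` shown, the app binds to 127.0.0.1:1338, and the defaults only apply when a variable is missing.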
I am following the tutorial for Strapi on GCP App Engine (Node.js standard environment) and am unable to get the app to start because the connection is refused by the GCP Postgres instance (public IP): Error: connect ECONNREFUSED 127.0.0.1:5432.
Why I'm confused
GCP service account permissions: <project_name>@appspot.gserviceaccount.com has the Cloud SQL Client role for the App Engine default service account, so this should apply to all App Engine services.
I have other App Engine Services (python) connecting successfully to other Postgres Databases. This tells me I have the correct permissions, Cloud SQL Admin API enabled, and the correct username/password.
The code works locally (Docker) while linking the GCP Postgres database, but only with TCP routing, not a Unix Socket SQL proxy:
../../cloud_sql_proxy -instances=<project_name>:europe-west1:<sql_instance_name>=tcp:5432 & (sleep 5 && yarn strapi start)
I can login to the locally hosted Strapi app, add users, etc. and the changes are reflected in the GCP Postgres database.
The only difference between the local deployment (docker-compose.yml) and the App engine (app.yml) is how I set the environment variables.
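For reference, the Unix-socket variant of the v1 proxy (the mode that App Engine's /cloudsql/... path mirrors) can be tested locally like this; the socket directory and instance name are illustrative placeholders:

```shell
# Run the v1 Cloud SQL proxy with Unix sockets instead of TCP (sketch):
mkdir -p /cloudsql
./cloud_sql_proxy -dir=/cloudsql -instances=<project_name>:europe-west1:<sql_instance_name> &
sleep 5 && yarn strapi start
```

If this works locally but the App Engine deployment still fails, the problem is more likely in the runtime configuration than in the credentials.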
#Dockerfile
FROM node:14-buster
RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list \
  && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - \
  && apt-get update -y \
  && apt-get install google-cloud-sdk -y
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy && chmod +x cloud_sql_proxy
#docker-compose.yml
version: "3.8"
services:
  dev:
    build: .
    ports:
      - "1337:1337"
    volumes:
      - .:/src
    command: ["yarn", "run", "start"]
    working_dir: /src
    environment:
      NODE_ENV: "production"
      DATABASE_NAME: '<database name>'
      DATABASE_USERNAME: '<username>'
      DATABASE_PASSWORD: '<password>'
      INSTANCE_CONNECTION_NAME: '<project_name>:europe-west1:<instance_name>'
# app.yml
runtime: nodejs14
instance_class: F2
service: strapi
env_variables:
  HOST: '0.0.0.0'
  NODE_ENV: 'local'
  DATABASE_NAME: '<database name>'
  DATABASE_USERNAME: '<username>'
  DATABASE_PASSWORD: '<password>'
  INSTANCE_CONNECTION_NAME: '<project_name>:europe-west1:<instance_name>'
beta_settings:
  cloud_sql_instances: '<project_name>:europe-west1:<instance_name>'
The code that defines the connection in the Node.js project, from the Strapi tutorial:
module.exports = ({ env }) => ({
  defaultConnection: 'default',
  connections: {
    default: {
      connector: 'bookshelf',
      settings: {
        client: 'postgres',
        socketPath: `/cloudsql/${env('INSTANCE_CONNECTION_NAME')}`,
        database: env('DATABASE_NAME'),
        username: env('DATABASE_USERNAME'),
        password: env('DATABASE_PASSWORD'),
      },
      options: {},
    },
  },
});
What have I missed? What else can I check? Someone please help me end this insanity.
What fixed it for me was the following:
Go to the App Engine default service account and give it the following roles (as described here):
Cloud SQL Client
Cloud SQL Editor
Cloud SQL Admin
Change the socketPath key to host in the following default connection settings:
module.exports = ({ env }) => ({
  defaultConnection: 'default',
  connections: {
    default: {
      connector: 'bookshelf',
      settings: {
        client: 'postgres',
        socketPath: `/cloudsql/${env('INSTANCE_CONNECTION_NAME')}`, // <-- rename this key to `host`
        database: env('DATABASE_NAME'),
        username: env('DATABASE_USERNAME'),
        password: env('DATABASE_PASSWORD'),
      },
      options: {},
    },
  },
});
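With the key renamed as described, the connection settings from the question become:

```javascript
module.exports = ({ env }) => ({
  defaultConnection: 'default',
  connections: {
    default: {
      connector: 'bookshelf',
      settings: {
        client: 'postgres',
        // 'host' pointing at the Cloud SQL Unix socket directory
        host: `/cloudsql/${env('INSTANCE_CONNECTION_NAME')}`,
        database: env('DATABASE_NAME'),
        username: env('DATABASE_USERNAME'),
        password: env('DATABASE_PASSWORD'),
      },
      options: {},
    },
  },
});
```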
I'm attempting to add clustering to my Node/Express application via PM2 and deploy it.
I've set up the following command:
pm2 start build/server/app.js -i max
The above works fine locally. I'm testing the functionality on a staging environment on Heroku via Performance 1X.
The log above shows the command starting 1 instance rather than max. It prints the typical info after a successful pm2 start; however, you can see the app immediately crashes afterward.
Any advice or guidance is appreciated.
I ended up using the following documentation: https://pm2.keymetrics.io/docs/integrations/heroku/
Using an ecosystem.config.js with the following:
module.exports = {
  apps: [
    {
      name: `app-name`,
      script: 'build/server/app.js',
      instances: "max",
      exec_mode: "cluster",
      env: {
        NODE_ENV: "localhost"
      },
      env_development: {
        NODE_ENV: process.env.NODE_ENV
      },
      env_staging: {
        NODE_ENV: process.env.NODE_ENV
      },
      env_production: {
        NODE_ENV: process.env.NODE_ENV
      }
    }
  ],
};
Then the following package.json script handles the deployment for the environment I want to deploy to, e.g. production:
"deploy:cluster:prod": "pm2-runtime start ecosystem.config.js --env production --deep-monitoring",
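On Heroku, that script would typically be wired up through a Procfile (a sketch, assuming the script name above):

```
web: npm run deploy:cluster:prod
```

pm2-runtime keeps the process in the foreground, which is what Heroku's dyno model (and containers generally) expect, whereas plain pm2 start daemonizes and exits.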
I got the same error, but I fixed it by adding the following to my package.json file:
{
  "preinstall": "npm i -g pm2",
  "start": "pm2-runtime start build/server/app.js -i 1"
}
This is advised for a production environment, whereas running
pm2 start build/server/app.js -i max
is for development purposes.
I have a React app which uses the webpack dev server. The server proxies to another web API for CRUD. It works when running locally, but when I build the container, the app does not connect.
Webpack config
devServer: {
  contentBase: resolve(__dirname, 'dist'),
  host: 'localhost',
  port: 3001,
  hot: true,
  open: true,
  inline: true,
  proxy: {
    '/api': {
      target: 'http://localhost:4567/streams',
      secure: false,
      pathRewrite: { '^/api': '' },
      changeOrigin: true,
    },
  },
},
dockerfile
FROM node:12-alpine
WORKDIR /app
COPY ./package*.json ./
RUN npm ci
COPY . ./
docker-compose
version: '3.7'
services:
  web:
    container_name: fm-admin
    restart: always
    build:
      context: .
    ports:
      - '3001:3001'
    command: npm start
    environment:
      - CHOKIDAR_USEPOLLING=true
    stdin_open: true
Furthermore, when I swapped the host from localhost to 0.0.0.0, I got the following error:
[HPM] Error occurred while trying to proxy request from 0.0.0.0:3001 to http://localhost:4567/streams (ECONNREFUSED) (https://nodejs.org/api/errors.html#errors_common_system_errors)
But the streams API is running.
Hoping I can get some help here.
The whole context is not described very precisely in the question, especially the service listening at http://localhost:4567.
I assume that target: 'http://localhost:4567/streams' is evaluated inside the container that is listening on :3001, in an attempt to proxy to :4567.
If this is true, then the lack of connectivity is expected: when localhost is used inside the container, the container tries to proxy into itself on :4567.
If you are trying to reach a service listening on :4567 on the host machine, you probably need to use the IP address of the docker0 interface instead of localhost, as suggested in this post.
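Alternatively (a sketch, assuming Docker 20.10+), host.docker.internal can be mapped to the host gateway in the compose file, and the proxy target pointed at it instead of localhost:

```yaml
services:
  web:
    extra_hosts:
      # makes host.docker.internal resolve to the host from inside the container
      - "host.docker.internal:host-gateway"
```

The webpack proxy target would then become http://host.docker.internal:4567/streams.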
I built a simple NodeJS server with Hapi and tried to run it inside a Docker container.
It runs nicely inside Docker, but I can't get access to it (even though I have done port mapping).
const hapi = require("@hapi/hapi");

const startServer = async () => {
  const server = hapi.Server({
    host: "localhost",
    port: 5000,
  });

  server.route({
    method: 'GET',
    path: '/sample',
    handler: (request, h) => {
      return 'Hello World!';
    }
  });

  await server.start();
  console.log(`Server running on port ${server.settings.port}`);
};

startServer();
Docker file is as follows:
FROM node:alpine
WORKDIR /usr/app
COPY ./package.json ./
RUN npm install
COPY ./ ./
CMD [ "npm","run","dev" ]
To run docker, I first build with:
docker build .
I then run the image I get from above command to do port mapping:
docker run -p 5000:5000 <image-name>
When I try to access it via Postman at http://localhost:5000/sample (or just localhost:5000/sample), it keeps saying it couldn't connect to the server, and when I open it in Chrome it likewise can't display the page.
P.S. When I run the code as usual without the Docker container, simply with npm run dev from my terminal, it runs just fine.
So I am confident the API code is fine.
Any suggestions?
As mentioned by @pzaenger, in your Hapi server configuration change localhost to 0.0.0.0:
host: 'localhost' to host: '0.0.0.0'
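That is, the server options from the question become:

```javascript
const server = hapi.Server({
  host: "0.0.0.0", // listen on all interfaces so Docker's -p 5000:5000 mapping can reach it
  port: 5000,
});
```

Inside a container, binding to localhost means the server only accepts connections originating inside that same container, which is why the published port appeared dead.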