Bug report
Required System information
Node.js version: v16.17.1
NPM version: 8.15.0
Strapi version: 4
Database: mysql
Operating system: o2switch
Describe the bug
I'm trying to deploy a Strapi app on o2switch using Node.js (and MySQL) as the back end for a Next.js app.
Steps to reproduce the behavior
Each time I connect to my admin panel, this error appears.
Screenshots
Before logging in
Code snippets
.env:
APP_HOST=127.0.0.1
PORT=1338
./config/server.js:
module.exports = ({ env }) => ({
  host: env('APP_HOST', '0.0.0.0'),
  port: env.int('PORT', 1337),
  app: {
    keys: env.array('APP_KEYS'),
  },
});
I've tried configuring server.js and .env, but nothing changed.
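For reference, this is the minimal server.js I would expect to work here, as a sketch assuming the host and port must match whatever the o2switch Node.js application setup assigns (the 127.0.0.1 / 1338 defaults below simply mirror my .env values, they are not values documented by o2switch):

module.exports = ({ env }) => ({
  // Sketch: defaults mirror the .env above; the host's Node.js setup proxies
  // the assigned port, so APP_HOST/PORT must match its configuration.
  host: env('APP_HOST', '127.0.0.1'),
  port: env.int('PORT', 1338),
  app: {
    keys: env.array('APP_KEYS'),
  },
});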
Related
I'm trying to publish a program to a GitLab server via electron-builder. This is my electron-config.yml file:
appId: ch.janisperren.arawexdashboard
publish:
  provider: github
  token: mytoken
  host: gitlab.myserver.ch
  owner: myname
  repo: myrepo
asar: true
files:
  - "app.js"
  - "dist/myfiles/*"
linux:
  target:
    target: AppImage
    arch: x64
The app is generated, but it is not published to the repo. I always get the following error message:
API V3 is no longer supported. Use API V4 instead
But I don't know how to force electron-builder to use API V4. Any suggestions?
Thanks!
I'm creating a web app with a React front end whose build is served by a Node.js back end. Secured API access is handled with Auth0 token validation using JWT & JWKS (which I suspect is the cause of this).
Everything worked fine on my local machine (Windows 10.0.19043 Pro x86) when running node directly for local testing, and it also worked fine when I built the Docker image using this Dockerfile:
FROM node:16
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install  # assumed install step; npm run build below needs node_modules
COPY client ./client
RUN npm run build
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "npm", "run", "start" ]
My Node back end has a connection to a Redis instance, which is working fine; no connection problems there.
So, the main problem is that when I'm deploying my stack using docker-compose, my back end throws 500s for secured API requests.
Error example:
Error: connect ECONNREFUSED 127.0.0.1:80
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1157:16)
What I'm struggling to understand is that I'm not calling 127.0.0.1:80 (or any of its equivalents) anywhere in the back or front end!
The app is running and exposed on port 8080, and even the front end's requests are properly formed:
GET http://localhost:8080/api/secure/v1/test
and Authorization is a valid Auth0 token.
I have been searching everywhere for a solution, but everything so far deals with external connection problems (i.e. MongoDB, MySQL or Redis instances that are not reachable, with this error thrown when trying to connect to them), which isn't my case.
My back-end code used to check Auth0's token is the following (I'm using Express and I followed the official documentation's guide).
// Assumed imports for this snippet, as in Auth0's Express guide:
var jwt = require('express-jwt');
var jwks = require('jwks-rsa');

var jwtCheck = jwt({ // Creating the authentication validator as described by Auth0's docs.
  secret: jwks.expressJwtSecret({
    cache: true,
    rateLimit: true,
    jwksRequestsPerMinute: 1000,
    jwksUri: process.env.JWKSURI
  }),
  audience: process.env.AUDIENCE,
  issuer: process.env.ISSUER,
  algorithms: [process.env.ALGORITHMS]
});

app.route("/api/secure/v1/test")
  .get(jwtCheck, (req, res) => {
    res.status(200).send({ "status": "working lol" })
  });
When trying to access http://localhost:8080/api/secure/v1/test I get the error mentioned above, but only with the docker-compose deployment. docker run works fine, but I need to use docker-compose for convenience and security reasons.
Example docker-compose file:
version: "3.9"
services:
web:
restart: unless-stopped
image: some.private.container.registry/myWebImage
ports:
- "8080:8080"
expose:
- 8080
environment:
- PORT=8080
- DOMAIN="as described by Auth0 documentation"
- CLIENTID="as described by Auth0 documentation"
- JWKSURI="as described by Auth0 documentation"
- AUDIENCE="as described by Auth0 documentation"
- ISSUER="as described by Auth0 documentation"
- ALGORITHMS="as described by Auth0 documentation"
- REDISHOST=someIp
- REDISPORT=6379
networks:
- default
Logs of the web container:
> tpi-louvie-web@1.0.0 start
> node server.js
API and front are running on http://localhost:8080/
refreshed redis 2022-03-28T06:57:50.906Z 2022-03-28T06:48:27.044Z 2022-03-26T21:45:00.000Z 2022-03-27T00:45:00.000Z
Error: connect ECONNREFUSED 127.0.0.1:80
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1157:16)
Error: connect ECONNREFUSED 127.0.0.1:80
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1157:16)
refreshed redis 2022-03-28T06:58:50.870Z 2022-03-28T06:48:27.044Z 2022-03-26T21:45:00.000Z 2022-03-27T00:45:00.000Z
"refreshed redis" lines are here to log every time the webapp requests data from redis with some meaningful timestamps in the use case of my app.
The "error" appears only when a request is made to a secure endpoint.
Environment information:
Env 1:
kernel: 5.4.0-99-generic
OS: Ubuntu 20.04.3 LTS x86_64
docker version: Docker version 20.10.12, build e91ed5707e
docker-compose version: docker-compose version 1.29.2, build unknown
Env 2:
kernel: 5.15.25-1-MANJARO
OS: Manjaro Linux x86_64
docker version: Docker version 20.10.12, build e91ed5707e
docker-compose version: Docker Compose version 2.2.3
The symptoms are the same on those 2 environments.
I am using the latest version of Strapi (v3.x) with Node v10.15.2. I am trying to deploy to an Azure Web App using this server.js configuration.
module.exports = ({ env }) => ({
  host: env('HOST', 'localhost'),
  port: env.int('PORT', 1337),
  url: 'https://clinicaback.azurewebsites.net',
  cron: {
    enabled: false
  },
  admin: {
    url: "/dashboard",
    autoOpen: false,
    build: {
      backend: "https://clinicaback.azurewebsites.net"
    }
  }
});
It builds successfully and seems to be running with the development configuration. Here is the output from Azure's Kudu service.
But when I go to the website, it does not load. I ran "Diagnose and solve problems" in Azure and it shows this:
The webapp only supports port 80 and port 443. It is recommended to modify the relevant port settings in your code.
It is recommended to release the code after build, add npx serve -s as Startup Command for your App Service > General settings.
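Based on Azure's note about ports 80 and 443, here is a minimal sketch of a server.js that simply binds to whatever port App Service injects via the PORT environment variable (the 0.0.0.0 bind host and the 1337 fallback are my assumptions, not values from the Azure diagnostics):

module.exports = ({ env }) => ({
  // App Service fronts the app on 80/443 and passes the internal port via
  // process.env.PORT, so listen on that instead of a hard-coded port.
  host: env('HOST', '0.0.0.0'),
  port: env.int('PORT', 1337),
  url: 'https://clinicaback.azurewebsites.net',
  admin: {
    url: '/dashboard',
    autoOpen: false,
  },
});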
I'm trying to build a stack with two containers as a first step: one with the app, one with an MS SQL server. Without a stack, running the SQL server in a container and the app locally works fine, but I can't figure out the proper way to make the containerised app connect to the DB.
My stack file is as follows:
version: "3.4"
services:
db:
image: orizon/training-library-sql
ports:
- 1443:1443
networks:
- backend
app:
image: orizon/training-library
ports:
- 4000:4000
networks:
- backend
depends_on:
- db
links:
- db:db
deploy:
replicas: 1
networks:
backend:
The db image is based on microsoft/mssql-server-linux:2017-latest and works fine when the app is not in a container and uses 'localhost' as the hostname.
In the Node app, the mssql config is the following:
const config = {
  user: '<username>',
  password: '<password>',
  server: 'db',
  database: 'library',
  options: {
    encrypt: false // Use this if you're on Windows Azure
  }
};
And this is the message I received from the Node app container:
2018-09-07T10:11:57.404Z app ConnectionError: Failed to connect to db:1433 - getaddrinfo ENOTFOUND db
EDIT
I simplified my stack file and the connectivity now kind of works.
links seems to be deprecated and replaced by depends_on:
version: "3.4"
services:
db:
image: orizon/training-library-sql
ports:
- 1443:1443
app:
image: orizon/training-library
ports:
- 4000:4000
depends_on:
- db
deploy:
replicas: 1
Now the error message has changed and makes me think it's more of a timing issue: the database container seems to need a bit more time to get ready before the app container comes up.
I guess I'm now looking for a way to delay connecting to the database, either through Docker or in code.
I finally made it work properly.
See the OP for the much simpler and effective stack file.
In addition, I added a retry strategy in my app code to give the MS SQL server time to start properly in its container.
// Assumed imports for this snippet: mssql, chalk and debug.
const sql = require('mssql');
const chalk = require('chalk');
const debug = require('debug')('app');

// Try to connect; on failure, close the pool and retry 5 seconds later.
function connectWithRetry() {
  return sql.connect(config, (err) => {
    if (err) {
      debug(`Connection to DB failed, retry in 5s (${chalk.gray(err.message)})`);
      sql.close();
      setTimeout(connectWithRetry, 5000);
    } else {
      debug('Connection to DB is now ready...');
    }
  });
}

connectWithRetry();
The Docker documentation shows a parameter that should answer this (sequential_deployment: true), but docker stack doesn't allow its usage. The Docker documentation itself advises either managing this issue in code or adding a delay script.
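For completeness, here is a minimal sketch of such a delay in Node (the 'db' host name and port 1433 match the error message above; the 2-second retry interval is arbitrary). It simply waits until the SQL Server port accepts TCP connections before starting the retry logic:

const net = require('net');

// Poll the DB port until a TCP connection succeeds, then invoke the callback.
function waitForPort(host, port, onReady) {
  const socket = net.createConnection({ host, port });
  socket.once('connect', () => {
    socket.end();
    onReady();
  });
  socket.once('error', () => {
    socket.destroy();
    setTimeout(() => waitForPort(host, port, onReady), 2000); // retry every 2s
  });
}

waitForPort('db', 1433, connectWithRetry);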
First I installed Ubuntu 14.04 64-bit/32-bit (I also tried with Ubuntu 16.04 64-bit).
When I launch mup setup, I get this error message:
----------------------------------
Started TaskList: Setup Docker
[xx.xx.xx.xx] - Setup Docker
[xx.xx.xx.xx] - Setup Docker: SUCCESS
Started TaskList: Setup Meteor
[xx.xx.xx.xx] - Setup Environment
[xx.xx.xx.xx] - Setup Environment: SUCCESS
Started TaskList: Setup Mongo
[xx.xx.xx.xx] - Setup Environment
[xx.xx.xx.xx] - Setup Environment: SUCCESS
[xx.xx.xx.xx] - Copying mongodb.conf
[xx.xx.xx.xx] - Copying mongodb.conf: SUCCESS
Started TaskList: Start Mongo
[xx.xx.xx.xx] - Start Mongo
[xx.xx.xx.xx] x Start Mongo: FAILED
-----------------------------------STDERR-----------------------------------
Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon at unix:///var
/run/docker.sock. Is the docker daemon running?). Using system default: https://index.docker.io/v1/
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
-----------------------------------STDOUT-----------------------------------
----------------------------------------------------------------------------
I created a user on the VPS and granted it the rights to work without sudo; I also tried with root access, but I get the same error every time.
And before anything else I ran this command on the VPS: apt-get update
Here is my mup version: 1.3.7 (under Windows 7 64-bit).
And here is my mup.js file:
module.exports = {
  servers: {
    one: {
      host: 'xx.xx.xx.xx',
      username: 'myusername',
      password: 'password',
    }
  },
  meteor: {
    name: 'myApp',
    path: '../myApp',
    servers: {
      one: {}
    },
    buildOptions: {
      serverOnly: true
    },
    env: {
      ROOT_URL: 'https://m.domain.com',
      MONGO_URL: 'mongodb://localhost/meteor'
    },
    docker: {
      image: 'abernix/meteord:base'
    },
    deployCheckWaitTime: 96,
    enableUploadProgressBar: false
  },
  mongo: {
    oplog: true,
    port: 27017,
    version: '3.4.1',
    servers: {
      one: {}
    }
  }
};
When I try to restart Docker on the VPS, here is the error message:
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
And here is the error from the logs:
Your Linux kernel version 2.6.32-042stab127.2 is not supported for running docker. Please upgrade your kernel to 3.10.0 or newer.
It seems to me that you misconfigured the env block:
env: {
  ROOT_URL: 'https://m.domain.com',
  MONGO_URL: 'mongodb://localhost/meteor'
},
mup creates a mongo container, and the database is named after your app:
env: {
  ROOT_URL: 'https://m.domain.com',
  MONGO_URL: 'mongodb://localhost/myapp'
},
And it should work.