PRISMA: Authentication token is invalid: 'Authorization' header not provided - node.js

Running Prisma locally without a secret works fine. Now that I am trying to run it for production, I keep getting the error ERROR: Authentication token is invalid: 'Authorization' header not provided, both on my server and locally. I am surely missing something but don't know what. Please help; my prisma.yml and docker-compose.yml files are below.
prisma.yml
# This service is based on the type definitions in the two files
# `database/types.prisma` and `database/enums.prisma`
datamodel:
  - ./packages/routes/index.directives.graphql
  - ./packages/routes/index.scalar.graphql
  - ./packages/routes/account/index.enum.graphql
  - ./packages/routes/account/index.prisma
...
# Generate a Prisma client in JavaScript and store it in
# a folder called `generated/prisma-client`.
# It also downloads the Prisma GraphQL schema and stores it
# in `generated/prisma.graphql`.
generate:
  - generator: javascript-client
    output: ./prisma
# The endpoint represents the HTTP endpoint for your Prisma API.
# It encodes several pieces of information:
# * Prisma server (`localhost:4466` in this example)
# * Service name (`myservice` in this example)
# * Stage (`dev` in this example)
# NOTE: When service name and stage are set to `default`, they
# can be omitted.
# Meaning http://myserver.com/default/default can be written
# as http://myserver.com.
endpoint: 'http://127.0.0.1:4466/soul/dev'
# The secret is used to create JSON web tokens (JWTs). These
# tokens need to be attached in the `Authorization` header
# of HTTP requests made against the Prisma endpoint.
# WARNING: If the secret is not provided, the Prisma API can
# be accessed without authentication!
secret: ${env:SECRET}
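For context, a minimal sketch of how the generated 1.34 JavaScript client is typically instantiated so that its requests are signed with the same secret prisma.yml resolves from ${env:SECRET}. The require path follows the output: ./prisma setting above; the dotenv usage and the exported Prisma constructor are assumptions based on the standard generated client, not code from my project.

// prisma-instance.js (sketch, names are illustrative)
require('dotenv').config() // assumes the secret lives in the same .env the compose file loads

const { Prisma } = require('./prisma') // generated client, per `output: ./prisma` above

const prisma = new Prisma({
  endpoint: 'http://127.0.0.1:4466/soul/dev',
  // If SECRET is unset here, the client sends no Authorization header and the
  // server answers with exactly the "Authorization header not provided" error.
  secret: process.env.SECRET,
})

module.exports = prisma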
docker-compose.yml
version: '3'
services:
  server:
    container_name: soul
    restart: always
    build: .
    command: 'npm run dev'
    links:
      - redis
      - prisma
    env_file:
      - ./.env
    volumes:
      - .:/node/soul/
    working_dir: /node/soul/
    ports:
      - '3000:3000'
  redis:
    container_name: "redisserver"
    image: redis:latest
    restart: always
    command: ["redis-server", "--bind", "redis", "--port", "6379"]
  prisma:
    image: prismagraphql/prisma:1.34
    restart: always
    ports:
      - '4466:4466'
    environment:
      PRISMA_CONFIG: |
        managementApiSecret: ${SECRET}
        port: 4466
        databases:
          default:
            connector: mysql
            host: mysql
            port: 3306
            user: root
            password: ******
  mysql:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: ******
    volumes:
      - mysql:/var/lib/mysql
volumes:
  mysql: ~

It looks like you're using the Management API secret where you are supposed to be using a service secret.
According to the Prisma docs, the service secret and the Management API secret are two different things.
For Prisma v1.34 you can read about the differences here:
https://v1.prisma.io/docs/1.34/prisma-server/authentication-and-security-kke4/#prisma-server
Quote from that page:
A Prisma server provides the runtime environment for one or more Prisma services. To create, delete and modify the Prisma services on a Prisma server, the Management API is used. The Management API is protected with the Management API secret specified in the Docker Compose files when the Prisma server is deployed. Learn more here.
Prisma services are secured via the service secret that's specified in your prisma.yml. A Prisma service typically serves application data that's stored in relation to a certain datamodel. Learn more here.
const { Prisma } = require('prisma-binding') // assuming prisma-binding, which uses this constructor shape

const db = new Prisma({
  typeDefs: 'src/generated/prisma.graphql',
  endpoint: process.env.PRISMA_ENDPOINT,
  secret: <YOUR_PRISMA_SERVICE_SECRET>, // Note: this must match the secret in your prisma.yml
});
# prisma.yml
endpoint: ${env:PRISMA_ENDPOINT}
datamodel: mydatamodel.graphql
secret: <YOUR_PRISMA_SERVICE_SECRET>
In their Prisma 1.34 docs, Prisma recommends using an environment variable to get the secret into the prisma.yml file. There are risks associated with this, but that is what is in their docs.
See: https://v1.prisma.io/docs/1.34/prisma-cli-and-configuration/prisma-yml-5cy7/#environment-variable
Quote from that page:
In the following example, an environment variable is referenced to determine the Prisma service secret:
# prisma.yml (as per the docs in the above link)
secret: ${env:PRISMA_SECRET}
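For completeness, here is a minimal sketch of how such a service secret turns into the Authorization header the error complains about. It assumes the jsonwebtoken package, and the payload shape mirrors what `prisma token` prints for a service named soul at stage dev, so treat the exact fields as an assumption rather than a spec.

// token-sketch.js (illustrative only)
const jwt = require('jsonwebtoken')

const token = jwt.sign(
  { data: { service: 'soul@dev', roles: ['admin'] } }, // assumed payload shape
  process.env.PRISMA_SECRET, // must equal the secret your prisma.yml resolves
  { expiresIn: '1h' }
)

// Raw HTTP calls against the Prisma endpoint then need this header:
//   Authorization: Bearer <token>
console.log(`Authorization: Bearer ${token}`)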

Related

MongoServerError: not authorized on app to execute command on docker container

I am using Docker to connect Node and Mongo, and I am trying to insert data into a database. All the containers are up and running. Everything works perfectly on my local machine, but on the server I get the following error.
MongoServerError: not authorized on app to execute command { insert: "users", documents: [ { username: "riwaj", password: "$2a$12$C3hpChig42coIoMEbtegsepw7tJeflHqpW7x.0/jPseX6G5KUXWO.", _id: ObjectId('63d41dc11d038db2b950a744'), __v: 0 } ], ordered: true, lsid:....
This clearly states that the user riwaj is not allowed to perform an insert operation on the database. However, I have defined the attributes required for the mongo container in my docker-compose file, as mentioned in the documentation:
MONGO_INITDB_ROOT_USERNAME=riwaj
MONGO_INITDB_ROOT_PASSWORD=dummypasswordxx
The user is created as per those credentials; I verified it by going into the interactive shell of the container and executing the following command:
mongosh -u riwaj --password
However, even here, if I try to insert data into a database using insert(), I get a similar authorization error.
For more reference, here are my docker-compose files:
docker-compose.yml
version: "3"
services:
nginx:
image: nginx:stable-alpine
volumes:
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
node-app:
build: .
environment:
- PORT=3000
depends_on:
- mongo
#adding mango container
mongo: #service name for mongo
image: mongo
environment:
- MONGO_INITDB_ROOT_USERNAME=riwaj
- MONGO_INITDB_ROOT_PASSWORD=mypassword
volumes:
- mongo-db:/data/db #Named volume for data persistance
#adding redis container
redis:
image: redis
volumes:
mongo-db:
docker-compose.prod.yml
version: "3"
services:
nginx:
ports:
- "80:80"
node-app:
build:
context: .
args:
NODE_ENV: production
environment:
- NODE_ENV=production
- MONGO_USER=${MONGO_USER}
- MONGO_PASSWORD=${MONGO_PASSWORD}
- SESSION_SECRET=${SESSION_SECRET}
command: node index.js
mongo:
environment:
- MONGO_INITDB_ROOT_USERNAME=${MONGO_INITDB_ROOT_USERNAME}
- MONGO_INITDB_ROOT_PASSWORD=${MONGO_INITDB_ROOT_PASSWORD}
- MONGO_INITDB_DATABASE= app
It is clear the issue is with authorization, but shouldn't the root user be authorized to perform all read-write operations? Here is the link to the repo where the project is pushed: https://github.com/Riwajchalise/node-docker. The endpoint for the signup API is mentioned in the README file.
It would be really helpful if you could contribute in any way.
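For reference, a minimal sketch of how the MONGO_USER / MONGO_PASSWORD values from docker-compose.prod.yml are meant to be wired into the Node side. The file name, variable defaults, and Mongoose usage here are illustrative (the real code is in the repo); the authSource note reflects standard MongoDB behavior for root users.

// db.js (illustrative sketch)
const mongoose = require('mongoose')

const {
  MONGO_USER,
  MONGO_PASSWORD,
  MONGO_HOST = 'mongo', // the compose service name doubles as the hostname
  MONGO_PORT = 27017,
} = process.env

// Root users created via MONGO_INITDB_ROOT_USERNAME live in the "admin"
// database, so the connection usually needs authSource=admin to be authorized
// against other databases such as "app".
const mongoURL =
  `mongodb://${MONGO_USER}:${MONGO_PASSWORD}@${MONGO_HOST}:${MONGO_PORT}/app?authSource=admin`

mongoose
  .connect(mongoURL)
  .then(() => console.log('Successfully connected to MongoDB'))
  .catch((err) => console.error('MongoDB connection failed', err))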

Docker - Redis connect ECONNREFUSED 127.0.0.1:6379

I know this is a common error, but I literally spent the entire day trying to get past it, trying everything I could find online, and I can't find anything that works for me.
I am very new to Docker and am using it for my Node.js + Express + PostgreSQL + Redis application.
Here is what I have for my docker-compose file:
version: "3.8"
services:
db:
image: postgres:14.1-alpine
restart: always
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=admin
ports:
- "5432:5432"
volumes:
- db:/var/lib/postgresql/data
- ./db/init.sql:/docker-entrypoint-initdb.d/create_tables.sql
cache:
image: redis:6.2-alpine
restart: always
ports:
- "6379:6379"
command: redis-server --save 20 1 --loglevel warning
volumes:
- cache:/data
api:
container_name: api
build:
context: .
# target: production
# image: api
depends_on:
- db
- cache
ports:
- 3000:3000
environment:
NODE_ENV: production
DB_HOST: db
DB_PORT: 5432
DB_USER: postgres
DB_PASSWORD: admin
DB_NAME: postgres
REDIS_HOST: cache
REDIS_PORT: 6379
links:
- db
- cache
volumes:
- ./:/src
volumes:
db:
driver: local
cache:
driver: local
Here is the upper part of my app.js:
const express = require('express')
const app = express()
const cors = require('cors')
const redis = require('redis')

const client = redis.createClient({
  host: 'cache',
  port: 6379,
  legacyMode: true // Also tried without this line, same behavior
})
client.connect()
client.on('connect', () => {
  log('Redis connected')
})

app.use(cors())
app.use(express.json())
And my Dockerfile:
FROM node:16.15-alpine3.14
WORKDIR ./
COPY package.json ./
RUN npm install
COPY ./ ./
EXPOSE 3000 6379
CMD [ "npm", "run", "serve" ]
npm run serve runs nodemon ./app.js.
I also already tried to prune the system and network.
What am I missing? Help!
There are two things to keep in mind here.
First of all, the Docker network:
Containers are exposed to your localhost system, so as a "server" you can access each of them directly through the browser or the command line.
But keep in mind that you can only access the containers because they are exposed on a default network that is reachable from the host (the docker user, which you can inspect, by the way).
The deployed containers are not exposed to each other by default, so you need to define a virtual network and attach them to it so they can talk to each other through their ports or host names, which will be the container_name.
So you need to do two things:
Add a container name to the redis service in the compose file, just like you did for the API.
Create a network and bind all the services to it; one way of doing that would be:
version: "3.8"
Network:
my-network:
name: my-network
services:
....
cache:
container_name: cache
image: redis:6.2-alpine
restart: always
ports:
- "6379:6379"
command: redis-server --save 20 1 --loglevel warning
volumes:
- cache:/data
networks: # add it in all containers that communicate together
- my-network
Then, and only then, can you use the Redis container name as the host, since the Docker network creates a host name for each service from its container name.
When you deploy the whole compose file later, the containers will be created and joined to the network on startup, which allows your API app to communicate with the Redis container via its container name as the host name.
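With that in place, a minimal sketch of the client side could look like this. It assumes node-redis v4, where host and port go under socket (or you pass a redis:// URL); the environment variable names follow the compose file above.

// app.js (sketch)
const { createClient } = require('redis')

const client = createClient({
  socket: {
    host: process.env.REDIS_HOST || 'cache', // the service/container name resolves via Docker's internal DNS
    port: Number(process.env.REDIS_PORT) || 6379,
  },
})

client.on('error', (err) => console.error('Redis client error', err))

client.connect().then(() => console.log('Redis connected'))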
Refer to these resources for more details:
Networking on Docker Compose
Docker Network Overview
An unrelated side note:
I personally used the redis package from npm for some testing projects, but I found that ioredis worked much better with TypeScript projects and was more predictable in its behavior.
To avoid problems with Redis, make sure to create a password and use it to connect; sometimes Redis randomly considers the client a read-only client and fails to find a read replica, and adding the password solved it for me.
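A minimal sketch of that ioredis + password setup (the environment variable names here are illustrative):

// redis.js (sketch)
const Redis = require('ioredis')

const redis = new Redis({
  host: process.env.REDIS_HOST || 'cache',
  port: Number(process.env.REDIS_PORT) || 6379,
  password: process.env.REDIS_PASSWORD, // must match the password configured on the server (requirepass)
})

redis.on('ready', () => console.log('Redis ready'))
redis.on('error', (err) => console.error('Redis error', err))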

403 Forbidden, communication among docker containers

I have an application composed of a React client (frontend), an Express server (backend), and Keycloak. For development purposes, I run Keycloak inside a Docker container and expose its port (8080); the frontend and backend run locally on my machine and connect to Keycloak on that port. The backend serves some REST endpoints, which are protected by Keycloak. Everything works fine.
However, when I tried to containerize my application for production by putting the backend in a container and running everything with docker-compose (the frontend still runs on my local machine), the backend rejected all requests from the frontend, even though those requests carry a valid token. I guess the problem is that the backend cannot reach Keycloak to verify the token, but I don't know why or how to fix it.
This is my docker-compose.yml:
version: "3.8"
services:
backend:
image: "backend"
build:
context: .
dockerfile: ./backend/Dockerfile
ports:
- "5001:5001"
keycloak:
image: "jboss/keycloak"
ports:
- "8080:8080"
environment:
- KEYCLOAK_USER=admin
- KEYCLOAK_PASSWORD=admin
- KEYCLOAK_IMPORT=/tmp/realm-export.json
volumes:
- ./realm-export.json:/tmp/realm-export.json
mongo_db:
image: "mongo:4.2-bionic"
ports:
- "27017:27017"
mongo_db_web_interface:
image: "mongo-express"
ports:
- "4000:8081"
environment:
- ME_CONFIG_MONGODB_SERVER=mongo_db
This is the Keycloak configuration in the backend code:
{
  "realm": "License-game",
  "bearer-only": true,
  "auth-server-url": "http://keycloak:8080/auth/",
  "ssl-required": "external",
  "resource": "backend",
  "confidential-port": 0
}
This is the Keycloak configuration in the frontend code:
{
  URL: "http://localhost:8080/auth/",
  realm: 'License-game',
  clientId: 'react'
}
This is the configuration of Keycloak for the backend.
The backend and frontend are using different Keycloak URLs in your case (http://keycloak:8080/auth/ vs http://localhost:8080/auth/), so they expect different issuers in the token.
So yes, the token from the frontend is valid, but not for the backend, because the backend expects a different issuer value in the token. Use the same Keycloak domain everywhere and you won't have this kind of problem.
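If you want to see the mismatch yourself, a small sketch like this (assumed, not part of the original setup) decodes the token payload without verifying it and prints the issuer:

// check-issuer.js (sketch) - run as: node check-issuer.js <access_token>
const token = process.argv[2] // the token the frontend attaches to its requests

const payload = JSON.parse(
  Buffer.from(token.split('.')[1], 'base64url').toString('utf8')
)

// With the configs above, a token obtained through the frontend will carry an
// issuer like http://localhost:8080/auth/realms/License-game, while the backend
// expects it to start with http://keycloak:8080/auth/ - hence the rejection.
console.log('token issuer:', payload.iss)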
I was having the same problem these days. As previously answered, the problem is with the token issuer.
In order to make it work, refer to this solution.

Access forbidden to Django resource when accessing through Node.js frontend

I cloned a Django + Node.js open-source project, the goal of which is to upload and annotate text documents and save the annotations in a Postgres DB. The project has docker-compose stack files for both Django dev and production setups. Both stack files work completely fine out of the box with a Postgres database.
Now I would like to deploy this project to Google Cloud, as my first ever containerized application. As a first step, I simply want to move the persistent storage to Cloud SQL instead of the Postgres image included in the stack file. My stack file (Django dev) looks as follows:
version: "3.7"
services:
backend:
image: python:3.6
volumes:
- .:/src
- venv:/src/venv
command: ["/src/app/tools/dev-django.sh", "0.0.0.0:8000"]
environment:
ADMIN_USERNAME: "admin"
ADMIN_PASSWORD: "${DJANGO_ADMIN_PASSWORD}"
ADMIN_EMAIL: "admin#example.com"
# DATABASE_URL: "postgres://doccano:doccano#postgres:5432/doccano?sslmode=disable"
DATABASE_URL: "postgres://${CLOUDSQL_USER}:${CLOUDSQL_PASSWORD}#sql_proxy:5432/postgres?sslmode=disable"
ALLOW_SIGNUP: "False"
DEBUG: "True"
ports:
- 8000:8000
depends_on:
- sql_proxy
networks:
- network-overall
frontend:
image: node:13.7.0
command: ["/src/frontend/dev-nuxt.sh"]
volumes:
- .:/src
- node_modules:/src/frontend/node_modules
ports:
- 3000:3000
depends_on:
- backend
networks:
- network-overall
sql_proxy:
image: gcr.io/cloudsql-docker/gce-proxy:1.16
command:
- "/cloud_sql_proxy"
- "-dir=/cloudsql"
- "-instances=${CLOUDSQL_CONNECTION_NAME}=tcp:0.0.0.0:5432"
- "-credential_file=/root/keys/keyfile.json"
volumes:
- ${GCP_KEY_PATH}:/root/keys/keyfile.json:ro
- cloudsql:/cloudsql
networks:
- network-overall
volumes:
node_modules:
venv:
cloudsql:
networks:
network-overall:
I have a bunch of models, e.g. project, in the Django backend, which I can view, modify, add and delete through the Django admin interface, but when trying to access them through the Node.js views I get a 403 Forbidden error. This is the case for all my Django models.
For reference, the only difference from the originally cloned docker-compose stack file is the DATABASE_URL shown above, which used to point to a local Postgres Docker image, as follows:
postgres:
  image: postgres:12.0-alpine
  volumes:
    - postgres_data:/var/lib/postgresql/data/
  environment:
    POSTGRES_USER: "doccano"
    POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
    POSTGRES_DB: "doccano"
  networks:
    - network-backend
To check whether my GCP keys are correct, I deployed the Cloud SQL Proxy container alone and interacted with it (adding, removing and updating rows in the included tables), and that was possible. In any case, the fact that I can use the Django admin interface successfully in the deployed docker-compose stack should already prove that things are OK with the Cloud SQL proxy.
I'm not an experienced Node.js developer by any means, and have only a little experience with Django and Django admin. My intention behind using a docker-compose setup was that I would not have to bother with the intricacies of the JS views and could deal only with the Python business logic.

Cannot sign-in to Octopus Deploy after docker installation

I am trying to set up Octopus Deploy using the Linux Docker image. After the container was created, I saw the sign-in page, but I cannot sign in using the admin login and password ("Invalid username or password."). Do you have any suggestions as to what could be wrong?
This is my docker-compose file:
version: '3.7'
services:
  octopus:
    image: octopusdeploy/octopusdeploy
    hostname: octopus
    container_name: octopus
    privileged: true
    environment:
      ACCEPT_EULA: Y
      OCTOPUS_SERVER_NODE_NAME: ${OCTOPUS_SERVER_NODE_NAME}
      DB_CONNECTION_STRING: ${MSSQL_DB_CONNECTION_STRING}
      ADMIN_USERNAME: ${OCT_ADMIN_USERNAME}
      ADMIN_PASSWORD: ${OCT_ADMIN_PASSWORD}
      ADMIN_EMAIL: ${OCT_ADMIN_EMAIL}
      OCTOPUS_SERVER_BASE64_LICENSE: ${OCTOPUS_SERVER_BASE64_LICENSE}
      MASTER_KEY: ${OCT_MASTER_KEY}
      ADMIN_API_KEY: ${OCT_ADMIN_API_KEY}
    ports:
      - "8086:8080"
      - "10943:10943"
    expose:
      - "443"
    depends_on:
      - octopus_mssql
    volumes:
      - ./octopus/octopus/repository:/repository
      - ./octopus/octopus/artifacts:/artifacts
      - ./octopus/octopus/taskLogs:/taskLogs
      - ./octopus/octopus/cache:/cache
    networks:
      - tech-network
  octopus_mssql:
    image: mcr.microsoft.com/mssql/server:2017-latest-ubuntu
    hostname: octopus_mssql
    container_name: octopus_mssql
    environment:
      SA_PASSWORD: ${MSSQL_SA_PASSWORD}
      ACCEPT_EULA: Y
      # Prevent SQL Server from consuming the default of 80% of physical memory.
      MSSQL_MEMORY_LIMIT_MB: 2048
      MSSQL_PID: Express
    expose:
      - "1433"
    healthcheck:
      test: [ "CMD", "/opt/mssql-tools/bin/sqlcmd", "-U", "sa", "-P", "${MSSQL_SA_PASSWORD}", "-Q", "select 1" ]
      interval: 10s
      retries: 10
    volumes:
      - ./octopus/mssql/data:/var/opt/mssql
    networks:
      - tech-network
Env file (some values, such as passwords and the API key, were changed):
MSSQL_SA_PASSWORD=_passtomssql_
OCT_ADMIN_USERNAME=admin
OCT_ADMIN_PASSWORD=_adminPassword_
OCT_ADMIN_EMAIL=octopus@g.com
OCTOPUS_SERVER_NODE_NAME=octopus
MSSQL_DB_CONNECTION_STRING=Server=octopus_mssql,1433;Database=OctopusDeploy;User=sa;Password=_passtomssql_
OCTOPUS_SERVER_BASE64_LICENSE=_license in base64_
OCT_MASTER_KEY=_master key_
OCT_ADMIN_API_KEY=API-1234567890E1F1234567
I tried to change the password for admin inside the octopus container:
/Octopus/Octopus.Server admin --username=admin --password=NewPassword1234
The command was successful, yet I still cannot sign in from the UI:
Checking the Octopus Master Key has been configured.
Making sure it's safe to upgrade the database schema...
Ensuring pre-conditions for upgrading the database are satisfied...
Searching for indexes that might upset the database upgrade process...
- PASS: All columns use the default collation.
- PASS: Your Octopus Server will be compliant with your license after upgrading.
- PASS: We've done our best to remove any unexpected database indexes.
- PASS: The version of your SQL Server satisfies Octopus Server installation requirements.
Executing always run pre scripts...
Executing TSQL Database Server script 'Octopus.Core.UpgradeScriptsAlwaysPre.Script0000 - Set highest available compatibility level.sql'
Current COMPATIBILITY_LEVEL for OctopusDeploy is set to 140
Ensuring COMPATIBILITY_LEVEL for OctopusDeploy is set to 140
COMPATIBILITY_LEVEL for OctopusDeploy is already 140 or higher
Checking to see if database schema upgrade is required...
Database already has the expected schema. No changes are required.
Executing always run post scripts...
Executing TSQL Database Server script 'Octopus.Core.UpgradeScriptsAlwaysPost.Script0000 - Refresh Views.sql'
Refreshing view dbo.Dashboard
Refreshing view dbo.IdsInUse
Refreshing view dbo.MultiTenancyDashboard
Refreshing view dbo.Release_WithDeploymentProcess
Refreshing view dbo.RunbookSnapshot_WithRunbookProcess
Refreshing view dbo.TenantProject
Creating or modifying administrator 'admin'
Setting password for user admin
Done.
Two suggestions/questions:
Try removing the admin API key. It should work with both an API key and a password specified, but if something is going wrong here, it could be that the admin user is being created as a service account.
The latest tag of octopusdeploy/octopusdeploy for Linux seems to be broken at the time of writing. Can you add the 2020.4 tag to your image? "octopusdeploy/octopusdeploy:2020.4"
Just removing the API key also worked on octopusdeploy/octopusdeploy:2020.6.
