I am trying to get the PubSub emulator to run in a docker-compose project, but I keep getting this error:
auth-service_1 | Error: Unable to detect a Project Id in the current environment.
auth-service_1 | To learn more about authentication and Google APIs, visit:
auth-service_1 | https://cloud.google.com/docs/authentication/getting-started
auth-service_1 | at /app/node_modules/google-auth-library/build/src/auth/googleauth.js:84:31
auth-service_1 | at processTicksAndRejections (internal/process/task_queues.js:93:5)
I have added every environment variable I could find in numerous GitHub issue threads, but nothing seems to help.
Here is the docker-compose file:
version: "3"
services:
  pubsub:
    image: andrewrjones/google-pubsub-emulator:latest
    environment:
      - PUBSUB_PROJECT_ID=project-test
      - PUBSUB_LISTEN_ADDRESS=0.0.0.0:8042
      - PUBSUB_EMULATOR_HOST="localhost:8042"
      - GOOGLE_CLOUD_PROJECT="api-xx-dev"
      - PUBSUB_PROJECT_ID="api-xx-dev"
      - PROJECT_ID="api-xx-dev"
      - GOOGLE_APPLICATION_CREDENTIALS="/path/to/serviceAccount.json"
      - GCLOUD_PROJECT="api-xx-dev"
    ports:
      - "8042:8042"
    networks:
      - main
  auth-service:
    build: ./auth
    volumes:
      - ./auth:/app
    ports:
      - 4000:8080
    networks:
      - main
  user-service:
    build: ./user
    volumes:
      - ./user:/app
    ports:
      - 4001:8081
    networks:
      - main
networks:
  main:
Route calling PubSub:
import express = require("express");
import { Request, Response } from "express";
import admin from "firebase-admin";

const { PubSub } = require("@google-cloud/pubsub");
const serviceAccount = require("../serviceAccount.json");

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
});

const router = express.Router();
const pubsub = new PubSub();

const publishTopic = async (user: any, topic: any) => {
  try {
    const buffer = Buffer.from(JSON.stringify(user));
    return await pubsub.topic(topic).publish(buffer);
  } catch (error) {
    console.log(error);
  }
};

router.post(
  "/api/v2/create-user",
  async (req: Request, res: Response) => {
    try {
      const { email, password } = req.body;
      const newUser = await admin.auth().createUser({ email, password });
      await publishTopic(newUser, "NEW_FIREBASE_USER_CREATED");
      res.send(newUser);
    } catch (error) {
      console.log(error);
      res.status(409).send({ errors: [{ message: error.message }] });
    }
  }
);

export { router as createUserRouter };
I recall that I had to log into Google via the CLI to make this error go away on my local machine.
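For what it's worth, in Google's client libraries PUBSUB_EMULATOR_HOST is read by the process that calls Pub/Sub, not by the emulator itself, so the variable would belong on auth-service (pointing at the pubsub service name) rather than on the emulator container. A hedged sketch, not a confirmed fix; the service names and port are taken from the compose file above:

```yaml
services:
  pubsub:
    image: andrewrjones/google-pubsub-emulator:latest
    environment:
      - PUBSUB_PROJECT_ID=project-test
  auth-service:
    build: ./auth
    environment:
      # assumed: the publishing client, not the emulator, reads these
      - PUBSUB_EMULATOR_HOST=pubsub:8042
      - PUBSUB_PROJECT_ID=project-test
```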
Related
I am running my lambdas on port 4566 using localstack, with the image below:
version: "2.1"
services:
  localstack:
    image: "localstack/localstack"
    container_name: "localstack"
    ports:
      - "4566-4620:4566-4620"
      - "127.0.0.1:8055:8080"
    environment:
      - SERVICES=s3,es,dynamodb,apigateway,lambda,sns,sqs,cloudformation
      - DEBUG=1
      - EDGE_PORT=4566
      - DATA_DIR=/var/lib/localstack/data
      - DOCKER_HOST=unix:///var/run/docker.sock
      - HOST_TMP_FOLDER=${TMPDIR}
      - LAMBDA_EXECUTOR=docker
      - DYNAMODB_SHARE_DB=1
      - DISABLE_CORS_CHECKS=1
      - AWS_DDB_ENDPOINT=http://localhost:4566
    volumes:
      - "${TMPDIR:-/var/lib/localstack}:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    networks:
      - "local"
  elasticsearch:
    container_name: tqd-elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    # volumes:
    #   - esdata:/usr/share/elasticsearch/data
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
    depends_on:
      - "localstack"
    logging:
      driver: none
    ports:
      - 9300:9300
      - 9200:9200
    networks:
      - "local"
networks:
  local:
    driver: "bridge"
Problem: I'm not getting any response from Elasticsearch when calling it from the lambda.
This is my lambda code:
const { Client } = require("@elastic/elasticsearch");

module.exports.createIndex = async () => {
  const elasticClient = new Client({
    node: "http://localhost:9200"
  });

  const getIndices = async () =>
    elasticClient.indices.create({
      index: "index-from-lambda"
    });

  console.log("before the client call");
  console.log(getIndices().then(res => { console.log(res); }).catch(err => {
    console.log(err);
  }));
  console.log("after the client call");

  return {
    statusCode: 201,
    body: JSON.stringify({
      msg: "index created successfully"
    })
  };
};
Logs from my docker container:
before the client call
Promise { <pending> }
after the client call
After this, even when I go to bash and check whether the index has been created, it returns an empty set of indices, i.e. no index has been created.
But the same code works fine, i.e. it creates the index on Elasticsearch at port 9200, when called from an HTTP server at port 3000 or from a standalone JavaScript file.
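A side observation, separate from any networking issue: in the lambda version the promise returned by getIndices() is never awaited, and a Lambda handler's execution can be suspended as soon as the handler returns, which matches the Promise { <pending> } log. A minimal sketch of awaiting the call; the handler takes the client as a parameter (an assumption for testability) and a stand-in client replaces the real @elastic/elasticsearch Client so the sketch is self-contained:

```javascript
// Awaiting the create call ensures the index exists before the handler
// returns its response, instead of leaving a pending promise behind.
const createIndexHandler = (client) => async () => {
  await client.indices.create({ index: "index-from-lambda" });
  return {
    statusCode: 201,
    body: JSON.stringify({ msg: "index created successfully" }),
  };
};

// Stand-in client that just records which indices were "created".
const fakeClient = {
  created: [],
  indices: {
    create: async ({ index }) => {
      fakeClient.created.push(index);
    },
  },
};
```

With the real client injected, the handler would only respond after Elasticsearch acknowledges the index creation.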
Standalone server code:
const express = require('express')
const app = express();
const { Client } = require('@elastic/elasticsearch');

const elasticClient = new Client({
  node: "http://localhost:9200"
});

app.listen(3000, () => {
  console.log('listening to the port 3000')
})

const getIndices = async () => {
  return await elasticClient.cat.indices()
}

console.log(getIndices().then(res => { console.log(res) }).catch(err => {
  console.log(err)
}))
And this is the standalone JS script:
const { Client } = require('@elastic/elasticsearch');

const elasticClient = new Client({
  node: "http://localhost:9200"
});

const getIndices = async () => {
  return await elasticClient.cat.indices()
}

console.log(getIndices().then(res => { console.log(res) }).catch(err => {
  console.log(err)
}))
Is this some kind of networking error or a docker image error?
This issue has been listed out here,
The problem is with the endpoint.
Inside a container, localhost addresses your docker container, not your host machine.
If you run your express server on the host, use host.docker.internal as the hostname, which addresses the host from your docker container.
The same applies to the Elasticsearch image.
The code now becomes:
const elasticClient = new Client({
  node: "http://host.docker.internal:9200"
})
The rest remains the same.
I'm creating an API using Docker, PostgreSQL, and NodeJS (TypeScript). I've had this error ever since creating an admin user, and nothing seems to be able to fix it:
Error in docker terminal:
Error: getaddrinfo EAI_AGAIN database
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:72:26)
[ERROR] 12:33:32 Error: getaddrinfo EAI_AGAIN database
Error in Insomnia:
{
"status": "error",
"message": "Internal server error - Cannot inject the dependency \"categoriesRepository\" at position #0 of \"ListCategoriesUseCase\" constructor. Reason:\n No repository for \"Category\" was found. Looks like this entity is not registered in current \"default\" connection?"
}
I'm following an online course and this same code seems to work for everyone else since there are no reports of this error in the course's forum.
From what I've gathered it seems to be some type of problem when connecting to my database, which is a docker container. Here is my docker-compose.yml file:
version: "3.9"
services:
  database_ignite:
    image: postgres
    container_name: database_ignite
    restart: always
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=something
      - POSTGRES_PASSWORD=something
      - POSTGRES_DB=rentx
    volumes:
      - pgdata:/data/postgres
  app:
    build: .
    container_name: rentx
    restart: always
    ports:
      - 3333:3333
      - 9229:9229
    volumes:
      - .:/usr/app
    links:
      - database_ignite
    depends_on:
      - database_ignite
volumes:
  pgdata:
    driver: local
My server.ts file:
import "reflect-metadata";
import express, { Request, Response, NextFunction } from "express";
import "express-async-errors";
import swaggerUi from "swagger-ui-express";
import { AppError } from "@shared/errors/AppError";
import createConnection from "@shared/infra/typeorm";
import swaggerFile from "../../../swagger.json";
import { router } from "./routes";
import "@shared/container";

createConnection();
const app = express();

app.use(express.json());
app.use("/api-docs", swaggerUi.serve, swaggerUi.setup(swaggerFile));
app.use(router);
app.use(
  (err: Error, request: Request, response: Response, next: NextFunction) => {
    if (err instanceof AppError) {
      return response.status(err.statusCode).json({
        message: err.message,
      });
    }
    return response.status(500).json({
      status: "error",
      message: `Internal server error - ${err.message}`,
    });
  }
);

app.listen(3333, () => console.log("Server running"));
And this is my index.ts file (the createConnection module imported as @shared/infra/typeorm):
import { Connection, createConnection, getConnectionOptions } from "typeorm";

export default async (host = "database"): Promise<Connection> => {
  const defaultOptions = await getConnectionOptions();
  return createConnection(
    Object.assign(defaultOptions, {
      host,
    })
  );
};
I've tried restarting my containers, remaking them, and accessing my Postgres container to check the tables. I've switched every "database" to "localhost", but it's the same every time: the containers run, and the error persists. I've checked the course's repo and my code matches. I've also flushed my DNS, which did nothing.
Here's the admin.ts file that "started it all":
import { hash } from "bcryptjs";
import { v4 as uuidv4 } from "uuid";
import createConnection from "../index";

async function create() {
  const connection = await createConnection("localhost");
  const id = uuidv4();
  const password = await hash("admin", 6);

  await connection.query(`
    INSERT INTO Users (id, name, email, password, "isAdmin", driver_license, created_at)
    VALUES (
      '${id}',
      'admin',
      'admin@rentx.com.br',
      '${password}',
      true,
      '0123456789',
      NOW()
    )
  `);

  await connection.close();
}

create().then(() => console.log("Administrative user created"));
I would love to know what is causing this error.
It looks like you have a service named database_ignite in your docker-compose.yml file. By default, Docker Compose makes each service reachable at a hostname matching the service name. Try changing the host in your index.ts file from database to database_ignite:
import { Connection, createConnection, getConnectionOptions } from "typeorm";

// Changed "database" to "database_ignite"
export default async (host = "database_ignite"): Promise<Connection> => {
  const defaultOptions = await getConnectionOptions();
  return createConnection(
    Object.assign(defaultOptions, {
      host,
    })
  );
};
Solution
I found the solution, thanks for your time. Stack Overflow doesn't let me answer my own question yet. >:(
Description
I would like to know what configuration I need to deploy a NestJS application that uses GraphQL (not a REST API) as a Localstack lambda. This is my configuration.
Is this configuration correct?
How can I get the fake link to access the playground through the Localstack lambda?
Implementation
Configuration of the NestJS lambda function project
import { ValidationPipe } from '@nestjs/common';
import { NestFactory } from '@nestjs/core';
import { ExpressAdapter } from '@nestjs/platform-express';
import serverlessExpress from '@vendia/serverless-express';
import { APIGatewayProxyHandler, Handler } from 'aws-lambda';
import express from 'express';
import { AppModule } from './app.module';

let cachedServer: Handler;

const bootstrapServer = async (): Promise<Handler> => {
  const expressApp = express();
  const app = await NestFactory.create(
    AppModule,
    new ExpressAdapter(expressApp),
  );
  app.useGlobalPipes(new ValidationPipe());
  app.enableCors();
  await app.init();
  return serverlessExpress({
    app: expressApp,
  });
};

export const handler: APIGatewayProxyHandler = async (
  event,
  context,
  callback,
) => {
  if (!cachedServer) {
    cachedServer = await bootstrapServer();
  }
  return cachedServer(event, context, callback);
};
Serverless.yml Configuration
service: test
provider:
  name: aws
  runtime: nodejs14.x
  stage: ''
  # profile: local # Config your AWS Profile
  environment: # Service wide environment variables
    NODE_ENV: local
    GLOBAL_PREFIX: graphql
    PORT: 4000
plugins:
  # - serverless-plugin-typescript
  # - serverless-plugin-optimize
  # - serverless-offline
  - serverless-localstack
custom:
  localstack:
    debug: true
    stages:
      - local
      - dev
    endpointFile: localstack_endpoints.json
  individually: true
  # serverless-offline:
  #   httpPort: 3000
functions:
  main:
    handler: dis/index.handler
    # local:
    #   handler: dist/main.handler
    # events:
    #   - http:
    #       path: /
    #       method: any
    #       cors: true
Docker Compose Localstack
version: '3.8'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - EDGE_PORT=4566
      - SERVICES=lambda,s3,cloudformation,sts
    ports:
      - '4566-4597:4566-4597'
    volumes:
      - "${TMPDIR:-/tmp}/localstack:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
Services
{
  "CloudFormation": "http://localhost:4566",
  "CloudWatch": "http://localhost:4566",
  "Lambda": "http://localhost:4566",
  "S3": "http://localhost:4566"
}
Commands
serverless deploy --stage local
When executing this command I get this error
serverless info --stage local
When executing this command I get this information
serverless invoke local -f main -l
When executing this command
Not sure, but you probably have a typo here in Serverless.yml:
handler: dis/index.handler
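Presumably the intended fragment, assuming the compiled output lives in dist/ (as in the commented-out local handler):

```yaml
functions:
  main:
    handler: dist/index.handler
```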
I have a node application running in Docker with MongoDB, and it works fine in the development environment. However, I'm creating some tests with Mocha and Chai, and I can't connect to Mongo when I run these tests.
The function I want to test is:
const Interactor = require("interactor");
const Donation = require("../models/donations");

module.exports = class CreateDonation extends Interactor {
  async run(context) {
    this.context = context;
    this.donation = new Donation.Model({
      donationId: context.id,
      status: context.status,
      amount: context.chargeInfo.donatedValue,
      donatorEmail: context.donatorInfo.email,
      source: context.source,
    });
    await this.donation.save();
  }

  rollback() {
    Donation.Model.findOneAndRemove({ donationId: this.context.id });
  }
};
My test:
/* eslint-disable no-unused-vars */
/* eslint-disable no-undef */
const chai = require("chai");
const chaiHttp = require("chai-http");
const CreateDonation = require("../../interactors/create-donation");
require("../../config/db");

const should = chai.should();
const { expect } = chai;

chai.use(chaiHttp);

describe("CreateDonation", () => {
  // Note: an async test should not also take a "done" callback;
  // returning the promise is enough for Mocha.
  it("Creates a donation when context passed is correct", async () => {
    const context = {
      id: "123123",
      status: "AUTHORIZED",
      chargeInfo: {
        donatedValue: 25.0,
      },
      donatorInfo: {
        email: "test@example.com",
      },
      source: "CREDIT_CARD",
    };
    const result = await CreateDonation.run(context);
    console.log(result);
  });
});
My db config file:
const mongoose = require("mongoose");
require("dotenv/config");

mongoose
  .connect("mongodb://db:27017/donations", {
    useNewUrlParser: true,
    useUnifiedTopology: true,
    reconnectInterval: 5000,
    reconnectTries: 50,
  })
  .then(() => {
    console.log("good");
  })
  .catch((err) => {
    console.log(err);
  });

mongoose.Promise = global.Promise;

module.exports = mongoose;
The error I get from the test above is:
MongooseServerSelectionError: getaddrinfo ENOTFOUND db
What am I doing wrong? Am I missing to import something?
When you run your services inside Docker with a docker-compose file, each one gets a hostname based on the name you gave the service in the docker-compose file.
Example:
version: "3.9"
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"
In this example, the web service can reach the redis db at the redis hostname.
If you change the service name in this way:
  db:
    image: "redis:alpine"
The web service must connect to the db host.
So, when you run the compose file, the db service is reachable at the db hostname from your app service. But when you run your tests outside docker-compose, the db hostname isn't available, and you need to use localhost, because your db is running directly on your OS (or inside a container with port 27017 mapped to the main host).
If you're using a unix OS, you can solve your problem adding an alias in your /etc/hosts file:
127.0.0.1 localhost db
In this way you can run your tests keeping the db connection string.
Otherwise, and this is the suggested solution, you can use an environment variable to change the connection string at the application startup:
mongoose.connect(process.env.MONGO_URI)
And run it using
MONGO_URI=mongodb://db:27017/donations npm start
Then in the docker compose you can add a fixed environment variable using this code:
environment:
  - MONGO_URI=mongodb://db:27017/donations
Just found out that when testing, I need to use "localhost" in my Mongo connection string (I was using the service name from docker-compose). With the URI as "mongodb://localhost:27017/donations" it worked. I don't know why.
I cannot connect to Elasticsearch docker server from my NodeJS application.
My code
This is my docker-compose file:
version: "3.7"
services:
  backend:
    container_name: vstay-api
    ports:
      - "4000:4000"
    build:
      context: .
      dockerfile: Dockerfile
    env_file:
      - ./.env
    environment:
      - DB_URI=mongodb://mongo:27017/vstay-db
      - DB_HOST=mongo
      - DB_PORT=27017
    restart: always
    links:
      - mongo
      - elasticsearch
  mongo:
    image: mongo:4.2
    ports:
      - "9000:27017"
    container_name: vstay-db
    restart: always
    volumes:
      - "./data/mongo:/data/db"
    environment:
      - DB_HOST=mongo
      - DB_PORT=27017
    command: mongod
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.3
    container_name: vstay_elasticsearch
    environment:
      - node.name=elasticsearch
      - cluster.name=datasearch
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - cluster.initial_master_nodes=elasticsearch
    ports:
      - 9200:9200
      - 9300:9300
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/elastic:/usr/share/elasticsearch/data
  kibana:
    image: docker.elastic.co/kibana/kibana:7.9.3
    container_name: vstay_kibana
    logging:
      driver: none
elastic.js
const { Client } = require("@elastic/elasticsearch");

module.exports.connectES = () => {
  try {
    const client = new Client({
      node: "http://localhost:9200",
      maxRetries: 5,
      requestTimeout: 60000,
      sniffOnStart: true,
    });
    client.ping(
      {
        // ping usually has a 3000ms timeout
        requestTimeout: Infinity,
        // undocumented params are appended to the query string
        hello: "elasticsearch!",
      },
      function (error) {
        if (error) {
          console.log(error);
          console.trace("elasticsearch cluster is down!");
        } else {
          console.log("All is well");
        }
      }
    );
    return client;
  } catch (error) {
    console.log(error);
    process.exit(0);
  }
};
And index.js to connect:
const { app } = require("./config/express");
const { connect: dbConnect } = require("./config/mongo");
const { connectES } = require("./config/elastic");
const { port, domain, env } = require("./config/vars");

let appInstance;

const startApp = async () => {
  const dbConnection = await dbConnect();
  const ESConnection = await connectES();
  app.locals.db = dbConnection;
  app.locals.ESClient = ESConnection;
  app.listen(port, () =>
    console.log(`Server is listening on ${domain.API} (${env})`)
  );
  return app;
};

appInstance = startApp();

module.exports = { appInstance };
Error
I have a dockerized application (NodeJS and Elasticsearch v7.9.3). The server starts fine, but when I try to create a Client instance for Elasticsearch, it shows me this error:
ConnectionError: connect ECONNREFUSED 127.0.0.1:9200
at ClientRequest.<anonymous> (/app/node_modules/#elastic/elasticsearch/lib/Connection.js:109:18)
at ClientRequest.emit (events.js:210:5)
at Socket.socketErrorListener (_http_client.js:406:9)
at Socket.emit (events.js:210:5)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at processTicksAndRejections (internal/process/task_queues.js:80:21) {
name: 'ConnectionError',
meta: {
body: null,
statusCode: null,
headers: null,
meta: {
context: null,
request: [Object],
name: 'elasticsearch-js',
connection: [Object],
attempts: 5,
aborted: false
}
}
}
The Elasticsearch and Kibana servers are started; I can reach them in my browser at http://localhost:9200 and http://localhost:5601.
But when I connect from my NodeJS app, it still shows the error. I also tried finding my container's IP and using it in place of localhost, but it still doesn't work.
Can anyone help me resolve this? Thanks.
My Environment
node version: v10.19.0
@elastic/elasticsearch version: 7.9.3
os: Linux
Environment: Docker
When you run the docker-compose file, the elasticsearch service instance will not be available to your backend service at localhost. Change http://localhost:9200 to http://elasticsearch:9200 in your Node.js code.
Docker Compose automatically creates DNS entries with the same name as the service name for each service.
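One way to keep a single codebase working both inside Compose (service hostname) and directly on the host (localhost) is to make the node configurable. A minimal sketch; the helper name and the ELASTICSEARCH_NODE variable are assumptions, not part of the original answer:

```javascript
// Resolve the Elasticsearch node URL from the environment, falling back to
// the Compose service hostname when no override is provided.
function resolveEsNode(env) {
  return env.ELASTICSEARCH_NODE || "http://elasticsearch:9200";
}

// The client would then be created with:
//   const { Client } = require("@elastic/elasticsearch");
//   const client = new Client({ node: resolveEsNode(process.env) });
```

Inside Compose nothing extra is needed; on the host one would run with ELASTICSEARCH_NODE=http://localhost:9200 set.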
The following solution works for me:
http://127.0.0.1:9200