I have the following code to save to a locally running Mongo instance:
MongoCredential credential = MongoCredential.createCredential("myuser", "mydatabase", "mypassword".toCharArray());
MongoClient mongo = MongoClients.create(MongoClientSettings.builder()
        .applyToClusterSettings(builder ->
                builder.hosts(Arrays.asList(new ServerAddress("localhost", 27017))))
        .credential(credential)
        .build());
MongoDatabase database = mongo.getDatabase("mydatabase");
MongoCollection<Document> collection = database.getCollection("mycollection");
collection.insertOne(document);
I created a user for the username/password used in the code above with the db.createUser() command in the mongo.exe shell; these are the same credentials I provided while installing MongoDB.
db.createUser({
    user: "myuser",
    pwd: "mypassword",
    roles: [{ role: "userAdminAnyDatabase", db: "admin" }]
})
But the code fails with:
Exception in thread "main" com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=SCRAM-SHA-1, userName='myuser', source='mydatabase', password=<hidden>, mechanismProperties={}}
What am I missing here?
Where, i.e. in which database, did you create the user? Typically users are created in the admin database. When you connect to MongoDB, you should always specify both the authentication database and the database you want to use.
The defaults are a bit confusing and not really consistent; in particular, different drivers and tools behave differently. See this table for an overview:
+--------------------------------------------------------------------------------------+
| Connection parameters                                    | Authentication | Current  |
|                                                          | database       | database |
+--------------------------------------------------------------------------------------+
| mongo -u user -p pwd --authenticationDatabase admin myDB | admin          | myDB     |
| mongo -u user -p pwd myDB                                | myDB           | myDB     |
| mongo -u user -p pwd --authenticationDatabase admin      | admin          | test     |
| mongo -u user -p pwd --host localhost:27017              | admin          | test     |
| mongo -u user -p pwd                                     | admin          | test     |
| mongo -u user -p pwd localhost:27017                     | test           | test     |
| mongosh -u user -p pwd localhost:27017                   | admin          | test     |
+--------------------------------------------------------------------------------------+
(Note: the last case behaves differently in mongosh than in the legacy mongo shell.)
If you prefer a connection string in URI format, it corresponds to the following. There the behavior is more consistent and well documented:
+--------------------------------------------------------------------------------------+
| Connection string                                        | Authentication | Current  |
|                                                          | database       | database |
+--------------------------------------------------------------------------------------+
| "mongodb://user:pwd@hostname/myDB?authSource=admin"      | admin          | myDB     |
| "mongodb://user:pwd@hostname/myDB"                       | myDB           | myDB     |
| "mongodb://user:pwd@hostname?authSource=admin"           | admin          | test     |
| "mongodb://user:pwd@hostname"                            | admin          | test     |
+--------------------------------------------------------------------------------------+
I guess you created the user in the admin database, but since you don't specify an authentication database when connecting, Mongo defaults to mydatabase, where authentication fails because the user does not exist there.
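In the Java driver shown in the question, the second argument to createCredential is the authentication database (the source in the exception message). A minimal sketch of the fix, assuming the user was indeed created in admin:
// Authenticate against "admin" (where db.createUser() was run),
// while still reading and writing "mydatabase".
MongoCredential credential = MongoCredential.createCredential("myuser", "admin", "mypassword".toCharArray());
MongoClient mongo = MongoClients.create(MongoClientSettings.builder()
        .applyToClusterSettings(builder ->
                builder.hosts(Arrays.asList(new ServerAddress("localhost", 27017))))
        .credential(credential)
        .build());
MongoDatabase database = mongo.getDatabase("mydatabase");

// Equivalent with a connection string:
// MongoClient mongo = MongoClients.create(
//         "mongodb://myuser:mypassword@localhost:27017/mydatabase?authSource=admin");
Note also that userAdminAnyDatabase only grants user administration; the insertOne() call may additionally require a role such as readWrite on mydatabase.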
Related
This issue comes down to the fact that the file exists in the backend container but not in the postgres container. How could I transfer the file between containers automatically?
I am currently trying to execute the following script:
COPY climates(
station_id,
date,
element,
data_value,
m_flag,
q_flag,
s_flag,
obs_time
)
FROM '/usr/api/2017.csv'
DELIMITER ','
CSV HEADER;
within a Docker container running a Sequelize backend that connects to a postgres:14.1-alpine container.
The following error is returned:
db_1 | 2022-08-30 04:23:58.358 UTC [29] ERROR: could not open file "/usr/api/2017.csv" for reading: No such file or directory
db_1 | 2022-08-30 04:23:58.358 UTC [29] HINT: COPY FROM instructs the PostgreSQL server process to read a file. You may want a client-side facility such as psql's \copy.
db_1 | 2022-08-30 04:23:58.358 UTC [29] STATEMENT: COPY climates(
db_1 | station_id,
db_1 | date,
db_1 | element,
db_1 | data_value,
db_1 | m_flag,
db_1 | q_flag,
db_1 | s_flag,
db_1 | obs_time
db_1 | )
db_1 | FROM '/usr/api/2017.csv'
db_1 | DELIMITER ','
db_1 | CSV HEADER;
ebapi | Unable to connect to the database: MigrationError: Migration 20220829_02_populate_table.js (up) failed: Original error: could not open file "/usr/api/2017.csv" for reading: No such file or directory
ebapi | at /usr/api/node_modules/umzug/lib/umzug.js:151:27
ebapi | at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
ebapi | at async Umzug.runCommand (/usr/api/node_modules/umzug/lib/umzug.js:107:20)
ebapi | ... 2 lines matching cause stack trace ...
ebapi | at async start (/usr/api/index.js:14:3) {
ebapi | cause: Error
ebapi | at Query.run (/usr/api/node_modules/sequelize/lib/dialects/postgres/query.js:50:25)
ebapi | at /usr/api/node_modules/sequelize/lib/sequelize.js:311:28
ebapi | at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
ebapi | at async Object.up (/usr/api/migrations/20220829_02_populate_table.js:10:5)
ebapi | at async /usr/api/node_modules/umzug/lib/umzug.js:148:21
ebapi | at async Umzug.runCommand (/usr/api/node_modules/umzug/lib/umzug.js:107:20)
ebapi | at async runMigrations (/usr/api/util/db.js:52:22)
ebapi | at async connectToDatabase (/usr/api/util/db.js:32:5)
ebapi | at async start (/usr/api/index.js:14:3) {
ebapi | name: 'SequelizeDatabaseError',
...
Here is my docker-compose.yml:
# set up a postgres database
version: "3.8"
services:
  db:
    image: postgres:14.1-alpine
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    ports:
      - "5432:5432"
    volumes:
      - db:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/create_tables.sql
  api:
    container_name: ebapi
    build:
      context: ./energybot
    depends_on:
      - db
    ports:
      - 3001:3001
    environment:
      DATABASE_URL: postgres://postgres:postgres@db:5432/postgres
      DB_HOST: db
      DB_PORT: 5432
      DB_USER: postgres
      DB_PASSWORD: postgres
      DB_NAME: postgres
    links:
      - db
    volumes:
      - "./energybot:/usr/api"
volumes:
  db:
    driver: local
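Since a server-side COPY reads from the Postgres container's filesystem (as the HINT in the log says), one way to proceed is to mount the CSV into the db service as well, so both containers see the same file. A sketch against the compose file above, assuming the CSV lives at ./energybot/2017.csv on the host (that host path is an assumption):
services:
  db:
    image: postgres:14.1-alpine
    volumes:
      - db:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/create_tables.sql
      # Mount the CSV at the exact path the COPY statement expects,
      # so the server process can open it for reading.
      - ./energybot/2017.csv:/usr/api/2017.csv
Alternatively, psql's \copy variant reads the file client-side, so it can run from wherever the file already exists.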
I am building a REST API with PERN (Postgres, Express, React, Node). I am trying to test my user registration route in Postman, and when I send the request I get this error: "database [my database name] does not exist"
I checked the Postgres server and I can clearly see that I own the database and it is created. This is my connection:
const Pool = require("pg").Pool
const pool = new Pool({
host: "localhost",
user: "[myuser]",
password: "[mypassword]",
port: 5432,
database: "rental"
})
module.exports = pool;
Name | Owner | Encoding | Collate | Ctype | Access privileges
----------------+----------------+----------+-------------+-------------+-----------------------
lucasleiberman | lucasleiberman | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
rental | lucasleiberman | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
rentalapp | lucasleiberman | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres
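One way to narrow this down (not part of the original post, just a hypothetical diagnostic) is to ask the server which database the pool actually reached; if another Postgres installation is listening on port 5432, the pool may be connecting to a different instance than the one whose \l output is shown above. The "./db" require path is an assumption about where the connection module lives:
const pool = require("./db"); // the connection module shown above

// Ask the server what it thinks we connected to; a mismatch here
// usually means a different Postgres instance owns port 5432.
pool.query("SELECT current_database() AS db, version() AS server", (err, res) => {
  if (err) return console.error(err);
  console.log(res.rows[0]);
});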
I am new to the ELK stack and stuck with data ingestion into Elasticsearch using Logstash.
These are the steps I followed:
Installed the ELK stack successfully.
Installed the logstash-input-mongodb plugin.
Configured Logstash with the file below:
input {
mongodb {
uri => 'mongodb://localhost:27017/dbName'
placeholder_db_dir => '/opt/logstash-mongodb/'
placeholder_db_name => 'logstash_sqlite.db'
collection => 'notifications'
batch_size => 1
}
}
output {
elasticsearch {
action => "index"
index => "notifications_data"
hosts => ["localhost:9200"]
}
stdout { codec => json }
}
Saved the above file as mongo-connector.conf, then ran it with:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/mongo-connector.conf
After this, the logs on the terminal were:
D, [2020-11-07T14:01:45.739178 #29918] DEBUG -- : MONGODB | localhost:27017 req:480 conn:1:1 sconn:78 | dbName.listCollections | STARTED | {"listCollections"=>1, "cursor"=>{}, "nameOnly"=>true, "$db"=>"dbName", "lsid"=>{"id"=><BSON::Binary:0x2064 type=uuid data=0x4508feda2dce4ec6...>}}
D, [2020-11-07T14:01:45.741919 #29918] DEBUG -- : MONGODB | localhost:27017 req:480 | dbName.listCollections | SUCCEEDED | 0.002s
D, [2020-11-07T14:01:50.756430 #29918] DEBUG -- : MONGODB | localhost:27017 req:481 conn:1:1 sconn:78 | dbName.find | STARTED | {"find"=>"notifications", "filter"=>{"_id"=>{"$gt"=>BSON::ObjectId('5fa012440d0e947dd8dfd2f9')}}, "limit"=>1, "$db"=>"dbName", "lsid"=>{"id"=><BSON::Binary:0x2064 type=uuid data=0x4508feda2dce4ec6...>}}
D, [2020-11-07T14:01:50.758080 #29918] DEBUG -- : MONGODB | localhost:27017 req:481 | dbName.find | SUCCEEDED | 0.001s
D, [2020-11-07T14:01:50.780259 #29918] DEBUG -- : MONGODB | localhost:27017 req:482 conn:1:1 sconn:78 | dbName.listCollections | STARTED | {"listCollections"=>1, "cursor"=>{}, "nameOnly"=>true, "$db"=>"dbName", "lsid"=>{"id"=><BSON::Binary:0x2064 type=uuid data=0x4508feda2dce4ec6...>}}
D, [2020-11-07T14:01:50.782687 #29918] DEBUG -- : MONGODB | localhost:27017 req:482 | dbName.listCollections | SUCCEEDED | 0.002s
D, [2020-11-07T14:01:53.986862 #29918] DEBUG -- : MONGODB | Server description for localhost:27017 changed from 'standalone' to 'standalone' [awaited].
D, [2020-11-07T14:01:53.987784 #29918] DEBUG -- : MONGODB | There was a change in the members of the 'Single' topology.
D, [2020-11-07T14:01:54.311966 #29918] DEBUG -- : MONGODB | Server description for localhost:27017 changed from 'standalone' to 'standalone'.
D, [2020-11-07T14:01:54.312747 #29918] DEBUG -- : MONGODB | There was a change in the members of the 'Single' topology.
D, [2020-11-07T14:01:55.799418 #29918] DEBUG -- : MONGODB | localhost:27017 req:483 conn:1:1 sconn:78 | dbName.find | STARTED | {"find"=>"notifications", "filter"=>{"_id"=>{"$gt"=>BSON::ObjectId('5fa012440d0e947dd8dfd2f9')}}, "limit"=>1, "$db"=>"dbName", "lsid"=>{"id"=><BSON::Binary:0x2064 type=uuid data=0x4508feda2dce4ec6...>}}
Below is the Logstash log file:
[2020-11-07T16:32:33,678][WARN ][logstash.inputs.mongodb ][main][6a52e3ca90ba4ebc63108d49c11fcede25b196c679f313b40b02a8e17606c977] MongoDB Input threw an exception, restarting {:exception=>#<Sequel::DatabaseError: Java::JavaSql::SQLException: attempt to write a readonly database>}
The index gets created in Elasticsearch, but no docs get inserted into it.
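The Sequel error in the log points at the plugin's local SQLite placeholder file rather than at MongoDB: logstash-input-mongodb tracks its position in placeholder_db_dir, and if the Logstash service user cannot write to /opt/logstash-mongodb/ the plugin fails with "attempt to write a readonly database" and restarts, without ever shipping documents. A likely fix, assuming a package install where Logstash runs as the logstash user:
# Give the Logstash service user ownership of the placeholder directory
# so the plugin can create and update its SQLite tracking database.
sudo mkdir -p /opt/logstash-mongodb
sudo chown -R logstash:logstash /opt/logstash-mongodb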
When I try to run a bash script in Git Bash (on Windows) to execute psql commands, I get the following result:
name@DESKTOP-AVB2MRC MINGW64 ~/dragonstack/backend
$ npm run configure
> backend#1.0.0 configure C:\Users\name\dragonstack\backend
> bash ./bin/configure_db.sh
Configuring dragonstackdb
My Files:
configure_db.sh
#!/bin/bash
echo "Configuring dragonstackdb";
dropdb -U node_user dragonstackdb;
createdb -U node_user dragonstackdb;
psql -U node_user dragonstackdb <./bin/sql/generation.sql;
psql -U node_user dragonstackdb <./bin/sql/dragon.sql;
echo "dragonstackdb configured";
generation.sql
CREATE TABLE generation(
id SERIAL PRIMARY KEY ,
expiration TIMESTAMP NOT NULL
);
dragon.sql
CREATE TABLE dragon(
id SERIAL PRIMARY KEY,
birthdate TIMESTAMP NOT NULL,
nickname VARCHAR(64),
"generationId" INTEGER,
FOREIGN KEY ("generationId") REFERENCES generation(id)
);
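If the script stops after printing "Configuring dragonstackdb", one common culprit under Git Bash on Windows (an assumption, since the post does not show what happens after that line) is that dropdb/createdb/psql are silently waiting for a password prompt that Git Bash does not render. A sketch that avoids interactive prompts by exporting libpq's PGPASSWORD variable (the placeholder password is hypothetical):
#!/bin/bash
# Supply the password through the environment so the Postgres CLI
# tools never stop at a hidden interactive prompt.
export PGPASSWORD='node_user_password_here'

echo "Configuring dragonstackdb"
dropdb -U node_user dragonstackdb
createdb -U node_user dragonstackdb
psql -U node_user dragonstackdb < ./bin/sql/generation.sql
psql -U node_user dragonstackdb < ./bin/sql/dragon.sql
echo "dragonstackdb configured"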
I have two projects in my application:
App/AppServer
libraries/domain
Below is the folder structure:
+---apps
| \---AppServer
| +---config
| +---node_modules
| +---src
| | +---auth
| | | \---dto
| | +---config
| | +---masterDataHttp
| | \---tasks
| | +---dto
| | \---pipes
| \---test
+---libraries
| \---domain
| +---node_modules
| \---src
| \---masterData
\---node_modules
I have some entities defined under libraries\domain\src\masterData and a few entities under apps\AppServer\src\tasks.
My ormconfig is defined under apps\AppServer\src\config. It imports the entities using
__dirname + '/../**/*.entity.{js,ts}'
Using the above, we can import the entities under apps\AppServer\src, but I am trying to figure out the best approach to import the entities defined under libraries\domain\src.
One option is to import the entities directly using
import { Entity1, Entity2 } from '@myproj/domain'
What is the recommended practice/approach to address this? TIA
Importing the entities directly:
entities: [__dirname + '/../**/*.entity.{js,ts}', Entity1, Entity2]
Or you could wrap all of the entities of the library in a variable:
// libraries/domain
export const entities = [Entity1, Entity2];
// importing the entities
import { entities } from '@myproj/domain';
...
entities: [__dirname + '/../**/*.entity.{js,ts}', ...entities]
Importing the entities with a path string, as you already do:
entities: [
__dirname + '/../**/*.entity.{js,ts}',
'src/libraries/domain/**/*.entity.{js,ts}'
]
This will just work.