I have Elasticsearch (v6.4.0) running on a Windows 10 machine, and a Python (v3.6.0) application with an Angular 5 frontend and MongoDB as the backend. I want to use Elasticsearch with Python so that data sent from the UI is inserted into MongoDB and the same data is also indexed in Elasticsearch. How can I achieve this?
I have succeeded in connecting to the Elasticsearch server from Python, but I am stuck at creating an index and querying the indexed data.
Please help.
Regards,
Vidyashree
You can index a document using:
from elasticsearch import Elasticsearch

es = Elasticsearch()
es.index(index='indexName', doc_type='type', body=document)
And search using:
es.search(index='indexName', doc_type='type', body=searchQuery)
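For reference, here is a minimal stdlib-only sketch of the same two operations against Elasticsearch's REST API, with no client library required. The index name food, type _doc, document id, and sample document are made-up values for illustration:

```python
import json
import urllib.request

ES_URL = 'http://localhost:9200'  # assumption: default local Elasticsearch

def index_request(index, doc_type, doc_id, doc):
    # Build the PUT request that indexes `doc`; this is what an
    # es.index(index=..., doc_type=..., id=..., body=...) call sends on the wire.
    data = json.dumps(doc).encode('utf-8')
    return urllib.request.Request(
        '{}/{}/{}/{}'.format(ES_URL, index, doc_type, doc_id),
        data=data,
        headers={'Content-Type': 'application/json'},
        method='PUT')

def match_query(field, text):
    # Body for a simple full-text search on one field, usable as the
    # `body` argument of es.search(...)
    return {'query': {'match': {field: text}}}

# To actually send it (the server must be running):
# urllib.request.urlopen(index_request('food', '_doc', '1', {'name': 'dosa'}))
```

The helper names index_request and match_query are my own; the point is only to show the URL shape and query body that the client calls above produce.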
Related
I have created a Neo4j and GraphQL application with Neo4j 4.0. In my application I use two Neo4j databases; these instances run in a Docker container on my PC. But when I try to run a query using the GraphQL playground, the GraphQL server gives the following error:
"Could not perform discovery. No routing servers available. Known routing table: RoutingTable[database=default database, expirationTime=0, currentTime=1592037819743, routers=[], readers=[], writers=[]]"
I created the Neo4j driver instance and session instance as follows:
const driver = neo4j.driver(
  process.env.NEO4J_URI || "neo4j://localhost:7687",
  neo4j.auth.basic(
    process.env.NEO4J_USER,
    process.env.NEO4J_PASSWORD
  )
);
const session = driver.session({
  database: 'mydb',
});
I couldn't find any way to fix this issue. Can someone help me fix it? Thank you.
If you use a single server, use bolt:// as the protocol. Then the driver will not ask the server for routing tables.
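Applied to the setup from the question, the only change is the scheme in the URI. The helper below is hypothetical (not part of neo4j-driver); it rewrites a routing neo4j:// URI into a direct bolt:// one, which can be handy when the URI comes from the environment:

```javascript
// neo4j:// asks the server for routing tables (meant for clusters);
// bolt:// connects directly to a single server. This helper rewrites
// the scheme so a single-instance setup never attempts discovery.
function toDirectUri(uri) {
  return uri.replace(/^neo4j(\+ssc|\+s)?:\/\//, 'bolt$1://');
}

// Usage with the driver from the question would look like:
//   const driver = neo4j.driver(
//     toDirectUri(process.env.NEO4J_URI || 'neo4j://localhost:7687'),
//     neo4j.auth.basic(process.env.NEO4J_USER, process.env.NEO4J_PASSWORD)
//   );
```

The +s and +ssc variants (encrypted schemes) are rewritten the same way, since bolt+s:// and bolt+ssc:// are also direct connections.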
I want to print all the queries executed on MongoDB from my LoopBack 3 application when in debug mode. I tried setting "DEBUG": "loopback:connector:mongodb".
I am using LoopBack 2 and I also had to check the MongoDB queries for my APIs. I used the command DEBUG=loopback:connector:mongodb node . to start my LoopBack server with debugging enabled.
There is one more alternative way to do this: you can add a key debug and set it to true in your datasource config file (datasources.json).
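A sketch of what that could look like in the datasource config (the datasource name and connection settings are placeholders for your own; LoopBack projects typically keep this in server/datasources.json):

```json
{
  "db": {
    "name": "db",
    "connector": "mongodb",
    "url": "mongodb://localhost:27017/mydb",
    "debug": true
  }
}
```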
If the above two methods don't work for you, please check the value of the debug property in the MongoDB function in the node_modules/loopback-connector-mongodb/lib/mongodb.js file.
Resources
https://loopback.io/doc/en/lb2/Setting-debug-strings.html
I'm building a webapp using the following architecture:
a PostgreSQL database (called DB),
a NodeJS service (called DBService) using Sequelize to manipulate the DB and Epilogue to expose a REST interface via Express,
a NodeJS service (called Backend) serving as a backend and using DBService through REST calls,
an AngularJS website (called Frontend) using Backend.
Here are the versions I'm using:
PostgreSQL 9.3
Sequelize 2.0.4
Epilogue 0.5.2
Express 4.13.3
My DB schema is quite complex, containing 36 tables, some of which contain a few hundred records. The DB is not written to very often; it is mostly read from.
Recently I created a script in Backend to do a complete check-up of the data contained in the DB: basically this script retrieves all the data from all tables and performs some basic checks on it. Currently the script only reads from the database.
In order to achieve my script I had to remove the pagination limit of Epilogue by using the option pagination: false (see https://github.com/dchester/epilogue#pagination).
But now when I launch my script I randomly get this kind of error:
The request failed when trying to retrieve a uniquely associated objects with URL:http://localhost:3000/CallTypes/178/RendererThemes.
Code : -1
Message : Error: connect ECONNRESET 127.0.0.1:3000
The error appears randomly during the script execution: it's not always this URL that is returned, and not even always the same tables or relations. The error message before the code is a custom message returned by Backend.
The URL refers to DBService, but I don't see any error there, even when using logging: console.log in Sequelize and DEBUG=express:* to see what happens in Express.
I tried putting some setTimeout calls in my Backend script to slow it down, without any real change. I also tried manipulating different values such as the PostgreSQL max_connections limit (I set it to 1000 connections), and Sequelize's maxConcurrentQueries and pool settings, but without success so far.
I did not find where I can customize the connection pool of Express; maybe that would do the trick.
I assume the error comes from DBService, from the Express configuration, or from somewhere in the configuration of the DB (either in Sequelize/Epilogue or in the PostgreSQL server itself), but as I did not see any error in any log I'm not sure.
Any ideas to help me solve it?
EDIT
After further investigation I may have found the answer, which is very similar to How to avoid a NodeJS ECONNRESET error?: I'm using my own RestClient object to make my HTTP requests, and this object was built as a singleton with this method:
var NodeRestClient: any = require('node-rest-client').Client;
...
static getClient() {
    if (RestClient.client == null) {
        RestClient.client = new NodeRestClient();
    }
    return RestClient.client;
}
I was therefore always using the same object for all my requests, and when the process went too fast it created collisions... So I just removed the if (RestClient.client == null) test, and for now it seems to work.
If there is a better way to manage this, by closing requests or managing a pool, feel free to contribute :)
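For reference, this is the shape of the accessor after removing the singleton check. NodeRestClient is stubbed here so the pattern stands on its own; in the real code it is require('node-rest-client').Client:

```javascript
// Stub standing in for require('node-rest-client').Client
class NodeRestClient {}

class RestClient {
  // The old version cached one shared instance; concurrent requests then
  // collided on the same client and produced ECONNRESET. Handing every
  // caller a fresh client removes the shared state.
  static getClient() {
    return new NodeRestClient();
  }
}
```

A cleaner long-term fix would be an explicit pool of clients checked out and returned per request, but a fresh client per call is the minimal change matching the fix described above.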
I already have Solr configured and running on port 8983. Initially I indexed all the data in MongoDB using the data import handler. But then I realized that for every update and new insertion to be indexed automatically, we need Mongo Connector. I followed these links: Mongo Connector and Usage-with-Solr.
I am getting stuck at this step:
python mongo_connector.py -m localhost:27017 -t http://localhost:8983/solr
It shows the error:
python: can't open file 'mongo_connector.py': [Errno 2] No such file or directory
How do I integrate a collection named food with the MongoDB collection testfood, so that new insertions and updates are automatically reflected in Solr?
When I create the Azure database with the Entity Framework model-first pattern, the creation works. But when I want to save the database I get the following error:
"Tables without a clustered index are not supported in this version of SQL Server. Please create a clustered index and try again."
I updated Entity Framework to version 6.1.2, but I still get the same error. Do you have any idea?