Node.js/Vue.js implementing Elasticsearch

I am new to Elasticsearch and confused about how to actually start implementing it. I have developed an office management application in which tasks, and other information related to those tasks, are stored daily for specific clients. The APIs are written in Node.js, the front end is in Vue.js, and MySQL is the database. I now want to implement search functionality using Elasticsearch so that users can search tasks with any parameters they like.
Listed below are some of my questions:
Will Elasticsearch work as another database? If so, how do I keep the records updated in the Elasticsearch DB as well?
Would it affect efficiency in any way?
Also, what are Kibana and Logstash, in simple terms?
Is implementing Elasticsearch on the client side a good idea? If yes, how can I implement Elasticsearch and Kibana using Vue.js?
I am confused by all of the above. Can anyone kindly share their knowledge on the questions listed, and also tell me which articles/docs/videos I should refer to for implementing Elasticsearch in the best possible way?

Elasticsearch
It is a data store: JSON documents (single records/rows) are stored in indexes (the equivalent of tables).
Update the records in Elasticsearch from your backend only, even though packages are available to connect the frontend to Elasticsearch directly.
Efficiency: nothing is affected, apart from the new component in your application's stack.
Implementing Elasticsearch on the client side is not a recommended option. The same code and the same API can be used up to your MySQL DB connection; just add a function that saves/updates the data in Elasticsearch alongside the MySQL save call.
Example: MySQLConfig.SaveStudent(student)
ElasticsearchConfig.SaveStudent(student)
Up to this point, no other code change is needed for save/update/delete/getByPrimaryID/GetByParamSearch callers.
For `getByPrimaryID`/`GetByParamSearch`, however, you have to create a separate API that queries either Elasticsearch or MySQL, but not both; a minimal sketch follows below.
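For illustration, here is a rough sketch of that dual-write-plus-search split, assuming the official `@elastic/elasticsearch` client (v8) and `mysql2`; the `students` table/index, its fields, and the connection details are hypothetical:
```
const { Client } = require('@elastic/elasticsearch');
const mysql = require('mysql2/promise');

const es = new Client({ node: 'http://localhost:9200' });
const pool = mysql.createPool({ host: 'localhost', user: 'app', password: 'secret', database: 'office' });

// Save to MySQL first (the source of truth), then mirror the record into Elasticsearch.
async function saveStudent(student) {
  const [result] = await pool.execute(
    'INSERT INTO students (name, email) VALUES (?, ?)',
    [student.name, student.email]
  );
  await es.index({
    index: 'students',
    id: String(result.insertId), // reuse the MySQL primary key as the document id
    document: student,           // v7 clients take `body` instead of `document`
  });
  return result.insertId;
}

// Parameter search goes to Elasticsearch only, as suggested above.
async function searchStudents(term) {
  const res = await es.search({
    index: 'students',
    query: { multi_match: { query: term, fields: ['name', 'email'] } },
  });
  return res.hits.hits.map((hit) => hit._source);
}
```
MySQL stays the source of truth here; if the Elasticsearch write fails, you can retry or re-index later without losing data.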
Kibana
A GUI for your Elasticsearch: look at it like dbForge Studio, MySQL Workbench, or phpMyAdmin for your database.
Beyond the GUI, it has a lot of other functionality, such as cluster monitoring, monitoring for the whole Elastic Stack, analytics, and so on.
Logstash
It ships data from many sources and saves it into Elasticsearch indexes. You don't need it unless you have use cases like:
application-prod.log to searchable index
Kafka topic to searchable index
MySQL Table to searchable index
There is a huge list of use cases for shipping almost anything and making it a searchable index. (For the MySQL case specifically, a small script can do the same job; see the sketch below.)
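A rough one-off alternative to Logstash for the "MySQL table to searchable index" case, under the same hypothetical `students` schema as above, using Elasticsearch's bulk API:
```
const { Client } = require('@elastic/elasticsearch');
const mysql = require('mysql2/promise');

async function reindexStudents() {
  const es = new Client({ node: 'http://localhost:9200' });
  const db = await mysql.createConnection({ host: 'localhost', user: 'app', password: 'secret', database: 'office' });

  const [rows] = await db.execute('SELECT id, name, email FROM students');

  // The bulk API expects alternating action/document entries.
  const operations = rows.flatMap((row) => [
    { index: { _index: 'students', _id: String(row.id) } },
    { name: row.name, email: row.email },
  ]);
  await es.bulk({ operations, refresh: true }); // v7 clients take `body` instead of `operations`

  await db.end();
}
```
Logstash's JDBC input is still the better choice when you need scheduled, incremental syncing rather than a one-off import.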
To understand clearly how index, mapping, and document in Elasticsearch correspond to database, table, schema, and record in MySQL, read from here.
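To make that correspondence concrete, here is a minimal sketch of creating an index with an explicit mapping, roughly the Elasticsearch equivalent of CREATE TABLE; the `tasks` index and its field names are made up for illustration:
```
const { Client } = require('@elastic/elasticsearch');
const es = new Client({ node: 'http://localhost:9200' });

async function createTasksIndex() {
  await es.indices.create({
    index: 'tasks',                    // roughly a table
    mappings: {                        // roughly the schema; v7 clients nest this under `body`
      properties: {                    // roughly the column definitions
        title: { type: 'text' },       // full-text searchable
        clientId: { type: 'keyword' }, // exact-match filterable
        createdAt: { type: 'date' },
      },
    },
  });
}
```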

Related

How do I sync data between PostgreSQL and Elasticsearch using Kibana in Node.js?

Ultimately, I want a scalable search engine solution for the data in PostgreSQL, which means connecting the PostgreSQL and Elasticsearch data. My findings point me towards using Logstash or Docker to write events from Postgres to Elasticsearch, but I have not found a usable solution.
I want a scalable solution; I hope you can share your comments on this and give me some clues on how to proceed further.

Elasticsearch: NodeJS MongoDB

My customer has an existing application built on Node and MongoDB. In this app there is a search functionality which is quite complex, and I want to convert it to Elasticsearch. How can I achieve this? I have installed Elasticsearch on my system (Windows), and now I am confused: do I need to put all the data into Elasticsearch? I don't know the concept yet. How can I implement Elasticsearch in the current app? Is Elasticsearch a separate database like MongoDB?

How can I switch between a live and a production database without duplicating code?

Here is my situation. I have an extensive REST-based API that connects to a MongoDB database using Mongoose. The API is written as a standard "MEAN" stack application.
Currently, when a developer queries the API they're always connecting to the live production database. What I want to do is have an exact duplicate database as a "staging" database, where new data will be added first, vetted over a period of time, and then move to the live database. Then I want developers to be able to query either one simply by modifying their query.
I started looking into this with the Mongoose documentation, and it appears as though the models are tied to the DB connection, and if I want to have multiple connections I also have to have multiple models, one for each connection. This would be a nightmare of WET code and not the path I want to take.
What I want to do is not touch any of my code at all and simply have a switch that changes to the proper database for a given query. So my question is, how can I achieve this? Is it possible? The documentation seems to imply it is not.
Rather than trying to maintain connections to two environments in the same code base, have you considered setting up a staging version of your application? Which database it connects to could be set through an environment variable or some other configuration option.
The developers would then only have to make a change to query one or the other, and you could migrate data from the staging database to the production/live database once you have finished your vetting process. A minimal sketch follows.
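A minimal sketch of the environment-variable approach with Mongoose: the models stay identical and only the connection string changes per environment. `STAGE_DB_URI` and `PROD_DB_URI` are hypothetical variable names:
```
const mongoose = require('mongoose');

// Set STAGE_DB_URI / PROD_DB_URI in each environment's configuration,
// not in the code base itself.
const uri = process.env.NODE_ENV === 'staging'
  ? process.env.STAGE_DB_URI // e.g. mongodb://localhost/myapp-staging
  : process.env.PROD_DB_URI; // e.g. mongodb://localhost/myapp

mongoose.connect(uri);

// Models are defined once and bound to whichever connection was opened,
// so no code is duplicated between the two environments.
const Task = mongoose.model('Task', new mongoose.Schema({ title: String }));
```
Running something like `NODE_ENV=staging node server.js` then points the same code at the staging data, with no duplicated models.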

Keyword Search in microservices based architecture

I need some advice on how to implement a search for keywords within relational databases owned by microservices.
I have some microservices, each with its own relational DB. These microservices are likely to be deployed in Docker containers.
What would be the best way to use a search engine like Apache SOLR so that each of the microservices' databases can be indexed and we can achieve keyword search?
Thanks in advance
This seems to be an architectural question, and while it makes sense, it is also a little open-ended depending on what your system requires. A couple of things that come off the top of my head are:
Use the DataImportHandler from Apache SOLR.
Use a message queue like Kafka or Kinesis and have the independent services consume from it to propagate events to their data stores: in this case, a search service backed by Apache SOLR and another service backed by MySQL.
Personally, I've never used the DataImportHandler, but my initial thought is that it couples Apache SOLR to MySQL: setting up the DataImportHandler requires Apache SOLR to know the MySQL schema, the access credentials, and so on. Because of this, I would advise the second option, which moves towards a shared-nothing architecture.
I'm going to call the service that is backed by MySQL the "entity" service, as it sounds like it's going to be the canonical service for saving a particular type of object. The entity service and the search service will each have their own consumer that ingests events from Kinesis or Kafka into their data store: the search service into Apache SOLR and the entity service into MySQL.
This decouples the services from knowing that each other exist and also allows each of them to scale independently. There will be redundancy in the data, but that should be alright because the data access patterns are different.
The other caveat I'd like to mention is that this assumes the entity you're saving can be handled asynchronously: notice that a message in this system is not required to be persisted in MySQL (the entity service) the moment it is produced. However, you may change that to your liking, so that a record is first persisted in the entity service and then propagated through a queue to the search service to index. After it has been indexed, you can just add additional endpoints that search Apache SOLR. Hope this gives you some insight into how some other architectures may come into play; if you give a little more insight into your system and the entities involved, you might be able to get a better answer.
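A rough sketch of the search service's consumer side, assuming the `kafkajs` and `solr-client` npm packages; the topic name `entity-events` and the Solr core `entities` are hypothetical:
```
const { Kafka } = require('kafkajs');
const solr = require('solr-client');

const kafka = new Kafka({ clientId: 'search-service', brokers: ['localhost:9092'] });
const consumer = kafka.consumer({ groupId: 'solr-indexer' });
// Assumes a Solr core named "entities" already exists.
const client = solr.createClient({ host: 'localhost', port: 8983, core: 'entities' });

async function run() {
  await consumer.connect();
  await consumer.subscribe({ topic: 'entity-events', fromBeginning: false });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const entity = JSON.parse(message.value.toString());
      await client.add(entity); // recent solr-client versions return promises here
      await client.commit();    // or rely on Solr's autoCommit settings instead
    },
  });
}

run().catch(console.error);
```
The entity service would run a structurally identical consumer that writes the same events into MySQL instead, which is what keeps the two stores independent.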

SubSonic-based app that connects to multiple databases

I have developed an app that connects to a SQL Server 2005 database, so my DAL objects were generated using information from that DB.
It must also be possible to connect to Oracle and MySQL DBs, all with the same table structures (aside from the normal differences in field types, such as varbinary(max) in SQL Server and BLOB in Oracle, and so on). For this purpose, I have already defined multiple connection strings and multiple SubSonic providers for the different DBs the app will run on.
My question is: since I generated my objects using a SQL Server database, should the generated objects work transparently with the other DBs, or do I need to generate a different DAL for each database engine I use? Should I be aware of any possible bugs I may encounter while performing these operations?
Thanks in advance for any advice on this issue.
I'm using SubSonic 2.2 by the way....
From what I've been able to test so far, I can't see an easy way to achieve what I'm trying to do.
The ideal situation for me would have been to generate SubSonic objects using SQL Server, for example, and just be able to switch dynamically to MySQL by creating the correct provider for it at runtime, along with its connection string. I got to a point where my app would correctly switch from a SQL Server DB to a MySQL DB, but then the app fails, since SubSonic internally generates queries of the form
SELECT * FROM dbo.MyTable
which MySQL obviously doesn't support. I also noticed queries that enclosed table names in brackets ([]), so it seems there are a number of factors that limit the use of one provider across multiple DB engines.
I guess my only other option is to sort it out with multiple generated providers, although I must admit it doesn't make me comfortable knowing that I'll have N copies of basically the same classes across my project.
I would really love to hear from anyone else if they've had similar experiences. I'll be sure to post my results once I get everything sorted out and working for my project.
Has any of this changed in 3.0? This would definitely be a worthy reason for me to upgrade if life is any easier on this matter...
