We have a MySQL database of about 4 TB. It has around 3,000 tables, with a few tables in the 200-300 GB range.
Queries on these bigger tables sometimes take 60 to 100 seconds or longer.
A Java application loads data into this database. Another Java Spring-based web application searches it, so the web application issues only SELECT queries.
I am planning to put a Redis database between the web application and MySQL to improve SELECT query performance and, in turn, web application performance.
The plan is to do a one-time migration to Redis initially and then modify the Java loader application to insert data into both MySQL and Redis.
Can I use Redis for this use case? Please let me know if there are any other ideas.
According to our MySQL DBA, the database is already tuned as far as it can be.
We can't make many changes on the database side due to infrastructure constraints; I can only try software-related changes.
It depends on the data model of your web application.
You can choose Memcached if it is just a key/value lookup load; otherwise choose Redis.
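If it helps, the read path in the web application would look roughly like this: check Redis first, fall back to MySQL on a miss, and repopulate the cache. This is only a sketch, assuming the Jedis client and a made-up item table and key scheme; the loader would keep writing to both stores as you planned, and the TTL limits drift if the two ever get out of sync.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import redis.clients.jedis.Jedis;

/**
 * Rough sketch of "read from Redis first, fall back to MySQL".
 * Table name, key scheme and connection details are placeholders.
 */
public class ItemLookup {

    private static final String MYSQL_URL = "jdbc:mysql://localhost:3306/appdb"; // placeholder
    private static final int TTL_SECONDS = 3600; // cached entries expire after an hour

    public static String findName(long id) throws Exception {
        String key = "item:" + id + ":name";
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String cached = jedis.get(key);              // 1. try Redis first
            if (cached != null) {
                return cached;
            }
            try (Connection db = DriverManager.getConnection(MYSQL_URL, "user", "pass");
                 PreparedStatement ps = db.prepareStatement(
                         "SELECT name FROM item WHERE id = ?")) {
                ps.setLong(1, id);                       // 2. fall back to MySQL on a miss
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) {
                        return null;
                    }
                    String name = rs.getString(1);
                    jedis.setex(key, TTL_SECONDS, name); // 3. repopulate the cache
                    return name;
                }
            }
        }
    }
}
```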
Related:
How to serve a large database to the clients?
How to notify clients to update only the changes?
Even better, how to provide this functionality with a sync mechanism?
Scenario:
The scenario has a few requirements I'll try to explain:
There is an offline database that devices need to obtain so they can work offline and independently.
Clients only have to sync themselves, using some replication mechanism like master-slave.
The master can write data to the DB, and slaves only have to sync and read data.
I have two bottlenecks here:
The database is about 60 MB, but it can grow much larger.
Because of the multi-platform use case, clients' devices may run macOS, Windows, Android, or iOS.
At first I was using Google Firestore for this purpose, but our data is somewhat sensitive and we cannot use that migration strategy in the future. So I created a large SQLite DB for the clients, which they can download manually. This is not right: even with small updates, our clients have to download the whole DB again.
Is it possible to create a self-syncing mechanism where the backend notifies clients to fetch updates?
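One common way to avoid shipping the whole SQLite file is to version every change on the server and let clients ask only for rows newer than the version they already have. A rough sketch of the server-side part, assuming a hypothetical changes table with a monotonically increasing version column (all names here are made up):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

/** Returns all changes recorded after the version the client last synced. */
public class DeltaSync {

    public static List<String> changesSince(long clientVersion) throws Exception {
        List<String> changes = new ArrayList<>();
        // Placeholder connection string; the real backend DB is up to you.
        try (Connection db = DriverManager.getConnection("jdbc:sqlite:server.db");
             PreparedStatement ps = db.prepareStatement(
                     "SELECT version, op, payload FROM changes WHERE version > ? ORDER BY version")) {
            ps.setLong(1, clientVersion);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // op is e.g. INSERT/UPDATE/DELETE, payload the serialized row
                    changes.add(rs.getLong("version") + " " + rs.getString("op")
                            + " " + rs.getString("payload"));
                }
            }
        }
        return changes;
    }
}
```

Clients apply the returned operations to their local SQLite copy and remember the highest version they have seen; the push notification (WebSocket, FCM, and so on) then only needs to say "there is something newer", and the data itself still comes from this query.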
I am learning how to use Socket.IO and Node.js. In this answer they explain how to store users who are online in an array in Node.js, without storing them in the database. How reliable is this?
Is data stored in the server reliable? Does the data always stay the way it is intended?
Is it advisable to even store data in the server? I am thinking of a scenario where there are millions of users.
Is there always just one instance of the server running, even when the app is served from different locations? If not, will storing data in the server cause inconsistencies between the different server instances?
Congrats on your learning so far! I hope you're having fun with it.
Is data stored in the server reliable? Does the data always stay the way it is intended?
No, storing data on the server is generally not reliable enough, unless you manage your server in its entirety. With managed services, storing data on the server should never be done because it could easily be wiped by the party managing your server.
Is it advisable to even store data in the server? I am thinking of a scenario where there are millions of users.
It is not advisable at all; you need a DB of some sort.
Is there always just one instance of the server running, even when the app is served from different locations? If not, will storing data in the server cause inconsistencies between the different server instances?
The way this typically works is that the server is always running and has some basic information about its configuration stored locally. When scaling, hosted services can increase processing capacity automatically and handle load balancing in the background. Whenever the server retrieves data for you, it requests it from the database and loads it into RAM (memory). In the user example, you would store the user data in a table or document (relational database vs. document-oriented database) and then load it into memory to manipulate it in your functions.
Additionally, to learn more about your 'data inconsistency' concern, look up concurrency as it pertains to databases, and data race conditions.
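To make that concrete for the online-users example: instead of a per-process array, each server instance can record presence in a shared store so every instance sees the same list. The sketch below uses Java with the Jedis client purely for illustration (the key name is made up); Node clients such as ioredis expose the same SADD/SREM/SMEMBERS commands.

```java
import java.util.Set;
import redis.clients.jedis.Jedis;

/**
 * Tracks online users in a shared store instead of a per-process array,
 * so every server instance sees the same list. Key name is made up.
 */
public class Presence {

    private static final String ONLINE_KEY = "users:online";

    public static void connected(Jedis jedis, String userId) {
        jedis.sadd(ONLINE_KEY, userId);    // add on socket connect
    }

    public static void disconnected(Jedis jedis, String userId) {
        jedis.srem(ONLINE_KEY, userId);    // remove on socket disconnect
    }

    public static Set<String> online(Jedis jedis) {
        return jedis.smembers(ONLINE_KEY); // consistent across all instances
    }
}
```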
Hope that helps!
Currently, my web application runs entirely on a Redis DB (the whole database is in Redis). It requires more than 4 GB of RAM, which costs me a lot.
I want to migrate part of my application's data into a persistent storage DB (SQL, Mongo, ...).
So, can anyone tell me which is the best choice (SQL, Mongo, ...)?
Technology stack of my application:
Node.js (Express)
AngularJS
Redis
It really depends on your design. Is your data highly relational? Redis is considered a NoSQL technology, so I guess MongoDB would be somewhat similar, but the implementation is file-based storage instead of an in-memory key-value set. If you need strong relationships between your data sets, then the SQL family is designed for exactly that, but a lot of work is needed to design the tables first and then split the data out into them.
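Whichever store you pick, the mechanical part of the migration looks much the same: walk the Redis keyspace, deserialize each value, and insert it into the new store, keeping only the hot keys in Redis afterwards. A rough sketch for the SQL case, assuming Jedis, one hash per record under user:* keys, and a made-up users table (adjust to however your data is actually laid out):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Map;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

/** One-off copy of user:* hashes from Redis into a relational table. */
public class RedisToSql {

    public static void main(String[] args) throws Exception {
        try (Jedis jedis = new Jedis("localhost", 6379);
             Connection db = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/app", "user", "pass"); // placeholder
             PreparedStatement insert = db.prepareStatement(
                     "INSERT INTO users (id, name, email) VALUES (?, ?, ?)")) {

            String cursor = ScanParams.SCAN_POINTER_START;            // starts at "0"
            ScanParams params = new ScanParams().match("user:*").count(500);
            do {
                ScanResult<String> page = jedis.scan(cursor, params);  // iterate keys in pages
                for (String key : page.getResult()) {
                    Map<String, String> hash = jedis.hgetAll(key);     // one record per hash
                    insert.setString(1, key.substring("user:".length()));
                    insert.setString(2, hash.get("name"));
                    insert.setString(3, hash.get("email"));
                    insert.addBatch();
                }
                insert.executeBatch();
                cursor = page.getCursor();
            } while (!cursor.equals(ScanParams.SCAN_POINTER_START));   // SCAN ends back at "0"
        }
    }
}
```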
I'm migrating from SQL Server to Azure SQL, and I'd like to ask those of you with more Azure experience (I have basically none) some questions, just to understand what I need to do to get the best migration.
Today I do a lot of cross-database queries in some tasks that run once a week. I execute stored procedures and run SELECTs, INSERTs, and UPDATEs across the databases. I solved the execution of stored procedures by using external data sources and sp_execute_remote. But as far as I can see, it's only possible to SELECT from an external database, meaning I won't be able to do any cross-database INSERTs or UPDATEs. Is that correct? If so, what's the best way to solve this problem?
I also read that cross-database calls are slow. Does this mean they're slower than in SQL Server? I want to know if I'll face a slower process compared to what I have today.
What I really need are some good guidelines on how to do the best migration without spending loads of time on trial and error. I appreciate any help in this matter.
Cross-database transactions are not supported in Azure SQL DB. You connect to a specific database and can't use three-part names or the USE syntax.
You could open two different connections from your program, one to each database. That doesn't give you any kind of transactional consistency, but it would allow you to retrieve data from one Azure SQL DB and insert it into another.
So, at least now, if you want your database in Azure and you can't avoid cross-database transactions, you'll be using an Azure VM to host SQL Server.
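For completeness, the two-connection approach mentioned above looks roughly like this; the sketch uses JDBC, placeholder connection strings, and made-up table names, and the same pattern applies with any client library:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

/**
 * Copies rows between two Azure SQL databases over two separate connections.
 * There is no shared transaction: if the insert side fails, the read side
 * cannot be rolled back along with it.
 */
public class CrossDbCopy {

    public static void main(String[] args) throws Exception {
        // Placeholder connection strings; fill in your own server, credentials, etc.
        String sourceUrl = "jdbc:sqlserver://myserver.database.windows.net:1433;database=SourceDb;user=appuser;password=secret";
        String targetUrl = "jdbc:sqlserver://myserver.database.windows.net:1433;database=TargetDb;user=appuser;password=secret";

        try (Connection source = DriverManager.getConnection(sourceUrl);
             Connection target = DriverManager.getConnection(targetUrl);
             PreparedStatement select = source.prepareStatement(
                     "SELECT Id, Name FROM dbo.Customer WHERE Modified >= DATEADD(day, -7, SYSUTCDATETIME())");
             PreparedStatement insert = target.prepareStatement(
                     "INSERT INTO dbo.CustomerCopy (Id, Name) VALUES (?, ?)")) {

            try (ResultSet rs = select.executeQuery()) {
                while (rs.next()) {
                    insert.setLong(1, rs.getLong("Id"));
                    insert.setString(2, rs.getString("Name"));
                    insert.addBatch();          // batch on the target connection
                }
            }
            insert.executeBatch();
        }
    }
}
```

Since there is no distributed transaction tying the two connections together, any retry or cleanup on failure has to be handled by your own code.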
I am working on an inventory application (C# .NET 4.0) that will simultaneously inventory dozens of workstations and write the results to a central database. To save me from having to write a DAL, I am thinking of using Fluent NHibernate, which I have never used before.
Is it safe and good practice to allow the inventory application, which runs as a standalone application, to talk directly to the database using NHibernate? Or should I be using a client-server model where all access to the database goes through a server which then reads/writes to the database? In other words, if 50 workstations are being inventoried concurrently, there would be 50 active DB sessions. I am thinking of using GUID comb for the PK IDs.
Depending on the environment in which your application will be deployed, you should also consider that direct database connections to a central server might not always be allowed for security reasons.
Creating a simple REST service with WCF (using WebServiceHost) and simply POSTing or PUTting your inventory data (using HttpClient) might provide a good alternative.
As a result, clients can stay very simple and can easily be written for other systems (Linux? Android?), and the server has full control over how and where the data is stored.
it depends ;)
NHibernate has optimistic concurrency control out of the box, which is good enough for many situations. So if you just create data on 50 different stations, there should be no problem. If creating data on one station depends on data from all stations, it gets tricky, and a central server would help.
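To make the optimistic-concurrency point concrete: the usual mechanism (which NHibernate's version mapping generates for you) is a version column that every UPDATE must match, so a stale writer is detected instead of silently overwriting. A bare-bones sketch of that underlying statement, with made-up table and column names (this is not NHibernate code, just the pattern it produces, shown here in plain JDBC):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;

/**
 * Optimistic concurrency by hand: the UPDATE only succeeds if the row still
 * carries the version number we read earlier. NHibernate's version mapping
 * issues the same kind of statement automatically.
 */
public class OptimisticUpdate {

    public static void rename(Connection db, long id, int expectedVersion, String newName)
            throws Exception {
        try (PreparedStatement ps = db.prepareStatement(
                "UPDATE workstation SET name = ?, version = version + 1 "
                        + "WHERE id = ? AND version = ?")) {
            ps.setString(1, newName);
            ps.setLong(2, id);
            ps.setInt(3, expectedVersion);
            if (ps.executeUpdate() == 0) {
                // Zero rows updated: someone else changed the row since we read it.
                throw new IllegalStateException("Row " + id + " was modified concurrently");
            }
        }
    }
}
```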