I wonder if there is an option to run an AQL query over several databases?
I know that with pyArango you could write a script that iterates over all databases and executes the AQL query, but is that possible natively?
No, this is not possible. By design, each database is isolated from the others. The documentation states the following:
Please note that commands, actions, scripts or AQL queries should never access multiple databases, even if they exist. The only intended and supported way in ArangoDB is to use one database at a time for a command, an action, a script or a query. Operations started in one database must not switch the database later and continue operating in another.
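That said, the per-database loop described in the question is straightforward to write with any driver. Here is a minimal sketch using arangojs (pyArango works the same way); the endpoint, credentials, and the example query are assumptions for illustration:

```js
// Minimal sketch: run the same AQL query against every database, one
// database at a time, as ArangoDB requires. Endpoint, credentials and the
// query are placeholders.
const { Database } = require("arangojs");

async function runEverywhere(query, bindVars = {}) {
  const system = new Database({
    url: "http://localhost:8529",
    auth: { username: "root", password: "" },
  });

  const results = {};
  for (const name of await system.listDatabases()) {
    if (name === "_system") continue;        // skip the system database
    const db = system.database(name);        // handle scoped to that database
    const cursor = await db.query(query, bindVars);
    results[name] = await cursor.all();      // collect per-database results
  }
  return results;
}

// Example usage (collection name is an assumption):
// runEverywhere("FOR u IN users RETURN u.name").then(console.log);
```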
CosmosDB can geo-replicate collections and clients can be configured to make (read-only) queries to these "follower" regions.
Is there a built-in way for CosmosDB to provide a "follower" collection in the same region?
The scenario is to use the "main" collection for fast, interactive queries, and the "follower" collection for slower, heavier backend queries, without the risk of hitting limits and causing throttling that would impact the interactive case.
The usual answer for "copying" collections is to use a change feed (possibly via an Azure function), but this is "manual" work and the client (me) would have to take care of general dev-ops overhead like provisioning, telemetry, monitoring, alerting, key rotation etc.
I'd like to know if there's a "managed" way to do this, like there is for geo-replication.
The built-in geo-replication feature only works when replicating to different regions. You cannot replicate the same collection(s) back to the same region.
You'll need to set this up yourself. As you've already mentioned, you can use Change Feed to do this (though you called it a "manual" process and I don't see it as such, since this can be completely automated in code). You can also incorporate a messaging/event pattern: subscribe to database update events, and have multiple consumers writing to different database collections, per your querying needs.
Also: by having an independent collection where you provide the data-movement code, you can choose a different data model for your slower, heavier backend queries (maybe with a different partition key; maybe with some helpful aggregations; etc.).
There's really no way to avoid the added infrastructure setup.
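For example, a minimal Azure Functions sketch of the Change Feed approach might look like the following. This assumes a cosmosDBTrigger input binding (watching the "main" collection, with a lease collection) and a Cosmos DB output binding named outputDocument (writing to the "follower" collection) declared in function.json; all names are placeholders:

```js
// index.js - change-feed-triggered copy from the "main" collection into a
// "follower" collection in the same region. The trigger delivers the batch
// of changed documents; the output binding upserts them into the target.
module.exports = async function (context, documents) {
  if (documents && documents.length > 0) {
    // Forward every changed/inserted document to the follower collection.
    // This is also the place to reshape documents for read-heavy queries
    // (different partition key, pre-computed aggregations, etc.).
    context.bindings.outputDocument = documents;
    context.log(`Copied ${documents.length} changed document(s) to the follower collection.`);
  }
};
```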
Replication is limited to a single container/collection. For most scenarios like yours, one would use an alternate partition key to make the second collection read-optimized. You should also review your top queries and consider using an alternate database that is more read-optimized.
You could use this new tool:
https://github.com/Azure-Samples/azure-cosmosdb-live-data-migrator
I'm migrating from SQL Server to Azure SQL and I'd like to ask those of you with more experience in Azure (I have basically none) some questions, just to understand what I need to do to get the best migration.
Today I do a lot of cross-database queries in some of my tasks that run once a week. I execute SPs and run selects, inserts and updates across the databases. I solved the execution of SPs by using external data sources and sp_execute_remote, but as far as I can see it's only possible to select from an external database, meaning I won't be able to do any inserts or updates across the databases. Is that correct? If so, what's the best way to solve this problem?
I also read that cross-database calls are slow. Does this mean they are slower than in SQL Server? I want to know whether I'll face a slower process compared to what I have today.
What I really need is some good guidelines on how to do the best migration without spending loads of time on trial and error. I appreciate any help in this matter.
Cross-database transactions are not supported in Azure SQL Database. You connect to a specific database, and you can't use three-part names or the USE syntax.
You could open two different connections from your program, one to each database. This doesn't give you any kind of transactional consistency, but it would allow you to retrieve data from one Azure SQL database and insert it into another.
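As a rough sketch of that two-connection pattern (shown here with the Node.js mssql package; server, database, table and column names are assumptions for illustration):

```js
// Read rows from one Azure SQL database and insert them into another using
// two independent connections. There is no cross-database transaction, so
// each statement is its own unit of work.
const sql = require("mssql");

async function copyRows() {
  const source = await new sql.ConnectionPool({
    server: "myserver.database.windows.net",
    database: "SourceDb",
    user: "appuser",
    password: process.env.SQL_PASSWORD,
    options: { encrypt: true },
  }).connect();

  const target = await new sql.ConnectionPool({
    server: "myserver.database.windows.net",
    database: "TargetDb",
    user: "appuser",
    password: process.env.SQL_PASSWORD,
    options: { encrypt: true },
  }).connect();

  // Select from the first database...
  const { recordset } = await source
    .request()
    .query("SELECT Id, Name FROM dbo.Customers");

  // ...and insert into the second, row by row.
  for (const row of recordset) {
    await target
      .request()
      .input("id", sql.Int, row.Id)
      .input("name", sql.NVarChar, row.Name)
      .query("INSERT INTO dbo.CustomersCopy (Id, Name) VALUES (@id, @name)");
  }

  await source.close();
  await target.close();
}
```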
So, at least for now, if you want your database in Azure and you can't avoid cross-database transactions, you'll be using an Azure VM to host SQL Server.
Is there any disadvantage to executing a native SQL query using either dataSource.connector.execute(sql, params, cb) or dataSource.connector.query(sql, params, cb), compared to using connected models?
I recently tested the same scenario with sample data using both native SQL and connected models. When I used native SQL I noticed that the MySQL connection gets lost when I increase the load, but the same operations performed with LoopBack's connected models and filters can sustain almost 4 times the load of the native SQL connector.
LoopBack also doesn't recommend using the native SQL connector:
This feature has not been fully tested and is not officially supported: the API may change in future releases. In general, it is always better to perform database actions through connected models. Directly executing SQL may lead to unexpected results, corrupted data, and other issues. - LoopBack
So my exact question is: has anybody noticed any disadvantage or performance penalty from using native SQL instead of LoopBack's connected models?
Simply put, running raw queries (native SQL) may be faster than using ORMs (models), because ORMs perform validations, build queries from the parameters and filters you provide, and so on, whereas a raw query is simply sent to the connected SQL server over the network without going through all those extra layers. Those differences may be barely noticeable, though.
The industry standard is to use ORMs (models) to access your data, since they protect you from a lot of threats like SQL injection and simple human error by performing comprehensive validations based on the schema. So, simply put, ORMs are safer than raw queries, but it's a matter of choice. I personally fall somewhere in the middle: I use ORM-provided models whenever possible, but I have found some cases where raw queries bring far more simplicity, for example when you have to join more than two tables and manipulate the data within the query itself. That last bit is just my way of doing things, which many people may consider wrong. I suggest using ORMs, i.e. in your case StrongLoop connectors, as much as possible.
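To make the trade-off concrete, here is a small sketch contrasting the two approaches in a LoopBack application (the data source, model and table names are assumptions; the execute signature is the one quoted in the question):

```js
// 1) Raw SQL through the connector: fast and direct, but no validation and
//    you are responsible for parameterization yourself.
app.dataSources.mysqlDs.connector.execute(
  "SELECT id, name FROM Customer WHERE age > ?",
  [21],
  (err, rows) => {
    if (err) throw err;
    console.log(rows);
  }
);

// 2) The same read through a connected model: the ORM builds the query,
//    validates against the schema and guards against SQL injection.
app.models.Customer.find({ where: { age: { gt: 21 } } }, (err, customers) => {
  if (err) throw err;
  console.log(customers);
});
```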
I'm quite new to CouchDB... let's say I have multiple databases in my CouchDB, one per user, and each DB has one config document. I need to add a property to that document across all DBs. Is it possible to update that config document in all databases (without doing it one by one)? If yes, what is the best way to achieve this?
If I'm reading your question correctly, you should be able to update the document in one database, and then use filtered replication to update the other databases (though you can't have modified the document in those other databases, otherwise you'll get a conflict).
Whether it makes sense for the specific use case depends. If it's just a setting shared by all users, I'd probably create a shared settings database instead.
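As a rough sketch of that idea with the nano client: update the document once in a shared source database, then replicate only that document into every per-user database. This uses the doc_ids replication option rather than a filter function, which is simpler when the document ID is known; the URL, credentials, database names and the "config" _id are assumptions for illustration:

```js
// Push a single "config" document from a source database into every other
// database on the server via targeted replication. If a target copy has
// diverged, CouchDB records a conflict instead of overwriting it.
const nano = require("nano")("http://admin:secret@localhost:5984");

async function pushConfigEverywhere() {
  const sourceDb = "config-master";

  for (const dbName of await nano.db.list()) {
    // Skip the source itself and system databases such as _users.
    if (dbName === sourceDb || dbName.startsWith("_")) continue;
    await nano.db.replicate(sourceDb, dbName, { doc_ids: ["config"] });
  }
}

pushConfigEverywhere().catch(console.error);
```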
I do not think that is possible, and I don't think that is the intended use of CouchDB. Even if you have everything in a single database, it's not possible to do it in a quick way (there is no equivalent of a SQL UPDATE ... WHERE statement).
What's the point of using the driver and JavaScript if we can perform the same query operations a lot easier from the Mongo shell?
In theory, anything a piece of code does could also be achieved through a good shell.
So why do we actually stay away from the shell at all costs?
Security concerns: when an application uses a shell to perform operations, it is very vulnerable to exploits.
Configuration: what if the server does not have the needed client, or the client is the wrong version?
The driver handles many edge cases you might not notice at first glance: connection loss handling, multiple connections, and so on.
Briefly, think of the shell as a user interface for administrators. It might be powerful enough for a task, but as a developer you would rather bypass this middleman and communicate directly with the server.
If you program in a certain language (say Java), it's much easier to use the Java driver to access MongoDB than to call the MongoDB shell from Java and execute commands that way (through the shell). The same applies to the JavaScript language, and the Node.js JavaScript host environment in particular. That's why using a driver makes sense.
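For example, a minimal Node.js driver equivalent of a simple shell query (connection string, database, collection and filter are assumptions for illustration):

```js
// Driver equivalent of running  db.users.find({ age: { $gt: 21 } })  in the
// mongo shell, from inside a Node.js application.
const { MongoClient } = require("mongodb");

async function main() {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();
  try {
    const users = await client
      .db("test")
      .collection("users")
      .find({ age: { $gt: 21 } })
      .toArray();
    console.log(users);
  } finally {
    await client.close();
  }
}

main().catch(console.error);
```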
Actually this whole thing applies not just to MongoDB but to relational databases too (like MySQL, Oracle, etc.).