I need to query some data in several ways. Is it enough to use just one NSFetchedResultsController for multiple types of queries, or do I need one per query type?
If you need the data queried in various ways at the same time, I would suggest using multiple NSFetchedResultsControllers. That way you can react to changes in the data for each query.
I am using DynamoDB to store my data. I am creating a dashboard application where users can sort by fields, search by fields, and add multiple filters at once. There will be approx 100 - 1000 entries in the table.
To implement this search, filter, and sort functionality, there are two approaches I can take:
Use FilterExpression. A simple solution; however, it requires ALL the data to be pulled before filtering (not a 'true' query), requires more server-side processing, and FilterExpression is often seen as bad practice.
Create GSIs for each field individually. This allows me to search and sort by fields using a true query, reducing server-side processing, since I can directly get the items I need. The issue with this is adding multiple filters, as it is not possible to use multiple GSIs in a single query call. If I had multiple filters, this approach would require multiple query calls and manually aggregating / finding the common items on the client side.
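For illustration, the two options differ mainly in the request parameters sent to DynamoDB. Here is a minimal sketch using the AWS SDK for JavaScript DocumentClient parameter shapes; the table name, attribute name, and index name are hypothetical:

```javascript
// Option 1: Scan + FilterExpression -- reads the whole table and filters
// server-side after the read, so capacity is consumed for every item scanned.
function buildFilterScan(status) {
  return {
    TableName: 'DashboardEntries',            // hypothetical table name
    FilterExpression: '#st = :status',
    ExpressionAttributeNames: { '#st': 'status' },
    ExpressionAttributeValues: { ':status': status },
  };
}

// Option 2: Query against a GSI on the field -- a true query, but only one
// index can be used per call, hence the multiple-filter problem.
function buildGsiQuery(status) {
  return {
    TableName: 'DashboardEntries',
    IndexName: 'status-index',                // hypothetical GSI name
    KeyConditionExpression: '#st = :status',
    ExpressionAttributeNames: { '#st': 'status' },
    ExpressionAttributeValues: { ':status': status },
  };
}

// These objects would be passed to docClient.scan(...) / docClient.query(...).
```

With only 100-1000 entries, the scan behind option 1 touches very little data, which is one reason FilterExpression is often tolerated at this scale.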
Would it be acceptable to use FilterExpression in this situation? It would simplify the process so much from a coding / maintenance perspective, but I am unsure if it's good practice. If GSIs are the better option, how would you deal with multiple filters?
Lastly, would there be a better approach, aside from the two options listed above?
Thanks so much in advance!
Honestly, sorting/searching is where DDB falls down.
With the amount of data you're talking about, I'd simply use Aurora.
At scale, assuming you hit the limits of Aurora, you'd be better served by front-ending DDB with Elasticsearch.
I am new to the NoSQL concept, so when I started to learn PouchDB I found this conversion chart. My confusion is: how does PouchDB handle it if, let's say, I have multiple tables? Does it mean that I need to create multiple databases? From my understanding, a PouchDB database can store a lot of documents, but does a document correspond to a row in SQL, or have I misunderstood?
The answer to this question seems to be surprisingly under-documented. While #llabball clearly gave a decent answer, I don't think that views are always the way to go.
As you can read here in the section When not to use map/reduce, Nolan explains that for simpler applications, the key is to abuse _ids, and leverage the power of allDocs().
In other words, if you had two separate types (say artists and albums), then you could prefix the id of each type to obtain an easily searchable data set. For example, _id: 'artist_name' and _id: 'album_title' would allow you to easily retrieve artists in name order.
Laying out the data this way will result in better performance (no extra indexes are required) and less code. If your data requirements are more complex, however, then views are the way to go.
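As a sketch of the _id-prefix idea (the 'artist' type and the '_' separator are illustrative choices), fetching all artists in name order becomes a single key-range call to allDocs():

```javascript
// Build the allDocs() options that select every doc whose _id starts with a
// given type prefix, e.g. 'artist_'. '\ufff0' is a high Unicode code point
// commonly used as the end-of-range sentinel for CouchDB/PouchDB keys.
function byTypePrefix(type) {
  return {
    include_docs: true,
    startkey: type + '_',
    endkey: type + '_\ufff0',
  };
}

// Usage (assuming a PouchDB instance named db):
//   db.allDocs(byTypePrefix('artist')).then(result => { /* ... */ });
// Rows come back sorted by _id, i.e. by artist name.
```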
... does it mean that i need to create multiple databases?
No.
... a document mean a row in sql or am i misunderstood?
That's right. An SQL table defines the column headers (name and type); these correspond to the JSON property names of the doc.
So all docs (rows) with the same properties (a so-called "schema") are the equivalent of your SQL table. You can have as many different schemata in one database as you want (visit json-schema.org for some inspiration).
How to request them separately? Create CouchDB views! You can get all/some "rows" of your tabular data (docs with the same schema) with one request as you know it from SQL.
To make writing such views easy, a type property is very common in CouchDB docs. The table name you know from SQL can become your type, e.g. doc.type: "animal".
Your view names might then be animalByName or animalByWeight, depending on your needs.
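A minimal map function for such a view could look like this (the doc.type value and the emitted key are assumptions; in CouchDB itself emit is supplied by the view engine, and passing it in explicitly here just makes the function easy to test):

```javascript
// Map function for a hypothetical 'animalByName' view: emits one row per doc
// of type 'animal', keyed by name, so rows come back sorted by name.
function animalByNameMap(doc, emit) {
  if (doc.type === 'animal') {
    emit(doc.name, null);
  }
}
```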
Sometimes a multiple-database plan is a good option, such as a database per user, or even a database per user feature. Take a look at this conversation on the CouchDB mailing list.
Normally if you have a 1 to many relationship in Core Data I understand that you should set that up as a relationship in the data model.
In this case, it is difficult to do because of the origin and management of the data.
I'm trying to essentially accomplish a join.
I'd like to fetch an entity A which meets some criteria on A, but also meets criteria involving B.code and another attribute of B.
The equivalent SQL statement would be:
select attributeFromA from A, B where A.code = B.code and B.attrib = "foo"
Is there a reasonable way to accomplish this without creating a relationship in core data?
I've only found two solutions, neither very good.
From what I've read, Core Data does not support a query against multiple entities unless they have a relationship between them.
Add a relationship anyway. This can be particularly bad since the data is coming from a server: there is no easy way to maintain relationships when individually updating each table from the server, and the relationships need to be recreated whenever the data changes.
Manually perform the join outside of Core Data. In the above case, the intent is to get the set of object identifiers ('code') that match. One way to do that is to perform separate queries and then take their intersection. Set up each query to retrieve only 'code', not managed objects.
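The aggregation step of option 2 is just a set intersection on the fetched codes. Sketched generically (the two Core Data fetches themselves are omitted; the function and parameter names are illustrative):

```javascript
// Given the 'code' values returned by the two separate fetches, keep only the
// codes present in both result sets, preserving the order of the first fetch.
function intersectCodes(codesFromA, codesFromB) {
  const b = new Set(codesFromB);   // Set gives O(1) membership checks
  return codesFromA.filter(code => b.has(code));
}
```

A final fetch restricted to the surviving codes (e.g. via an IN-style predicate) would then load the actual A objects.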
I am new to RavenDB and I'm not sure how to address this issue.
I have a document store with around 200 different document types. Each type can contain thousands of documents.
In my business logic all the different document types are treated the same - they can be all mapped to a generic object such as a DataTable.
I would like to query all the properties of all the documents from all types in a single free text search. What is the best way to do that?
You can do this using multi maps. Take a look at this post:
http://ayende.com/blog/156225/relational-searching-sucks-donrsquo-t-try-to-replicate-it
I have a Solr instance that gets and indexes data about companies from a DB. The DB data about a single company can be provided in several languages (English and Russian, for example). All the companies, of course, have a unique key that is the uniqueKey in the Solr index too. I need to present Solr search across all the languages at once.
How can it be performed?
1. Multicore? I've built two separate cores, one per language's data, but I can't search in the two indexes simultaneously.
localhost:8983/solr/core0/select?shards=localhost:8983/solr/core0/,localhost:8983/solr/core1/&indent=true&q=*:*&distributed=true
or
localhost:8983/solr/core0/select?shards=localhost:8983/solr/core0/,localhost:8983/solr/core1/&indent=true&id:123456
gives no results, while searching in each core individually is successful.
Making the Name field (for example) multivalued is not a solution, because the data for the different languages is fetched from the DB by different procedures, and the value just gets overwritten.
I'm not sure about the multicore piece, but have you considered creating two fields in a single core - one for each language? You could then combine with an "OR" which is the default, so a query for:
en:"query test here" OR ru:"query test here"
would be an example
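Building that combined query string programmatically is straightforward. A sketch, assuming the two fields are named en and ru as in the example above:

```javascript
// Combine the same user input across per-language fields with OR (Solr's
// default operator, written explicitly here for clarity).
function multiLanguageQuery(text, fields) {
  const quoted = JSON.stringify(text); // wraps the text in double quotes and escapes embedded quotes
  return fields.map(f => `${f}:${quoted}`).join(' OR ');
}
```

The resulting string would be passed as the q parameter of the select request.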
Sounds like you are probably using the DataImportHandler to load your data. You can implement Mike Sokolov's answer, or pursue the multivalued solution via a Solr client. You would need to write some custom code in a client like SolrJ (or one of the other clients listed on IntegratingSolr in the Solr Wiki) to pull both languages in separate queries from your database, and then parse the data from both results into a common result set that can be transformed into a single Solr document.