Is it possible to get a list of all the views of a database in CouchDB using dscape/nano? The closest I can get with just a curl request is this:
http://URL/DBNAME/_all_docs?key="_design/views"&include_docs=true
The above returns all the views, including the JavaScript functions, but I would like to extract only the view names.
In newer CouchDB versions, you can use _design_docs to list only the design documents (which hold the views):
GET /dbname/_design_docs
This gets you the wanted list much faster than going through all the docs (_all_docs).
See section 1.3.3, /{db}/_design_docs, of the official documentation.
Note: The documentation as of today states this is new in CouchDB version 2.2, but I successfully tested it on 2.1.
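Against a database containing a single design document named views, the response looks roughly like this (a sketch; the rev value is made up):
{"total_rows": 1, "offset": 0, "rows": [
  {"id": "_design/views", "key": "_design/views", "value": {"rev": "1-967a00dff5e02add41819138abb3284d"}}
]}
Note this lists design documents, not individual views; the view names are the keys of each design document's views object, so you still need include_docs=true (or a second fetch) to read them.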
Unfortunately, the only possible way of doing this is by extracting the view names from the result of the query you've included in your question. Futon does it this way when populating its drop-down list of views, so I think it is safe to assume this is the only solution.
You may also want to change your query to the following to include all design documents, instead of just the one named views:
GET /dbname/_all_docs?startkey="_design/"&endkey="_design0"&include_docs=true
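With nano, the same query plus the name extraction could look roughly like this (a minimal sketch, assuming a local CouchDB and a database named dbname; the view names are just the keys of each design document's views object):
var nano = require('nano')('http://localhost:5984'); // assumed CouchDB URL
var db = nano.use('dbname');

// fetch all design documents and collect the names of their views
db.list({ startkey: '_design/', endkey: '_design0', include_docs: true }, function (err, body) {
  if (err) throw err;
  var viewNames = [];
  body.rows.forEach(function (row) {
    // each design doc keeps its views under the "views" key; the keys are the names
    Object.keys(row.doc.views || {}).forEach(function (name) {
      viewNames.push(row.doc._id.replace('_design/', '') + '/' + name);
    });
  });
  console.log(viewNames); // e.g. [ 'views/by_name', 'views/by_date' ]
});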
I have several documents in a Solr collection that I want to be able to search through. Most of the data comes from web sites I can easily crawl; however, some attributes are not present on those sites and I have to add them manually.
So, as an example, I get the following info from a site (all attributes returned by the crawled site):
Name: Porsche Boxter
Year: 1996
...
I want to add additional fields through a web interface (info not present on crawled sites):
Cool: yes
foo: bar
My questions:
Does it make sense at all to store the additional information along with the indexed data in Solr (inside the documents), or would best practice be to keep only the crawled data in Solr and merge with an externally managed database at query time? To me it makes more sense to have all the data that is eventually queried in Solr, as some of the manually added attributes are required search criteria (e.g. look only for cool cars from the 90s).
Is it possible to use Solr to store additional information about indexed documents? I know the entire schema in advance; perhaps this is useful?
If I store my data exclusively in Solr, how can I ensure that the manually added data is not overwritten during the next crawl? Would a partial update be required?
Since I am new to Solr, it would also be very helpful if someone could point out what to look for in the documentation that covers my use case.
That depends on how often the external data changes; the more often it changes, the less sense this makes. Generally it is a good idea to store such data alongside the indexed data, because you get it back without an additional database query.
Yes. Use indexed:false and stored:true. If you did not know all of these fields in advance, you could use a dynamicField like <dynamicField name="*_stored" type="string" indexed="false" stored="true" />.
Yes, you have to use a partial (atomic) update. That is no problem in your case, because the fields that are not updated have stored:true.
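For the partial update itself, Solr 4+ supports atomic updates; note they require the update log to be enabled in solrconfig.xml and, in practice, all fields to be stored. A sketch, with a made-up document id and field names matching the dynamicField above:
curl 'http://localhost:8983/solr/update?commit=true' -H 'Content-Type: application/json' -d '
[{"id": "porsche-boxster-1996",
  "cool_stored": {"set": "yes"},
  "foo_stored":  {"set": "bar"}}]'
The set modifier rewrites only the listed fields and preserves the rest of the document, so the crawler would need to use atomic updates as well if it must not clobber the manual fields.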
I'm new to Orchard but from various blog posts and SO answers it seems that Lists would help me achieve what I want. Unfortunately it seems that Lists have now been deprecated.
The documentation at http://docs.orchardproject.net/Documentation/Creating-lists suggests using Taxonomies instead. Is it possible to achieve the functionality of a List using Taxonomies?
You can use taxonomies to produce a list of content items: create your taxonomy, add some terms to it, attach a taxonomy field to your content type, and in the content type settings select the taxonomy you want displayed. Create a content item with a taxonomy term selected, then go to the URL:
/taxonomy-name/selected-taxonomy-item
and it should display a list of all items with that term selected.
However, lists were deprecated and replaced by Projections, so I would recommend using those. They are part of core and can be enabled from the back end. You then create a Query that your Projection uses; this query determines which items you want displayed. It is pretty nice and simple, you just need to play around with it. Let me know if you need more details, or googling Orchard Projections will probably produce some useful links if you need to do something more complex with it.
How can I get the last created document in CouchDB? Maybe somehow I can use the _changes feature of CouchDB? But the documentation says that I can only get a list of documents ordered from the first created, with no way to change the order.
So how can I get the last created document?
You can get the changes feed in descending order, since it supports the same query options as a view.
GET /dbname/_changes?descending=true
You can use limit= as well, so:
GET /dbname/_changes?descending=true&limit=1
will give the latest update.
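The response then contains a single row for the most recently updated document, shaped roughly like this (a sketch; the seq and rev values are made up):
{"results": [
  {"seq": 42, "id": "mydoc", "changes": [{"rev": "3-7379b9e515b161226c6559d90c4dc49f"}]}
], "last_seq": 42}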
Your only surefire way to get the last created document is to include a timestamp (created_at or similar) in your documents. From there, you just need a simple view to sort all the docs by creation date.
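A minimal sketch of such a design document, assuming every document carries a created_at timestamp in a sortable format (ISO 8601, for example):
{
  "_id": "_design/docs",
  "views": {
    "by_created_at": {
      "map": "function (doc) { if (doc.created_at) { emit(doc.created_at, null); } }"
    }
  }
}
The newest document is then one request away:
GET /dbname/_design/docs/_view/by_created_at?descending=true&limit=1&include_docs=true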
I was going to suggest using the last_seq information from the database, but the sequence number changes with every single write, and replication also complicates the matter further.
I'm using haystack with whoosh for development purposes.
I want search results based on django models to be filtered by the user that created them.
Please see my other post Filter haystack result with SearchQuerySet for details.
Basically I had to add User to my search index. But I noticed that when I manually change the user_id of a record, search breaks. After thinking about it, this even makes sense. But this means I would have to rebuild the index after each field update in each model? Surely that doesn't scale at all?
I thought the engine would find the object by id, look it up in the database, and return a current instance for further processing like filtering. It seems instead that everything is cached in the index, so it must be synchronized in real time for search results to show up? Am I missing something here?
This documentation helped shed some light:
http://docs.haystacksearch.org/dev/searchindex_api.html
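The short version: the index stores a denormalized copy of each object, and filtering runs against that copy, not against the live database, so the index does have to be kept in sync. You don't need a full rebuild after every change, though. Assuming a standard haystack setup, an incremental reindex would look like:
./manage.py update_index --age=24
which reindexes only objects updated in the last 24 hours (your SearchIndex needs a get_updated_field for that), while a realtime setup (RealTimeSearchIndex in haystack 1.x, or HAYSTACK_SIGNAL_PROCESSOR = 'haystack.signals.RealtimeSignalProcessor' in 2.x) reindexes each object as it is saved.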
We have a SharePoint 2010 environment with Document IDs enabled.
Given (part of) a Doc ID, we want to programmatically retrieve the document(s) matching that ID. The problem seems to be that this column is rather special and may need particular handling.
Using an SPSiteDataQuery, fetching the _dlc_DocId field as part of the ViewFields works fine. However, including it in the Where clause never results in any documents being fetched.
Using the Search API has gotten us nowhere at all.
Has anyone pulled this off, or any suggestions on how to tackle this problem?
[Update] Turns out we were fooled by subtle errors in the XML and bad debugging misinterpretations. This stuff just works fine.
I don't normally contribute to these sorts of things because cleverer people than I always get there before me, but as this is an old question with no proper answer, I'll add my thoughts for those who find this page.
I was struggling with this, but after a little digging around and learning a bit of CAML I got it working.
I am using the SharePoint Client Object Model against SharePoint 2010 and Office365 beta.
Start off your query by looking at the all list items query:
Microsoft.SharePoint.Client.CamlQuery.CreateAllItemsQuery().ViewXml
"<View Scope=\"RecursiveAll\">\r\n <Query>\r\n </Query>\r\n</View>"
Stick a Where child inside the Query element, then add in:
<Eq><FieldRef Name="_dlc_DocId" /><Value Type="Text">MDXC2KE55ASN-3-80</Value></Eq>
replacing MDXC2KE55ASN-3-80 with the Doc ID you are looking for.
You might also want to make use of these:
<ViewFields><FieldRef Name="_dlc_DocId" /></ViewFields>
<RowLimit>1</RowLimit>
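Put together, the whole ViewXml ends up looking like this:
<View Scope="RecursiveAll">
  <Query>
    <Where>
      <Eq><FieldRef Name="_dlc_DocId" /><Value Type="Text">MDXC2KE55ASN-3-80</Value></Eq>
    </Where>
  </Query>
  <ViewFields><FieldRef Name="_dlc_DocId" /></ViewFields>
  <RowLimit>1</RowLimit>
</View>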
Then use the List.GetItems() method to bring back the ListItemCollection.
Just in case nobody comes up with a slick solution from the depths of the SharePoint infrastructure:
What would Google do?
Slice it, dice it, and dump it in an inverted index.
Solr and Lucene offer supreme tools for this. The idea is to cut the Doc IDs into small pieces and add the location of the document to the bucket for each piece.
Say we have "A real nice document" with ID ABCD123. You would add it to the buckets:
ABCD, BCD1, CD12, D123
When searching for a partial ID (plus other data like dates, types, ...), you (well, the search engine) create the union of the buckets and apply the additional constraints.
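Rather than hand-rolling the buckets, Solr can do the slicing at index time with an n-gram filter. A sketch of a schema.xml field type, assuming you want grams of 4 to 10 characters (partial-ID queries then have to fall in that length range):
<fieldType name="docid_ngram" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.NGramFilterFactory" minGramSize="4" maxGramSize="10"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
  </analyzer>
</fieldType>
Index the Doc ID into a field of this type, and a query for any 4-to-10-character fragment will hit the matching gram.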
To make this happen you need to write a spider for the SharePoint server and a routine that turns each document into a record of data elements to be indexed.
Put a nice REST interface in front of it (actually Solr already has that), integrate it into the main SharePoint server, and nobody needs to know there is something else running behind it.
These products can also incrementally update the indexes, so they can be kept up to date.
You could use the following to get the Document ID:
SPFile file = MethodToUploadFileToServer(web, filepath); // your own upload routine
SPListItem item = file.Item;
// the Document ID is stored in the item's property bag under _dlc_DocId
string DocID = item.Properties["_dlc_DocId"].ToString();