My document in Cassandra is:
{
  "id": "1234",
  "vowels": "aeu",
  "alpha": [
    "xyz",
    "efgh"
  ],
  "new": [
    { "name": "nam1" },
    { "name": "nam2" }
  ]
}
I am using the Stargate Document API to query documents that have an alpha value of xyz. I am trying to query using the where clause where={"alpha.[*]":{"$eq":"xyz"}}. This does not work and returns empty data. Is there a way to search arrays using the Stargate Document API?
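For reference, this is roughly how I am issuing the search (a minimal sketch against the Document API collection-search endpoint; the base URL, namespace, collection, and auth token below are placeholders, not values from my setup):

// Sketch of the failing search, assuming Stargate's Document API v2 path
// GET /v2/namespaces/{namespace}/collections/{collection}?where=...
// STARGATE_URL, NAMESPACE, COLLECTION, and AUTH_TOKEN are placeholders.
const STARGATE_URL = "http://localhost:8082";
const NAMESPACE = "my_namespace";
const COLLECTION = "my_collection";
const AUTH_TOKEN = "REPLACE_WITH_AUTH_TOKEN";

async function searchByAlpha(value: string) {
  const where = JSON.stringify({ "alpha.[*]": { $eq: value } });
  const url =
    `${STARGATE_URL}/v2/namespaces/${NAMESPACE}/collections/${COLLECTION}` +
    `?where=${encodeURIComponent(where)}`;

  const res = await fetch(url, {
    headers: { "X-Cassandra-Token": AUTH_TOKEN },
  });
  return res.json(); // currently comes back with empty data instead of the matching document
}

searchByAlpha("xyz").then(console.log);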
This appears to be an issue with the Stargate API.
I'm currently discussing it with the Stargate engineers and I will post an update once I have something more concrete. Cheers!
[UPDATE] I've received feedback from the Stargate developers and logged issue #1247 to address this problem. Thank you for bringing it to our attention. 🍻
Related
I created a materialized view on DataStax Cassandra. When I use the Stargate REST API to get data from the view, it returns:
{
  "description": "Resource not found: table 'view_user_by_status' not found",
  "code": 404
}
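For context, the request that produces this 404 is an ordinary row search against the view's name. A rough sketch of that call, assuming the REST API v2 path GET /v2/keyspaces/{keyspace}/{table}?where=... and a status column on the view (the base URL, keyspace, column name, and token are placeholders):

// Sketch only: querying the materialized view as if it were a table.
// STARGATE_URL, KEYSPACE, the status column, and AUTH_TOKEN are placeholders.
const STARGATE_URL = "http://localhost:8082";
const KEYSPACE = "my_keyspace";
const AUTH_TOKEN = "REPLACE_WITH_AUTH_TOKEN";

async function readUsersByStatus(status: string) {
  const where = JSON.stringify({ status: { $eq: status } });
  const url =
    `${STARGATE_URL}/v2/keyspaces/${KEYSPACE}/view_user_by_status` +
    `?where=${encodeURIComponent(where)}`;

  const res = await fetch(url, { headers: { "X-Cassandra-Token": AUTH_TOKEN } });
  return res.json(); // -> { description: "Resource not found: ...", code: 404 }
}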
At this time, no. I've been meaning to log a GitHub issue for this; it should be a pretty simple change. I'll get an issue logged and update this answer with the link.
EDIT: Here's the issue.
I ran into this error while building a Logic App in Azure. The solution from DocumentDB REST API: PartitionKey extracted from document doesn't match is not working for me.
My Logic App receives a POST request with the raw JSON data and then sends it to a Cosmos DB "Create or Update Document" step. There I am able to specify my DB, and my inputs have a body and headers like so:
...
"inputs": {
  "body": "#triggerBody()",
  "headers": {
    "x-ms-documentdb-partitionkey": "#triggerBody()?['date']"
  },
  ...
}
...
My JSON data looks like this:
{
  "id": "20190106",
  "date": "20190106",
  ...
}
In the error output, it literally shows my PartitionKey as 20190106, so the #triggerBody()?['date'] seems to have worked.
Any ideas?
Don't forget to add Content-Type: application/json to the POST request (mentioned in this thread).
[Screenshots omitted: the Cosmos DB connector configuration, the POST request, and the resulting output.]
If your partition key is date, then try passing the value of date as ["20190106"] (use the square brackets) in the partition key header. This worked for me two days ago using the REST API for Cosmos DB.
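In other words, the header value must be a JSON array wrapping the partition key value, not the bare string. A minimal sketch of building that header (the document shape is taken from the question; nothing else here is specific to the Logic App):

// The x-ms-documentdb-partitionkey header expects a JSON array containing the
// partition key value, e.g. ["20190106"], not the bare string "20190106".
const doc = { id: "20190106", date: "20190106" };

const headers = {
  "Content-Type": "application/json",
  "x-ms-documentdb-partitionkey": JSON.stringify([doc.date]),
};

console.log(headers["x-ms-documentdb-partitionkey"]); // ["20190106"]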
The answer is that Logic Apps do not support the Mongo API. There is a workaround: the Logic App can pass data from one endpoint to a serverless function that then writes to Mongo.
Problems inserting document with Mongodb and Logic Apps
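If you go the serverless-function route, the shape of the workaround is roughly this (a sketch only, using an HTTP-triggered Azure Function with the @azure/functions v4 Node model and the MongoDB driver; the function name, database/collection names, and connection string are placeholders):

// The Logic App POSTs the document to this HTTP-triggered function,
// which writes it to MongoDB. Names and the connection string are placeholders.
import { app, HttpRequest, HttpResponseInit } from "@azure/functions";
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGO_CONNECTION_STRING ?? "");

app.http("writeToMongo", {
  methods: ["POST"],
  authLevel: "function",
  handler: async (request: HttpRequest): Promise<HttpResponseInit> => {
    const doc = (await request.json()) as Record<string, unknown>;
    await client.connect(); // safe to call repeatedly with the current driver
    const result = await client.db("mydb").collection("docs").insertOne(doc);
    return { status: 201, jsonBody: { insertedId: result.insertedId } };
  },
});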
Simply put, I have an AngularJS client. He wants every API to follow a CRUD architecture. For example:
GET /user
GET /user/:id
POST /user
PUT /user/:id
DEL /user/:id
These are all the endpoints he wants to use for my schema (using a MongoDB database).
I have a user schema like this (simplified):
{
  id: ObjectId("..."),
  name: "Foo fooer",
  itemIds: [
    ObjectId("..."),
    ObjectId("..."),
    ObjectId("...")
  ]
}
and an Items schema (no need to show it for this question).
We need to add/remove item IDs from user.itemIds.
The client wants to create a new userItems schema:
{
  id: ObjectId("..."),
  userId: ObjectId("..."),
  itemID: ObjectId("...")
}
and he wants to remove user.itemIds from the user schema and create 4 CRUD endpoints for /userItems.
I think this is the wrong approach, since it normalizes the Mongo database.
But I don't know which of these is better from both sides (client and server):
1) Create 2 endpoints, POST /UserItem and DEL /UserItem, to update items in user.itemIds.
2) Update user.itemIds using the existing PUT /user endpoint, but then the client needs to send the whole array of item IDs to update it (if there are many, this is probably a bad approach).
The client says these 2 approaches are bad, and he only knows his SQL REST architecture (where everything is normalized). How can I show him that he is wrong? Because he said this to me:
"The server should adapt to the client and not vice versa."
Thank you.
This is a good article about RESTful APIs.
In brief, your RESTful API should:
Focus on resources
Make sense
Be consistent
For example:
When you want to add an item to a user, find the user first, not the other way around. So POST /users/:userId/items/ to add a new item to a user is good, or DELETE /users/:userId/items/:itemId to remove an item from the user.
When you want to find all the users who have a specific item: GET /items/:itemId/users/.
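To make that concrete, here is a rough sketch of those nested-resource endpoints on top of the existing user.itemIds array, using Express and the official MongoDB Node.js driver (the database name, collection name, and port are made up for the example):

// Sketch only: add/remove a single item ID on user.itemIds without the client
// ever re-sending the whole array. Names below are illustrative.
import express from "express";
import { MongoClient, ObjectId } from "mongodb";

const app = express();
app.use(express.json());

const client = new MongoClient("mongodb://localhost:27017");
const users = client.db("app").collection("users");

// POST /users/:userId/items -> add one item ID to the user's itemIds
app.post("/users/:userId/items", async (req, res) => {
  const result = await users.updateOne(
    { _id: new ObjectId(req.params.userId) },
    { $addToSet: { itemIds: new ObjectId(req.body.itemId) } }
  );
  res.status(result.matchedCount ? 204 : 404).end();
});

// DELETE /users/:userId/items/:itemId -> remove one item ID from the user's itemIds
app.delete("/users/:userId/items/:itemId", async (req, res) => {
  const result = await users.updateOne(
    { _id: new ObjectId(req.params.userId) },
    { $pull: { itemIds: new ObjectId(req.params.itemId) } }
  );
  res.status(result.matchedCount ? 204 : 404).end();
});

client.connect().then(() => app.listen(3000));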
The YouTube API v3 enables developers to retrieve specific information about a particular video; among the returned data, a part called topicDetails can be requested, as in the following example:
Request:
https://www.googleapis.com/youtube/v3/videos?id=7lCDEYXw3mM&part=snippet,topicDetails
Response:
{
  "kind": "youtube#videoListResponse",
  "etag": "\"r3ahjFekUqNiL0By9B5wQ2uTZHM/i4Bt9XfY8YZ1ctSlg8BWcLD8HFQ\"",
  "items": [
    {
      "id": "7lCDEYXw3mM",
      "kind": "youtube#video",
      "etag": "\"r3ahjFekUqNiL0By9B5wQ2uTZHM/hYUGsnkhqATV4OXNG43HIObqlyw\"",
      "snippet": {
        "title": "Google I/O 101: Q&A On Using Google APIs",
        [ some properties have been omitted here ],
        "categoryId": "28"
      },
      "topicDetails": {
        "topicIds": [
          "/m/045c7b",
          "/m/0z5n",
          "/m/04136kj"
        ]
      }
    }
  ]
}
The topicIds field is machine-encoded, and topic information can be retrieved through the Freebase API (using these identifiers), which is being deprecated. I was unable to locate any update in the YouTube documentation regarding the impact of the Freebase API deprecation. As I manage a large Node.js code base that relies heavily on YouTube topicDetails, I would like to ask: is there any known programmatic alternative for recovering topic information from stable sources other than Freebase?
As part of the Freebase retirement announcement, Google said that they would be offering a couple of new Knowledge Graph APIs and that there would be a 3-month overlap between the availability of these new services and the shutdown of Freebase.
Presumably Google services like YouTube that depend on Freebase MIDs will transition to the new API (perhaps with unchanged identifiers), but not much is known concretely because a) Google is overdue with the new APIs and b) they haven't said anything else about the transition process.
UPDATE: The Google Knowledge Graph Search API became available in December 2015 and provides support for looking things up by KG ID (which is the same as the old Freebase ID used by YouTube).
I have a Logic App with a Twitter connector and a Dropbox connector. The latter has a repeater, which loops over the Twitter body and uploads a text file in each iteration with the Tweet_ID as the file name. The Dropbox connector frequently returns conflict errors; it seems the Twitter connector keeps returning the same tweets again and again, tweets that have already been processed, which results in duplicate file names.
When I look at the output of the Dropbox connector, this is the body it returns:
"body": {
"status": 409,
"source": "api-content.dropbox.com",
"message": "conflict_file"
}
You have probably seen this page, https://azure.microsoft.com/sv-se/documentation/articles/app-service-logic-use-logic-app-features/, where they show how to do this.
Have you checked that you don't supply the same Tweet_ID several times? The Logic App JSON format is a bit tricky right now, with not much documentation.
/dag
You are right. The Twitter connector doesn't "remember" the tweets that are returned from a search; it will return the same ones again. (Just to be clear, we are discussing the Twitter connector's "Search Tweets" action.)
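Until that changes, one workaround is to de-duplicate on Tweet_ID yourself before uploading, skipping IDs that have already been written. A rough sketch of that idea (how and where you persist the set of processed IDs is up to you; the field names are illustrative):

// De-duplicate tweets by ID before uploading, so a repeated tweet never
// produces a second file with the same name (and thus no 409 conflict_file).
const processedTweetIds = new Set<string>();

interface Tweet {
  Tweet_ID: string;   // illustrative field names
  TweetText: string;
}

async function uploadNewTweets(
  tweets: Tweet[],
  upload: (fileName: string, content: string) => Promise<void>
): Promise<void> {
  for (const tweet of tweets) {
    if (processedTweetIds.has(tweet.Tweet_ID)) continue; // already handled earlier
    await upload(`${tweet.Tweet_ID}.txt`, tweet.TweetText);
    processedTweetIds.add(tweet.Tweet_ID);
  }
}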