Can someone explain the differences between, or the reasons for, the two methods ClientContext.Load and (for list items) ListItem.RefreshLoad()? Is there a difference?
Why does ClientContext have no equivalent .Update or .Delete methods?
And when do I have to call the ClientContext.ExecuteQuery method?
ListItem item = ...;
// 1. Is there a difference between ClientContext.Load(ListItem) and ListItem.RefreshLoad()?
clientContext.Load(item);
item.RefreshLoad();
// 2. Why aren't there methods like ClientContext.Update(...) or ClientContext.Delete(...)?
item.Update();
item.DeleteObject();
// 3. When is the ClientContext.ExecuteQuery needed (load / update / delete)?
clientContext.ExecuteQuery();
Thank you!
The main thing to realize is that the client object model is designed to be asynchronous from the get-go.
Think of your client context object as a vessel for sending instructions and receiving data. The .Load() method queues up instructions; .Load(item), for example, queues the instructions to retrieve data about a given list item.
The .ExecuteQuery() and .ExecuteQueryAsync() methods send those queued instructions and retrieve the results from the server.
Those operations are different from the operations you can perform against actual SharePoint objects, such as lists and list items. Consider this example from Microsoft:
ListItemCreationInformation itemCreateInfo = new ListItemCreationInformation();
ListItem newListItem = targetList.AddItem(itemCreateInfo);
newListItem["Title"] = "New Announcement";
newListItem["Body"] = "Hello World!";
newListItem.Update();
clientContext.Load(newListItem);
clientContext.ExecuteQuery(); // only at this point is the item actually created
When you create a ListItem object in the client object model, all you're doing is creating an object in local memory; you haven't sent anything to the server yet to actually create an item in the list. The ListItem object is just a placeholder, and anything you do to it (such as creating it and setting its field values in the example above) is stored as instructions that need to be carried out.
When you load that object into a client context object (via clientContext.Load(newListItem)), you're just feeding those instructions to your client context. Once you run clientContext.ExecuteQuery(), those instructions are carried out and the placeholder object gets populated with the actual relevant data returned from the server.
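To the second and third questions: item.Update() and item.DeleteObject() likewise only queue instructions on the context the item belongs to, which is why ClientContext needs no Update or Delete methods of its own, and a single ExecuteQuery() sends everything queued so far in one round trip. A minimal sketch (the list title and item IDs here are assumptions):
List list = clientContext.Web.Lists.GetByTitle("Announcements"); // assumed list title
ListItem first = list.GetItemById(1);  // placeholder item IDs
ListItem second = list.GetItemById(2);
first["Title"] = "Updated title";
first.Update();               // queued: update the item
second.DeleteObject();        // queued: delete the item
clientContext.Load(first);    // queued: reload the updated item's data
clientContext.ExecuteQuery(); // one network call carries out all queued work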
The documentation says that if you want to get tasks with an immutable ID instead of the normal one, you just add the header Prefer: IdType="ImmutableId". I've done that, but it still returns tasks with the normal ID.
It works fine when I try it with Outlook events, and also if I get an Outlook task by ID (a single task instead of listing all). But as soon as I try getting all tasks with immutable IDs, it doesn't work: there is no error, it just returns the data with the normal IDs.
Also, I know that the Outlook tasks API is being deprecated, but the To Do list API is not going to cut it right now, and I've already tried it; there is no way to retrieve any form of immutable IDs, only normal ones.
This is the code I use to retrieve all tasks (list all tasks) in NodeJS:
let response = await client
.api('/me/outlook/tasks?$top=25000')
.header("Prefer", "IdType=\"ImmutableId\"")
.header('Prefer', `outlook.timezone="${timeZone}"`)
.version('beta')
.get();
It is very weird, because getting one specific task by ID with the Prefer ID-type header set works.
Anyway, here is how the requests look:
LIST OUTLOOK TASKS (GET ALL OUTLOOK TASKS)
GET https://graph.microsoft.com/beta/me/outlook/tasks
GET ONE SPECIFIC TASK VIA ID
GET /me/outlook/tasks/{id}
HEADER FOR GETTING IMMUTABLE IDS INSTEAD OF NORMAL ONES
Prefer: IdType="ImmutableId"
POTENTIALLY HELPFUL
This is the code I use to retrieve all events with immutable IDs (this works, unlike the tasks call):
let response = await client
.api('/me/calendar/events?$top=25000')
.header('Prefer', `outlook.timezone="${timeZone}"`)
.header("Prefer", "IdType='ImmutableId'")
.get();
MS Graph official documentation: How to retrieve a list of outlookTasks
MS Graph official documentation: outlookTask resource type
MS Graph official documentation: event resource type
MS Graph official documentation: Get immutable identifiers for Outlook resources
Okay, so I've found the solution, and it's just ridiculous. If any MS Graph SDK developers see this, please fix it.
Instead of this:
let response = await client
.api('/me/outlook/tasks?$top=25000')
.header("Prefer", "IdType=\"ImmutableId\"")
.header('Prefer', `outlook.timezone="${timeZone}"`)
.version('beta')
.get();
You MUST do this:
let response = await client
.api('/me/outlook/tasks?$top=25000')
.header("Prefer", `IdType="ImmutableId", outlook.timezone="${timeZone}"`)
.version('beta')
.get();
I guess setting the second Prefer header overrides the first one, and consequently only the second one is sent. Unfortunately, I discovered this right after implementing a workaround via OpenTypeExtension.
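For anyone issuing the request outside the JavaScript SDK, the same fix applies at the raw HTTP level: send a single Prefer header whose value carries both preferences, comma-separated. A minimal C# sketch (the access token and timezone value are placeholders):
using var http = new HttpClient();
var request = new HttpRequestMessage(HttpMethod.Get,
    "https://graph.microsoft.com/beta/me/outlook/tasks?$top=25000");
request.Headers.Authorization =
    new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", accessToken); // placeholder token
// Both preferences in ONE Prefer header, comma-separated:
request.Headers.Add("Prefer", "IdType=\"ImmutableId\", outlook.timezone=\"UTC\"");
var response = await http.SendAsync(request);
string json = await response.Content.ReadAsStringAsync();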
I wrote a transformation in XQuery which unquotes an XML string and inserts an element with its content. This works fine.
I also need to create a collection that depends on the root element of this inserted element. I can't do this for new documents, as xdmp:document-add-collections() is not working. How do I add the collection to new documents in transformations?
Here is my server-side XQuery code:
xquery version "1.0-ml";
module namespace transform = "http://marklogic.com/rest-api/transform/smtextdocuments";

import module namespace mem = "http://xqdev.com/in-mem-update"
  at "/MarkLogic/appservices/utils/in-mem-update.xqy";

(: a REST transform must expose a function named transform in the module namespace :)
declare function transform:transform(
  $context as map:map,
  $params as map:map,
  $content as document-node()
) as document-node()
{
  let $uri := base-uri($content)
  let $doccont := $content/smtextdocuments/documentcontent
  let $newcont := xdmp:unquote($doccont)
  let $contname := node-name($newcont/*)
  let $result :=
    if (exists($content/smtextdocuments/content))
    then mem:node-replace($content/smtextdocuments/content, <content>11{$newcont}</content>)
    else mem:node-insert-after($doccont, <content>{$newcont}</content>)
  let $log := xdmp:log($content)
  return (
    $result,
    xdmp:document-add-collections($uri, fn:string($contname)),
    xdmp:document-remove-collections($uri, "raw")
  )
};
The script is run with the Java API (4.0.4) create method via the ServerTransform parameter. Per the documentation, the transform runs before the document is stored in the database.
It's a new document; I need to transform the content and then assign the collection.
I can see the document after the create, and the content is available; just the collection is missing. I could try the xdmp:document-insert method, but is it correct to write the document while the create is running?
The transform mechanism of the Java API / REST API takes responsibility for the document write. At present, there's no way for the transform to supply collections to the writer. That would be a reasonable request for enhancement.
The transform shouldn't attempt to write the document, because the writer would also attempt to write the same document.
One alternative would be to transform the document in Java before writing it and specify the collection as part of the write request (see the sketch after these alternatives).
Another alternative would be to rewrite the transform as a resource service extension, implement the write within the resource service extension, and modify the Java client to send the document to the resource service extension.
Depending on the model, a final alternative might be to use a range index on an element within the document to collect documents into sets instead of using a collection on the document.
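For the first of those alternatives, the collection can ride along on the write itself: over the underlying REST API, /v1/documents accepts a collection parameter. A hedged C# sketch of that write (host, credentials, document URI, and collection name are all placeholders, and the content is assumed to have been transformed client-side first):
var handler = new HttpClientHandler { Credentials = new NetworkCredential("user", "password") }; // placeholder credentials
using var http = new HttpClient(handler);
string uri = "/docs/smtextdocument1.xml"; // placeholder document URI
string collection = "documentcontent";    // placeholder collection name
var body = new StringContent(transformedXml, Encoding.UTF8, "application/xml"); // already-transformed content
var response = await http.PutAsync(
    "http://localhost:8000/v1/documents?uri=" + Uri.EscapeDataString(uri) +
    "&collection=" + Uri.EscapeDataString(collection),
    body);
response.EnsureSuccessStatusCode(); // the document is written and collected in one request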
Hoping that helps,
What do you mean by "new documents"? Is the document already inserted into the MarkLogic database at the time you are adjusting its collections? If not, you may want to modify your return to ($result, xdmp:document-insert($uri, $result, xdmp:default-permissions(), fn:string($contname))) for that case.
Otherwise, can you edit your question to share more specifically the error or problem you are facing?
It is a pity that REST transforms do not allow this the way MLCP transforms do. Until that changes, you have the options outlined by ehennum, or you can consider deferring the adding of collections to a pre- or post-commit trigger. That adds some overhead, but it sometimes makes perfect sense to do such things in a trigger, since it ensures they are always enforced; a trigger is also a good place for content validation, audit logging, and the like.
HTH!
Using Raven client and server build #30155. I'm basically doing the following in a controller:
public ActionResult Update(string id, EditModel model)
{
var store = provider.StartTransaction(false);
var document = store.Load<T>(id);
model.UpdateEntity(document); // overwrite document property values with those of the edit model
document.Update(store); // tell document to update itself if it passes some conflict checking
}
Then in document.Update, I try to do this:
var old = store.Load<T>(this.Id);
if (old.Date != this.Date)
{
// Resolve conflicts that occur by moving document period
}
store.Update(this);
Now I run into the problem that old gets loaded from memory instead of from the database, so it already contains the updated values. Thus, the conflict check never fires.
I tried working around the problem by changing the Controller.Update method into:
public ActionResult Update(string id, EditModel model)
{
var store = provider.StartTransaction(false);
var document = store.Load<T>(id);
store.Dispose();
model.UpdateEntity(document); // overwrite document property values with those of the edit model
store = provider.StartTransaction(false);
document.Update(store); // tell document to update itself if it passes some conflict checking
}
This results in a Raven.Client.Exceptions.NonUniqueObjectException with the message: Attempted to associate a different object with id
Now, the questions:
Why would Raven care if I try and associate a new object with the id as long as the new object carries the proper e-tag and type?
Is it possible to load a document in its database state (overriding default behavior to fetch document from memory if it exists there)?
What is a good solution to getting the document.Update() to work (preferably without having to pass the old object along)?
Why would Raven care if I try and associate a new object with the id as long as the new object carries the proper e-tag and type?
RavenDB leans on being able to serve documents from memory (which is faster). By checking for existing objects with the same id, hard-to-debug errors are prevented.
EDIT: See Rayen's comment below. If you enable concurrency checking / provide an etag in the Store call, you can bypass the error.
Is it possible to load a document in its database state (overriding default behavior to fetch document from memory if it exists there)?
Apparently not.
What is a good solution to getting the document.Update() to work (preferably without having to pass the old object along)?
I went with refactoring the document.Update method to take an optional parameter with the old date period, since #1 and #2 don't seem possible.
RavenDB supports optimistic concurrency out of the box. The only thing you need to do is enable it:
session.Advanced.UseOptimisticConcurrency = true;
See:
http://ravendb.net/docs/article-page/3.5/Csharp/client-api/session/configuration/how-to-enable-optimistic-concurrency
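Put together, a minimal sketch of the optimistic-concurrency flow (the document type, id, and new date are placeholders):
using (var session = store.OpenSession())
{
    // Ask the server to reject the write if the document changed
    // after this session loaded it (an etag check on SaveChanges).
    session.Advanced.UseOptimisticConcurrency = true;

    var document = session.Load<MyDocument>(id); // MyDocument is a placeholder type
    document.Date = newDate;                     // apply the edit

    session.SaveChanges(); // throws a concurrency exception if the stored etag no longer matches
}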
I am new to Cloudant, but have found it useful for a first stage of IoT data. However, I need to subscribe to changes based on an id field that is separate from _id and is unique to the sensor sending the data. The examples I've seen so far haven't helped with this problem. What I'm doing now is sending a separate JSON doc for each post, so the feed should return new docs with this sensor id. The JSON docs sometimes arrive every second, but hours can pass between them as well.
I'm using C# in a .NET web app. The code below calls the Cloudant database and returns the data I want, based on an index created for the field SensorID:
json =
{
  "selector": {
    "SensorID": "h7365cf3-17bc-4422-b436-f7bcf12b2e2a"
  },
  "fields": [
    "Data"
  ]
}
url = my Cloudant URL + "/_find".
This returns every doc whose SensorID field matches the value in the JSON query, with just the JSON object of each doc nested in the Data field.
using (WebClient client = new WebClient())
{
byte[] postBytes = System.Text.Encoding.UTF8.GetBytes(json.ToString());
client.UseDefaultCredentials = true;
client.Credentials = new NetworkCredential(username, password);
client.Headers[HttpRequestHeader.ContentType] = "application/json";
var response = client.UploadData(url, "POST", postBytes);
JObject iJson = JObject.Parse(client.Encoding.GetString(response));
return parseIncoming(iJson);
}
When the call is a GET to my Cloudant URL + "/_db_updates", it returns information about changes to the whole database. This can be set up as a continuous feed.
I was hoping this meant I could subscribe to document changes to get new data as it arrives, like Redis Pub/Sub. I'm starting to think this might not be the case, but if anybody can show me how to do it, I would be grateful.
As @adasilva70 said, you need to use the _changes feed.
You can filter changes with an appropriate filter function (so that only changes regarding the documents you're interested in show up).
You can get all updates since a given sequence point (everything since the last data you got) and/or you can use long polling or continuous mode for instant notifications.
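Combining those pieces with the question's selector, a hedged C# sketch of polling the _changes feed with the built-in _selector filter (the database URL, credentials, and feed mode are assumptions; feed=continuous with a streaming reader is the alternative for a persistent connection):
string json = @"{ ""selector"": { ""SensorID"": ""h7365cf3-17bc-4422-b436-f7bcf12b2e2a"" } }";
using (WebClient client = new WebClient())
{
    client.Credentials = new NetworkCredential(username, password);
    client.Headers[HttpRequestHeader.ContentType] = "application/json";
    // longpoll blocks until a matching change arrives; since=now skips existing history
    byte[] response = client.UploadData(
        dbUrl + "/_changes?feed=longpoll&filter=_selector&include_docs=true&since=now",
        "POST",
        Encoding.UTF8.GetBytes(json));
    JObject changes = JObject.Parse(Encoding.UTF8.GetString(response));
    // changes["results"] holds the matching changes (with their docs);
    // pass changes["last_seq"] as since= on the next call to resume.
}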
Here is the deal: I have a website that is required to search multiple web services, then join all the results returned and display them mixed together. I've written the code to search a single place at a time:
WsPesquisa pesq = new WsPesquisa();
IEnumerable<Objecto> Resultados = pesq.PesquisaObjecto("URL TO SEARCH", "TEXT TO SEARCH");
Now I need to use threads to search multiple places at once, but I have doubts about how to do so.
Can someone please provide a threading sample that calls the code above multiple times and then joins the results from all threads into a single list of Objecto?
Thanks in advance.
One way to do this is to use a standard LINQ query, and use PLINQ to parallelise it.
Assuming you have your query stored in query, a list of the websites you want to search stored in a variable called sites, and a method SearchSite(string query, string site) that runs the search against a single site, the following should do the trick:
var searchResults = from site in sites.AsParallel()
select SearchSite(query, site);
var resultList = new List<object>();
foreach (var searchResult in searchResults)
{
// process result
resultList.Add(searchResult);
}
This assumes the search query is the same for each site. To break it down:
AsParallel() indicates that you want your LINQ query to be run in parallel
select SearchSite(query, site) - takes your query and runs the SearchSite method on it
PLINQ takes care of waiting for all the results to come in, so you can just process them in a regular foreach loop
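Tying this back to the question's types: since PesquisaObjecto returns an IEnumerable<Objecto> per site, the per-site results can also be flattened straight into one list (a sketch assuming the sites list and search text from the question):
var resultados = sites
    .AsParallel()
    .SelectMany(site => new WsPesquisa().PesquisaObjecto(site, "TEXT TO SEARCH"))
    .ToList(); // a single List<Objecto> with the merged results from every site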