I'm inserting 200 records at a time (as a JSON array) into a custom object using the Salesforce REST API. Example with one record:
{
  "records": [
    {
      "attributes": { "type": "Timecard__c" },
      "Project__c": "a9S1F0000004LHDUA2",
      "Milestone__c": "a9F1F00000007GOUAY",
      "Resource__c": "0031F00000TApKqQAL",
      "Date__c": "2020-08-16",
      "Hours__c": 7,
      "Notes__c": "Did some work"
    }
  ]
}
The first three fields are lookups to other objects. The data I'm given to insert has names for the lookup fields (e.g. Project__c = "Canoe reconstruction", Milestone__c = "Rebuild gunwales", Resource__c = "John Smith").
My current plan is to generate arrays of Projects, Milestones, and Resources containing the Ids and Names, then patch the JSON I have to load.
Does the Salesforce REST API offer a way to set the values of the Lookups to the text name such that it would find the Id on its own or is my current approach the most efficient way to handle this?
Here's the code I'm using for the processed data load...
const fs = require('fs')
const axios = require('axios')

// getAccessToken and salesforceUrl are defined elsewhere
const submitTimecards = async () => {
  const token = await getAccessToken()
  const data = JSON.parse(fs.readFileSync('timecards.json', 'utf-8'))
  const response = await axios({
    method: 'post',
    url: `${salesforceUrl}/composite/sobjects`,
    data,
    headers: {
      'Authorization': `OAuth ${token}`,
      'Content-Type': 'application/json'
    }
  })
  return response
}
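For reference, a rough sketch of that name-to-Id patching step (untested; it assumes Name is unique per object, and that Resource__c points at Contact since the example Id starts with the 003 prefix):

// Untested sketch: build Name -> Id maps with one SOQL query per object,
// then patch the lookup fields before posting.
const queryIdsByName = async (sobject, token) => {
  const soql = encodeURIComponent(`SELECT Id, Name FROM ${sobject}`)
  const res = await axios.get(`${salesforceUrl}/query?q=${soql}`, {
    headers: { 'Authorization': `OAuth ${token}` }
  })
  // Large orgs would need to follow res.data.nextRecordsUrl for further pages.
  return Object.fromEntries(res.data.records.map(r => [r.Name, r.Id]))
}

const patchLookups = async (records, token) => {
  const projects = await queryIdsByName('Project__c', token)
  const milestones = await queryIdsByName('Milestone__c', token)
  const resources = await queryIdsByName('Contact', token) // 003 key prefix = Contact
  return records.map(r => ({
    ...r,
    Project__c: projects[r.Project__c],
    Milestone__c: milestones[r.Milestone__c],
    Resource__c: resources[r.Resource__c]
  }))
}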
Matching by Name is a bit tricky. The SF "natural" way would be to specify a helper field marked as an external id (ideally it'd be marked unique too) and then you can use your references: "Dear Salesforce, I don't care what your internal primary key of that Account record I need to link to is; on my end it's 12345, go do your magic, look it up yourself."
It's in https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_upsert.htm?search_text=patch; look for the example that says "Upserting Records and Associating with an External ID". It might not be very clear, but if you have a SF admin on the team, he/she should know what can be done with the "upsert" operation in Data Loader; the same principles apply. I have an example that upserts multiple objects in one go; it'll be a bit too crazy, but try to read it: https://salesforce.stackexchange.com/a/274696/799
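As a rough sketch of that pattern (untested, and the external ID field names Timecard_Key__c, Project_Key__c, and Milestone_Key__c are invented for illustration), an upsert that resolves the lookups through external IDs would look something like:

PATCH /services/data/v50.0/composite/sobjects/Timecard__c/Timecard_Key__c
{
  "allOrNone" : true,
  "records" : [{
    "attributes" : { "type" : "Timecard__c" },
    "Timecard_Key__c" : "TC-0001",
    "Date__c" : "2020-08-16",
    "Hours__c" : 7,
    "Project__r" : { "attributes" : { "type" : "Project__c" }, "Project_Key__c" : "canoe-reconstruction" },
    "Milestone__r" : { "attributes" : { "type" : "Milestone__c" }, "Milestone_Key__c" : "rebuild-gunwales" }
  }]
}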
Or you could batch multiple requests into one all-or-none API call. It'll be like a series of instructions to SF, not multiple round trips to you and having to cache results somewhere. In that call you could run queries and then use their temporary results in your final request. It'll look a bit like https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_composite_record_manipulation.htm or https://developer.salesforce.com/blogs/tech-pubs/2017/01/simplify-your-api-code-with-new-composite-resources.html (scroll to the "A Simple Example, Now Using Composite!" part)
{
  "compositeRequest" : [{
    "method" : "POST",
    "url" : "/services/data/v38.0/sobjects/Account",
    "referenceId" : "refAccount",
    "body" : {
      "Name" : "My New Account"
    }
  },{
    "method" : "GET",
    "url" : "/services/data/v38.0/query/?q=select+id+from+contact+where+name='Howard+Jones'",
    "referenceId" : "refContact"
  },{
    "method" : "PATCH",
    "url" : "/services/data/v38.0/sobjects/Contact/@{refContact.records[0].Id}",
    "referenceId" : "refContactUpdated",
    "body" : {
      "AccountId" : "@{refAccount.id}"
    }
  }]
}
The downside is that with composite you won't be able to do all 200 in one go:
You can have up to 25 subrequests in a single call. Up to 5 of these
subrequests can be sObject Collections or query operations, including
Query and QueryAll requests.
I'm working with Contentful to create blog posts.
I have created a field named category with dropdown data.
I have created several blogs for each category (e.g. Game has 5 blogs, Tour has 10 blogs, etc.).
I want to show the list of all categories with content counts.
Is it possible to get all of the categories with their content counts? (I can get them by fetching all blogs using this query:
const res = ContentfulService.getEntries({ content_type: 'blog' })
and then grouping by category, but to get just the categories I don't want to fetch all of the blogs.)
Please let me know if there is a solution.
Thanks
The only way to do this through the API would be to make a request for each category and look at the total property of the response, and that would be less efficient than what you're already suggesting.
https://cdn.contentful.com/spaces/{space_id}/environments/{environment_id}/entries?access_token={access_token}&content_type={content_type}&fields.category[in]={categoryValue}
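If you did want to go that route anyway, a rough sketch (untested; spaceId, environmentId, and accessToken are assumed, and limit=0 should return just the total with no items):

// Untested sketch: one count request per category value.
const categories = ['Game', 'Tour']
const counts = await Promise.all(categories.map(async (category) => {
  const url = `https://cdn.contentful.com/spaces/${spaceId}/environments/${environmentId}/entries` +
    `?access_token=${accessToken}&content_type=blog&fields.category[in]=${category}&limit=0`
  const res = await fetch(url)
  const json = await res.json()
  return { category, count: json.total }
}))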
I have achieved that using GraphQL:

const query = `
{
  categoryCollection {
    items {
      linkedFrom {
        blogCollection {
          total
        }
      }
      name: category
      sys {
        id
      }
    }
  }
}
`
let categories = await fetch(`https://graphql.contentful.com/content/v1/spaces/${spaceId}`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    // Authenticate the request
    Authorization: `Bearer ${accessToken}`,
  },
  // Send the GraphQL query
  body: JSON.stringify({ query }),
})
I have a requirement to generate a unique number (ARN) in this format
DD/MM/YYYY/1, DD/MM/YYYY/2
and insert these into an Elasticsearch index.
The approach I am thinking of is to create an auto-increment field in the doc, use it to generate a new entry, and use the newly generated number to create the ARN and update the doc.
Doc structure that I am planning to use:
{ id: 1, arn: "17/03/2018/01" }
something like this.
How can I get an auto-increment field in Elasticsearch?
It can't be done in a single step. First you have to insert the record into the database, and then update the ARN with its id.
There is no auto-increment equivalent to, for example, the Hibernate id generator. You could use the Bulk API (if you have to save multiple documents at a time) and increase the _id and the ending of your ARN value programmatically.
Note: if you want to treat your id as a number, you should implement it yourself (in this example, I added a new field "my_id", because the _id of the documents is treated as a string).
POST /_bulk
{ "index" : { "_index" : "your_index", "_type" : "your_type", "_id" : "1" } }
{ "arn" : "2018/03/17/1", "my_id" : 1 }
{ "index" : { "_index" : "your_index", "_type" : "your_type", "_id" : "2" } }
{ "arn" : "2018/03/17/2", "my_id" : 2 }
Then, the next time you want to save new documents, you query for the maximum id, something like:
POST /my_index/my_type/_search?size=1
{
  "fields": ["my_id"],
  "sort": [
    { "my_id": { "order": "desc" } }
  ]
}
If your only requirement is that this ARN should be unique, you could also let Elasticsearch calculate your _id by simply not setting it. Then you could rely on some unique token generator (UUID.randomUUID().toString() if you work with Java). Pseudo code follows:
String uuid = generateUUID() // depends on the programming language
String payload = "{ \"arn\": \"" + uuid + "\" }" // concatenate the payload
String url = "http://localhost:9200/my_index/my_type" // your target index and type; POST without an _id lets Elasticsearch generate one
executePost(url, payload) // implement the call with some http client library
I tried to insert the following test document:
db.documents.write({
  uri: "/test/doc1.json",
  contentType: "application/json",
  collections: "test",
  content: {
    name: "Peter",
    hobby: "Sleeping",
    other: "Some other info",
    triple: {
      subject: {
        datatype: "http://example.com/name/",
        value: "Peter"
      },
      predicate: {
        datatype: "http://example.com/relation/",
        value: "livesin"
      },
      object: {
        datatype: "http://example.com/location/",
        value: "Paris"
      }
    }
  }
}).result(function(response){
  console.log("Done loading");
});
Then I queried as follows:
var query = [
'SELECT ?s ?p ?o' ,
'WHERE { ?s ?p ?o }' ,
];
db.graphs.sparql('application/sparql-results+json', query.join('\n')
).result(function (result) {
console.log(JSON.stringify(result, null, 2));
}, function(error) {
console.log(JSON.stringify(error, null, 2));
});
The results showed me the values of the triple, but what if I also want to get the entire document where the triple was embedded? Is it also possible to filter by other fields in the document?
There isn't a way to retrieve the document that contains the result of a SPARQL query, because those results may not be a triple that exists within a particular document (instead, the query returns a "solution" consisting of one or more values).
If you know you are looking for a particular triple, and you want the document that holds that triple, I would normally say to use a cts:triple-range-query; however, I don't see a way to do that through the Node.js API (or through REST, for that matter). With that in mind, I see two choices:
1. Insert a triple that includes the document's URI as the subject or object, then make a request for that document (as @grtjn suggested).
2. Make a REST API extension (using either JavaScript or XQuery) that calls cts:search with cts:triple-range-query as part of the query; call that extension from Node (see the sketch below).
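For the second option, the core of such an extension might look like this server-side JavaScript sketch (untested; the IRIs are illustrative and the extension wiring is omitted):

// Find documents containing a matching embedded triple (untested sketch).
const docs = cts.search(
  cts.tripleRangeQuery(
    sem.iri('http://example.com/name/Peter'),        // subject
    sem.iri('http://example.com/relation/livesin'),  // predicate
    'Paris'                                          // object
  )
);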
I'd recommend doing it in two stages:
1. Run a SPARQL query that returns document URIs.
2. Run a document search to return those documents, optionally further constrained with extra criteria.
For this you will need to embed triples in your documents listing the URIs of the documents themselves, as in the sketch below.
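A rough sketch of those two stages with the Node.js API (untested; it assumes each document embeds a triple whose subject is its own URI):

// Stage 1: SPARQL for document URIs.
var query = [
  'SELECT ?doc',
  'WHERE { ?doc <http://example.com/relation/livesin> "Paris" }'
].join('\n');

db.graphs.sparql('application/sparql-results+json', query)
  .result(function (result) {
    var uris = result.results.bindings.map(function (b) { return b.doc.value; });
    // Stage 2: fetch the full documents for those URIs.
    return db.documents.read(uris).result(function (docs) {
      console.log(JSON.stringify(docs, null, 2));
    });
  });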
HTH!
Suppose I have a data structure in the Firebase Realtime Database like:
{ "donors" : {
    "uid1" : { "name" : "x", "bloodGroup" : "A+", "location" : "some place" },
    "uid2" : { "name" : "y", "bloodGroup" : "A-", "location" : "some place" },
    ...
  }
}
Now if I have millions of donor records like this, how could I filter them based on bloodGroup and location, fetching say 100 records from the server at a time, using AngularFire2?
I have found this page which was really helpful to me when using queries to query my firebase data:
https://howtofirebase.com/collection-queries-with-firebase-b95a0193745d
A very simple example would be along the lines of:
this.donorsData = af.database.list('/donors', {
  query: {
    orderByChild: 'bloodGroup',
    equalTo: 'A+'
  }
});
Not entirely sure how to fetch 100 records, then another 100; I am using DataTables in my app, which fetches all my data and uses DataTables for pagination.
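For what it's worth, a rough pagination sketch (untested; it reuses the query syntax from the example above):

// First page: 100 donors with blood group A+ (untested sketch).
this.donorsData = af.database.list('/donors', {
  query: {
    orderByChild: 'bloodGroup',
    equalTo: 'A+',
    limitToFirst: 100
  }
});
// For later pages, the underlying Firebase SDK can resume after the last key seen,
// along the lines of:
// ref.orderByChild('bloodGroup').startAt('A+', lastKey).endAt('A+').limitToFirst(100)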
I'm using Mongify to migrate a MySQL database into MongoDB.
Doing that, two questions came up:
1. How can I declare my translation file in order to have an embedded array of ids that reference objects (stored in a different collection and retrievable through populate), instead of just embedding them as JSON objects?
2. Can embedded objects have a unique id, as objects in collections do? On other projects I've used that approach to query for embedded objects, but if that id is not present I should use a different field.
Unfortunately the first request isn't possible with Mongify at the moment; it requires a custom script to do that.
I could give you more details if you want to send me your translation file (Make sure to remove any sensitive data).
As for number two, the embedded object will get a unique ID. You don't need to do anything special.
Hope that answers your questions.
It isn't possible from Mongify, but in MongoDB you can transform the data as follows:
// find posts that have an array of objects in _tags
db.getCollection('posts').find({ '_tags.0': { $exists: true } }).forEach(function (post) {
  var items = [];
  var property = '_tags';
  post[property].forEach(function (element) {
    if (element._id !== undefined) {
      items.push(element._id);
    }
  });
  if (items.length > 0) {
    post[property] = items;
    db.posts.update({ _id: post._id }, post);
  }
});
Source Document:
{
"_id" : ObjectId("576aa0389863482f64051c81"),
"id_post" : 130155,
"_tags" : [
{
"_id" : ObjectId("576a9efd9863482f64000044")
},
{
"_id" : ObjectId("576a9efd9863482f6400004b")
},
{
"_id" : ObjectId("576a9efd9863482f64000052")
},
{
"_id" : ObjectId("576a9efd9863482f6400005a")
}
]
}
Final Document:
{
"_id" : ObjectId("576aa0389863482f64051c81"),
"id_post" : 130155,
"_tags" : [
ObjectId("576a9efd9863482f64000044"),
ObjectId("576a9efd9863482f6400004b"),
ObjectId("576a9efd9863482f64000052"),
ObjectId("576a9efd9863482f6400005a")
]
}