I created an Azure Search index from a Cosmos DB collection that has location data.
public Point GeoLocation { get; set; }
When I set this to null, the indexer throws the error below and does not index anything.
"errorMessage": "Value '' is not a valid GeoJson point."
Is there an empty/null value for Microsoft.Azure.Documents.Spatial.Point that does not cause an error for this case?
Thanks.
I have the following class:
public class ProcessInstance
{
    [AutoIncrement]
    public int Id { get; set; }

    [Reference]
    public ProcessDefinition ProcessDefinition { get; set; }

    public int ProcessDefinitionId { get; set; }

    // and more...
}
Then running the following, which looks fine to me:
var q = db.From<ProcessInstance>().Where(inst => inst.ProcessDefinition.Id == id
&& Sql.In(inst.Status, enProcessStatus.READY, enProcessStatus.ACTIVE));
return db.Exists(q);
When I inspect the last SQL command text from the "db" object, it's wrong:
SELECT 'exists'
FROM "ProcessInstance"
WHERE (("Id" = #0) AND "Status" IN (#1,#2))
LIMIT 1
Note that it's filtering on Id instead of ProcessDefinition.Id, which of course is wrong. I don't know why it's doing that -- at the very least I'd appreciate getting an error instead of just a wrong result.
However, I've found how to fix it: using ProcessDefinitionId, i.e. Where(inst => inst.ProcessDefinitionId == id, gives the correct SQL:
SELECT 'exists'
FROM "ProcessInstance"
WHERE (("ProcessDefinitionId" = #0) AND "Status" IN (#1,#2))
LIMIT 1
Why didn't the first one work? Why is there no error?
OrmLite is designed to provide a typed API around an SQL expression, so that it should be intuitive to determine the SQL generated from a typed expression. It doesn't support magic behavior such as querying nested objects, as attempted with the [Reference] complex-type property; i.e. you can only query direct column properties, as done in your 2nd query.
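If you do need to filter on the referenced table itself, the supported approach is an explicit join. A minimal sketch, assuming OrmLite's implicit foreign-key convention (ProcessDefinitionId -> ProcessDefinition.Id):

    var q = db.From<ProcessInstance>()
        .Join<ProcessDefinition>()  // JOIN on ProcessInstance.ProcessDefinitionId = ProcessDefinition.Id
        .Where<ProcessDefinition>(def => def.Id == id)
        .And(inst => Sql.In(inst.Status, enProcessStatus.READY, enProcessStatus.ACTIVE));
    return db.Exists(q);

In this particular case the join is redundant, since the ProcessDefinitionId FK column already holds the same value, so querying it directly as in your 2nd query is the simpler option.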
I have an Azure Table that is storing a Customer object with a nested Address object, as follows:
public class Customer {
    public Guid Id { get; set; }
    public string Name { get; set; }
    public Address Address { get; set; }
}

public class Address {
    public string AddressLine1 { get; set; }
    public string AddressLine2 { get; set; }
    public string City { get; set; }
    public string Postcode { get; set; }
}
The Customer object gets stored in an Azure Table with columns like this:
Id
Name
Address_AddressLine1
Address_AddressLine2
Address_City
Address_Postcode
The child object gets flattened into columns at the same level, since Table Storage doesn't support nested objects.
I want to migrate this to the Cosmos DB SQL API. What's the best way to migrate this data so that I end up with a nested JSON document instead of a flat one with these underscore columns?
I want to migrate this data so that it looks something like this in Cosmos:
{
  "Id": "2fca57ec-8c13-4f2c-81c7-d6b649ca6296",
  "Name": "John Smith",
  "Address": {
    "AddressLine1": "123 Street",
    "AddressLine2": "",
    "City": "City",
    "Postcode": "1234"
  }
}
I have tried using the Cosmos DB Data Migration tool (deprecated?) and Azure Data Factory, but couldn't figure out how to convert the Address_* columns to a nested Address object instead of ending up with flat attributes in the JSON document.
Is there a way to easily map it to a nested child object, or will I have to write custom code to do the migration?
Unfortunately, there is no out-of-the-box solution for this kind of migration.
The easier option would be to write custom code that loops through the table entities, constructs the document object, and adds the item to your Cosmos container.
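A minimal sketch of that loop, assuming the Azure.Data.Tables and Microsoft.Azure.Cosmos SDKs; the connection strings and the table/database/container names are hypothetical, and it needs to run inside an async method:

    // Source table with the flattened Address_* columns.
    var table = new TableClient(tableConnectionString, "Customers");

    // Target Cosmos DB SQL API container (partition key path assumed to be /id here).
    var container = new CosmosClient(cosmosConnectionString).GetContainer("mydb", "customers");

    foreach (TableEntity e in table.Query<TableEntity>())
    {
        // Re-nest the flattened columns into an Address object.
        var doc = new
        {
            id = e.GetString("Id"),  // Cosmos requires a lowercase 'id'; assumes Id is stored as a string
            Name = e.GetString("Name"),
            Address = new
            {
                AddressLine1 = e.GetString("Address_AddressLine1"),
                AddressLine2 = e.GetString("Address_AddressLine2"),
                City = e.GetString("Address_City"),
                Postcode = e.GetString("Address_Postcode")
            }
        };
        await container.UpsertItemAsync(doc, new PartitionKey(doc.id));
    }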
There is no straightforward solution offered by Microsoft (Azure Storage Explorer) to overcome this challenge, but leveraging a third-party tool like Cerebrata (https://cerebrata.com/) could help you migrate your data from Azure Table Storage to the Cosmos DB SQL API in a simple copy/paste model.
This way you can also avoid spending a good amount of time on custom coding, and you can view your migrated data in a table format rather than a complicated JSON format.
Disclaimer: this is purely based on my experience.
I am having a problem retrieving records from an Azure Storage Table when the record was inserted from the portal itself. The table structure is fairly simple:
package com.nielsen.batchJobsManager.storage.entities;

import com.microsoft.azure.storage.table.TableServiceEntity;

public class BatchJobConfigEntity extends TableServiceEntity {

    public BatchJobConfigEntity(String jobPrefix, String configName) {
        this.partitionKey = jobPrefix;
        this.rowKey = configName;
    }

    public BatchJobConfigEntity() {
    }

    public String configValue;

    public void setConfigValue(String configValue) {
        this.configValue = configValue;
    }

    public String getConfigValue() {
        return this.configValue;
    }
}
I am just trying to fetch the configValue stored in the table, but I am having no luck, as you can see from the screenshot. However, I have noticed that if I add the record from the Java application using "TableOperation.insertOrMerge" then it works, but I just do not understand why it should matter!
OK, found the solution just by trying random stuff! I hope this will come in handy for folks who are facing the same issue. It turns out the property name must follow camel case but with the first character capitalized, so the property entered in the portal as:

configValue

had to be changed to:

ConfigValue

Only after inserting it like that was I able to get the configValue from the table entity object correctly.
I have a collection where I am storing an asset's timestamp and its latest location with the following class:
public class TrackingInfo
{
    [JsonProperty("id")]
    public string Id { get; set; }

    [JsonProperty("_partition_key")]
    public string _PartitionKey { get; set; }

    [JsonProperty("asset_id")]
    public string AssetId { get; set; }

    [JsonProperty("unix_timestamp")]
    public double UnixTimestamp { get; set; }

    [JsonProperty("timestamp")]
    public string Timestamp { get; set; }

    [JsonProperty("location")]
    public Point Location { get; set; }
}
The collection is partitioned by _PartitionKey, which is constructed like this:
tracking._PartitionKey = $"tracking_{tracking.AssetId.ToLower()}_{DateTime.Today.ToString("D")}";
It looks like there is no way to do a GROUP BY on the collection.
Can someone please help me create a SQL document query to find the latest entry for each AssetId, with its Location and the Timestamp when the data was recorded?
Update 1:
What if I change the _PartitionKey to represent one partition per day, something like below:
tracking._PartitionKey = $"tracking_{DateTime.Today.ToString("D")}";
Would that make it easier to get all assets and their latest tracking records?
As per my comment, my suggestion would be to solve your problem differently.
Assumption: You have a large number of assetIds and don't know the values beforehand:
Have one document that represents the latest state of your asset
Have another document that represents the location events of your asset
Update the first document whenever there is a new location event
You can put both types of documents in the same collection or separate them - both approaches have benefits. I would probably separate them.
Then do a query like "what assets are within 1 km of X" (querying spatial types).
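A sketch of such a spatial query, assuming the Microsoft.Azure.Cosmos SDK, a Container instance named container, the location property from the TrackingInfo class above, and a hypothetical centre point (ST_DISTANCE works in metres); run inside an async method:

    var within1km = new QueryDefinition(
        "SELECT * FROM c WHERE ST_DISTANCE(c.location, " +
        "{'type': 'Point', 'coordinates': [-122.33, 47.61]}) < 1000");

    using var iterator = container.GetItemQueryIterator<TrackingInfo>(within1km);
    while (iterator.HasMoreResults)
    {
        foreach (var asset in await iterator.ReadNextAsync())
            Console.WriteLine($"{asset.AssetId} @ {asset.Timestamp}");
    }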
Sidenote: it might be a good idea to use the assetId as the partition key instead of your combined key. Using such a combined key is very bad for queries.
If you only have very few assetIds, you can query each of them for the latest update by filtering on the assetId and ordering by the timestamp field descending; with TOP 1 this will return only the latest item.
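A sketch of that per-asset query, again assuming the Microsoft.Azure.Cosmos SDK and the JSON property names from TrackingInfo (the asset id is hypothetical):

    var latest = new QueryDefinition(
        "SELECT TOP 1 * FROM c WHERE c.asset_id = @assetId ORDER BY c.unix_timestamp DESC")
        .WithParameter("@assetId", "asset-123");

    using var it = container.GetItemQueryIterator<TrackingInfo>(latest);
    while (it.HasMoreResults)
    {
        foreach (var tracking in await it.ReadNextAsync())
            Console.WriteLine($"{tracking.AssetId}: {tracking.Timestamp} @ {tracking.Location}");
    }

Note that ORDER BY on unix_timestamp may require a range index on that path in the container's indexing policy.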
Cosmos DB doesn't support a group by feature; you could vote up the feature request.
As a workaround, there is a third-party package, documentdb-lumenize, which supports group by; it has a .NET example:
string configString = @"{
    cubeConfig: {
        groupBy: 'state',
        field: 'points',
        f: 'sum'
    },
    filterQuery: 'SELECT * FROM c'
}";

Object config = JsonConvert.DeserializeObject<Object>(configString);
dynamic result = await client.ExecuteStoredProcedureAsync<dynamic>("dbs/db1/colls/coll1/sprocs/cube", config);
Console.WriteLine(result.Response);
You could group by the assetId column and get the max timestamp.
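For this schema, that might look like the following sketch, assuming documentdb-lumenize supports a 'max' aggregation function and using the JSON property names from the TrackingInfo class:

    // Hypothetical adaptation: group per asset and keep the largest (latest) unix timestamp.
    string configString = @"{
        cubeConfig: {
            groupBy: 'asset_id',
            field: 'unix_timestamp',
            f: 'max'
        },
        filterQuery: 'SELECT * FROM c'
    }";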
I was diagnosing an issue where I had JSON or JSV objects being received in web form or query variables, and ServiceStack was throwing an "Index was outside the bounds of the array" exception. A REST client in another product is sending these to my ServiceStack REST services.
I narrowed it down to deserializing a form variable with an empty string as the value instead of JSON.
Here is a simple test case that does the same thing. I would have expected null to be returned.
ServiceStack.Text v3.9.26:
class simpleDTO {
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

[TestMethod]
public void TestMethod1()
{
    var json = "";
    var o = JsonSerializer.DeserializeFromString(json, typeof(simpleDTO));
    Assert.IsNull(o);
}
That issue was fixed in this commit: 6ea6f235dc and should be included in the next ServiceStack.Text release.
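Until that release ships, one possible workaround is to guard against empty input before handing it to the serializer; a sketch with a hypothetical helper:

    // Treat empty/whitespace form values as null instead of deserializing them.
    static object SafeDeserialize(string json, Type type)
    {
        return string.IsNullOrWhiteSpace(json) ? null : JsonSerializer.DeserializeFromString(json, type);
    }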