
How to get the event id while DEM reads snapshot data ? [autosar][vector]
I don't get the event ID in the corresponding snapshot read DID function call.

Based on the DEM specification, the container parameter DemDataElementUsePort is of boolean type. But if you use MICROSAR (Vector), this option is an enum and allows you to configure the interface for this specific DemDataElementClass either as a port or as a function with an additional EventId parameter.
Figure: Dem container hierarchy for snapshot data configuration

Related

I want to create an Azure Function which reads data from the request body using a WITH block, but how can I implement it for the Digital Twins table?

WITH will be used to define variables that can then be used in the SET functionality:
WITH T.name, T.description
The query will have to support the variables being defined. For example, when using "SELECT * FROM DigitalTwins…" where T is not declared, name and description should still exist as properties of the twin; declaring the response set as T (or another letter) is usually only done when performing joins. In general, the logic should be to validate that the declared variables exist as properties in the response elements, regardless of whether there is a "T." (or other letter) prefix: split on "." and take the last element for verification.
WITH T.name AS name, C.description AS description - allows casting variables to other names, the same as in the DT SELECT query.
Put it all together:
SELECT * FROM DIGITALTWINS T WHERE (IS_OF_MODEL('dtmi:com:adt:telemetry:pointIO;1') AND T.units='cubic_feet_per_minute') UPDATE WITH name, pointType SET T.displayName=name, T.brickTags=pointType
The above will update all twins returned by the query with two constant property values.
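As a minimal sketch of the validation logic described above (a hypothetical helper; the declared WITH variables and the twin's properties are assumed to already be available as collections):

using System.Collections.Generic;
using System.Linq;

public static class WithClauseValidator
{
    // Validates that every variable declared in the WITH clause exists as a
    // property on the returned twin, ignoring any "T." / "C." style prefix.
    public static bool AllVariablesExist(
        IEnumerable<string> withVariables,                    // e.g. ["T.name", "description"]
        IReadOnlyDictionary<string, object> twinProperties)   // properties of one response element
    {
        return withVariables.All(variable =>
        {
            // Split on '.' and take the last element for verification,
            // so "T.name" and "name" are treated the same way.
            string propertyName = variable.Split('.').Last();
            return twinProperties.ContainsKey(propertyName);
        });
    }
}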

Access CosmosDB from Azure Function (without input binding)

I have 2 collections in CosmosDB, Stocks and StockPrices.
StockPrices collection holds all historical prices, and is constantly updated.
I want to create Azure Function that listens to StockPrices updates (CosmosDBTrigger) and then does the following for each Document passed by the trigger:
Find stock with matching ticker in Stocks collection
Update stock price in Stocks collection
I can't do this with a CosmosDB input binding, as CosmosDBTrigger passes a List (the binding only works when the trigger passes a single item).
The only way I see this working is if I foreach over the CosmosDBTrigger List and access CosmosDB from my function body to perform steps 1 and 2 above.
Question: How do I access CosmosDB from within my function?
One of the CosmosDB binding forms is to get a DocumentClient instance, which provides the full range of operations on the container. This way, you should be able to combine the change feed trigger and the item manipulation into the same function, like:
[FunctionName("ProcessStockChanges")]
public async Task Run(
    [CosmosDBTrigger(/* Trigger params */)] IReadOnlyList<Document> changedItems,
    [CosmosDB(/* Client params */)] DocumentClient client,
    ILogger log)
{
    // Read changedItems
    // Create/read/update/delete with client
}
It's also possible with .NET Core to use dependency injection to provide a full-fledged custom service/repository class to your function instance for interfacing with Cosmos. This is my preferred approach, because I can do validation, control serialization, etc. with the latest version of the Cosmos SDK.
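As a rough sketch of that dependency-injection approach (the connection setting CosmosConnection, the database stocks-db, and the container names are placeholders, and the v3 CosmosClient is assumed), it could look like this:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

[assembly: FunctionsStartup(typeof(StocksApp.Startup))]

namespace StocksApp
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // One CosmosClient (v3 SDK) shared by the whole function app.
            builder.Services.AddSingleton(_ =>
                new CosmosClient(Environment.GetEnvironmentVariable("CosmosConnection")));
        }
    }

    public class ProcessStockChanges
    {
        private readonly CosmosClient _cosmos;

        public ProcessStockChanges(CosmosClient cosmos) => _cosmos = cosmos;

        [FunctionName("ProcessStockChanges")]
        public async Task Run(
            [CosmosDBTrigger("stocks-db", "StockPrices",
                ConnectionStringSetting = "CosmosConnection",
                LeaseCollectionName = "leases",
                CreateLeaseCollectionIfNotExists = true)] IReadOnlyList<Document> changedItems,
            ILogger log)
        {
            Container stocks = _cosmos.GetContainer("stocks-db", "Stocks");

            foreach (Document doc in changedItems)
            {
                // Look up and update the matching Stock here, e.g. with
                // stocks.ReadItemAsync<T>(...) / stocks.UpsertItemAsync(...).
                log.LogInformation("Processing change for document {Id}", doc.Id);
            }
        }
    }
}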
You may have done this intentionally, but it's worth considering combining your data into a single container partitioned by, for example, a combination of record type (Stock/StockPrice) and identifier. This simplifies things and can be more cost- and resource-efficient than using multiple containers.
Ended up going with @Noah Stahl's suggestion. Leaving this here as an alternative.
Couldn't figure out how to do this directly, so came up with a work-around:
Add a function with a CosmosDBTrigger on the StockPrices collection and a Queue output binding
foreach over the Documents from the trigger, serialize them, and add them to the Queue
Add a function with a QueueTrigger, a CosmosDB input binding for the Stocks collection (with PartitionKey and Id set to StockTicker), and a CosmosDB output binding for the Stocks collection
Update the Stock from the CosmosDB input binding with values from the QueueTrigger message
Assign the updated Stock to the CosmosDB output binding parameter (this updates the record in the DB)
This said, I'd like to hear about more straightforward ways of doing this, as my approach seems like a hack.
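For reference, a rough sketch of the two functions described above (database, collection, queue, and property names are placeholders, and the Stock/StockPrice POCOs are hypothetical):

using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;

public static class StockPriceFunctions
{
    // Function 1: CosmosDBTrigger on StockPrices with a Queue output binding.
    [FunctionName("FanOutPriceChanges")]
    public static void FanOut(
        [CosmosDBTrigger("stocks-db", "StockPrices",
            ConnectionStringSetting = "CosmosConnection",
            LeaseCollectionName = "leases")] IReadOnlyList<Document> changes,
        [Queue("stock-price-updates")] ICollector<string> queue)
    {
        // Serialize each changed document and enqueue it.
        foreach (Document doc in changes)
        {
            queue.Add(doc.ToString()); // Document.ToString() returns the JSON
        }
    }

    // Function 2: QueueTrigger plus CosmosDB input and output bindings on Stocks.
    // {Ticker} is resolved from the JSON queue message, so the Stock document's
    // Id and PartitionKey are assumed to equal the ticker symbol.
    [FunctionName("ApplyPriceChange")]
    public static void Apply(
        [QueueTrigger("stock-price-updates")] StockPrice price,
        [CosmosDB("stocks-db", "Stocks",
            ConnectionStringSetting = "CosmosConnection",
            Id = "{Ticker}",
            PartitionKey = "{Ticker}")] Stock stock,
        [CosmosDB("stocks-db", "Stocks",
            ConnectionStringSetting = "CosmosConnection")] out Stock updatedStock)
    {
        // Copy the new price onto the Stock read by the input binding and hand
        // it to the output binding, which persists the change.
        stock.LastPrice = price.Price;
        updatedStock = stock;
    }
}

// Hypothetical POCOs for illustration only.
public class StockPrice { public string Ticker { get; set; } public decimal Price { get; set; } }
public class Stock { public string id { get; set; } public string Ticker { get; set; } public decimal LastPrice { get; set; } }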

Optionally generate output with an Azure Function

I currently have a Timer triggered Azure Function that checks a data endpoint to determine if any new data has been added. If new data has been added, then I generate an output blob (which I return).
However, returning output appears to be mandatory: although I'd only like to generate an output blob under specific conditions, I must do it every time, which clogs up my storage.
Is there any way to generate output only under specified conditions?
If you have the blob output binding set to your return value, but you do not want to generate a blob, simply return null to ensure the blob is not created.
You're free to execute whatever logic you want in your function. You may need to remove the output binding from your function (this is what makes the output required) and construct the connection to blob storage inside the function instead. Then you can conditionally create and save the blob.
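A minimal sketch of the first suggestion, assuming a timer-triggered function with the blob output binding on the return value (the container name, schedule, and CheckEndpointAsync helper are placeholders):

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ConditionalOutput
{
    [FunctionName("CheckForNewData")]
    [return: Blob("reports/{rand-guid}.json", Connection = "AzureWebJobsStorage")]
    public static async Task<string> Run(
        [TimerTrigger("0 */5 * * * *")] TimerInfo timer,
        ILogger log)
    {
        string newData = await CheckEndpointAsync(); // hypothetical data-endpoint check

        // Returning null means the blob output binding writes nothing;
        // otherwise the returned string becomes the blob content.
        return string.IsNullOrEmpty(newData) ? null : newData;
    }

    private static Task<string> CheckEndpointAsync() =>
        Task.FromResult<string>(null); // placeholder for the real endpoint call
}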

Is it possible to generate a unique BlobOutput name from an Azure WebJobs QueueInput item?

I have a continuous Azure WebJob that is running off of a QueueInput, generating a report, and outputting a file to a BlobOutput. This job will run for differing sets of data, each requiring a unique output file. (The number of inputs is guaranteed to scale significantly over time, so I cannot write a single job per input.) I would like to be able to run this off of a QueueInput, but I cannot find a way to set the output based on the QueueInput value, or any value except for a blob input name.
As an example, this is basically what I want to do, though it is invalid code and will fail.
public static void Job(
    [QueueInput("inputqueue")] InputItem input,
    [BlobOutput("fileoutput/{input.Name}")] Stream output)
{
    // job work here
}
I know I could do something similar if I used BlobInput instead of QueueInput, but I would prefer to use a queue for this job. Am I missing something or is generating a unique output from a QueueInput just not possible?
There are two alternatives:
Use IBinder to generate the blob name, as shown in these samples (a rough sketch follows below)
Have an autogenerated property (e.g. an ID) in the queue message object and bind the blob name to that property. See here (the BlobNameFromQueueMessage method) for how to bind a queue message property to a blob name
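For the IBinder alternative, a hedged sketch using the current WebJobs SDK attribute names (QueueTrigger and BlobAttribute rather than the older QueueInput/BlobOutput used in this question) might look like this:

using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class InvoiceFunctions
{
    public static async Task ProcessOrder(
        [QueueTrigger("orders")] Order newOrder,
        IBinder binder)
    {
        // Build the blob path at runtime from the queue message content.
        var blob = new BlobAttribute($"invoices/{newOrder.OrderId}", FileAccess.Write);

        using (TextWriter invoice = await binder.BindAsync<TextWriter>(blob))
        {
            await invoice.WriteLineAsync($"Invoice for order {newOrder.OrderId}");
        }
    }
}

// Hypothetical queue message type.
public class Order { public string CustomerName { get; set; } public string OrderId { get; set; } }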
Found the solution at Advanced bindings with the Windows Azure Web Jobs SDK via Curah's Complete List of Web Jobs Tutorials and Videos.
Quote for posterity:
One approach is to use the IBinder interface to bind the output blob and specify the name that equals the order id. The better and simpler approach (SimpleBatch) is to bind the blob name placeholder to the queue message properties:
public static void ProcessOrder(
    [QueueInput("orders")] Order newOrder,
    [BlobOutput("invoices/{OrderId}")] TextWriter invoice)
{
    // Code that creates the invoice
}
The {OrderId} placeholder in the blob name gets its value from the OrderId property of the newOrder object. For example, if newOrder is (JSON) {"CustomerName":"Victor","OrderId":"abc42"}, then the output blob name is "invoices/abc42". The placeholder is case-sensitive.
So, you can reference individual properties from the QueueInput object in the BlobOutput string and they will be populated correctly.

How to use Codename One Storage?

I am trying to port my LWUIT application to Codename one.
I have used RMS in LWUIT and now obviously I have to transform this to Storage.
I don't understand how the Storage class works in Codename One, and the Codename One documentation has nothing about it either.
1) What is the structure of a storage file?
--> In a J2ME RecordStore, you have records bunched together like a table. Every row corresponds to a record. Each record has a unique record ID, and you can access the record with this record ID. Every record can have some data stored in it.
How does this map to Storage class?
2) I wish to store some records in my storage. How do I do it?
The documentation says:
static Storage getInstance()
Returns the storage instance or null if the storage wasn't initialized using a call to init(String) first.
--> In LWUIT it was something like Storage.init(storageName). However, there is no init in Codename One! How do I open a Storage in Codename One?
3) If I try to open a storage file which does not exist, what will happen (RMS throws an exception)?
The easiest way to think about Storage is as a flat file system (without directories/folders).
When running on top of RMS this file system abstraction is mapped to the RMS database seamlessly for you.
Notice that init() for Storage in Codename One is no longer necessary; under LWUIT it only performed basic initialization and the name was usually ignored.
The Storage class has several methods:
InputStream createInputStream(String name)
Creates an input stream to the given storage source file
OutputStream createOutputStream(String name)
Creates an output stream to the storage with the given name
boolean exists(String name)
Returns true if the given storage file exists
String[] listEntries()
Lists the names of the storage files
You can use these to just store and check if data exists. However you can also store complex objects in storage without using input/output streams by using these two methods:
Object readObject(String name)
Reads the object from the storage, returns null if the object isn't there
boolean writeObject(String name, Object o)
Writes the given object to storage assuming it is an externalizable type or one of the supported types
So to simulate something like byte[] storage you can do something like this:
Vector p = new Vector();
byte[] myData = ...;
p.addElement(myData);
p.addElement(additionalData);
Storage.getInstance().writeObject("myStore", p);
Then just read it as:
Vector p = (Vector)Storage.getInstance().readObject("myStore");
// p will be null if nothing was written
