Explanation of AUTOSAR BswMLinScheduleIndication container content

I am new to AUTOSAR and I am quite puzzled by the content of the BswMLinScheduleIndication configuration container. The issue is that this container includes not only a reference to the LIN channel handle, but also a reference to the LIN schedule table handle. I don't understand that, since this container corresponds to the mode request source of the BswM_LinSM_CurrentSchedule() function. The description of that function states "Function called by LinSM to indicate the currently active schedule table for a specific LIN channel.", so naturally I conclude that the currently active schedule table handle is the mode value; but in that case the reference to the LIN schedule table handle must belong to the BswMModeValue container, mustn't it? And if the LIN schedule table handle is not the mode value, then what is?
Unfortunately, the AUTOSAR_EXP_ModeManagementGuide doesn't cover LIN.
Thank you in advance for your time and attention. Sorry for my bad English. I understand that my question may be poorly phrased; please forgive that, since it's sometimes difficult for a newbie even to formulate the right question.

Check the LinSM and the LinIf SWS, which describe the switching of the schedule tables of a LIN master (and only the LIN master). The LinIf switches between RUN_CONTINUOUS and RUN_ONCE schedule tables.
Why LinIf needs schedule tables I cannot tell; I have never had to use LIN at work yet. Hope it still helps.

Related

How would I make a Use Case Diagram without user interaction? [duplicate]

Should a batch scheduled process (for example, a nightly process) be modeled as a Use Case? It is something the system should do, but there is no Actor "using" the feature, because it is scheduled.
Any suggestions?
Thanks!
We've defined a 'Scheduler' actor to model that scenario. The Scheduler usually has its own set of use cases which are batch jobs, or executables that need to run regularly, etc. For example, the Use Case can be written like "The Use Case begins when the current time is on the hour" for a job that runs 24 times a day. We try not to include too many of these cases because it is too easy to get bogged down in implementation details. We wait until really important activities have to be timed, like monthly close procedures for the accounting department. They don't mention any software specifics (like the name of the scheduling software), just that the Use Case is triggered by the Scheduler actor on a given day and/or time.
First attempt:
Time can be an actor in your use case.
But, as you said, it is strange as a primary actor.
You can think of a human alternative.
So ask yourself:
The system automatically does a batch scheduled process, but: when? how? ...
So WHO will tell the system when and how to do your scheduled process? Is there a role which configures the batch scheduled process? If so..
Second Attempt:
There is a good article on the IBM site: Dear Dr. Use Case: Is the Clock an Actor?
And you can check a similar question: Is TIME an actor in a use case?
The system (O.S.) is the "actor":
http://en.wikipedia.org/wiki/Actor_%28UML%29
In UML, an "Actor" is not just a person; it can be a process or the O.S. You just add a stereotype indicating it's the "system".

Node.js: performing a task at a fixed time in the future

I have stumbled upon a type of problem that is difficult for me. So, let's say we have an API which creates events in the future, for example two weeks from this moment. During this time, we can post comments, add photos, etc. on this event. After those two weeks pass, I want to close this event and change its type from 'OPEN' to 'CLOSED'. How should I achieve this?
I have tried the agenda library for this task, but it seems that it is intended for a different purpose: scheduled tasks. Are there any other options or practices to do this?
I am using a Postgres database, if that's helpful.
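For what it's worth, one common pattern for this is a periodic job that closes overdue events by comparing a stored deadline against the current time. Below is a minimal sketch of that idea; the `events` table, its `status` and `closes_at` columns, and the use of the `pg` client are assumptions made for illustration, not details taken from the question.

```typescript
// Minimal sketch (assumed schema): close overdue events with a periodic check
// instead of scheduling one job per event. Assumes an `events` table with
// `status` ('OPEN'/'CLOSED') and `closes_at` (timestamp) columns.
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function closeExpiredEvents(): Promise<number> {
  // The database does the date comparison, so a missed tick is caught up
  // automatically on the next run.
  const result = await pool.query(
    `UPDATE events
        SET status = 'CLOSED'
      WHERE status = 'OPEN'
        AND closes_at <= NOW()`
  );
  return result.rowCount ?? 0;
}

// Run the check every minute; the exact interval is an assumption.
setInterval(() => {
  closeExpiredEvents().catch((err) => console.error("close job failed", err));
}, 60_000);
```

Because the deadline lives in the database, a restart or a missed tick is harmless: the next run picks up whatever became overdue in the meantime.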

ADF pipeline going into queue state

I have a Copy activity where the source and destination are both Blobs.
When I tried the copy pipeline previously, it ran successfully.
But currently it stays in the Queued state for a long time, i.e. around 30 minutes.
Can I know the reason behind it?
This is not an answer/solution to the problem; since I cannot comment yet, I had to put it in the answer section. It could be a delay in assigning the compute resources. Please check the details.
You can check the details by hovering the mouse pointer between Name and Type beside the Copy activity.

EventSourcing race condition

Here is a nice article which describes what ES is and how to deal with it.
Everything is fine there, but one image is bothering me. Here it is:
I understand that in distributed event-based systems we are able to achieve eventual consistency only. Anyway ... how do we ensure that we don't book more seats than are available? This is especially a problem if there are many concurrent requests.
It may happen that n aggregates are populated with the same number of reserved seats, and all of these aggregate instances allow reservations.
All events are private to the command running them until the book of record acknowledges a successful write. So we don't share the events at all, and we don't report back to the caller, until we know that our version of "what happened next" was accepted by the book of record.
The write of events is analogous to a compare-and-swap of the tail pointer in the aggregate history. If another command has changed the tail pointer while we were running, our swap fails, and we have to mitigate/retry/fail.
In practice, this is usually implemented by having the write command to the book of record include an expected position for the write. (Example: ES-ExpectedVersion in GES).
The book of record is expected to reject the write if the expected position is in the wrong place. Think of the position as a unique key in a table in a RDBMS, and you have the right idea.
This means, effectively, that the writes to the event stream are actually consistent -- the book of record only permits the write if the position you write to is correct, which means that the position hasn't changed since the copy of the history you loaded was written.
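To make the compare-and-swap concrete, here is a minimal sketch of an append guarded by an expected version. The store, the event shape, and the error type are invented for illustration; this is not the actual GES client API.

```typescript
// Minimal sketch of optimistic concurrency on an event stream.
// All names here are assumptions for illustration.
interface DomainEvent {
  type: string;
  data: unknown;
}

class ConcurrencyError extends Error {}

class InMemoryEventStore {
  private streams = new Map<string, DomainEvent[]>();

  // The append only succeeds if the stream is still at the version the caller
  // loaded (here: the number of events it saw); otherwise another command won
  // the compare-and-swap and this one must reload, retry, or fail.
  append(streamId: string, expectedVersion: number, events: DomainEvent[]): void {
    const history = this.streams.get(streamId) ?? [];
    if (history.length !== expectedVersion) {
      throw new ConcurrencyError(
        `expected version ${expectedVersion}, stream is at ${history.length}`
      );
    }
    this.streams.set(streamId, [...history, ...events]);
  }

  load(streamId: string): DomainEvent[] {
    return this.streams.get(streamId) ?? [];
  }
}
```

A command handler loads the stream, applies its business rule (e.g. rejects the reservation if the seat count would go negative), and appends with the version it loaded; a ConcurrencyError means another command moved the history first, so this one reloads and retries or fails.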
It's typical for commands to read event streams directly from the book of record, rather than the eventually consistent read models.
It may happen that n AggregateRoots will be populated with the same number of reserved seats, which means having validation in the reserve method won't help; then n AggregateRoots will emit the event of a successful reservation.
Every bit of state needs to be supervised by a single aggregate root. You can have n different copies of that root running, all competing to write to the same history, but the compare and swap operation will only permit one winner, which ensures that "the" aggregate has a single internally consistent history.
There are going to be a couple of ways to deal with such a scenario.
First off, an event stream would have the current version as the version of the last event added. This means that you would not, or should not, be able to persist the event stream if it is not at the version it had when it was loaded. Since the very first write would cause the version of the event stream to be increased, the second write would not be permitted. Since events are not emitted, per se, but rather are a result of the event sourcing, we would not have the type of race condition in your example.
Well, if your commands are processed behind a queue, any failures should be retried. Should it not be possible to process the request, you would enter the normal "I'm sorry, Dave. I'm afraid I can't do that" scenario by letting the user know that they should try something else.
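As a rough illustration of that retry-behind-a-queue idea, a minimal sketch; the function names and the fixed attempt count are assumptions, and a real handler would typically retry only on a concurrency conflict rather than on every error.

```typescript
// Minimal sketch (assumed names): retry a queued command a few times when the
// write is rejected, then give up and surface the failure to the user.
async function processWithRetry(
  handleCommand: () => Promise<void>,
  maxAttempts = 3
): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await handleCommand();
      return; // accepted by the book of record
    } catch (err) {
      if (attempt === maxAttempts) {
        // "I'm sorry, Dave." -- the caller is told to try something else.
        throw err;
      }
      // Otherwise fall through and try again with freshly loaded state.
    }
  }
}
```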
Another option is to start the processing by issuing an update against some table row to serialize any calls to the aggregate. Probably not the most elegant, but it does cause a system-wide block on the processing.
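And a minimal sketch of that serialization option, assuming a Postgres locks table with one pre-inserted row per aggregate and the `pg` client; it uses SELECT ... FOR UPDATE rather than a literal UPDATE, but the effect is the same, and locking one well-known row instead of a per-aggregate row would give the system-wide block described above.

```typescript
// Minimal sketch (assumed schema): serialize commands against an aggregate by
// holding a row lock for the duration of the work. Assumes an aggregate_locks
// table with one row per aggregate, inserted when the aggregate is created.
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function withAggregateLock<T>(
  aggregateId: string,
  work: () => Promise<T>
): Promise<T> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    // A concurrent command on the same aggregate blocks here until we commit.
    await client.query(
      "SELECT 1 FROM aggregate_locks WHERE aggregate_id = $1 FOR UPDATE",
      [aggregateId]
    );
    const result = await work();
    await client.query("COMMIT");
    return result;
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}
```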
I guess, to a large extent, one cannot really trust the read store when it comes to transactional processing.
Hope that helps :)

SharePoint StateMachine: Handling multiple responses to multiple created tasks

I created a StateMachine workflow for SharePoint and, at one state, I create multiple tasks using a replicator. The number of tasks created is variable.
I need to handle the OnTaskChanged event for all the tasks I created, which seems impossible, as one event handler can only be associated with one task.
I could restrict the number of tasks that can be created and handle them with a fixed number of handlers, or create a sequential workflow instead, but I am considering both of those only as a last resort.
Please do let me know if this is even supported or if there are any workarounds.
Reference Link: http://social.msdn.microsoft.com/Forums/en-US/sharepointworkflow/thread/a174ac5f-03ed-4e27-998b-bbdb7d01d09b/
It won't work for the reasons you laid out. The workaround is to restructure your state machine workflow as a sequential workflow (which may not be possible) or to switch to item event receivers (which may not work for you). I've actually blogged about this topic: Workflow Nuttiness vol. 1
Hilariously, I just checked the MSDN forums link you provided, and sure enough, I'm in that thread, asking "so, uh, I guess we all rewrite to sequential workflows?" And there's no better answer in that thread either :)
