How can I protect an NvIndex from being recreated, without the need of another NvIndex - tpm

TL;DR: (How) is it possible to ensure that the name of an NvIndex changes every time it is created without the need of any other object?
I want to create a key inside the TPM (v2) that is protected by a TPM_NT_PIN_FAIL NvIndex.
I know how to do that, but I think an attacker might be able to delete the NvIndex and recreate it with the pinCount set to zero or with a very high pinLimit.
As far as I know, recreating an NvIndex does not change its name, so a policy relying on the NvIndex would still be satisfied.
I have an idea for making sure that the name of the NvIndex changes every time it is created, but it requires a second NvIndex.
This is my current idea:
Create an NvIndex as TPM_NT_COUNTER
Increment the counter
Read the counters value
Create a Policy to allow read access to the NvIndex but allow only one write:
Policy Session A: Policy_CC NV_READ
Policy Session B: Policy_NV (the value of the counter index must equal its current value), Policy_NvWritten (the TPM_NT_PIN_FAIL NvIndex must not have the TPMA_NV_WRITTEN flag set), Policy_CC NV_WRITE
Policy to restrict access to the NvIndex: Policy_Or Policy Session A, Policy Session B
Create the NvIndex with a password and the policy
Write pinCount and pinLimit to the NvIndex
Increment the counter
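To illustrate why this should work: the Name of an NvIndex is a hash over its public area, which includes the authPolicy, and the PolicyNV term folds the counter's current value into that policy digest. The following toy Python sketch (not real TPM 2.0 marshalling; the function names and digest construction are made up for illustration) shows the Name changing whenever the counter value baked into the policy changes:

```python
import hashlib

def policy_digest(counter_value: int) -> bytes:
    """Toy stand-in for the policy digest: the PolicyNV term folds the
    counter's current value into the digest, so a different counter value
    yields a different policy digest."""
    d = hashlib.sha256(b"POLICY_NV" + counter_value.to_bytes(8, "big")).digest()
    return hashlib.sha256(d + b"POLICY_NV_WRITTEN" + b"POLICY_CC_NV_WRITE").digest()

def nv_index_name(public_attrs: bytes, auth_policy: bytes) -> bytes:
    """Toy stand-in for the NV index Name: a hash over the public area,
    which includes the authPolicy."""
    return hashlib.sha256(public_attrs + auth_policy).digest()

attrs = b"TPM_NT_PIN_FAIL|TPMA_NV_POLICYWRITE"
name_v1 = nv_index_name(attrs, policy_digest(1))  # counter was 1 at creation
name_v2 = nv_index_name(attrs, policy_digest(2))  # counter is 2 after recreation
assert name_v1 != name_v2
```

Because the counter is monotonic, an attacker recreating the index has to bind a new (higher) counter value into the policy, which changes the Name and breaks any policy that referenced the old one.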
It may occur that many such keys need to be created, so needing two NvIndexes for each key could become a problem.

Related

Anylogic - create Resources and add to ResourcePool

I'm having difficulty finding the documentation I need to work with Resources and resourcePools - there is no "resource API documentation" that I can find.
I would like to programmatically create static resources (of a custom type) and then add these resources to a resourcePool. When creating the resources, I'd like to be able to specify their property values prior to adding them to the resourcePool. In my mind, the code would look something like this:
Room myRoom;
myRoom = new Room("redRoom", 20);
addTo_myResourcePool(myRoom);
myRoom = new Room("greenRoom", 10);
addTo_myResourcePool(myRoom);
Does anyone know if there are ways to achieve this end?
This is a bit of a blind spot in AnyLogic. It can only be done indirectly:
Create an empty agent population with your agent type
Tell the resource pool to use that agent type
Set the resource pool capacity as you need. The pool will create agents for you in that population (if the capacity is larger than the current number of resources)
If you want to manually create a resource, you must call myResourcePool.set_capacity(myResourcePool.getCapacity()+1)
Destroying a resource works the same way in reverse.
Also, make sure to enable "destroy resources on capacity decrease" so the agents are destroyed from the population
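The capacity-driven pattern above can be simulated outside AnyLogic. This Python sketch (class and method names are illustrative, not AnyLogic's real API) mimics how changing the capacity creates default agents in a population and an "On new unit" hook customises them:

```python
class ResourcePool:
    """Toy simulation of AnyLogic's capacity-driven resource pool."""
    def __init__(self, agent_factory, on_new_unit=None,
                 destroy_on_capacity_decrease=True):
        self.agent_factory = agent_factory
        self.on_new_unit = on_new_unit
        self.destroy_on_capacity_decrease = destroy_on_capacity_decrease
        self.population = []

    def get_capacity(self):
        return len(self.population)

    def set_capacity(self, capacity):
        # Growing the capacity creates default-state agents...
        while len(self.population) < capacity:
            agent = self.agent_factory()
            self.population.append(agent)
            if self.on_new_unit:
                self.on_new_unit(agent)  # ...customised in "On new unit"
        # Shrinking destroys agents if the pool is configured to do so.
        if self.destroy_on_capacity_decrease:
            del self.population[capacity:]

class Room:
    def __init__(self, name="room", beds=0):
        self.name, self.beds = name, beds

pool = ResourcePool(Room, on_new_unit=lambda r: setattr(r, "beds", 20))
pool.set_capacity(pool.get_capacity() + 1)   # "manually create one resource"
assert pool.population[0].beds == 20
pool.set_capacity(pool.get_capacity() - 1)   # "destroy one resource"
assert pool.get_capacity() == 0
```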
There's a trick I use for this that works sometimes, but I wouldn't generalize it as a final solution... I only do it when I have a few different unit characteristics, which seems to be your case.
step 1: create a population of resourcePools... each resource pool will correspond to one kind of agent characteristics, and all resourcePools use the same resource (agent) type
step 2: in the on new unit of the resource pool, you will use the index of the resourcePool population to generate the unit with particular characteristics... then you can just do something like resourcePool.get(i).set_capacity(whatever) in order to generate a resource unit with the exact characteristics you want
step 3: when you seize the resource, you will use alternatives... each resourcePool from the resourcePool population will be 1 option among the alternatives to use... you will need to create a function that returns a ResourcePool[][]
step 4: you will use conditions to select the unit based on its characteristics (customize resource selection)
One option is to create a population of your resource agent, which I assume is of type Room based on your code.
Then you have a function that will add a new agent to the population and return it to the caller.
And now you only need to add this to the "New resource unit" call in the resource pool object.
Do not change the "Add units to" option, since we are already doing this in the function.
I tested this in a small model by having two buttons to increase and decrease the capacity during execution using
resourcePool.set_capacity(max(0, resourcePool.size() + 1));
and
remove_myResource(myResource.get(myResource.size()-1));
resourcePool.set_capacity(max(0, myResource.size()));
There are several related points here.
AnyLogic is designed such that you only change the agents in a resource pool via changing the capacity of the pool. This can be done directly via the pool's set_capacity function (if the pool's capacity is set to be defined "Directly"), but is more commonly done indirectly by linking the pool's capacity to a schedule (to represent shift patterns or any other pre-defined changes over time in capacity).
Resource pools will keep agents by default when the capacity decreases (i.e., they exist but are 'unavailable'), but you can set it to delete them. What you'll want will depend on what the resources and any resource-specific attributes/state represent.
If you need to address/access the agents in the resource pool independently from their use as seized/released resources you would have the pool store them in a custom population (rather than put them in the default population along with all other agents not created in a custom population) via the "Add units to" setting. Then you can access that population explicitly (e.g., loop through it) whenever you need to. Otherwise this aspect is not necessary.
If your resource pool is of custom agents with parameters (or other state that needs initialising in a particular way), the standard approach is to have direct/indirect capacity increases create the unit (with default parameter values) and then do any follow-on initialisation in the "On new unit" action (where you have access to the newly-created unit via the agent keyword).
You can alternatively do it via a dynamic expression for the "New resource unit" (as in Jaco's answer) but this doesn't particularly gain you anything (though it's equally valid). Yes, it's a bit more 'object-oriented', but there are loads of barriers AnyLogic puts up to doing things in a proper OO way anyway — as you say, the more 'obvious' OO approach would just be to have methods on the resource pool which add/remove agents. Basically, the standard 'design pattern' on blocks which create agents, like a Resource Pool or a Source block, is to let it create them with default info and then customise it in the appropriate action.
You could also use the "On startup" action of the resource pool agent as well / instead, but normally you would be using information from the process-containing agent to determine what state the new resource agent should have, which makes the "On startup" action less useful.
Finally(!), if you have an unchanging set of resource pool agents and it's easier to initialise them together (e.g., because there is some database input data for all of them that can thus just be looped through once), then just do that in your "On startup" action of the agent containing the resource pool (which runs after the resource pool has initialised and created default-state agents). That will require them in a custom population so you can loop through just those agents. But your question seemed to imply you were concerned about dynamic additions of resource agents.
But, as some of my other comments indicated, there are various subtleties/restrictions in how resource pools work (really based on them currently being designed as a pool where the explicit identity of each individual is not fully 'controllable') which mean that what you actually need may go beyond this.
A couple of examples:
If you wanted your resources to be explicit individual staff with info tracked across shifts (e.g., with their state changing to reflect experience or the history of tasks they've conducted), you have the complication that you can't control which resources (if you didn't delete them on capacity decrease) 'come back' when the capacity changes, so you may need more complex workarounds such as having separate resource pools for each distinct shift and seizing from resource sets including all such pools — only one of them (the active shift one) will have non-zero capacity at any given time unless the shifts overlap.
If capacity decreases (representing end-of-shift), you can't control / determine which resource agents are chosen to go off-shift. Plus there are issues regarding resource agents which are still active finishing a task before they actually 'leave' (given that shift-ends weren't set to pre-empt) — because they still exist until they finish, if you increase the capacity again whilst they're working AnyLogic won't create a new resource agent (it will just add this existing agent back as an 'active' agent in the pool). So that makes some other things harder...

How to properly use the LockableCurrency trait?

I am wondering how the LockableCurrency trait works. Or more specifically, what are the WithdrawReasons? Is it just a marker, or is the value specified here important for actually releasing the lock?
My use case is that I want to lock funds for transfer for a certain time and then either transfer those funds or release the lock. So should I just use WithdrawReasons::all()?
And as a side note - I thought I could use a substring(hash(AccountId)) for the lock identifier; is it a good idea to create the lock per each account this way?
If you want to only disallow transfers, then you should use a lock that only prohibits withdraw reasons transfer, aka. WithdrawReason::Transfer. Although, be aware that it is likely that a user can find a way to get around this, as they can tip the block author or pay for transaction fees with the locked funds, so if they happen to collude with a block author, they can effectively trick the system.
It is likely that what you actually want is WithdrawReasons::all().
And as a side note - I thought I could use a substring(hash(AccountId)) for the lock identifier; is it a good idea to create the lock per each account this way?
I wouldn't do that. Each lock is already linked to an account, and the API for adding and removing locks already asks for an account to operate on. So using the account hash as the key is duplicated information in my opinion (and could also have a bad performance impact). You should follow the convention within Substrate and use a unique identifier from the pallet as your lock identifier (simply the name of the pallet will do). This will make sure that the locks created by this pallet will not be accidentally removed by another pallet.

API response data best practise - nodejs

For a certain internal endpoint I am working on for a Nodejs API, I have been asked to dynamically change the value of a property status based on the value of a property visibility of the same object just before sending down the response.
So, for example, let's say I have an object that represents a user's profile. The user can have visibility LIVE or HIDDEN, while status can be IDLE, CREATING, or UPDATING.
What's been asked of me is that when I send down the object response containing those two properties, I override the status value based on the current value of visibility: if visibility is LIVE then I should set status to ACTIVE, and if visibility is HIDDEN then status should be INACTIVE (two status values that do not exist internally in the database or in the list of enums for this object). Then, in addition, if status is not IDLE I should change its value to BUSY.
So not only am I changing its value based on the value of visibility, but I'm also changing its value based on its own value not being a certain value!
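For illustration, the requested mapping could be sketched like this (Python rather than Node.js, and the precedence between the visibility rule and the BUSY rule is my assumption, since the requirement doesn't spell it out):

```python
def presented_status(visibility: str, status: str) -> str:
    """Map internal (visibility, status) to the value sent in the response."""
    # Assumed precedence: visibility overrides win first.
    # ACTIVE/INACTIVE do not exist in the internal enums.
    if visibility == "LIVE":
        return "ACTIVE"
    if visibility == "HIDDEN":
        return "INACTIVE"
    # Otherwise collapse any non-IDLE internal state to BUSY.
    if status != "IDLE":
        return "BUSY"
    return status

assert presented_status("LIVE", "CREATING") == "ACTIVE"
assert presented_status("HIDDEN", "IDLE") == "INACTIVE"
```

Note that under this ordering the BUSY branch is only reachable when visibility is neither LIVE nor HIDDEN, which is exactly the kind of ambiguity that makes the later reverse mapping painful.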
I am just wondering if this is good practice for an API in any way (apart from some weird extra layer of complexity, and so much inconsistency as the client will later ask for the same object based on status too, which means a reverse mapping)?
status doesn't mean the same thing for different users; having the same name may be confusing, but it's not a problem if well documented.
If the mapping becomes too complex, you can always persist the two values, but then you will have to keep them in sync.

Data Repository using MemoryCache

I built a homebrew data entity repository with a factory that defines retention policy by type (e.g. absolute or sliding expiration). The policy also specifies the cache type as HttpContext request, session, or application. A MemoryCache is maintained by a caching proxy in all 3 cache types. Anyhow, I have a data entity service tied to the repository which does the load and save for our primary data entity. The idea is that you use the entity repository and don't need to care whether the entity is cached or retrieved from its data source (a DB in this case).
An obvious assumption would be that you would need to synchronise the load/save events, as you would need to save the cached entity before loading the entity from its data source.
So I was investigating a data integrity issue in production today... :)
Today I read that there can be a good long gap between the entity being removed from the MemoryCache and the CacheItemRemovedCallback event firing (default 20 seconds). The simple lock I had around the load and save data ops was insufficient. Furthermore, the CacheItemRemovedCallback runs in its own context outside of HttpContext, making things interesting. It meant I needed to make the callback function static, as I was potentially assigning a disposed instance to the event.
So I realised that the possibility of a gap, whereby my data entity no longer exists in the cache but might not yet have been saved to its data source, might explain the 3 corrupt orders out of 5000. While filling out a long form it would be easy to exceed the policy's 20-minute sliding expiration on the primary data entity. That means if the user happens to submit at the moment of expiration, an interesting race condition emerges between the load (via request context) and the save (via the cache-expired callback).
With a simple lock it was a roll of the dice: would the save or the load win? Clearly we need the save to happen before the next load from the data source (DB). Ideally, when an item expires from the cache it would be atomically written to its data source. With the entity gone from the cache but the expired callback not yet fired, a load operation can slip in. In this case the entity will not be found in the cache, so the code defaults to loading from the data source. However, the save operation may not have commenced yet, so the load reads stale data, and the delayed save will likely clobber whatever is written afterwards - corrupting data integrity.
To accomplish synchronisation I need a named signalling lock, so I settled on EventWaitHandle. A named lock is created per user, of which there are fewer than 5000. This allows the load to wait on a signal from the expired event which saves the entity (whose thread exists in its own context outside HttpContext). So in the save it is easy to grab the existing named handle and signal the load to continue once the save is complete.
I also added a fallback where the wait times out, and I log every 10-second block caused by the save operation. As I said, the default is apparently 20 seconds between an entity being removed from the MemoryCache and the cache becoming aware of it and firing the event, which in turn saves the entity.
Thank you to anyone who followed my ramblings through all that. Given the nature of the sync requirements was the EventWaitHandle lock the best solution?
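The pattern described above, where the load blocks on a per-user signal until the expiry-triggered save completes, can be sketched outside .NET with Python's threading.Event standing in for the named EventWaitHandle (all names here are illustrative, and the sleep simulates the delayed write):

```python
import threading
import time

save_done = {}   # per-user signal, analogous to a named EventWaitHandle
db = {}          # stand-in for the entity's data source

def get_handle(user_id):
    return save_done.setdefault(user_id, threading.Event())

def on_cache_expired(user_id, entity):
    """Runs on the cache's callback thread, outside the request context."""
    handle = get_handle(user_id)
    time.sleep(0.05)        # simulate the gap before the save commences
    db[user_id] = entity    # persist the expired entity
    handle.set()            # signal any waiting load that the save finished

def load(user_id, timeout=10.0):
    """Request-context load: wait for the pending save before hitting the DB."""
    if not get_handle(user_id).wait(timeout):
        raise TimeoutError("save did not complete in time")
    return db[user_id]

# Entity just expired from the cache: save is pending, load must wait.
get_handle("alice").clear()
t = threading.Thread(target=on_cache_expired, args=("alice", {"order": 42}))
t.start()
entity = load("alice")      # blocks until the save signals completion
t.join()
assert entity == {"order": 42}
```

The timeout on the wait plays the role of the logging fallback mentioned above: if the save never signals, the load fails loudly instead of silently reading stale data.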
For completeness I wanted to post what I did to address the issue. I made multiple changes to the design to create a tidier solution which did not require a named sync object and allowed me to use a simple lock instead.
First, the data entity repository is a singleton which was stored in the request cache. This front end of the repository is detached from the caches themselves. I changed it to reside in the session cache instead, which becomes important below.
Second, I changed the event for the expired entity to route through the data entity repository above.
Third, I changed the MemoryCache event from RemovedCallback to UpdateCallback**.
Last, we tie it all together with a regular lock in the data entity repository, which is in the user's session, and the gap-less expiry event routing through the same repository, allowing the lock to cover both load and save (expire) operations.
** These events are funny in that A) you can't subscribe to both, and B) UpdateCallback is called before the item is removed from the cache, but it is not called when you explicitly remove the item (i.e. myCache.Remove(entity) won't fire UpdateCallback, though RemovedCallback would). We made the decision that if the item was being forcefully removed from the cache, we didn't care. This happens when the user changes company or clears their shopping list. Those scenarios won't fire the event, so the entity may never be saved to the DB's cache tables. While it might have been nice for debugging purposes, it wasn't worth dealing with the limbo state of an entity's existence to use the RemovedCallback, which had 100% coverage.

Prevent certain optionset changes in CRM via plugin

Is it possible to have a plugin intervene when someone is editing an optionset?
I would have thought crm would prevent the removal of optionset values if there are entities that refer to them, but apparently this is not the case (there are a number of orphaned fields that refer to options that no longer exist). Is there a message/entity pair that I could use to check if there are entities using the value that is to be deleted/modified and stop it if there are?
Not sure if this is possible, but you could attempt to create a plugin on the Execute method and check the input parameters in the context to determine the request type being processed. Pretty sure you'll want to look for either UpdateAttributeRequest for local option sets, or potentially UpdateOptionSetRequest for both. Then you could run additional logic to determine which values are changing and ensure the database values are correct.
The big caveat to this is that if you have even a moderate amount of data, I'm guessing you'll hit the 2-minute limit for plugin execution and it will fail.
