I have an Azure Functions app whose name is longer than 32 characters. The app has a production slot and a staging slot, so their default host IDs will collide, as explained in "HostID Truncation can cause collisions".
I would therefore like to set AzureFunctionsWebHost:hostId (or AzureFunctionsWebHost__hostId) to a unique value on each of the two slots to avoid the collision. Should this configuration value be slot-sticky or not?
As mentioned in the Azure-functions-host documentation, you can explicitly set the host ID for your app via an app setting:
AzureFunctionsWebHost__hostId (Windows and Linux)
AzureFunctionsWebHost:hostId (Windows only)
Any further restrictions on valid host IDs can be found in the HostIdValidator.
From the docs: an easy way to generate an ID is to take a GUID, remove the dashes and make it lower case, e.g. 1835D7B5-5C98-4790-815D-072CC94C6F71 => 1835d7b55c984790815d072cc94c6f71
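That recipe is easy to script. A minimal Python sketch (uuid4().hex is already 32 lowercase, dash-free hex characters, which sits exactly at the 32-character host ID limit):

import uuid

def new_host_id() -> str:
    # uuid4().hex is 32 lowercase hex characters with no dashes,
    # exactly the maximum host ID length
    return uuid.uuid4().hex

print(new_host_id())  # e.g. 1835d7b55c984790815d072cc94c6f71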
Most of this information is covered in the GitHub discussion you shared:
HostID Truncation can cause collisions GitHub#2015
The host ID value should be unique for all apps/slots you're running
Actually, this is true only when the apps/slots share the same storage account. As I understand it, this means that when all slots of your function app use the same storage account and the app name is longer than 32 characters (so the slots share an identical first 32 characters), you must define a slot-sticky hostId.
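If you would rather have a deterministic value per slot than a random GUID, a sketch along these lines (the app and slot names here are hypothetical) hashes the app and slot names down to the 32-character limit:

import hashlib

def slot_host_id(app_name: str, slot_name: str) -> str:
    # SHA-1 hex digest truncated to 32 chars: lowercase hex only,
    # distinct per slot, and stable across deployments
    return hashlib.sha1(f"{app_name}/{slot_name}".encode()).hexdigest()[:32]

print(slot_host_id("my-function-app-with-a-very-long-name", "production"))
print(slot_host_id("my-function-app-with-a-very-long-name", "staging"))

Set each resulting value as a slot setting (slot-sticky) so it stays with its slot across swaps.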
Related
I am finding that all of my Azure App Setting keys are getting truncated to 64 characters when I try to access them as environment variables in my application. I have only found one website that documents this behavior; every other website claims I should have a 10 KB limit for the key and value together, so I am very confused about this rule to begin with.
My question is: how can I work around this limitation? I use these app settings the way I thought Microsoft intended, as configuration keys, but 64 characters is nowhere near long enough. I never encountered any limitation like this with PCF.
64 character limit: 64 Character Limit Documentation
10 KB limit: 10 KB Limit Documentation
Edit:
I got a question asking why my keys would be 64 characters, so let me explain. The .NET Core configuration components let you declare configuration in JSON files and then override or augment those values using App Settings. It doesn't take many levels of nested configuration objects to go over the 64-character limit.
JSON file containing:
{
  "ParentConfiguration": {
    "NestedConfiguration": "Value1"
  }
}
This value can be overridden with an App Setting named "ParentConfiguration__NestedConfiguration".
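To see why nested keys overshoot 64 characters so quickly, here is a small Python sketch of that "__" name mapping (the key names are just the ones from the example):

def to_app_setting_name(*path: str) -> str:
    # .NET Core joins nested JSON keys with a double underscore to form
    # the App Setting / environment variable name
    return "__".join(path)

name = to_app_setting_name("ParentConfiguration", "NestedConfiguration")
print(name)       # ParentConfiguration__NestedConfiguration
print(len(name))  # 40 -- two levels of nesting already use 40 of the 64 characters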
I know that an AAD application ID is unique within one directory (tenant). It is a GUID, so it should apparently be unique in the whole world, but collisions are possible. The question is: when Azure generates an AAD application ID, does it validate that the ID is unique across all other directories?
If you look at the official documentation for the application properties, you will see that the application ID is:
The unique identifier for the application that is assigned to an
application by Azure AD. Not nullable. Read-only
How an Azure application ID is generated uniquely:
An application ID (GUID) breaks down like this:
60 bits of timestamp,
48 bits of computer identifier,
14 bits of uniquifier, and
6 fixed bits.
Total: 128 bits.
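Python's uuid module exposes exactly these fields for a time-based GUID, so a quick sketch can make the breakdown concrete:

import uuid

g = uuid.uuid1()  # a time-based (version-1) GUID
print(g)
print("timestamp (60 bits):  ", g.time)       # 100 ns ticks since 1582-10-15
print("computer id (48 bits):", hex(g.node))  # usually the MAC address
print("uniquifier (14 bits): ", g.clock_seq)
print("fixed bits: version", g.version, "+ variant", g.variant)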
The goal of this algorithm is to use the combination of time and location (“space-time coordinates” for the relativity geeks out there) as the uniqueness key.
However, there’s a possibility that, for example, two GUIDs are generated in rapid succession from the same machine, so close to each other in time that the timestamp would be the same. That’s where the uniquifier comes in.
When time appears to have stood still (if two requests for a GUID are made in rapid succession) or gone backward (if the system clock is set to a new time earlier than what it was), the uniquifier is incremented so that GUIDs generated from the “second time it was five o’clock” don’t collide with those generated “the first time it was five o’clock”.
Once you see how it all works, it's clear that you can't just throw away part of the GUID, since all the parts (well, except for the fixed ones) work together to establish uniqueness.
Note: the 48-bit computer identifier is often the machine's network (MAC) address.
Our software runs on Linux, and we need to create a mapping between a Linux device name (something like /dev/sda1) and the VolumeGUID as it appears in Windows, since we are examining Windows disks/partitions.
We get this information from the MountedDevices Windows registry subkey.
The problem occurs on Windows Server 2016, where Volume{GUID}s are no longer listed in the MountedDevices subkey.
I managed to figure out that the Volume{GUID} is no longer a random GUID (which is probably why it no longer has to be stored in the registry); instead, it is composed from data in the partition table.
In the case of GPT, the VolumeGUID is actually the GPT partition GUID, which is great, because I can easily reconstruct those VolumeGUIDs.
In the case of MBR, it is something like:
\\?\Volume{46e21ed5-0000-0000-0000-100000000000}\
\\?\Volume{46e21ed5-0000-0000-0000-104000000000}\
\\?\Volume{46e21ed5-0000-0000-0000-108000000000}\
\\?\Volume{46e21ed5-0000-0000-0000-20c000000000}\
...
Here 46e21ed5 is actually the disk signature, but I'm not sure what the other fields mean. It looks like there's a partition offset in there (0x400 = 1024, and each partition is 1024 MB in this example), but something does not add up for the last partition, which starts with 20c0.
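Here is a quick Python sketch of the pattern I think I am seeing (an assumption based purely on the values above, not on any documentation): the 4-byte disk signature fills the first GUID field, and the partition's starting byte offset, packed as eight little-endian bytes, fills the last two GUID groups:

import struct

def mbr_volume_guid(disk_signature: int, partition_offset: int) -> str:
    # The starting byte offset packed as 8 little-endian bytes: the first
    # two land in the fourth GUID group, the last six in the fifth
    off = struct.pack("<Q", partition_offset)
    return "\\\\?\\Volume{%08x-0000-0000-%s-%s}\\" % (
        disk_signature, off[:2].hex(), off[2:].hex())

for offset in (0x100000, 0x40100000, 0x80100000, 0xC0200000):
    print(mbr_volume_guid(0x46E21ED5, offset))

Under that reading, the first three partitions start at 1 MiB, 1 GiB + 1 MiB, and 2 GiB + 1 MiB, and the last one decodes to 0xC0200000 = 3 GiB + 2 MiB, i.e. an extra 1 MiB gap before it rather than a different scheme.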
Does anyone have more information on how these volume GUIDs are composed? (Google turns up nothing on the subject.)
Regards
I am using snmpwalk to walk a subtree of management values. One of the output lines reads, for example,
iso.3.6.1.2.1.25.1.5.0 = Gauge32: 10
But what does it mean? What device or function corresponds to the OID iso.3.6.1.2.1.25.1.5.0, and what does the number '10' stand for?
How can I find this out in a completely general way, for ANY OID (not just this example)?
You can use the snmptranslate command:
$ snmptranslate iso.3.6.1.2.1.25.1.5.0
HOST-RESOURCES-MIB::hrSystemNumUsers.0
Or you can do the lookup in reverse with -On:
$ snmptranslate -On HOST-RESOURCES-MIB::hrSystemNumUsers.0
.1.3.6.1.2.1.25.1.5.0
(Note that the iso. in the first look-up means the same as the .1. that the reverse translate shows)
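If you need these lookups from a script, a thin wrapper around snmptranslate will do (a Python sketch, assuming net-snmp's command-line tools are on the PATH):

import subprocess

def translate(oid: str, numeric: bool = False) -> str:
    # -On makes snmptranslate print the numeric (dotted) form,
    # as in the reverse lookup above
    cmd = ["snmptranslate"] + (["-On"] if numeric else []) + [oid]
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout.strip()

print(translate("iso.3.6.1.2.1.25.1.5.0"))
# HOST-RESOURCES-MIB::hrSystemNumUsers.0
print(translate("HOST-RESOURCES-MIB::hrSystemNumUsers.0", numeric=True))
# .1.3.6.1.2.1.25.1.5.0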
As noted in the comments, yes, you need the MIB installed in order to do these lookups; see your device vendor for the MIB file download. From what I've seen, a generic install of net-snmp already includes most of what you're looking for.
The OID iso.3.6.1.2.1.25.1.5.0 provides the number of logged-in users.
Gauge32 is the data type: a Gauge32 value can increase and decrease to track the real-world quantity it represents.
10 is the number of users currently logged in to your system.
I am implementing a VSS hardware provider for a ZFS-based iSCSI target. We have implemented AreLunsSupported, PreCommitSnapshots, CommitSnapshots, etc., and up to this point it works fine. After that, however, it fails with a VSS_E_NO_SNAPSHOTS_IMPORTED error in the LocateLuns method, and I think we are not filling in the target LUN information properly.
My questions are:
How do I find the serial number of the target LUN? Do I need to mount the newly created snapshot and then get the serial number?
Do we need to fill in the interconnect and storage identifier information as well, or can I just pass NULL for these?
Q: How do I find the serial number of the target LUN? Do I need to mount the newly created snapshot and then get the serial number?
No, you should not mount the snapshot at this point. You should use an out-of-band mechanism to directly communicate with your storage (I'm assuming your 'ZFS based iSCSI target' is coming from a NAS box), probably a REST API call, to figure out the serial number of the snapshot.
Let me elaborate some more on the serial number of the snapshot:
VSS expects the 'shadow copy' to be a concrete, real volume, similar to the primary volume (in your case an iSCSI target)
Since you are using ZFS snapshots, without dwelling too much on your exact implementation, you have two options to obtain the serial number of a concrete LUN:
a. If your storage allows exposing a ZFS snapshot directory as an iSCSI target, then create that iSCSI target and use its Page83 identifier.
b. If not, create a ZFS clone from the ZFS snapshot, expose that as an iSCSI target, and use its Page83 identifier.
Q: Do we need to fill in the interconnect and storage identifier information as well, or can I just pass NULL for these?
For all practical purposes, it usually suffices to simply copy the VDS_LUN_INFORMATION of the original source LUN and edit only the m_szSerialNumber field with that of the target LUN (assuming that the product ID, vendor ID, etc. will all remain the same).
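In pseudocode terms the idea looks like this (a Python sketch of the logic only; a real provider fills in the COM VDS_LUN_INFORMATION structure, and m_szSerialNumber is the only field name taken from it here):

import copy

def fill_in_target_lun_info(source_lun_info: dict, snapshot_serial: str) -> dict:
    # Clone the source LUN's info wholesale (vendor ID, product ID, etc.
    # stay the same) and swap in the shadow-copy LUN's serial number,
    # obtained out-of-band from the storage box
    target = copy.deepcopy(source_lun_info)
    target["m_szSerialNumber"] = snapshot_serial
    return target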
This link explains in detail what is expected of a VSS hardware provider implementation: https://msdn.microsoft.com/en-us/library/windows/desktop/aa384600(v=vs.85).aspx
Unique Page 83 Information
Both the original LUN and the newly created shadow copy LUN must have
at least one unique storage identifier in the page 83 data. At least
one STORAGE_IDENTIFIER with a type of 1, 2, 3, or 8, and an
association of 0 must be unique on the original LUN and the newly
created shadow copy LUN.
Bonus chatter (Answer ends at this point):
Now, option (b) above might raise eyebrows, since you are creating a clone ahead of time and it is not yet being used. The reason is that the above steps need to be performed in IVssHardwareSnapshotProvider::FillInLunInfo, and the same VDS_LUN_INFORMATION contents are passed later to IVssHardwareSnapshotProvider::LocateLuns (VSS is asking you to locate the LUNs that you earlier told it were the shadow copy LUNs). Hence, regardless of whether you will use the clone in the future, you must have the concrete LUN (iSCSI target) created upfront.
A silver lining: if you are sure that the VSS requestor's workflow will never mount the shadow copy, you can get away with faking some (valid) info in VDS_LUN_INFORMATION during IVssHardwareSnapshotProvider::FillInLunInfo. For this to work, you will have to create a 'transportable' shadow copy (the VSS requestor uses the VSS_CTX_FILE_SHARE_BACKUP | VSS_VOLSNAP_ATTR_TRANSPORTABLE flags). The only use case for such a shadow copy would be to perform a hardware resync on it, in which case the VSS hardware provider implements the IVssHardwareSnapshotProvider::ResyncLuns method and performs a ZFS snapshot rollback there.