How to work around Azure App Setting Key Length Limitation

I am finding that all of my Azure App Setting keys are getting truncated to 64 characters in my application when I try to access them as environment variables. I have only found one website that documents this behavior; every other site claims the key/value pair together has a 10 KB limit, so I am confused about this rule to begin with.
My question is: how can I work around this limitation? I use these app settings as configuration keys, the way I thought Microsoft intended, but 64 characters is nowhere near long enough. I never encountered any limitation like this with PCF.
64 character limit: 64 Character Limit Documentation
10 KB limit: 10 KB Limit Documentation
Edit:
I got a question asking why my keys would be 64 characters, so let me explain. The .NET Core configuration components allow you to declare configuration in JSON files and then override or augment those values using App Settings. It doesn't take too many levels of nested configuration objects to go over the 64-character limit.
JSON file containing:
{
  "ParentConfiguration": {
    "NestedConfiguration": "Value1"
  }
}
The nested value can then be overridden with an App Setting named "ParentConfiguration__NestedConfiguration" (the double underscore stands in for the ":" hierarchy separator).
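For illustration, here is a minimal sketch (not from the original post) of how the layered configuration resolves that key in .NET Core; App Settings surface as environment variables in App Service, and the double underscore maps back to the ":" separator:

// Minimal sketch: sources added later override earlier ones.
using Microsoft.Extensions.Configuration;

var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")   // contains ParentConfiguration:NestedConfiguration = "Value1"
    .AddEnvironmentVariables()         // picks up ParentConfiguration__NestedConfiguration from App Settings
    .Build();

var value = config["ParentConfiguration:NestedConfiguration"]; // overridden value if the App Setting is set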

Related

DirectX 12 Ultimate graphics sample generates a D3D12 "CBV Invalid Resource" error

Presently I'm working on updating a Windows 11 DX12 desktop app to take advantage of the technologies introduced by DX12 Ultimate (i.e. mesh shaders, VRS & DXR).
All the official samples for Ultimate compile and run on my machine (Core i9/RTX3070 laptop), so as a first step I wish to begin migrating as much static (i.e. unskinned) geometry as possible from the conventional (IA-vertex shader) rendering pipeline to the Amplification->Mesh shader pipeline.
I'm naturally using code from the official samples to facilitate this, and in the process I've encountered a very strange issue which only triggers in my app, not in the compiled sample project.
The specific problem relates to setting up meshlet instancing culling & dynamic LOD selection. When setting descriptors into the mesh shader SRV heap, my app was failing to create a CBV:
// Mesh Info Buffers
D3D12_CONSTANT_BUFFER_VIEW_DESC cbvDesc{};
cbvDesc.BufferLocation = m.MeshInfoResource->GetGPUVirtualAddress();
cbvDesc.SizeInBytes = MeshletUtils::GetAlignedSize<UINT>(sizeof(MeshInfo)); // 256 bytes which is correct
device->CreateConstantBufferView(&cbvDesc, OffsetHandle(i)); // generates error
A CBV into the descriptor range couldn't be generated because the resource's GPU address range was created with only 16 bytes:
D3D12 ERROR: ID3D12Device::CreateConstantBufferView:
pDesc->BufferLocation + SizeInBytes - 1 (0x0000000008c1f0ff) exceeds
end of the virtual address range of Resource
(0x000001BD88FE1BF0:'MeshInfoResource', GPU VA Range:
0x0000000008c1f000 - 0x0000000008c1f00f). [ STATE_CREATION ERROR
#649: CREATE_CONSTANT_BUFFER_VIEW_INVALID_RESOURCE]
What made this frustrating was that the code is identical to the official sample, yet the sample ran without issue. After many hours of trying dumb things, I finally decided to examine the size of the MeshInfo structure, and therein lay the solution.
The MeshInfo struct is defined in the sample's Model class as:
struct MeshInfo
{
    uint32_t IndexSize;
    uint32_t MeshletCount;
    uint32_t LastMeshletVertCount;
    uint32_t LastMeshletPrimCount;
};
It is 16 bytes in size, and passed to the resource's description prior to its creation:
auto meshInfoDesc = CD3DX12_RESOURCE_DESC::Buffer(sizeof(MeshInfo));
ThrowIfFailed(device->CreateCommittedResource(&defaultHeap, D3D12_HEAP_FLAG_NONE, &meshInfoDesc, D3D12_RESOURCE_STATE_COPY_DEST, nullptr, IID_PPV_ARGS(&m.MeshInfoResource)));
SetDebugObjectName(m.MeshInfoResource.Get(), L"MeshInfoResource");
But clearly I needed a 256-byte range to conform with D3D12_CONSTANT_BUFFER_DATA_PLACEMENT_ALIGNMENT, so I changed meshInfoDesc to:
auto meshInfoDesc = CD3DX12_RESOURCE_DESC::Buffer(sizeof(MeshInfo) * 16u); // 16 bytes * 16 = 256 bytes
And the error no longer occurs when the project runs.
So my question is: why isn't this GPU virtual address error also occurring in the sample?
PS: It was necessary to rename Model.h/Model.cpp to MeshletModel.h/MeshletModel.cpp for use in my project, which is based on the DirectX Tool Kit framework, where Model.h/Model.cpp files already exist for the DXTK rigid body animation effect.
The solution was explained in the question, so I will summarize it here as the answer to this post.
When creating a constant buffer view on a D3D12 resource, make sure to allocate enough memory to the resource upon creation.
This should be at least 256 bytes to satisfy D3D12_CONSTANT_BUFFER_DATA_PLACEMENT_ALIGNMENT.
I still don't know why the sample code on GitHub could run without tripping over this requirement. Without having delved into the sample's project configuration in detail, it's possible that D3D12 debug layer errors are being dealt with differently, but that's purely speculative.

Should AzureFunctionsWebHost:hostId be slot-sticky?

I have an Azure Functions app whose name is longer than 32 characters. The app has a production slot and a staging slot, so their default host IDs will collide, as explained in HostID Truncation can cause collisions.
I would hence like to set AzureFunctionsWebHost:hostId (or AzureFunctionsWebHost__hostId) to a unique value on each of my two slots to avoid this collision. Should this configuration value be slot-sticky or not?
As mentioned in the Azure-function-host documentation, you can explicitly specify the host ID for your app through the following app settings:
AzureFunctionsWebHost__hostId (Windows and Linux)
AzureFunctionsWebHost:hostId (Windows only)
Any other restrictions the value must satisfy can be checked against the HostIdValidator.
From the docs: an easy way to generate an ID is to take a GUID, remove the dashes and make it lower case, e.g. 1835D7B5-5C98-4790-815D-072CC94C6F71 => 1835d7b55c984790815d072cc94c6f71
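For example, a quick way to produce such an ID in C# (illustrative only; you would then set the result as the AzureFunctionsWebHost__hostId app setting on each slot):

// "N" format yields 32 lowercase hex characters with no dashes, which fits the host ID rules.
using System;

string hostId = Guid.NewGuid().ToString("N"); // e.g. 1835d7b55c984790815d072cc94c6f71
Console.WriteLine(hostId);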
Most of this information is covered in the GitHub discussion you shared:
HostID Truncation can cause collisions GitHub#2015
The host ID value should be unique for all apps/slots you're running
Actually this is true only when they share the same storage account. From my understanding, it means that when you use the same storage account across all slots of your function app, and the app name is longer than 32 characters (so the first 32 characters are identical between slots), you must define a slot-sticky hostId.

Modify Header server: ArangoDB

Something that seems easy, but I can't find a way to do it. Is it possible to change the header sent in a response
server: ArangoDB
to something else (in order to be less verbose and more secure)?
Also, I need to store a large string (a very long URL plus a lot of information) in a document; what is the maximum length of a joi.string?
Thx,
The internal string limit in V8 (the JavaScript engine used by ArangoDB) is around 256 MB in the V8 version that ArangoDB ships. Thus 256 MB is the absolute maximum string length that can be used from JavaScript code executed in ArangoDB.
Regarding the maximum URL length mentioned above: URLs should not get too long, because very long URLs may not be portable across browsers. I think in practice several browsers enforce URL length limits of around 64 K, so URLs should definitely not get longer than that. I would recommend using much shorter URLs and passing huge payloads in the HTTP request body instead. This also means you may need to change from HTTP GET to HTTP POST or HTTP PUT, but it's at least portable.
Finally regarding the HTTP response header "Server: ArangoDB" that is sent by ArangoDB in every HTTP response: starting with ArangoDB 2.8, there is an option to turn this off: --server.hide-product-header true. This option is not available in the stable 2.7 branch yet.
No, there currently is no configuration to disable the server: header in ArangoDB.
I would recommend putting NGiNX or a similar HTTP proxy in front of ArangoDB to achieve that (and other possible hardening for your service).
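For example, a sketch of an NGiNX location block that hides the upstream header (assuming ArangoDB listens on its default port 8529; adjust to your setup):

# Sketch only: proxy requests to ArangoDB and drop its Server header.
location / {
    proxy_pass http://127.0.0.1:8529;
    proxy_hide_header Server;   # hide the upstream "Server: ArangoDB" header
    server_tokens off;          # keep NGiNX's own Server header free of version info
}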
The implementation of the Server header can be found in lib/Rest/HttpResponse.cpp.
Regarding Joi:
I only found how to specify a string length in Joi, not what its maximum could be.
I guess the general JavaScript limit for strings should be taken into account.
However, it rather seems that you shouldn't exceed roughly 2000 characters for URLs, so that should be the practical limit.

Winnovative PDF - out of memory and other random exceptions

We have an ASP.NET 3.5 application that allows users to generate many charts and export them via PDF. This works fine for smaller PDFs (fewer than 100 pages), but when we generate larger ones we get random errors. Some of the errors we have seen are:
--System.OutOfMemoryException
--Could not render the HTML string. Could not get image from html string. Try to set LoadHtmlConcurrencyLevel = 1..
--Clone error. Out of memory..
--Timeout waiting for conversion to finish.
--System.OutOfMemoryException: Out of memory.
at System.Drawing.Image.FromStream(Stream stream, Boolean useEmbeddedColorManagement, Boolean validateImageData)
at System.Drawing.Image.FromStream(Stream stream)
If I run the big report multiple times, I usually get different exceptions. Sometimes I can get IIS to crash and have to do an iisreset to get the application back up.
Here is the code we run. We create the PDF document with the charts (PNG images), export it to a byte array and put it in a MemoryStream. We pass the memory stream to a function that rotates some of the images, etc., and then call the doc.Save method to export it.
Dim mainPageBytes() As Byte = PDF.GetBytes
Dim stream As New System.IO.MemoryStream(mainPageBytes)
Dim existingDoc As New PDFCreator.Document(stream)
Dim doc As PDFCreator.Document = GetDocument(mainPageBytes, GetChartingPageNumbers(PDF.ConversionSummary), pageOrientation, user, existingDoc)
doc.Save(response, True, Me.DocumentName)
IIS has limits on the scripts that run on it, both for memory and for runtime. Presumably your script is going over the runtime and/or memory limits. These can be set in the IIS configuration settings, but they're generally there for a reason: to prevent a single script from eating up all the memory on the server, or from running forever in an infinite loop which you would have no way of exiting short of restarting IIS.
Turn on debugging (which disables those limits) and determine how much memory your scripts are actually using when they crash by outputting queryObj("PeakWorkingSetSize") to a log file.
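If the application itself is .NET, a simple way to log peak memory from inside the app would be something like the following (an illustrative sketch, not the queryObj mechanism above; the log path is hypothetical):

// Illustrative sketch: log the current process's peak working set to a file.
using System;
using System.Diagnostics;
using System.IO;

long peakBytes = Process.GetCurrentProcess().PeakWorkingSet64;
File.AppendAllText(@"C:\logs\memory.log",
    DateTime.UtcNow + " peak working set (bytes): " + peakBytes + Environment.NewLine);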
Do you run the converter in a 64-bit process, as recommended in the product documentation? You can check the deployment requirements in our online documentation. In 32-bit mode the memory available to .NET is quite limited. In IIS you have to make sure the 'Enable 32-Bit Applications' flag is set to false.
Also, in order to reduce memory usage when converting HTML pages with many large images, you can set the ImagesScalingEnabled property to false. You can find complete sample code for this feature in the 'set images scaling and JPEG compression level' demo.
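A hedged sketch of what that configuration might look like in C#: the property names come from the error message and the answer above, but exactly where they live varies by Winnovative version, so treat the member paths below as assumptions and check the product documentation.

// Sketch under assumptions: member locations may differ in your Winnovative version.
HtmlToPdfConverter converter = new HtmlToPdfConverter();

// Reduce memory usage for pages with many large images (per the answer above).
converter.PdfDocumentOptions.ImagesScalingEnabled = false;   // assumed to sit under PdfDocumentOptions
converter.PdfDocumentOptions.JpegCompressionLevel = 10;      // optional: trade image quality for size

// Serialize HTML loading, as the "Try to set LoadHtmlConcurrencyLevel = 1" error suggests.
HtmlToPdfConverter.LoadHtmlConcurrencyLevel = 1;             // assumed static; may be per-instance in some versions

// ... then run the conversion as usual for your version of the library.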

Large file upload into WSS v3

I've built a WSSv3 application which uploads files in small chunks; as each piece of data arrives, I temporarily keep it in a SQL Server 2005 image column for performance reasons**.
The problem comes when the upload ends; I need to move the data from SQL Server to a SharePoint document library through the WSSv3 object model.
Right now, I can think of two approaches:
SPFileCollection.Add(string, (byte[])reader[0]); // OutOfMemoryException
and
SPFile file = folder.Files.Add("filename", new byte[]{ });
using (Stream stream = file.OpenBinaryStream())
{
    // ... init vars and stuff ...
    while ((bytes = reader.GetBytes(0, offset, buffer, 0, BUFFER_SIZE)) > 0)
    {
        stream.Write(buffer, 0, (int)bytes); // Timeout issues
    }
    file.SaveBinary(stream);
}
Is there any other way to complete this task successfully?
** Performance reasons: if you try to write every chunk directly to SharePoint, you'll notice performance degradation as the file grows (>100 MB).
I ended up with the following code:
myFolder.Files.Add("filename",
new DataRecordStream(dataReader,
dataReader.GetOrdinal("Content"), length));
You can find the DataRecordStream implementation here. It's basically a Stream that reads data from a DbDataRecord through .GetBytes.
This approach is similar to OpenBinaryStream()/SaveBinary(stream), but it doesn't keep the whole byte[] in memory while you transfer the data. At some point, DataRecordStream will be read by Microsoft.SharePoint.SPFile.CloneStreamToSPFileStream using 64 KB chunks.
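In case the linked implementation is not reachable, here is a minimal sketch of the idea (illustrative only, not the original DataRecordStream; it assumes the reader was opened with CommandBehavior.SequentialAccess and that the BLOB length is known up front): a read-only, forward-only Stream that pulls bytes on demand through IDataRecord.GetBytes, so the full byte[] never has to sit in memory.

using System;
using System.Data;
using System.IO;

public class DataRecordStream : Stream
{
    private readonly IDataRecord record;   // e.g. a SqlDataReader positioned on the row
    private readonly int ordinal;          // ordinal of the BLOB column
    private readonly long length;          // total BLOB length, known up front
    private long position;

    public DataRecordStream(IDataRecord record, int ordinal, long length)
    {
        this.record = record;
        this.ordinal = ordinal;
        this.length = length;
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        long remaining = length - position;
        if (remaining <= 0)
            return 0;
        int toRead = (int)Math.Min(count, remaining);
        long read = record.GetBytes(ordinal, position, buffer, offset, toRead);
        position += read;
        return (int)read;
    }

    public override bool CanRead { get { return true; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return false; } }
    public override long Length { get { return length; } }
    public override long Position
    {
        get { return position; }
        set { throw new NotSupportedException(); }
    }
    public override void Flush() { }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
    public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
}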
Thank you all for the valuable info!
The first thing I would say is that SharePoint is really, really not designed for this. It stores all files in its own database so that's where these large files are going. This is not a good idea for lots of reasons: scalability, cost, backup/restore, performance, etc... So I strongly recommend using file shares instead.
You can increase the timeout of the web request by changing the executionTimeout attribute of the httpRuntime element in web.config.
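For example (a sketch; the value is in seconds and purely illustrative):

<!-- web.config: raise the request execution timeout for long-running large uploads. -->
<!-- maxRequestLength (in KB) is often raised alongside it for large uploads. -->
<system.web>
  <httpRuntime executionTimeout="3600" />
</system.web>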
Apart from that, I'm not sure what else to suggest. I haven't heard of such large files being stored in SharePoint. If you absolutely must do this, try also asking on Server Fault.
As mentioned previously, storing large files in Sharepoint is generally a bad idea. See this article for more information: http://blogs.msdn.com/joelo/archive/2007/11/08/what-not-to-store-in-sharepoint.aspx
With that said, it is possible to use external storage for BLOBs, which may or may not help your performance issues -- Microsoft released a half-complete external BLOB storage provider that does the trick, but it unfortunately works at the farm level and affects all uploads. Ick.
Fortunately, since you can implement your own external BLOB provider, you may be able to write something to better handle these specific files. See this article for details: http://207.46.16.252/en-us/magazine/2009.06.insidesharepoint.aspx
Whether or not this would be worth the overhead depends on how much of a problem you're having. :)
