SubSonic - Intermittent "Offset and length were out of bounds for the array" error

I inherited a website which uses SubSonic 2.0 and gets an intermittent "Offset and length were out of bounds for the array" error. If we restart the app or recycle the app pool, the issue goes away. Based on the error log below, I suspect it has something to do with SubSonic caching the table schema. Has anyone experienced this issue and can you suggest a fix?
System.ArgumentException
Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection.
System.Exception: Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.ArgumentException: Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection.
at System.Array.BinarySearch[T](T[] array, Int32 index, Int32 length, T value, IComparer`1 comparer)
at System.Collections.Generic.SortedList`2.IndexOfKey(TKey key)
at System.Collections.Generic.SortedList`2.ContainsKey(TKey key)
at SubSonic.DataService.GetSchema(String tableName, String providerName, TableType tableType)
at SubSonic.DataService.GetTableSchema(String tableName, String providerName)
at SubSonic.Query..ctor(String tableName)
at G05.ProductController.GetProductByColorName(Int32 productId, String colorName) in C:\Projects\G05\Code\BusinessLogic\ProductController.vb:line 514
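For what it's worth, the frames above show the failure inside a SortedList lookup in SubSonic's schema cache, and SortedList<TKey, TValue> is not thread-safe, so two concurrent requests reading and writing that cache could corrupt its internal arrays and throw exactly this kind of intermittent ArgumentException. A minimal sketch of that failure mode (independent of SubSonic; the class, loop and key names are made up):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class SortedListRaceDemo
{
    static void Main()
    {
        // Unsynchronized cache shared across simulated "requests",
        // like the schema cache in the stack trace above.
        var cache = new SortedList<string, string>();

        Parallel.For(0, 100000, i =>
        {
            var key = "Table" + (i % 50);
            if (!cache.ContainsKey(key))   // BinarySearch over the list's internal arrays
                cache[key] = "schema";     // a concurrent insert can resize those arrays mid-read
        });

        Console.WriteLine("No corruption on this run; repeated runs often throw from ContainsKey.");
    }
}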

Strange that it's intermittent. How are the objects being generated? Is it using the .abp file? If so, I'd recommend running the files through SubCommander to hard-generate the classes. That way the object generation is never executed in the production environment.

Node.js (Infinispan): Does the Infinispan put method return null for a key inserted in the cache for the first time?

I have been reviewing the Infinispan documentation: the overloaded put method returns the value being replaced, or null if nothing is being replaced.
I am using the overloaded put method from Node.js and it's not returning the expected data; I'm getting undefined.
How can I achieve this with Node.js?
I've looked at the documentation, but I need help understanding the behavior with Node.js.
Documentation Link : https://docs.jboss.org/infinispan/9.2/apidocs/org/infinispan/commons/api/BasicCache.html#put-K-V-
V put(K key,
V value,
long lifespan,
TimeUnit unit)
An overloaded form of put(Object, Object), which takes in lifespan parameters.
Parameters:
key - key to use
value - value to store
lifespan - lifespan of the entry. Negative values are interpreted as unlimited lifespan.
unit - unit of measurement for the lifespan
Returns:
the value being replaced, or null if nothing is being replaced.
From https://github.com/infinispan/js-client/blob/main/lib/infinispan.js#L327 it looks like put's third argument, opts, can have a previous property that makes it return the old value (note that put returns a promise, so await it), so try:
const oldValue = await client.put('key', 'value', { previous: true });
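A slightly fuller sketch of how this fits together (assumes an Infinispan server with a Hot Rod endpoint reachable on localhost:11222; adjust host, port and keys to your setup):

const infinispan = require('infinispan');

async function main() {
  // infinispan.client() also returns a promise, resolving to a connected client.
  const client = await infinispan.client({ port: 11222, host: '127.0.0.1' });

  // First insert: nothing is replaced, so the previous value is undefined/null.
  const first = await client.put('key', 'value-1', { previous: true });
  console.log(first);

  // Second put on the same key: resolves to the value that was replaced ('value-1').
  const second = await client.put('key', 'value-2', { previous: true });
  console.log(second);

  await client.disconnect();
}

main().catch(console.error);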

Azure Cognitive Search int to Edm.String field issues

I'm having trouble trying to add data to my Azure Cognitive Search index. The data is being read from SQL Server tables with a Python script, which sends it to the index using the SearchIndexClient from the Azure Search SDK.
The problem is when sending Python int values into a search index field of type Edm.String. The link below seems to indicate that this should be possible: any number type is allowed to go into an Edm.String.
https://learn.microsoft.com/en-us/rest/api/searchservice/data-type-map-for-indexers-in-azure-search#bkmk_sql_search
However I get this error:
Cannot convert the literal '0' to the expected type 'Edm.String'.
Am I misunderstanding the docs? Is the Python int different from the SQL Server int when going through the Azure Search SDK?
I'm using pyodbc to connect to an Azure Synapse database and retrieving the rows with a cursor loop. This is basically what I'm doing:
search_client = SearchIndexClient(env.search_endpoint,
                                  env.search_index,
                                  SearchApiKeyCredential(env.search_api_key),
                                  logging_enable=True)

conn = pyodbc.connect(env.sqlconnstr_synapse_connstr, autocommit=True)
query = f"SELECT * FROM [{env.source_schema}].[{source_table}]"
cursor = conn.cursor()
cursor.execute(query)

source_table_columns = [source_table_column[0] for source_table_column in cursor.description]

rows = []
for source_table_values in cursor.fetchmany(MAX_ROWS_TO_FETCH):
    source_table_row = dict(zip(source_table_columns, source_table_values))
    rows.append(source_table_row)

upload = search_client.upload_documents(documents=rows)
If a row contains an int value and the corresponding search index field is Edm.String, we get the error:
Cannot convert the literal '0' to the expected type 'Edm.String'.
Thank you for providing the code snippet. The data type mapping link you referenced applies when using an Indexer to populate an index.
Indexers provide a convenient mechanism to load documents into an index from a source data source. They perform the mapping outlined there by default, or can take an optional fieldMappings.
In your code snippet the index is being updated manually, so a type mismatch between source and target has to be handled by you, by casting or converting the values. After you have built the dictionary, you can convert the int into a string using str() before uploading the batch to the index:
source_table_row[column_name] = str(source_table_row[column_name])
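For example, a minimal sketch of that conversion applied to the loop from the question (STRING_FIELDS is a made-up set; take the real list of Edm.String fields from your index definition):

# Hypothetical: names of index fields declared as Edm.String.
STRING_FIELDS = {"product_code", "color_id"}

rows = []
for source_table_values in cursor.fetchmany(MAX_ROWS_TO_FETCH):
    source_table_row = dict(zip(source_table_columns, source_table_values))
    # Cast any non-string value destined for an Edm.String field before upload.
    for column_name in STRING_FIELDS & source_table_row.keys():
        value = source_table_row[column_name]
        if value is not None and not isinstance(value, str):
            source_table_row[column_name] = str(value)
    rows.append(source_table_row)

upload = search_client.upload_documents(documents=rows)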
There is also a Python sample that creates an indexer to update an index.

DBUnit with HSQLDB: String column too short

I have an entity with the following attribute
@Lob
@NotNull
private String myContent;
Now, in my production setup I use a CLOB for the database representation, since the content can be several thousand characters. However, for unit tests an in-memory HSQLDB is used. During the unit test I get this error:
Caused by: org.hsqldb.HsqlException: data exception: string data, right truncation
at org.hsqldb.error.Error.error(Unknown Source)
As far as my research revealed, the reason seems to be that DBUnit automatically creates a 255-character column for the string, and in my case that is not long enough for the content I insert. So, what can I do about this?
Try something like this:
@Column(columnDefinition = "VARCHAR", length = 65535)
@Lob
@NotNull
private String myContent;
That should cause a larger column to be created.

Azure StorageException when using emulated storage (within documented constraints)

Our application performs several batches of TableBatchOperation. We ensure that each of these table batch operations has
100 or fewer table operations
table operations for one entity partition key only
Along the lines of the following:
foreach (var batch in batches)
{
    var operation = new TableBatchOperation();
    operation.AddRange(batch.Select(x => TableOperation.InsertOrReplace(x)));
    await table.ExecuteBatchAsync(operation);
}
When we use emulated storage we're hitting a Microsoft.WindowsAzure.Storage.StorageException - "Element 99 in the batch returned an unexpected response code."
When we use production Azure, everything works fine.
Emulated storage is configured as follows:
<add key="StorageConnectionString" value="UseDevelopmentStorage=true;" />
I'm concerned that although everything is working OK in production (where we use real Azure), the fact that it's blowing up with emulated storage may be symptomatic of us doing something we shouldn't be.
I've run it with a debugger (before it blows up) and verified that (as per API):
The entire operation is only 492,093 characters when serialized to JSON (984,186 bytes as UTF-16)
There are exactly 100 operations
All entities have the same partition key
See https://learn.microsoft.com/en-us/dotnet/api/microsoft.windowsazure.storage.table.tablebatchoperation?view=azurestorage-8.1.3
EDIT:
It looks like one of the items (#71/100) is causing this to fail. Structurally it is no different from the other items; however, it does have some rather long string properties, so perhaps there is an undocumented limitation or bug?
EDIT:
The following sequence of Unicode UTF-16 bytes (on a string property) is sufficient to cause the exception:
r e n U+0019 space
114 0 101 0 110 0 25 0 115 0 32 0
(it's the bytes 25 0 115 0, i.e. the Unicode END OF MEDIUM character U+0019 followed by 's', that cause the exception).
EDIT:
Complete example of failing entity:
JSON:
{"SomeProperty":"ren\u0019s ","PartitionKey":"SomePartitionKey","RowKey":"SomeRowKey","Timestamp":"0001-01-01T00:00:00+00:00","ETag":null}
Entity class:
public class TestEntity : TableEntity
{
    public string SomeProperty { get; set; }
}
Entity object construction:
var entity = new TestEntity
{
    SomeProperty = Encoding.Unicode.GetString(new byte[]
        { 114, 0, 101, 0, 110, 0, 25, 0, 115, 0, 32, 0 }),
    PartitionKey = "SomePartitionKey",
    RowKey = "SomeRowKey"
};
Based on your description, I can also reproduce the issue you mentioned. After testing, I found that the special Unicode character 'END OF MEDIUM' (U+0019) does not seem to be supported by the Azure Storage Emulator. If replacing it is possible, please try using a different character instead.
We could also give this feedback to the Azure Storage team.
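If dropping the character is acceptable, a minimal sketch of that workaround (it reuses the TestEntity class and byte sequence from the question; stripping all control characters is an assumption, you may prefer to replace them with something else):

using System.Linq;
using System.Text;

// Remove C0 control characters such as U+0019 (END OF MEDIUM) from a property value.
static string StripControlChars(string value) =>
    string.IsNullOrEmpty(value)
        ? value
        : new string(value.Where(c => !char.IsControl(c)).ToArray());

var entity = new TestEntity
{
    SomeProperty = StripControlChars(Encoding.Unicode.GetString(new byte[]
        { 114, 0, 101, 0, 110, 0, 25, 0, 115, 0, 32, 0 })),
    PartitionKey = "SomePartitionKey",
    RowKey = "SomeRowKey"
};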

How to check if two JNI Arrays point to the same memory location?

I have two arrays:
auto inputArray = reinterpret_cast<jbyteArray>(mainEnv->NewGlobalRef(imageDataArray));
auto output = reinterpret_cast<jfloatArray>(mainEnv->NewGlobalRef(data));
When I try:
auto input = env->GetByteArrayElements(inputArray, nullptr);
I'm getting this error:
"JNI DETECTED ERROR IN APPLICATION: attempt to get byte primitive array elements with an object of type float[]"
My guess is that "inputArray" (the byte array) points to the same memory location as "output" (the float array).
How can I check that?
You can tell if two object references point to the same object with the JNI IsSameObject function.
The error message is telling you that you're calling GetByteArrayElements on a float[]. Getting the array object's class (with GetObjectClass) would let you query the class of the object at the point it's passed to native code, so you can confirm that the arrays have the types you expect. From there you can narrow your focus and figure out where things are going wrong.
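A minimal sketch of both checks (it assumes Android, since the error text comes from ART; env, inputArray and output are the variables from the question, and the "ArrayCheck" log tag is made up):

#include <jni.h>
#include <android/log.h>

void checkArrays(JNIEnv* env, jbyteArray inputArray, jfloatArray output) {
    // 1. Do the two references point at the same Java object?
    jboolean same = env->IsSameObject(inputArray, output);
    __android_log_print(ANDROID_LOG_DEBUG, "ArrayCheck",
                        "same object: %s", same == JNI_TRUE ? "yes" : "no");

    // 2. What is inputArray's runtime class? A byte[] prints "[B", a float[] prints "[F".
    jclass cls = env->GetObjectClass(inputArray);
    jclass classClass = env->FindClass("java/lang/Class");
    jmethodID getName = env->GetMethodID(classClass, "getName", "()Ljava/lang/String;");
    auto name = (jstring) env->CallObjectMethod(cls, getName);
    const char* chars = env->GetStringUTFChars(name, nullptr);
    __android_log_print(ANDROID_LOG_DEBUG, "ArrayCheck", "inputArray class: %s", chars);
    env->ReleaseStringUTFChars(name, chars);
}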
