Long JSON gets trimmed when logging to Azure Application Insights - azure

My goal is to log user requests with Azure Application Insights; the requests are converted into JSON format and then saved.
Sometimes a user request can be very long, and it gets trimmed in the Azure Application Insights view, which results in invalid JSON.
Under customDimensions the value appears truncated.
I'm using the TelemetryClient class from the Microsoft.ApplicationInsights namespace.
This is my code:
var properties = new Dictionary<string, string>
{
    { "RequestJSON", requestJSON }
};
TelemetryClientInstance.TrackTrace("some description", SeverityLevel.Verbose, properties);
I'm referring to this overload:
public void TrackTrace(string message, SeverityLevel severityLevel, IDictionary<string, string> properties);

As per Trace telemetry: Application Insights data model, the max value length for custom properties is 8,192 characters.
In your case, the value exceeds that limit.
I can think of 2 solutions:
1. Write the requestJSON into the message field when calling TrackTrace. The max trace message length is 32,768 characters, which may meet your need.
2. Split the requestJSON across multiple custom properties when its length exceeds 8,192. For example, if requestJSON is 2 * 8192 characters long, add two custom properties: RequestJSON_1 stores the first 8,192 characters and RequestJSON_2 stores the remaining 8,192 (see the sketch below).
With solution 2, you can easily use a Kusto query to join RequestJSON_1 and RequestJSON_2 back together, giving you the complete, valid JSON.
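For reference, a minimal sketch of solution 2, reusing the TelemetryClientInstance and requestJSON variables from the question (the chunking loop and the RequestJSON_n property names are just illustrative):

// Split requestJSON into chunks of at most 8192 characters, one custom property per chunk.
const int MaxPropertyLength = 8192;
var properties = new Dictionary<string, string>();
for (int i = 0; i * MaxPropertyLength < requestJSON.Length; i++)
{
    int start = i * MaxPropertyLength;
    int length = Math.Min(MaxPropertyLength, requestJSON.Length - start);
    properties[$"RequestJSON_{i + 1}"] = requestJSON.Substring(start, length);
}
TelemetryClientInstance.TrackTrace("some description", SeverityLevel.Verbose, properties);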

Related

Having REST API respect the segmented key for a field

I want to do a GET request against the Default/18.200.001/StockItem endpoint (we are about to upgrade, so I would love to know if this is different in newer versions).
Items have an Inventory ID with the following segmented key: AAA-AAA.##.##. When I do a GET, the field is returned as AAAAAA####. Is there any way to get the API to respect the designated segmented key when querying data?
As commented on the original ticket, the web service will only ever return the raw value, without respecting the segments. I implemented a cache extension on InventoryItem that manually calculates the segmented value I want and returned it through the web service API definition:
#region UsrLSInventoryIDWithKey
[PXString]
[PXUIField(DisplayName = "DisplayName")]
public string UsrLSInventoryIDWithKey
{
    [PXDependsOnFields(typeof(InventoryItem.inventoryCD))]
    get
    {
        // Short values are returned as-is; longer ones get the segment separator re-inserted for display.
        if ((Base.InventoryCD?.Length ?? 0) < 11) return Base.InventoryCD;
        return Base.InventoryCD?.Substring(0, 6) + "-" + Base.InventoryCD?.Substring(5, 5);
    }
}
#endregion

Pagination of the result of a JdbcPagingItemReader limited to the first page

I'm missing some details on how to paginate a SQL SELECT (of almost 100,000 records) in Spring Batch.
My batch job has no parallelism, no partitioning, and no remote chunking.
It only executes one query, processes every record, and writes the result to a CSV file.
There isn't any custom ItemReader or InputStream class.
In my BatchConfig class I have my input bean that prepares the JdbcPagingItemReader:
@StepScope
@Bean(name = "myinput")
public JdbcPagingItemReader<MyDTO> input(DataSource dataSource, PagingQueryProvider queryProvider /* , other job params */) { ... }
Inside it, I call a method on another object that sets up the JdbcPagingItemReader to return:
public JdbcPagingItemReader<MyDTO> myMethod(/** various params: dataSource, size of the pagination, queryProvider **/) {
    JdbcPagingItemReader<MyDTO> databaseReader = new JdbcPagingItemReader<MyDTO>();
    databaseReader.setDataSource(dataSource);
    databaseReader.setPageSize(Integer.parseInt(size));
    Map<String, Object> params = new HashMap<String, Object>();
    // my job params are put into params
    databaseReader.setParameterValues(params);
    databaseReader.setRowMapper(new MyMapper());
    databaseReader.setQueryProvider(queryProvider);
    return databaseReader;
}
Another class declares the queryProvider
public SqlPagingQueryProviderFactoryBean queryProvider(DataSource dataSource) {
    SqlPagingQueryProviderFactoryBean queryProvider = new SqlPagingQueryProviderFactoryBean();
    queryProvider.setDataSource(dataSource);
    queryProvider.setSelectClause(select().toString());
    queryProvider.setFromClause(from().toString());
    queryProvider.setWhereClause(where().toString());
    queryProvider.setSortKeys(this.sortBy()); // I declare only 1 field, in descending order
    return queryProvider;
}
At this point, I have 2 questions:
1. I verified that with the same pageSize, changing the sort field changes the number of records in the final CSV file. I read that the sort field has to be a primary key, but my SELECT is against a view, not a physical table: is a primary key in sortBy() mandatory in this case?
2. I verified that databaseReader.setPageSize() limits the number of records read by my SELECT, but I expected pagination to read all the data. Right now the batch reads only the first page of results and doesn't move forward.
My idea is to use partitioning, but that seems a bit over-engineered, and I suspect I'm overlooking something in my code: do you have any suggestions, please?
I read this question (Spring Batch: JdbcPagingItemReader pagination) and the solution from @Mahmoud Ben Hassine, but unfortunately I can't test it in my environment because I lack a critical mass of data in the DB.

How to extend the core customer table?

I created a custom table with additional settings for customers. Next I added a field to the customer core table in which I'd like to store the chosen id per customer. I extended the CustomerDefinition with an EntityExtensionInterface:
public function extendFields(FieldCollection $collection): void
{
    $collection->add(
        (new OneToOneAssociationField(
            'customerSetting',
            'customer_setting',
            'id',
            WdtCustomerSettingsDefinition::class,
            true
        ))->addFlags(new Inherited())
    );
}

public function getDefinitionClass(): string
{
    return CustomerDefinition::class;
}
When I manually manipulate the customer table, with an id from my custom table in the added field, I can retrieve and use the settings from my custom table in the storefront.
For the backend I created a single select bound to the custom entity:
<sw-entity-single-select entity="wdt_customer_settings" v-model="customer.extensions.customerSetting.id" >
</sw-entity-single-select>
With the manually 'injected' id from the custom table, this choice is indeed shown as selected. However, changing to another choice and saving results in an error: Customer could not be saved.
What am I missing?
You should always look at the AJAX responses; they contain the explicit error that occurred. Did you add some boilerplate code to check that your extension is always available? Otherwise it would cause issues on new entities.

OMS log search does not display all the columns present in the WADETWEventTable Azure diagnostics table

I have a custom event source with special properties like message, componentName, and priority.
(The custom ETW event source properties get converted into columns of the Azure WADETWEventTable table.)
My idea is to view the logs stored in Azure tables using Microsoft Operations Management Suite (OMS). I can see the logs, but not all of the columns are displayed.
OMS does not display these columns. I am using the code/configuration below:
[EventSource(Name = "CustomEtw.OperationTrace")]
public sealed class CustomEventSource : EventSource
{
    public static CustomEventSource log = new CustomEventSource();

    #region [Custom Event Source]
    [Event(1, Level = EventLevel.Informational)]
    public void Info(string message, string componentName, bool priority)
    {
        WriteEvent(1, message, componentName, priority);
    }

    [Event(2, Level = EventLevel.Warning)]
    public void Warning(string warningData)
    {
        WriteEvent(2, warningData);
    }
    #endregion
}
The custom event source above logs data onto the ETW stream, and the same data is visible in the Azure diagnostics table, i.e. WADETWEventTable. This Azure table has data in the message, componentName and priority columns as well, but OMS doesn't display these columns when we search through log search.
Please help; am I missing any configuration that needs to be done on the OMS side?
Why does OMS display only a few columns?

Data tracking in DocumentDB

I am trying to keep a history of data (at least one step back) in DocumentDB.
For example, I have a property called Name in a document, with the value "Pieter". Now I change it to "Sam"; I have to maintain the history that it was "Pieter" previously.
As of now I am thinking of a pre-trigger. Any other solutions?
Cosmos DB (formerly DocumentDB) now offers change tracking via Change Feed. With Change Feed, you can listen for changes on a particular collection, ordered by the time they were modified within a partition.
Change feed is accessible via:
Azure Functions
DocumentDB (SQL) SDK
Change Feed Processor Library
For example, here's a snippet from the Change Feed documentation, on reading from the Change Feed, for a given partition (full code example in the doc here):
IDocumentQuery<Document> query = client.CreateDocumentChangeFeedQuery(
    collectionUri,
    new ChangeFeedOptions
    {
        PartitionKeyRangeId = pkRange.Id,
        StartFromBeginning = true,
        RequestContinuation = continuation,
        MaxItemCount = -1,
        // Set reading time: only show change feed results modified since StartTime
        StartTime = DateTime.Now - TimeSpan.FromSeconds(30)
    });

while (query.HasMoreResults)
{
    FeedResponse<dynamic> readChangesResponse = query.ExecuteNextAsync<dynamic>().Result;
    foreach (dynamic changedDocument in readChangesResponse)
    {
        Console.WriteLine("document: {0}", changedDocument);
    }
    checkpoints[pkRange.Id] = readChangesResponse.ResponseContinuation;
}
If you're trying to make an audit log, I'd suggest looking into Event Sourcing. Building your domain from events ensures a correct log. See https://msdn.microsoft.com/en-us/library/dn589792.aspx and http://www.martinfowler.com/eaaDev/EventSourcing.html
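To illustrate that suggestion, here is a minimal, hypothetical sketch (the NameChanged type and the Replay helper are made up for illustration and are not part of any DocumentDB API): each change is stored as its own event, and replaying the events yields both the current and the previous value.

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical event type: one event per change instead of overwriting Name in place.
public class NameChanged
{
    public string CustomerId { get; set; }
    public string Name { get; set; }
    public DateTime ChangedAt { get; set; }
}

public static class NameHistory
{
    // Replaying the events (oldest first) yields the current value,
    // and the previous value is simply the one before the last event.
    public static (string Current, string Previous) Replay(IEnumerable<NameChanged> events)
    {
        string current = null, previous = null;
        foreach (var evt in events.OrderBy(e => e.ChangedAt))
        {
            previous = current;
            current = evt.Name;
        }
        return (current, previous);
    }
}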
