I am using log4net with the AdoNetAppender, and my logs look like the standard examples: each record gets an Id from an identity column in the database. Pretty standard stuff.
Now I want to get the id after I log an event, so that I can pass it on to the user of my app in case of an error.
The following always gives me null for both id1 and id2.
var id1 = LogicalThreadContext.Properties["Id"];
var id2 = GlobalContext.Properties["Id"];
So how do I get the id of a logged event?
I suspect this is not possible using the built-in AdoNetAppender. The instructions at https://logging.apache.org/log4net/release/config-examples.html#MS%20SQL%20Server describe setting ID as an identity column.
That means the ID is not known until the record is actually written to the database, and with buffering that could be long after you logged the event.
You could create your own appender, but it might be easier just to add an extra column for your own reference and maybe set this to a new Guid before logging.
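A minimal sketch of that idea, assuming a custom column and property name LogId (both names are illustrative):

// Stamp a correlation id into the log context before logging, so the
// same value is available both to your code and to the appender.
var logId = Guid.NewGuid();
log4net.LogicalThreadContext.Properties["LogId"] = logId;
log.Error("Something went wrong");
// Show logId to the user. The AdoNetAppender can persist it through an
// extra column whose parameter layout reads the LogId property
// (e.g. RawPropertyLayout, or %property{LogId} with a PatternLayout).

Since your own code sets the property, reading LogicalThreadContext.Properties["LogId"] afterwards (as the code in the question attempts) will no longer return null.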
I've been working on a Python 3 script that is given an Entity Id as a command line argument. I need to create a query, or find some other way, to retrieve the entire entity based on this id.
Here are some things I've tried (self.entityId is the id provided on the command line):
entityKey = self.datastore_client.key('Asdf', self.entityId, namespace='Asdf')
query = self.datastore_client.query(namespace='asdf', kind='Asdf')
query.key_filter(entityKey)
query_iter = query.fetch()
for entity in query_iter:
    print(entity)
Instead of query.key_filter(), I have also tried:
query.add_filter('id', '=', self.entityId)
query.add_filter('__key__', '=', entityKey)
query.add_filter('key', '=', entityKey)
So far, none of these have worked. However, a generic non-filtered query does return all the Entities in the specified namespace. I have been consulting the documentation at: https://googleapis.dev/python/datastore/latest/queries.html and other similar pages of the same documentation.
A simpler approach is to fetch the entity directly, i.e.:
    self.datastore_client.get(self.datastore_client.key('Asdf', self.entityId, namespace='asdf'))
However, given that you are casting both entity.key.id and self.entityId, you'll want to check your data to see whether you are using key names or key ids. Alternatives to the above are:
If you are using key ids but self.entityId is a string:
    self.datastore_client.get(self.datastore_client.key('Asdf', int(self.entityId), namespace='asdf'))
If you are using key names but self.entityId is an int:
    self.datastore_client.get(self.datastore_client.key('Asdf', str(self.entityId), namespace='asdf'))
I've fixed this problem myself. Because I could not get any filter approach to work, I ended up querying for all Entities in the namespace and then doing a conditional check on entity.key.id, comparing it to the id passed on the command line.
query = self.datastore_client.query(namespace='asdf', kind='Asdf')
query_iter = query.fetch()
for entity in query_iter:
    if int(entity.key.id) == int(self.entityId):
        # do some stuff with the entity data
It is actually very easy to do, although not so clear from the docs.
Here's the working example:
>>> key = client.key('EntityKind', 1234)
>>> client.get(key)
<Entity('EntityKind', 1234) {'property': 'value'}>
I want to create a DataStore through ssoadm.jsp, because I use the endpoint URL to automate the configuration process.
[localhost]/ssoadm.jsp?cmd=create-datastore
I put:
domain name (previously created with default configuration): myDomain
data store name: myDataStore
type of DataStore: LDAPv3
Attribute values: LDAPv3=org.forgerock.openam.idrepo.ldap.DJLDAPv3Repo
Then I got something like: Attribute name "LDAPv3" doesn't match with service schema. What am I supposed to put in the "Attribute values" field? An example is given:
"sunIdRepoClass=com.sun.identity.idm.plugins.files.FilesRepo"
PS: I don't want to create the data store from [Localhost]/realm/IDRepoSelectType because there is a jato.pageSession that I can't automatically get.
PS2: It is my first time asking a question on Stack Overflow; sorry if my question didn't meet expectations. I tried my best.
ssoadm.jsp?cmd=list-datastore-types
shows the list of user data store types
Every user data store type has specific attributes to be set. Unfortunately, those are not explicitly documented. The service attributes are defined in the related service definition XML template, which is loaded (after potential tag swapping) into the OpenAM configuration data store during initial configuration. For the user data stores, you can find them in OPENAM_CONFIGURATION_DIRECTORY/template/xml/idRepoService.xml
E.g., for the user data store type LDAPv3, the following service attributes are defined:
sunIdRepoClass
sunIdRepoAttributeMapping
sunIdRepoSupportedOperations
sun-idrepo-ldapv3-ldapv3Generic
sun-idrepo-ldapv3-config-ldap-server
sun-idrepo-ldapv3-config-authid
sun-idrepo-ldapv3-config-authpw
openam-idrepo-ldapv3-heartbeat-interval
openam-idrepo-ldapv3-heartbeat-timeunit
sun-idrepo-ldapv3-config-organization_name
sun-idrepo-ldapv3-config-connection-mode
sun-idrepo-ldapv3-config-connection_pool_min_size
sun-idrepo-ldapv3-config-connection_pool_max_size
sun-idrepo-ldapv3-config-max-result
sun-idrepo-ldapv3-config-time-limit
sun-idrepo-ldapv3-config-search-scope
sun-idrepo-ldapv3-config-users-search-attribute
sun-idrepo-ldapv3-config-users-search-filter
sun-idrepo-ldapv3-config-user-objectclass
sun-idrepo-ldapv3-config-user-attributes
sun-idrepo-ldapv3-config-createuser-attr-mapping
sun-idrepo-ldapv3-config-isactive
sun-idrepo-ldapv3-config-active
sun-idrepo-ldapv3-config-inactive
sun-idrepo-ldapv3-config-groups-search-attribute
sun-idrepo-ldapv3-config-groups-search-filter
sun-idrepo-ldapv3-config-group-container-name
sun-idrepo-ldapv3-config-group-container-value
sun-idrepo-ldapv3-config-group-objectclass
sun-idrepo-ldapv3-config-group-attributes
sun-idrepo-ldapv3-config-memberof
sun-idrepo-ldapv3-config-uniquemember
sun-idrepo-ldapv3-config-memberurl
sun-idrepo-ldapv3-config-dftgroupmember
sun-idrepo-ldapv3-config-roles-search-attribute
sun-idrepo-ldapv3-config-roles-search-filter
sun-idrepo-ldapv3-config-role-search-scope
sun-idrepo-ldapv3-config-role-objectclass
sun-idrepo-ldapv3-config-filterrole-objectclass
sun-idrepo-ldapv3-config-filterrole-attributes
sun-idrepo-ldapv3-config-nsrole
sun-idrepo-ldapv3-config-nsroledn
sun-idrepo-ldapv3-config-nsrolefilter
sun-idrepo-ldapv3-config-people-container-name
sun-idrepo-ldapv3-config-people-container-value
sun-idrepo-ldapv3-config-auth-naming-attr
sun-idrepo-ldapv3-config-psearchbase
sun-idrepo-ldapv3-config-psearch-filter
sun-idrepo-ldapv3-config-psearch-scope
com.iplanet.am.ldap.connection.delay.between.retries
sun-idrepo-ldapv3-config-service-attributes
sun-idrepo-ldapv3-dncache-enabled
sun-idrepo-ldapv3-dncache-size
openam-idrepo-ldapv3-behera-support-enabled
It might be best to create a user data store instance via the console and then use ssoadm.jsp?cmd=show-datastore to list the properties. You would get a long list of attributes ... too much to show here.
When you create the data store, make sure you specify the password for the bind DN using property
sun-idrepo-ldapv3-config-authpw=PASSWORD
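Putting it together, an illustrative (not authoritative) set of attribute values for the create-datastore call might look like this; the server host, bind DN, and base DN are placeholders you must adapt to your environment:

sunIdRepoClass=org.forgerock.openam.idrepo.ldap.DJLDAPv3Repo
sun-idrepo-ldapv3-config-ldap-server=ldap.example.com:389
sun-idrepo-ldapv3-config-authid=cn=Directory Manager
sun-idrepo-ldapv3-config-authpw=PASSWORD
sun-idrepo-ldapv3-config-organization_name=dc=example,dc=com

Note that the attribute name is sunIdRepoClass; the error in the question comes from using LDAPv3 as the attribute name instead of as its value's type.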
I'm using Azure App Insights as a logging tool and store log data with the following code:
private void SendTrace(LoggingEvent loggingEvent)
{
    loggingEvent.GetProperties();
    string message = "TestMessage";
    var trace = new TraceTelemetry(message)
    {
        SeverityLevel = SeverityLevel.Information
    };
    trace.Properties.Add("TestKey", "TestValue");
    var telemetryClient = new TelemetryClient();
    telemetryClient.Context.InstrumentationKey = this.InstrumentationKey;
    telemetryClient.Track(trace);
}
Everything works well: I see the logged record in App Insights as well as in App Insights Analytics (in the traces table). My custom attributes are written to a special App Insights section, customDimensions. For example, the code above adds a new attribute with key "TestKey" and value "TestValue" to the customDimensions section.
But when I try to write some big text (for example, a JSON document with more than 15k characters), it still succeeds without any exceptions, yet the text is cut off after a certain length. As a result, the custom attribute value in the customDimensions section is cropped too and holds only the first part of the document.
As I understand it, there is a restriction on the maximum text length allowed for an App Insights custom attribute.
Does anyone know how I can get around this?
The message field has the highest allowed limit: 32,768 characters. Items in the properties collection have a maximum value length of 8,192 characters.
So you can try one of the following options:
Use the message field to the fullest by putting the big text there.
Split the data into multiple parts and add them to the properties collection separately, e.g.:
trace.Properties.Add("key_part1", "Bigtext1_upto8192");
trace.Properties.Add("key_part2", "Bigtext2_upto8192");
Reference: https://github.com/MicrosoftDocs/azure-docs/blob/master/includes/application-insights-limits.md
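A minimal sketch of the splitting option, assuming the 8,192-character value limit above (the helper name is made up):

// Hypothetical helper: splits a long string into 8,192-character chunks
// and adds each chunk as its own custom property (key_part1, key_part2, ...).
private static void AddLongProperty(TraceTelemetry trace, string key, string value)
{
    const int MaxChunk = 8192; // App Insights limit for property values
    for (int i = 0, part = 1; i < value.Length; i += MaxChunk, part++)
    {
        int length = Math.Min(MaxChunk, value.Length - i);
        trace.Properties.Add($"{key}_part{part}", value.Substring(i, length));
    }
}

A consumer can then reassemble the original value by concatenating the parts in key order.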
We are trying to log a lengthy message using the AppInsights trackEvent() method, but it is not logged to AppInsights and no error is given.
Please help me log a lengthy string.
Please let us know the maximum limit for trackEvent().
If you want to log messages, you should be using the trackTrace methods of the AI SDK, not trackEvent. trackTrace is intended for long messages and has a huge limit (32k!). See https://github.com/Microsoft/ApplicationInsights-dotnet/blob/develop/Schema/PublicSchema/MessageData.bond#L13
trackEvent is intended for named "events" like "opened file" or "clicked retry" or "canceled frobulating", where you might want to make charts, and track usage of a thing over time.
You can attach custom properties (string key, string value) and custom metrics (string key, double value) to anything. And if you set the operationId field on things in the SDK, anything with the same operationId can easily be found together via queries or visualized in the Azure Portal or in Visual Studio.
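For example, a minimal sketch (the message content, property, and operation id values are made up):

// Log a long message with trackTrace, tagged with a custom property and
// an operation id so related telemetry can be found together.
string longMessage = new string('x', 20000); // stand-in for a large payload
var client = new TelemetryClient();
client.Context.Operation.Id = "import-job-42"; // hypothetical correlation id
var trace = new TraceTelemetry(longMessage, SeverityLevel.Information);
trace.Properties.Add("source", "ImportJob"); // custom property (string key, string value)
client.TrackTrace(trace);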
There are indeed limitations regarding length. For example, the limit on the Name property of an event is 512 characters. See https://github.com/Microsoft/ApplicationInsights-dotnet/blob/master/src/Core/Managed/Shared/Extensibility/Implementation/Property.cs#L23
You can split it into substrings and put them in the Properties collection; each collection value can be up to 8 * 1024 characters long. I got this as a tip when I asked about it; see https://social.msdn.microsoft.com/Forums/en-US/84bd5ade-0b21-47cc-9b39-c6c7a292d87e/dependencytelemetry-sql-command-gets-truncated?forum=ApplicationInsights. Never tried it myself, though.
I'm just getting started with SubSonic 3.0 ActiveRecord and am trying to implement a batch query like the one in the SubSonic docs. I'm using a batch so I can query a User and a list of the user's Orders in one shot.
When I call the BatchQuery.Queue() method, adding my "select user" query, SubSonic throws the following exception:
System.InvalidOperationException : Can't decide which property to consider the Key - you can create one called 'ID' or mark one with SubSonicPrimaryKey attribute
The code is as follows:
var db = new MyDB();
var userQuery = from u in db.Users // gets user by uid
                where u.uid == 1
                select u;
var provider = ProviderFactory.GetProvider();
var batch = new BatchQuery(provider);
batch.Queue(userQuery); // exception here
// create and add "select user's orders" query here...
First things first: why this error? My SubSonic Users object knows its PK. "uid" is the PK in the database, and the generated code reflects this. And I thought the SubSonicPrimaryKey attribute was for the SimpleRepository? Is this way of batching not for ActiveRecord?
I could ask a number of other questions, but I'll leave it at that. If anyone can help me figure out what is going on and how to issue 2 batched queries I'd be grateful!
Edit - after further investigation
I ran through the source code with the debugger. Adam is correct: the ToSchemaTable() method in Objects.cs is apparently building out my schema and failing to find a PK. At the very end, it tries to find a column property named "ID" and flags that as the PK; otherwise it throws the exception. I added a check for "UID" and this works!
Still... I'm confused. I'm admittedly a bit lost after peeling back layer after layer of the source, but it seems like this portion of code builds up a schema for my table while completely ignoring my generated User class, which quite nicely identifies which column/property is the PK! It doesn't seem quite right that I'd be required to name all keys "ID" with ActiveRecord.
I think the answer you're looking for is that this is a really stupid bug on my part. I'm hoping to push another build next week and if you could put this on the issue list I'd really appreciate it. My apologies...
SubSonic expects your primary key to be called Id, so it's getting confused. SubSonicPrimaryKey is for SimpleRepository, but I assume the code where that exception is thrown is shared between the different templates. If you rename your PK to Id, id, or ID, your query will work.
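For reference, a hedged sketch of how the SimpleRepository-style attribute marks a non-conventional key (per the above, this does not help the ActiveRecord templates, where renaming the column is the workaround):

using SubSonic.SqlGeneration.Schema;

public class User
{
    // SimpleRepository only: explicitly mark a key that is not named "ID".
    [SubSonicPrimaryKey]
    public int uid { get; set; }
    public string Name { get; set; }
}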