How to split data in nlog.config?

I have two IPs in the aspnet-request-ip field in NLog, in the format below:
1.2.3.4, 1.1.1.1:2000
but I just want to get the first IP. How can I do this?
The above IPs are fake.

Not a big fan of RegEx, but here goes one solution:
${replace:inner=${aspnet-request-ip}:regex=true:searchFor=,.*:replaceWith=}
It scans for a comma and replaces the comma (and everything after it) with an empty string.
"1.2.3.4, 1.1.1.1:2000" becomes "1.2.3.4"
"1.2.3.4" becomes "1.2.3.4"
See also: https://github.com/NLog/NLog/wiki/Replace-Layout-Renderer
Alternatively, one can also implement a custom HttpContext layout renderer.
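For context, here is a minimal nlog.config sketch showing where that layout could go; the target name and file path are made up for illustration, and the NLog.Web.AspNetCore extension is assumed since aspnet-request-ip comes from it:

<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd">
  <extensions>
    <add assembly="NLog.Web.AspNetCore" />
  </extensions>
  <targets>
    <!-- The replace wrapper strips the comma and everything after it,
         so only the first IP is written to the log line. -->
    <target name="requestFile" type="File" fileName="requests.log"
            layout="${replace:inner=${aspnet-request-ip}:regex=true:searchFor=,.*:replaceWith=}|${message}" />
  </targets>
  <rules>
    <logger name="*" minlevel="Info" writeTo="requestFile" />
  </rules>
</nlog>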

Related

How to set up Solr search across REST URLs

We are implementing an API portal and have a field named basePath to hold the base part of the API's REST URL. Currently the field is defined as a string mapped to solr.StrField, but we have search problems with this.
The problem right now is that in order to find an API by the basePath, we need to double-quote the value in the search, for example name:"/v1/api-proxy/generator". We cannot use name:/v1/api-proxy/* to see other APIs that might have clashing URLs. We know we have other URLs like '/v1/api-proxy/validator', but something like name:/v1/api-proxy/* doesn't return any hits.
I am guessing a first step is to change away from 'string' to text or text_general, but how can I search and find other hits that closely match the provided basePath?
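No answer is recorded here, but as a hedged sketch of the kind of schema change being guessed at: Solr's PathHierarchyTokenizerFactory indexes every prefix of a path as its own token, so a query for /v1/api-proxy can match both /v1/api-proxy/generator and /v1/api-proxy/validator. The field-type name below is made up:

<!-- Sketch only: indexes "/v1/api-proxy/generator" as the tokens
     "/v1", "/v1/api-proxy", and "/v1/api-proxy/generator". -->
<fieldType name="descendent_path" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="/" />
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.KeywordTokenizerFactory" />
  </analyzer>
</fieldType>
<field name="basePath" type="descendent_path" indexed="true" stored="true" />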

Azure Search - regex search

I am trying to configure Azure Search to find some strings that have special characters, for example
ABC*DEF
When I look for the full term using "ABC*DEF", it works perfectly.
The problem comes if I want to use a regex term:
When I use a partial term, like /(.*)ABC(.*)/, the result has no problem
When I use a partial term, like /(.*)DEF(.*)/, the result has no problem
But when I try to look for something like /(.*)C\*D(.*)/, the result is empty.
I am using the standard analyzer. I also tried the keyword analyzer, but then the regex search doesn't work at all.
Any suggestions?
You won't be able to create a regex expression that matches ABC*DEF using the standard analyzer.
If you run "ABC\*DEF" through the analyzer api using "standard" analyzer, you will see that ABC*DEF gets divided into 2 tokens at indexing time -> "ABC" and "DEF". Regex expression are not analyzed, however, they need to match a token that exist in the index.
Since ABC\*DEF does not exist in the index (only "ABC" and "DEF" exist), you won't be able to find it using the expression you are searching for.
Using the "keyword" analyzer will keep the whole field as a single token, so if the field "only" contained the expression ABC\*DEF, then the regex expression would work on it, however, if ABC\*DEF is part of a larger paragraph of text, then that's probably not what you want to use.
Your best bet is to create a custom analyzer that tokenizes your text in the way that preserves the special characters that are relevant to your use case.
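As a rough illustration of that advice (the index, field, and analyzer names below are hypothetical), a custom analyzer that tokenizes on whitespace is one way to keep ABC*DEF together as a single token:

{
  "name": "sample-index",
  "fields": [
    { "name": "code", "type": "Edm.String", "searchable": true, "analyzer": "keep_special_chars" }
  ],
  "analyzers": [
    {
      "name": "keep_special_chars",
      "@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
      "tokenizer": "whitespace",
      "tokenFilters": [ "lowercase" ]
    }
  ]
}

With a whitespace-based analyzer, ABC*DEF survives as a single token in the index, giving the regex something to match.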
If you're searching for special chars, why don't you discard normal chars?
[^\w]

Solr exact search with a hyphen

I am trying to search in Solr for a term in the Title that contains only the string 1604-04. But the results come back with anything that contains 1604 or 04. What would the syntax be to force Solr to search on the exact string 1604-04?
You can also use the Classic Tokenizer. The Classic Tokenizer preserves the same behavior as the Standard Tokenizer, with the following exception:
Words are split at hyphens, unless there is a number in the word, in which case the token is not split and
the numbers and hyphen(s) are preserved.
This means that if someone searches for 1604-04, this tokenizer won't break the search string into two tokens.
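A minimal schema sketch wiring that up (the field-type name is illustrative):

<fieldType name="text_classic" class="solr.TextField">
  <analyzer>
    <!-- ClassicTokenizer keeps "1604-04" as one token because it contains digits. -->
    <tokenizer class="solr.ClassicTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>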
If you want exact matches only, use a string field or a text field with a KeywordTokenizer as the tokenizer. These will keep your tokens intact as one single entry, and won't break it up into multiple tokens.
The difference is that if you use a TextField with a KeywordTokenizer, you can still apply other filters, such as a LowerCaseFilter, while a string field will store anything verbatim without any further processing possible.
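For example, a sketch of such a field type (the name string_ci is made up) that keeps the whole value as one token but still lowercases it:

<fieldType name="string_ci" class="solr.TextField">
  <analyzer>
    <!-- KeywordTokenizer emits the entire field value as a single token,
         so "1604-04" is never split; the filter only normalizes case. -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>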
Your analyzer is splitting "1604-04" into two terms, "1604" and "04". You've received answers on how to change your analysis to stop doing that.
Changing your analysis may not be the best solution (I can't be entirely sure based on what you've written). Using a phrase query would be the usual way to do this. You can use a phrase query by wrapping the term in quotes:
field:"1604-04"
This will still analyze and split it into two terms, but it will look for those terms in sequence. So, that query would match "1604-04" and "1604 04", but not "1604 some other stuff 04".

Web API - GET - passing in multiple parameters in this scenario

When I have this URI and pass in the PlayerCode: 12345, everything is good.
https://abc.com/teams/players/12345
But when I have a list of 9000 player codes, how do I pass that specific list for a GET operation?
While this question (asked before, here) suggests "an" answer, I am not sure if it is "the" answer. I am not sure if I should be going for something like:
https://abc.com/teams/players/?PlayerCodes=12345,23456,34567,45678....
and then have custom model binders to cater to the above.
Does passing in 9000 comma-separated values in a URI make sense?
What would be the optimal solution for this scenario?
Unfortunately, when you get into the realm of big numbers like 9000, query string parameters will not be sufficient. I assume you are running your solution in IIS or IIS Express, both of which have character limits on the query string of somewhere around 2048. In this scenario you can either do an HTTP POST with a body containing the player IDs you need to retrieve, or rework your architecture a bit and break your GET calls up into acceptable sizes.
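A minimal sketch of the POST approach, assuming classic ASP.NET Web API 2 with attribute routing enabled; the route, request type, and lookup method are hypothetical, not from the question:

using System.Collections.Generic;
using System.Web.Http;

// Hypothetical request body type; the name and shape are illustrative.
public class PlayerSearchRequest
{
    public List<int> PlayerCodes { get; set; }
}

public class PlayersController : ApiController
{
    // POST /teams/players/search with a JSON body such as
    // { "playerCodes": [12345, 23456, 34567] }
    [HttpPost]
    [Route("teams/players/search")]
    public IHttpActionResult Search([FromBody] PlayerSearchRequest request)
    {
        if (request?.PlayerCodes == null || request.PlayerCodes.Count == 0)
            return BadRequest("At least one player code is required.");

        var players = LookUpPlayers(request.PlayerCodes);
        return Ok(players);
    }

    private IEnumerable<object> LookUpPlayers(IEnumerable<int> codes)
    {
        // Stand-in for real data access; returns nothing in this sketch.
        yield break;
    }
}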

ElasticSearch incorrectly indexing and querying on non-alphanumeric characters

My ElasticSearch index is not correctly indexing and querying non-alphanumeric characters. Specifically, dots and dashes are causing problems.
If I index a document with the name "O.K. Corral," it should match queries for "OK Corral". Similarly, if I index "Whiskey A Go-Go," I'd like it to match "Whiskey A GoGo" and "Whiskey A Go Go".
Right now, only queries with the correct dots and dashes will return these documents.
I'm hoping the solution will also solve any potential problems with other non-alphanumeric characters, like commas and apostrophes.
It sounds like a job for ElasticSearch token filters, but I haven't been able to find one that does what I'm looking for. Also, I would like to do this within ElasticSearch -- I don't want to write custom string manipulations to normalize data before it gets to my ES index.
Thanks for your help!
You might want to have a look at the Word Delimiter Token Filter. It will at least do what you want with "Whiskey A GoGo" and "Whiskey A Go-Go". You can check its behaviour in advance using the Analyze API.
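As a hedged sketch (the index and analyzer names are made up), index settings along these lines would emit "Go" and "Go" as word parts, the catenated "GoGo", and the original "Go-Go" for the input "Go-Go", so all three query forms can match:

PUT /venues
{
  "settings": {
    "analysis": {
      "filter": {
        "name_word_delimiter": {
          "type": "word_delimiter",
          "generate_word_parts": true,
          "catenate_words": true,
          "preserve_original": true
        }
      },
      "analyzer": {
        "name_analyzer": {
          "type": "custom",
          "tokenizer": "whitespace",
          "filter": ["name_word_delimiter", "lowercase"]
        }
      }
    }
  }
}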
