I'm trying to get IIS log statistics with LogParser, and I have to group counts by the values that a query-string argument named 't' can assume. We have to handle scenarios like this one:
.../?t=act&t=fcst&t=be
where the same argument is specified more than once. When I do the count, I would like such a URL to count once for each value that t assumes on the query string. Using:
EXTRACT_VALUE(cs-uri-query,'t')
only the first occurrence of t (=act) is processed, so my counts do not increase for the values fcst and be.
Is there a way to handle such a case without further post-processing?
How about SELECT SUM(STRCNT(cs-uri-query, 't=')) ?
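STRCNT counts occurrences of 't=' but cannot group per value. If a little post-processing outside LogParser is acceptable, a short script can do the per-value tally. A minimal Python sketch, assuming the cs-uri-query values have been exported from the log (the queries list below is a hypothetical stand-in for that export):

```python
from urllib.parse import parse_qs
from collections import Counter

# Hypothetical sample of cs-uri-query values exported from the IIS log.
queries = ["t=act&t=fcst&t=be", "t=act", "t=fcst&t=act"]

counts = Counter()
for q in queries:
    # parse_qs returns every occurrence of 't', not just the first.
    for value in parse_qs(q).get("t", []):
        counts[value] += 1

print(counts)
```

Each URL then contributes one count per distinct t value it carries, which is exactly the behaviour EXTRACT_VALUE alone cannot give.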
I want to get all the values of a dropdown and store them somewhere. On the following NASDAQ site, https://www.nasdaq.com/symbol/ge/historical, I want to get all the values of the Timeframe dropdown and store them so that I can use those values one by one in a loop and get the stock values for every timeframe. See the screenshot below.
It's not that easy to get each of the values, but it's not impossible. First, you can get all the values into a Data Item as text. If you spy the element, you will notice that the attribute Value contains what you want, so you will need to use a read stage to get this specific attribute's value (you can ignore the PDF elements):
Doing so will give you the following:
The problem with this is that you cannot use this in a loop. One way around would be to split on space:
And the resulting collection (I called it Split Values) will look like this:
But it's not quite there yet. You should however be able to use this collection to get the collection you need (or use it directly).
If you use it directly, I would say it should look like this:
Empty? has the expression [Split Values.words]="" (notice the last row is blank)
Value is number has the expression IsNumber([Split Values.words])
Set Current Item as Number has expression [Split Values.words] with store value Current Item.
Append word to Current Item has expression [Current Item]&" "&[Split Values.words] with store value Current Item.
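The recombination logic above (start a new item on each number, append the following words to it) can be sketched outside Blue Prism as well. A minimal Python sketch, assuming the raw text below stands in for the dropdown's Value attribute (the actual values on the NASDAQ page may differ):

```python
# Hypothetical space-separated text read from the dropdown's Value attribute.
raw = "5 Days 1 Month 3 Months 6 Months 1 Year"

items = []
current = ""
for word in raw.split(" "):
    if word == "":            # skip blanks (the "Empty?" check)
        continue
    if word.isdigit():        # a number starts a new item
        if current:
            items.append(current)
        current = word
    else:                     # otherwise append the word to the current item
        current = current + " " + word
if current:
    items.append(current)

print(items)  # ['5 Days', '1 Month', '3 Months', '6 Months', '1 Year']
```

This mirrors the "Value is number" / "Append word to Current Item" branches of the flow, producing one collection row per dropdown entry.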
The main thrust is "how do I take a list of regex strings, contained in a text file, and find all matches on a specific database column taking into account exclusions/whitelist".
Sample Text file:
[Bad 192 address]=192\.168\.(1|2)\.\d{1,3} / (192\.168\.(1|2)\.1)
[Bad 172 address]=172\.(?:1[6-9]|2[0-9]|3[01])\.\d{1,3}\.\d{1,3} / (172\.(?:1[6-9]|2[0-9]|3[01])\.\d{1,3}\.1)
[Bad 10 address]=10\.\d{1,3}\.\d{1,3}\.\d{1,3} / (10\.0\.(0|120|250)\.1)
In the above example, I have the name of the regex match in brackets, then the raw regex, and finally the filter I wish to apply to this regex match in parentheses, separated from the original regex with a "/".
I would like to point to a file containing regex matches structured similarly and run them against a column within a table, or a number of tables, in a database. In this example the idea is to find all private IP space matches that aren't whitelisted, and to output the matches along with the associated signature's name. So a hit might look like "[Bad 192 address] 192.168.1.58".
Initially I was iterating through the file line by line, splitting each rule into an array of 3 items (the sig, the regex, and the filter), assigning them within a function to variables I can work with, and sending one SQL SELECT query per rule, trying to discard the whitelisted values from the returned matches. But this isn't working reliably and is giving me terrible performance. For context, I need to be able to potentially iterate through 1 million rows, but I might be able to reduce that out-of-band to thousands of unique values.
I'm mainly concerned with the mechanism for taking defined regex strings in File A, running them against Database B, discarding any whitelisted values, and spitting out the string match as well as the associated "signature".
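One way to sketch that mechanism: parse each rule line once, precompile both patterns, and do the whitelist exclusion in the script rather than per-rule SQL. A minimal Python sketch, assuming the rows list stands in for the database column (a real run would fetch the rows, ideally deduplicated, through a DB driver):

```python
import re

# Rule lines in the "[name]=regex / (whitelist)" format from the question.
rules_text = [
    r"[Bad 192 address]=192\.168\.(1|2)\.\d{1,3} / (192\.168\.(1|2)\.1)",
]

# Hypothetical rows standing in for the database column being scanned.
rows = ["192.168.1.58", "192.168.1.1", "10.9.9.9"]

hits = []
for line in rules_text:
    name, rest = line.split("]=", 1)
    name = name + "]"
    pattern, whitelist = (part.strip() for part in rest.split(" / ", 1))
    sig = re.compile(pattern)      # compile once, reuse for every row
    allow = re.compile(whitelist)
    for row in rows:
        m = sig.search(row)
        # keep the hit only if the matched text is not whitelisted
        if m and not allow.fullmatch(m.group(0)):
            hits.append(f"{name} {m.group(0)}")

print(hits)  # ['[Bad 192 address] 192.168.1.58']
```

Precompiling each pattern and scanning the (deduplicated) rows in memory avoids issuing a SELECT per rule, which is where much of the poor performance tends to come from.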
I have this formula to match some text using VLOOKUP.
=CONCATENATE(VLOOKUP(D10,Clients!A1:F10034,2),", ",VLOOKUP(D10,Clients!A1:F30034,3),", ",VLOOKUP(D10,Clients!A1:F10034,4),", ",VLOOKUP(D10,Clients!A1:F10034,5))
When it runs into a match that has a full stop in it, the match returns the first result that matches what it has before the full stop.
Eg if the lookup tries to match "C.B.A Solutions" and there is "C Tyres" & "C.B.A Solutions" inside of "Clients!" it will match "C Tyres" because it comes up first.
Your VLOOKUPs are missing the 'optional' fourth argument. Note that this fourth argument isn't really optional unless your data is sorted in ascending order and a match is guaranteed (i.e. your lookup term will always exist in the lookup database). Is that the case here? If so, amend your question to clarify. If not, add FALSE as the fourth argument.
Note that your formula is very inefficient. Most of the work that VLOOKUP does is in locating a matching row in the key column where the lookup term is. Better to relegate that computationally expensive task to a dedicated MATCH function in its own column, and then to feed that result to four INDEX functions.
Put this in a separate column:
=MATCH(D10,Clients!A1:A10034,0)
Then point some INDEX functions at the answer, instead of using computationally expensive VLOOKUP functions:
=CONCATENATE(INDEX(Clients!B1:B10034,[Match Output]),", ",INDEX(Clients!C1:C10034,[Match Output]),", ",INDEX(Clients!D1:D10034,[Match Output]),", ",INDEX(Clients!E1:E10034,[Match Output]))
Replace [Match Output] with the cell containing the output of the MATCH function.
Google INDEX and MATCH vs VLOOKUP for why this matters.
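The "locate once, read many" idea behind INDEX/MATCH can be illustrated outside Excel too. A minimal Python sketch with made-up data (the keys and rows below are illustrative, not from the workbook):

```python
# Toy lookup table: a key column plus four data columns, mirroring Clients!A:F.
keys = ["C Tyres", "C.B.A Solutions"]
data = [
    ("Alice", "Sales", "London", "Active"),
    ("Bob", "Support", "Leeds", "Active"),
]

# MATCH: one exact-match scan to find the row index...
row = keys.index("C.B.A Solutions")

# ...then INDEX: cheap positional reads for each of the four columns.
result = ", ".join(data[row][col] for col in range(4))
print(result)  # Bob, Support, Leeds, Active
```

Note that the exact-match scan also avoids the "C Tyres" false hit from the question: an exact lookup never settles for a prefix match.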
Note that sorting your lookup lists, omitting that fourth argument, and using something called the Double VLOOKUP trick (which handles missing values in your lookup list) will be many thousands of times faster again. See my post at the following link for more:
http://dailydoseofexcel.com/archives/2015/04/23/how-much-faster-is-the-double-vlookup-trick/
I am using LoadRunner 12.5. From the values below I need to correlate and capture the value 1aqeid!None (the None part can also be filled with numbers, so it's dynamic).
Example:
1. {id:'1aqeid!None!123456',paramName:'jsessionId'};
2. {id:'1aqeid!zxsjfn12536782ldfj!123456',paramName:'jsessionId'};
I need to get only the values below:
1. 1aqeid!None
2. 1aqeid!zxsjfn12536782ldfj
web_reg_save_param("ID","LB=id:'","RB=!","ORD=1",LAST);
I am not able to find the solution.
Use these boundaries instead:
web_reg_save_param("ID",
    "LB={id:'",
    "RB=',paramName:'jsessionId'",
    "ORD=ALL",
    LAST);
This will leave you with:
1aqeid!None!{some value you do not need}
You have a number of options at this point. You could use strtok() to split the string with '!' as a separator, or you could find the position of the second '!' in the character array and take a substring up to that point with strncpy(), and so on. The point is that you can collect more than you need and then trim it down based on a known separator in the data.
Working on PostgreSQL.
I have a table with a column that contains values of the following format:
Set1/Set2/Set3/...
Seti can be a set of values for each i. They are delimited by '/'.
I would like to show distinct entries of the form Set1/Set2; that is, I would like to trim or truncate the rest of the string in those entries.
That is, I want all distinct options for:
Set1/Set2
A regular expression would work great: I want a substring matching the pattern .*/.*/ to be displayed without the rest of the string.
I got as far as:
select distinct column_name from table_name
but I have no idea how to make the trimming itself.
I tried looking on w3schools and other sites, and searched for SQL trim / SQL truncate on Google, but didn't find what I'm looking for.
Thanks in advance.
mu is too short's answer is fine if the lengths of the strings between the forward slashes are always consistent. Otherwise you'll want to use a regex with the substring function.
For example:
=> select substring('Set1/Set2/Set3/' from '^[^/]+/[^/]+');
substring
-----------
Set1/Set2
(1 row)
=> select substring('Set123/Set24/Set3/' from '^[^/]+/[^/]+');
substring
--------------
Set123/Set24
(1 row)
So your query on the table would become:
select distinct substring(column_name from '^[^/]+/[^/]+') from table_name;
The relevant docs are http://www.postgresql.org/docs/8.4/static/functions-string.html
and http://www.postgresql.org/docs/8.4/static/functions-matching.html.
Why do you store multiple values in a single record? The preferred solution would be multiple values in multiple records; your problem would then not exist.
Another option would be to use an array of values, with the TEXT[] array datatype instead of TEXT. You can index an array field using a GIN index.
SUBSTRING() (as mu_is_too_short showed you) can solve the current problem; using an array and the array functions is another option:
SELECT array_to_string(
(string_to_array('Set1/Set2/Set3/', '/'))[1:2], '/' );
This makes it rather flexible: there is no need for a fixed length of the values, as the separator in the array functions does the job. The [1:2] picks the first two slices of the array; [1:3] would pick slices 1 to 3. This makes it easy to change.
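The same split-and-rejoin idea can be sketched outside SQL to see what the array functions are doing. A minimal Python equivalent of string_to_array(...)[1:2] followed by array_to_string:

```python
# Split on the separator and keep the first two segments, rejoined with '/'.
value = "Set1/Set2/Set3/"
first_two = "/".join(value.split("/")[:2])
print(first_two)  # Set1/Set2
```

As with the SQL version, changing [:2] to [:3] would keep the first three segments instead.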
If they really are that regular you could use substring; for example:
=> select substring('Set1/Set2/Set3/' from 1 for 9);
substring
-----------
Set1/Set2
(1 row)
There is also a version of substring that understands POSIX regular expressions if you need a little more flexibility.
The PostgreSQL online documentation is quite good BTW:
http://www.postgresql.org/docs/current/static/index.html
and it even has a usable index and sensible navigation.
If you want to use .*/.* then you'd want something like substring(s from '[^/]+/[^/]+'), for example:
=> select substring('where/is/pancakes/house?' from '[^/]+/[^/]+');
substring
-----------
where/is
(1 row)