LDAP custom attribute cannot be searched?

I have some custom attributes in my LDAP setup, one of which is called "GroupCode". I have a number of entries with this attribute that I was able to write to the LDAP database. Say one entry has the value "xyz" and another has "wasd". Searching with the filter "(GroupCode=xyz)" or "(GroupCode=wasd)" returns nothing. However, if I change the filter to "(GroupCode=*)", it returns all entries that have the GroupCode attribute. I have examined the attribute properties and they look normal; Apache Directory Studio shows the value type as "String", so I don't know why it doesn't match the filters I provided. My knowledge of LDAP structure is fairly limited, as it is fairly complex. If anyone has any ideas, please let me know. Much appreciated. Thanks.

Can you try formulating the same search criteria as an ldapsearch command on the command line?
ldapsearch -H ldap://LDAP_SERVER -D LDAP_AUTH_LOGIN -b LDAP_BASE -w PW -x "CRITERIA"
If so, you can also experiment with variations of your criteria:
ldapsearch -H ldap://LDAP_SERVER -D LDAP_AUTH_LOGIN -b LDAP_BASE -w PW -x "(GroupCode=xyz)"

One possible reason for your issue is that you forgot to specify the EQUALITY and SUBSTR matching rules for your custom attribute.
Here is an example for a custom attribute called sAMAccountName:
attributeTypes: ( 1.2.840.113556.1.4.221
  NAME 'sAMAccountName'
  EQUALITY caseIgnoreMatch
  SUBSTR caseIgnoreSubstringsMatch
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.15
  SINGLE-VALUE )
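For example, on OpenLDAP with cn=config you could add matching rules to an existing custom attribute with an LDIF along these lines. This is only a sketch: the schema entry name cn={5}custom and the OID 1.3.6.1.4.1.99999.1 are placeholders, so check your own cn=schema,cn=config tree and your real OID before applying it:

```ldif
dn: cn={5}custom,cn=schema,cn=config
changetype: modify
delete: olcAttributeTypes
olcAttributeTypes: ( 1.3.6.1.4.1.99999.1 NAME 'GroupCode'
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 )
-
add: olcAttributeTypes
olcAttributeTypes: ( 1.3.6.1.4.1.99999.1 NAME 'GroupCode'
  EQUALITY caseIgnoreMatch
  SUBSTR caseIgnoreSubstringsMatch
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.15
  SINGLE-VALUE )
```

Note that some servers restrict online schema modification, and existing entries may need to be re-indexed before equality searches on the attribute start working.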


Is there a way to list all categories in perluniprops?

perluniprops lists the Unicode properties of the version of Unicode it supports. For Perl 5.32.1, that's Unicode 13.0.0.
You can obtain a list of the characters that match a category using Unicode::Tussle's unichars.
unichars '\p{Close_Punctuation}'
And the help:
$ unichars --help
Usage:
unichars [*options*] *criterion* ...
Each criterion is either a square-bracketed character class, a regex
starting with a backslash, or an arbitrary Perl expression. See the
EXAMPLES section below.
OPTIONS:
Selection Options:
--bmp include the Basic Multilingual Plane (plane 0) [DEFAULT]
--smp include the Supplementary Multilingual Plane (plane 1)
--astral -a include planes above the BMP (planes 1-15)
--unnamed -u include various unnamed characters (see DESCRIPTION)
--locale -l specify the locale used for UCA functions
Display Options:
--category -c include the general category (GC=)
--script -s include the script name (SC=)
--block -b include the block name (BLK=)
--bidi -B include the bidi class (BC=)
--combining -C include the canonical combining class (CCC=)
--numeric -n include the numeric value (NV=)
--casefold -f include the casefold status
--decimal -d include the decimal representation of the code point
Miscellaneous Options:
--version -v print version information and exit
--help -h this message
--man -m full manpage
--debug -d show debugging of criteria and examined code point span
Special Functions:
$_ is the current code point
ord is the current code point's ordinal
NAME is charname::viacode(ord)
NUM is Unicode::UCD::num(ord), not code point number
CF is casefold->{status}
NFD, NFC, NFKD, NFKC, FCD, FCC (normalization)
UCA, UCA1, UCA2, UCA3, UCA4 (binary sort keys)
Singleton, Exclusion, NonStDecomp, Comp_Ex
checkNFD, checkNFC, checkNFKD, checkNFKC, checkFCD, checkFCC
NFD_NO, NFC_NO, NFC_MAYBE, NFKD_NO, NFKC_NO, NFKC_MAYBE
Other than reading the list of categories from the webpage, is there a way to programmatically get all the possible \p{...} categories?
From the comments, I believe you are trying to port a Perl program that uses \p regex properties to Python. You don't need a list of all categories (whatever that means); you just need to know which code points each property used by the program matches.
Now, you could get the list of Code Points from the Unicode database. But a much simpler solution is to use Python's regex module instead of the re module. This will give you access to the same Unicode-defined properties that Perl exposes.
The latest version of the regex module even uses Unicode 13.0.0 just like the latest Perl.
Note that the program uses \p{IsAlnum}, a long way of writing \p{Alnum}. \p{Alnum} is not a standard Unicode property but a Perl extension: it's the union of the Unicode properties \p{Alpha} and \p{Nd}. I don't know if the regex module defines Alnum identically, but it probably does.
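If all you need are the General_Category values (not every \p{...} property), a standard-library sketch in Python can enumerate the categories that actually occur, without the regex module:

```python
import sys
import unicodedata

# Enumerate every two-letter General_Category value that occurs across
# all code points; unassigned code points report "Cn".
categories = set()
for cp in range(sys.maxunicode + 1):
    categories.add(unicodedata.category(chr(cp)))

print(sorted(categories))
```

This yields the 30 two-letter categories (Lu, Ll, Nd, Zs, and so on); the one-letter forms like \p{L} are just unions of these.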

Can you grep/search case-insensitive in SchemaCrawler?

Is there a flag or option that will allow SchemaCrawler to search database objects and ignore case?
The following example will filter out stored procedures that start with "API" even though they are desired output:
--routines=.*api_Insert.*
Jared,
That is a good idea - I will add a --ignore-case option to the SchemaCrawler grep command. Meanwhile, you can try out a regular expression like
--routines=.*[Aa][Pp][Ii]_Insert.*
or
--routines=.*(api|API)_Insert.*
and see if that works.
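SchemaCrawler's patterns are Java regular expressions, and Java regex (like Python's) supports the (?i) inline flag, so a single case-insensitive pattern may already do what you want. A quick sketch of the idea in Python, with hypothetical routine names:

```python
import re

# (?i) switches on case-insensitive matching for the whole pattern,
# replacing the [Aa][Pp][Ii] / (api|API) workarounds.
pattern = re.compile(r"(?i).*api_Insert.*")

for name in ["api_InsertOrder", "API_INSERTORDER", "dbo.Api_InsertUser", "api_Delete"]:
    print(name, bool(pattern.fullmatch(name)))
```

The same flag should work directly in the option, e.g. --routines=(?i).*api_Insert.*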
Sualeh, SchemaCrawler.

Excel Import of custom mandatory field doesn't work [Hybris 6.7.0]

I'm using Hybris version 6.7.0 and I am stuck on the following problem: when I try to import products from an Excel file, the import fails with a validation error.
I've checked the Excel file and there is, of course, a "Subscription Term*" field; it is mandatory, which is why it has the asterisk. Worth mentioning: this field is custom, so I wrote a custom translator for it, and the export part works fine. But while debugging the import part I found a strange fact: the WorkbookMandatoryColumnsValidator validator calls findColumnIndex(typeSystemSheet, sheet, this.prepareSelectedAttribute(mandatoryField)) from DefaultExcelTemplateService, this method returns -1, and the validation does not pass. I dug into the method and found this line of code:
String attributeDisplayName = this.findAttributeDisplayNameInTypeSystemSheet(typeSystemSheet, selectedAttribute);
which returns the string "Subscription Term" - as you can see, without an asterisk.
I've checked the other mandatory fields, e.g. "Catalog version*^": that one is returned with both symbols after it. The thing is that comparing "Subscription Term" with "Subscription Term*" for string equality returns false, and the validation fails, as you can see here:
attributeDisplayName.equals(this.getCellValue(headerRow.getCell(i)))
Of course, the second value is taken from the Excel file, where the asterisk is present.
If I remove the asterisk from the Excel file, I instead get an "Unknown attributes of type ISku" error from the WorkbookTypeCodeAndSelectedAttributeValidator validator, so the asterisk does need to be present in the Excel file; I only removed it to see what would happen, and it doesn't help me understand what is really going on.
I can't understand one thing: what is the source of the "Subscription Term" string? Why is it without an asterisk? Is it a predefined constant somewhere? From debugging I couldn't figure out where that string comes from.
I do not know for sure, but I expect that string (i.e. "Subscription Term") comes from a localization file based on the current Backoffice session language (e.g. {extensionName}-locales_en.properties if the current language is en).
Try searching for "Subscription Term" in all the properties files.
Maybe, if the attribute is mandatory (i.e. optional="false" in items.xml), Hybris appends an "*" to its name when performing the import.
Check whether you granted read and write permission on that attribute for that user.
Check with an admin user first: if there is no issue with the admin user, then it is just a permission issue for the regular user.

usergrid set type column

Is it possible to insert set-type data (a column) in Usergrid, given that Cassandra supports set-type columns? I tried
curl -XPOST http://localhost:8080/<org>/<app>/<collection> \
  -d '{"name":"1974", "category":{"a","b","c"}}'
but it replies with a JSON parse error.
It certainly is possible to post such data - but your payload isn't valid JSON; in JSON, you use square brackets to specify an array: [].
Try instead:
curl -X POST http://localhost:8080/<org>/<app>/<collection> \
  -d '{"name":"1974", "category":["a","b","c"]}'
Response to the answer: I knew the payload in the request above isn't valid JSON; I was only asking whether there is any way to create a set-type column (I need to prevent duplicate entries in a single column record). With square brackets it creates a list-type column, which doesn't prevent duplicates.
A core member replied that the current version (1.0) of Usergrid doesn't support set-type columns.
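Since the value ends up stored as a JSON array (a list), duplicates are not rejected server-side; a minimal sketch of deduplicating client-side before POSTing, assuming first-seen order should be preserved (the helper name is my own):

```python
import json

def dedupe_preserving_order(values):
    """Drop duplicate entries, keeping the first occurrence of each."""
    seen = set()
    return [v for v in values if not (v in seen or seen.add(v))]

payload = {"name": "1974", "category": dedupe_preserving_order(["a", "b", "a", "c", "b"])}
print(json.dumps(payload))
```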

Trying to obtain memberof detail from linux ldapsearch command

I am trying to run an LDAP query from a Linux machine (CentOS 5.8) against a Windows LDAP server and want to get the 'memberOf' detail for a user. In this example, the domain is cm.loc and the user is admin1@cm.loc. Here is the ldapsearch syntax I am using; it returns an error.
Can someone point me in the right direction with what the correct syntax should be using ldapsearch to query for memberof detail for a particular account?
Here is what I am using; it returns the error "ldap_search_ext: Bad search filter (-7)".
Where is my syntax wrong?
ldapsearch -x -h 192.168.1.20 -b 'DC=cm,DC=loc' -s base -D 'admin1@cm.loc' -W '(&(objectCategory=Group)(|(memberOf=group1)(memberOf=group2)…))'
Thank You
memberOf is an attribute with DN syntax. group1 is not a DN.
The filter syntax itself looks OK, but you need to use the full DN as the value in the memberOf clauses - and it's memberOf=, not memberOf:; if you use the colon syntax you'll get the bad search filter error.
Next, you must escape the search string according to RFC 4515. In practice this means that the characters \, *, ( and ) in search terms must be escaped as \5c, \2a, \28 and \29 respectively, otherwise you get the same "bad search filter" error. This is on top of any escaping the LDAP server may already have applied to the DN.
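A minimal sketch of that RFC 4515 escaping in Python (the function name and the group DN are my own illustrations, not part of any LDAP library):

```python
def rfc4515_escape(value: str) -> str:
    """Escape LDAP filter metacharacters per RFC 4515."""
    return (value.replace("\\", "\\5c")   # backslash first, so escapes aren't re-escaped
                 .replace("*", "\\2a")
                 .replace("(", "\\28")
                 .replace(")", "\\29"))

# Hypothetical group DN containing characters that need escaping:
group_dn = "CN=group1 (staging),OU=Groups,DC=cm,DC=loc"
print(f"(memberOf={rfc4515_escape(group_dn)})")
# -> (memberOf=CN=group1 \28staging\29,OU=Groups,DC=cm,DC=loc)
```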
