[13.5.2] Create DataStore through ssoadm.jsp -> Attribute doesn't match with 'service schema' - OpenAM

I want to create a DataStore through ssoadm.jsp because I use the endpoint URL in order to automate the configuration process.
[localhost]/ssoadm.jsp?cmd=create-datastore
I put:
domain name (previously created with default configuration): myDomain
data store name: myDataStore
type of DataStore: LDAPv3
Attribute values: LDAPv3=org.forgerock.openam.idrepo.ldap.DJLDAPv3Repo
Then I got something like: Attribute name "LDAPv3" doesn't match with service schema. What am I supposed to put in those "Attribute values" fields, please? An example is given:
"sunIdRepoClass=com.sun.identity.idm.plugins.files.FilesRepo"
PS: I don't want to create the data store from [Localhost]/realm/IDRepoSelectType because there is a jato.pageSession value that I can't get automatically.
PS2: It is my first time asking a question on Stack Overflow; sorry if my question doesn't meet expectations. I tried my best.

ssoadm.jsp?cmd=list-datastore-types
shows the list of user data store types
Every user data store type has specific attributes to be set. Unfortunately, those are not explicitly documented. The service attributes are defined in the related service definition XML template, which is loaded (after potential tag swapping) into the OpenAM configuration data store during initial configuration. For the user data stores you can find them in OPENAM_CONFIGURATION_DIRECTORY/template/xml/idRepoService.xml.
E.g. for user data store type LDAPv3, the following service attributes are defined:
sunIdRepoClass
sunIdRepoAttributeMapping
sunIdRepoSupportedOperations
sun-idrepo-ldapv3-ldapv3Generic
sun-idrepo-ldapv3-config-ldap-server
sun-idrepo-ldapv3-config-authid
sun-idrepo-ldapv3-config-authpw
openam-idrepo-ldapv3-heartbeat-interval
openam-idrepo-ldapv3-heartbeat-timeunit
sun-idrepo-ldapv3-config-organization_name
sun-idrepo-ldapv3-config-connection-mode
sun-idrepo-ldapv3-config-connection_pool_min_size
sun-idrepo-ldapv3-config-connection_pool_max_size
sun-idrepo-ldapv3-config-max-result
sun-idrepo-ldapv3-config-time-limit
sun-idrepo-ldapv3-config-search-scope
sun-idrepo-ldapv3-config-users-search-attribute
sun-idrepo-ldapv3-config-users-search-filter
sun-idrepo-ldapv3-config-user-objectclass
sun-idrepo-ldapv3-config-user-attributes
sun-idrepo-ldapv3-config-createuser-attr-mapping
sun-idrepo-ldapv3-config-isactive
sun-idrepo-ldapv3-config-active
sun-idrepo-ldapv3-config-inactive
sun-idrepo-ldapv3-config-groups-search-attribute
sun-idrepo-ldapv3-config-groups-search-filter
sun-idrepo-ldapv3-config-group-container-name
sun-idrepo-ldapv3-config-group-container-value
sun-idrepo-ldapv3-config-group-objectclass
sun-idrepo-ldapv3-config-group-attributes
sun-idrepo-ldapv3-config-memberof
sun-idrepo-ldapv3-config-uniquemember
sun-idrepo-ldapv3-config-memberurl
sun-idrepo-ldapv3-config-dftgroupmember
sun-idrepo-ldapv3-config-roles-search-attribute
sun-idrepo-ldapv3-config-roles-search-filter
sun-idrepo-ldapv3-config-role-search-scope
sun-idrepo-ldapv3-config-role-objectclass
sun-idrepo-ldapv3-config-filterrole-objectclass
sun-idrepo-ldapv3-config-filterrole-attributes
sun-idrepo-ldapv3-config-nsrole
sun-idrepo-ldapv3-config-nsroledn
sun-idrepo-ldapv3-config-nsrolefilter
sun-idrepo-ldapv3-config-people-container-name
sun-idrepo-ldapv3-config-people-container-value
sun-idrepo-ldapv3-config-auth-naming-attr
sun-idrepo-ldapv3-config-psearchbase
sun-idrepo-ldapv3-config-psearch-filter
sun-idrepo-ldapv3-config-psearch-scope
com.iplanet.am.ldap.connection.delay.between.retries
sun-idrepo-ldapv3-config-service-attributes
sun-idrepo-ldapv3-dncache-enabled
sun-idrepo-ldapv3-dncache-size
openam-idrepo-ldapv3-behera-support-enabled
It might be best to create a user data store instance via the console and then use ssoadm.jsp?cmd=show-datastore to list the properties. You would get a long list of attributes ... too much to show here.
When you create the data store, make sure you specify the password for the bind DN using the property
sun-idrepo-ldapv3-config-authpw=PASSWORD
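For illustration, the "Attribute values" field takes one attribute=value pair per line, using the attribute names listed above. A minimal sketch (the server, bind DN, and base DN values here are placeholders; the show-datastore output of a console-created store is the real reference):
sunIdRepoClass=org.forgerock.openam.idrepo.ldap.DJLDAPv3Repo
sun-idrepo-ldapv3-config-ldap-server=ldap.example.com:389
sun-idrepo-ldapv3-config-authid=cn=Directory Manager
sun-idrepo-ldapv3-config-authpw=PASSWORD
sun-idrepo-ldapv3-config-organization_name=dc=example,dc=com
sun-idrepo-ldapv3-config-users-search-attribute=uid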

Related

How to use the sops provider with Terraform using an array instead of a single value

I'm pretty new to Terraform. I'm trying to use the sops provider plugin for encrypting secrets from a YAML file:
Sops Provider
I need to create a Terraform user object for a later provisioning stage like this example:
users = [{
  name     = "user123"
  password = "password12"
}]
I've prepared a secrets.values.enc.yaml file for storing my secret data:
yaml_users:
  - name: user123
    password: password12
I've encrypted the file using "sops" command. I can decrypt the file successfully for testing purposes.
Now I try to use the encrypted file in Terraform for creating the user object:
data "sops_file" "test-secret" {
source_file = "secrets.values.enc.yaml"
}
# user data decryption
users = yamldecode(data.sops_file.test-secret.raw).yaml_users
Unfortunately I cannot debug the data or the structure of "users" as Terraform doesn't display sensitive data. When I try to use that users variable for the later provisioning stage, it doesn't seem to be what is needed:
Cannot use a set of map of string value in for_each. An iterable
collection is required.
When I do the same thing with the unencrypted yaml file everything seems to be working fine:
users = yamldecode(file("secrets.values.dec.yaml")).yaml_users
It looks like the sops provider decryption doesn't create an array or that "iterable collection" that I need.
Does anyone know how to use the Terraform sops provider for decrypting an array of key-value pairs? A single value like "adminpassword" is working fine.
I think the "set of map of string" part of this error message is the important part: for_each requires either a map directly (in which case the map keys become the instance identifiers) or a set of individual strings (in which case those strings become the instance identifiers).
Your example YAML file shows yaml_users being defined as a YAML sequence of maps, which corresponds to a tuple of objects on conversion with yamldecode.
To use that data structure with for_each you'll need to first project it into a map whose keys will serve as the unique identifier for each instance of the resource. Assuming that the name values are suitably unique, you could project it so that those values are the keys:
data "sops_file" "test-secret" {
source_file = "secrets.values.enc.yaml"
}
locals {
users = tomap({
for u in yamldecode(data.sops_file.test-secret.raw).yaml_users :
u.name => u
})
}
The result being a sensitive value adds an extra wrinkle here, because Terraform won't allow using a sensitive value as the identifier for an instance of a resource -- to do so would make it impossible to show the resource instance address in the UI, and impossible to describe the instance on the command line for commands that need that.
However, this does seem like exactly the use-case shown in the example of the nonsensitive function at the time I'm writing this: you have a collection that is currently wholly marked as sensitive, but you know that only parts of it are actually sensitive and so you can use nonsensitive to explain to Terraform how to separate the nonsensitive parts from the sensitive parts. Here's an updated version of the locals block in my previous example using that function:
locals {
  users = tomap({
    for u in yamldecode(data.sops_file.test-secret.raw).yaml_users :
    nonsensitive(u.name) => u
  })
}
If I'm making a correct assumption that it's only the passwords that are sensitive and that the usernames are okay to disclose, the above will produce a suitable data structure where the usernames are visible in the keys but the individual element values will still be marked as sensitive.
local.users then meets all of the expectations of resource for_each, and so you should be able to use it with whichever other resources you need to repeat systematically for each user.
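For example, a minimal sketch of consuming it (null_resource here is just a placeholder for whichever resource you actually repeat per user):
resource "null_resource" "user" {
  for_each = local.users

  triggers = {
    # each.key is the nonsensitive username; each.value.password is
    # also available and remains marked as sensitive.
    name = each.key
  }
}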
Please note that Terraform's tracking of sensitive values is for UI purposes only and will not prevent these passwords from being saved in the state as a part of whichever resources make use of them. If you use Terraform to manage sensitive data then you should treat the resulting state snapshots as sensitive artifacts in their own right, being careful about where and how you store them.

Active Directory/LDAP: Get DataType or Syntax of specific attributes in NodeJS

I am able to query Active Directory/LDAP to get the user information along with custom attributes. However, I would like to know the underlying DataType/attributeSyntax of each of the attributes returned.
Another problem is that the query will not return an attribute at all if it does not contain any value.
So if I can fetch the attributes and their respective DataTypes, it gives me the flexibility to set a default value based on the DataType while preparing the final output object.
Eg:
1. I query AD to find the user foo with attributes givenName, mail, myCustom1, myCustom2:
{
  givenName: "foo foo",
  mail: "foo#boo.com",
  myCustom1: "TRUE"
}
but the result may not contain myCustom2 because it does not hold a value in AD.
2. Get the syntax for attributes givenName, mail, myCustom1, myCustom2:
{
  givenName: unistring,
  mail: unistring,
  myCustom1: boolean,
  myCustom2: integer
}
Using the above I can map the first result and prepare the final object as:
{
  givenName: "foo foo",
  mail: "foo#boo.com",
  myCustom1: "TRUE",
  myCustom2: // usingHelperFunctionGetDefaultValueFor -> myCustom2
}
Active Directory does not return attributes that do not have values, so that's not just the LDAPjs library; that's just how AD works.
Every object has an attribute called allowedAttributes that will show you every valid attribute that the object can potentially have.
If you need it, allowedAttributesEffective will list every attribute that the current user has permissions to modify.
These are both constructed attributes, meaning you have to ask for them specifically, or else you won't get them. For example, when searching, you have the option to specify the attributes you want to get back. If you specify nothing, you will get every non-constructed attribute that has a value. If you want any constructed attributes, you have to add it specifically to that list.
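For example, with ldapjs this could look something like the sketch below (the URL, bind credentials, base DN, and the myCustom attribute names are placeholders):
var ldap = require('ldapjs');

var client = ldap.createClient({ url: 'ldap://dc.example.com' });

client.bind('user@example.com', 'secret', function (err) {
  if (err) throw err;
  client.search('DC=example,DC=com', {
    scope: 'sub',
    filter: '(sAMAccountName=foo)',
    // Constructed attributes like allowedAttributes are only returned when asked for by name
    attributes: ['givenName', 'mail', 'myCustom1', 'myCustom2', 'allowedAttributes']
  }, function (err, res) {
    if (err) throw err;
    res.on('searchEntry', function (entry) {
      console.log(entry.object); // entry.pojo in ldapjs v3+
    });
    res.on('end', function () {
      client.unbind();
    });
  });
});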
That's just a list of attributes. It won't tell you the type. You have to look to the schema for that, which is more difficult. You have to do a search using the base DN of CN=Schema,CN=Configuration,DC=domain,DC=com, where "domain.com" is the root domain of your forest, which may or may not be the same as the domain you're searching. You could look at the subSchemaSubEntry attribute of any object to find the location of the schema, although it will usually be CN=Aggregate,CN=Schema,CN=Configuration,DC=domain,DC=com (note the added CN=Aggregate).
But anyway, each object in there will have an attribute called ldapDisplayName, which is the name of the attribute as it appears on objects.
So if you want to find details on the givenName attribute, you would search the schema for (ldapDisplayName=givenName). Then the oMSyntax attribute is an enum that will tell you the type. The enum values are shown here. For givenName, that would be 64, which is a Unicode string.
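Reusing the client from the sketch above, the schema lookup might look like this (the schema DN is an assumption for a forest rooted at example.com; read subSchemaSubEntry to confirm the real location):
// Look up the type of a single attribute definition in the schema
client.search('CN=Schema,CN=Configuration,DC=example,DC=com', {
  scope: 'sub',
  filter: '(ldapDisplayName=givenName)',
  attributes: ['ldapDisplayName', 'oMSyntax', 'attributeSyntax']
}, function (err, res) {
  if (err) throw err;
  res.on('searchEntry', function (entry) {
    // e.g. oMSyntax 64 = Unicode string, 1 = Boolean, 2 = Integer
    console.log(entry.object.ldapDisplayName, entry.object.oMSyntax);
  });
});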
The only benefit to looking up the types like this is if you are expecting your code to be run on any AD environment. If your code will only ever be run in one environment, then you can save coding time and run time by just hard-coding the attributes you are looking for and their types.

Query Google Cloud Datastore to retrieve matching results

I am using Google Cloud Datastore to save my application data. I have to add a query to get all results matching Name, Brand, or Sku.
Querying with one of the fields returns records, but using all fields together returns an error.
Query:
const term = "My Red";
const q = gstore.createQuery(req.params.orgId, "Variant")
  .filter('brand', '=', term)
  .filter('sku', '=', term)
  .limit(10);
Error:
{"msec":435.96913800016046,"error":"no matching index found.
recommended index is:- kind: Variant properties: -
name: brand - name:
sku","data":{"code":412,"metadata":{"_internal_repr":{}},"isBoom":true,"isServer":true,"data":null,"output":{"statusCode":500,"payload":{"statusCode":500,"error":"Internal
Server Error","message":"An internal server error
occurred"},"headers":{}}}} Debug: internal, error
Also, I want to perform an OR operation to get matching results, as the above returns data with an AND operation.
Please help me find the correct path to achieve the desired result.
Thanks in advance and let me know if something is not clear.
The error indicates that the composite index required by the respective query is not in Serving state.
That means it's either not created/deployed or it was recently deployed and is still being built.
Composite indexes must be specifically created and deployed in your app.
If you didn't create it you need to do so. The error message indicates the content the index configuration requires. If you're using the development server it might create it automatically, but you still need to deploy it.
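Based on the recommended index embedded in the error message above, the index.yaml entry would look something like this (deployed with, e.g., gcloud datastore indexes create index.yaml):
indexes:
- kind: Variant
  properties:
  - name: brand
  - name: sku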
See Indexes docs for more details.
If you recently deployed the composite index, please note that it can take a significant amount of time until the matching index is built, depending on how many entities of that kind already exist in the Datastore. You can check the status of the index building in the developer console, on the Indexes page.

Incremental loading in Azure Mobile Services

Given the following code:
listView.ItemsSource =
    App.azureClient.GetTable<SomeTable>().ToIncrementalLoadingCollection();
We get incremental loading without further changes.
But what if we modify the read.js server-side script to, e.g., use mssql to query another table instead? What happens to the incremental loading? I'm assuming it breaks; if so, what's needed to support it again?
And what if the query used the untyped version instead, e.g.
App.azureClient.GetTable("SomeTable").ReadAsync(...)
Could incremental loading be somehow supported in this case, or must it be done "by hand" somehow?
Bonus points for insights on how Azure Mobile Services implements incremental loading between the server and the client.
The incremental loading collection works by sending the $top and $skip query parameters (those are also sent when you do a query by using the .Take and .Skip methods in the table). So if you want to modify the read script to do something other than the default behavior, while still maintaining the ability to use that table with an incremental loading collection, you need to take those values into account.
To do that, you can ask for the query components, which will contain the values, as shown below:
function read(query, user, request) {
    var queryComponents = query.getComponents();
    console.log('query components: ', queryComponents); // useful to see all information
    var top = queryComponents.take;
    var skip = queryComponents.skip;

    // do whatever you want with those values, then call request.respond(...)
}
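For instance, here is a sketch of a read script that queries a different table via mssql while still honoring the paging parameters (the table and column names are made up, and OFFSET/FETCH assumes a SQL Server version that supports it):
function read(query, user, request) {
    var components = query.getComponents();
    var skip = components.skip || 0;  // $skip sent by the incremental loading collection
    var top = components.take || 50;  // $top sent by the incremental loading collection

    var sql = "SELECT * FROM OtherTable ORDER BY id " +
              "OFFSET ? ROWS FETCH NEXT ? ROWS ONLY";
    mssql.query(sql, [skip, top], {
        success: function (results) {
            request.respond(statusCodes.OK, results);
        }
    });
}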
The way it's implemented at the client is by using a class which implements the ISupportIncrementalLoading interface. You can see it (and the full source code for the client SDKs) in the GitHub repository, or more specifically the MobileServiceIncrementalLoadingCollection class (the method is added as an extension in the MobileServiceIncrementalLoadingCollectionExtensions class).
And the untyped table does not have that method - as you can see in the extension class, it's only added to the typed version of the table.

BizTalk: Getting error in Promoted Property

I am getting the below error when I run the orchestration and try to assign a value to a promoted property by reading the value of another promoted property.
Error in Suspended Orchestration:
Inner exception: There is no value associated with the property 'BankProcesses.Schemas.Internal_ID' in the message.
Detail:
I have 2 XSD schemas, one for calling a stored procedure and reading its response, and another to write it into a flat file. The internal ID returned in the response from the SP needs to be passed to a node in the other XSD schema to write to the flat file format.
I have promoted an element from the response schema and also promoted an element from the schema that writes to the flat file. I am assigning the value to the promoted properties as below:
strInternalId = msgCallHeaderSP_Response(BankProcesses.Schemas.Internal_ID);
msgCallSP(BankProcesses.Schemas.Header_Internal_ID) = strInternalId;
But when I run the orchestration I get the error as mentioned above. I have checked the response from the stored procedure and the response XML does contain a value, but I am unable to assign that value to another schema. Please advise.
Thanks,
Mayur
You can use exists to check the existence of the property.
if (BankProcesses.Schemas.Internal_ID exists msgCallHeaderSP_Response)
{
    strInternalId = msgCallHeaderSP_Response(BankProcesses.Schemas.Internal_ID);
    msgCallSP(BankProcesses.Schemas.Header_Internal_ID) = strInternalId;
}
One scenario that might cause this error is that there is no Header_Internal_ID element in the message you are trying to modify. Can you inspect the message before modification to ensure that there is an element whose value should be changed - drop the message out to a file location, maybe.
If this is the case, then just ensure that you create this element when you instantiate your message for the first time - even if you initially set it to an empty element.
HTH
To check if the property exists, you can use this syntax:
BMWFS.LS.BizTalk.CFS.BankProcesses.Schemas.Internal_ID exists msgCallHeaderSP_Response
However, if the source field will always be there, you have to work backwards to find out why the Property is not appearing on the Context.
If it's coming from a Port, is the message passing through an XmlDisassembler Component? If it's coming from another Orchestration, are you actually setting the Property?
The easiest way to look at the Context is to route the Message, msgCallHeaderSP_Response, to a Stopped Send Port. You can then view the Context in BizTalk Administrator.
