Transport Layer Security Elasticsearch configuration - security

Note: my version of Elasticsearch is 7.15.0.
I'm new to Elasticsearch. I'm trying to use Kibana alerts; to do that I must create a Rule and a Connector, but when I selected that field I was informed that I have to enable Transport Layer Security and API keys. To do so, I followed the Elastic Transport Layer Security guide, which describes these steps:
Encrypt inter-node communications with Transport Layer Security:
1. Open the $ES_PATH_CONF/elasticsearch.yml file and make the following changes:
a. Add the cluster.name setting and enter a name for your cluster:
cluster.name: my-cluster
b. Add the node.name setting and enter a name for the node. The node name defaults to the hostname of the machine when Elasticsearch starts.
node.name: node-1
c. Add the following settings to enable inter-node communication and provide access to the node’s certificate.
Because you are using the same elastic-certificates.p12 file on every node in your cluster, set the verification mode to certificate:
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
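If you protected the certificate file with a password, the same guide also stores that password in the Elasticsearch keystore, a step that is easy to miss (skip it if you generated the certificates without a password):
bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password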
Since the elastic-certificates.p12 file is not generated automatically during the installation of the software, it must be generated with the elasticsearch-certutil tool that ships in the /usr/share/elasticsearch/bin directory:
a. First:
cd /usr/share/elasticsearch
b. run elasticsearch-certutil in CA mode to generate the certificate authority (with the --pem option it produces the elastic-stack-ca.zip file; without it you get elastic-stack-ca.p12):
bin/elasticsearch-certutil ca --pem
c. unzip the file to extract all the contents and move them to the /etc/elasticsearch directory.
unzip elastic-stack-ca.zip
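d. (Added here for completeness, since the elasticsearch.yml settings above reference it: the node certificate elastic-certificates.p12 is then generated from this CA. With the PEM files extracted from the zip, the guide's command is along these lines:)
bin/elasticsearch-certutil cert --ca-cert ca/ca.crt --ca-key ca/ca.key
This prompts for output details and produces the elastic-certificates.p12 file, which is then copied into /etc/elasticsearch.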
Now the problem occurs when starting the elasticsearch service:
sudo service elasticsearch restart
Job for elasticsearch.service failed because the control process exited with error code. See "systemctl status elasticsearch.service" and "journalctl -xe" for details.
I tried to find where the error is located by running those two diagnostic commands, but I did not understand the output.

Have you checked the permissions and owners on the files? Permissions should be 640, and the owner/group should be root:elasticsearch.
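A minimal check-and-fix sketch, assuming the certificate was copied into the default config directory (adjust the path if yours differs):
ls -l /etc/elasticsearch/elastic-certificates.p12
sudo chown root:elasticsearch /etc/elasticsearch/elastic-certificates.p12
sudo chmod 640 /etc/elasticsearch/elastic-certificates.p12
If the permissions already look right, journalctl -u elasticsearch.service and the log file under /var/log/elasticsearch/ usually contain the actual exception behind the generic systemd failure message.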

stackdriver logging agent not showing logs read from a custom log file in stackdriver logging viewer on Google cloud platform

I decided to post this question because I have run out of debugging ideas, and just ideas are golden, since I know it can be difficult to help debug a virtual instance from here (debugging code is hard enough, haha). Anyway, I created a virtual machine in Compute Engine and a log file that I populate with, for example, the following Python script; let's call it logging.py:
import logging

# '%(name)s' needs the trailing 's'; a bare '%(name)' raises a formatting error
logging.basicConfig(filename='app.log', level=logging.INFO,
                    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
variable = 42  # placeholder for whatever value is being logged
logging.info('Some message ' + str(type(variable)))
every time I run python3 logging.py, app.log is effectively populated. (logging.py and app.log are in the same directory, the /home/username/ folder.)
I want Stackdriver to show this log in the Logging viewer every time it's written, so I installed the Stackdriver agent as follows, on the virtual machine command line:
$ curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
$ sudo bash install-logging-agent.sh
No errors that I can see are delivered here; in fact, you can see the messages obtained in the Stackdriver viewer (screenshot omitted).
After this, I proceeded to create a .conf file at /etc/google-fluentd/config.d/app.conf with these parameters:
<source>
type tail
format none
path /home/username/app.log
pos_file /var/lib/google-fluentd/pos/app.pos
read_from_head true
tag whatever-tag
</source>
After that is created, I run sudo service google-fluentd restart.
Then I execute python3 logging.py, but no logs are added to the Stackdriver logging viewer.
So, where might I have gone wrong?
Things I have tried/checked:
-Have more than 13 gigabytes of RAM available
-If I run logger "some message" on the command line, I effectively add a log with "some message" to the log viewer
-If I run
ps ax | grep fluentd
I obtain:
3033 ? Sl 0:09 /opt/google-fluentd/embedded/bin/ruby /usr/sbin/google-fluentd --log /var/log/google-fluentd/google-fluentd.log --no-supervisor
3309 pts/0 S+ 0:00 grep --color=auto fluentd
-Both my user and the service account I use have the Logging Admin permission in IAM roles.
-This is the documentation I have based myself on:
https://cloud.google.com/logging/docs/agent/troubleshooting?hl=es-419
https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/list?hl=es-419
https://cloud.google.com/logging/docs/agent/configuration?hl=es-419
https://medium.com/google-cloud/how-to-log-your-application-on-google-compute-engine-6600d81e70e3
https://cloud.google.com/logging/docs/agent/installation
-If I run sudo service google-fluentd status , the agent appears active.
-My instance has access to all the APIs. It's an n1-standard-4 (4 vCPUs, 15 GB of memory) running Ubuntu Linux 18.04.
So, what else can I check to debug this? I'm out of ideas here; hope I'm not being an idiot here :(
Based on my understanding, I think you are looking for the following fluentd resource types:
generic_node
“A generic node identifies a machine or other computational resource for which no more specific resource type is applicable. The label values must uniquely identify the node.”
generic_task
“A generic task identifies an application process for which no more specific resource is applicable, such as a process scheduled by a custom orchestration system. The label values must uniquely identify the task.”
The source of my information has been found here
This document explains how to send logs from your application in different ways:
Cloud Logging API
Cloud Logging Agent
Generic fluentd
As you mentioned having installed fluentd, let me provide more focused documentation about the Cloud Logging agent. I also found some Python client library documentation that you may be interested in.
Finally, I found an nginx/apache use-case guide that you may use as a reference.
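If the agent is running but a tailed file never shows up in the viewer, the agent's own log (the path is visible in your ps output) is usually the quickest place to look; for example:
sudo tail -n 50 /var/log/google-fluentd/google-fluentd.log
Permission or parsing errors for /home/username/app.log would normally appear there.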
For some reason, if I change both the path the .conf file points to and the directory where the log lives to /var/log/, making the final path /var/log/app.log, it works correctly. Possibly there is a configuration issue causing the logging agent to only capture logs in specific predetermined folders, or a permissions issue that stops it from working when the log is in the user's home directory.
I found this solution by chance (random testing, basically). I did not find anything in the main articles that are supposed to teach me how to configure the logging agent that could point me in the right direction, those articles being these ones:
https://cloud.google.com/logging/docs/agent/troubleshooting?hl=es-419
https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/list?hl=es-419
https://cloud.google.com/logging/docs/agent/configuration?hl=es-419
https://medium.com/google-cloud/how-to-log-your-application-on-google-compute-engine-6600d81e70e3
https://cloud.google.com/logging/docs/agent/installation
If I needed it to work in my home directory, it's not clear just from these articles how to do it, what configuration file I would need to change, or where to start, so I recommend that Google improve that aspect of the docs.
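(A guess, and only a guess, about the home-directory case: if the agent process runs as a non-root user on your install, it may simply lack permission to traverse /home/username. Something like the following would test and fix that, assuming a service user named google-fluentd exists:
sudo -u google-fluentd cat /home/username/app.log   # fails if the agent user cannot read it
chmod o+rx /home/username                           # allow directory traversal
chmod o+r /home/username/app.log                    # allow reading the file itself
)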
The documentation you have sent, https://docs.fluentd.org/quickstart, is pretty interesting; maybe I can find the explanation there. Thank you for your help.

Alfresco solr/search stops working after installing records management

I am using alfresco 5.2.3 enterprise with solr6 search services.
Everything works fine when I deploy our application's custom code inside the alfresco-platform jar and alfresco-share jar.
Now, when I install the alfresco records management amp files, the search stops working. I am not able to find even a single document or folder.
RM amp version: alfresco-rm-enterprise-repo-2.7.0.amp and alfresco-rm-enterprise-share-2.7.0.amp
There are three different instances: repo (where alfresco.war sits), share (where share.war and ADF sits) and index server (where indexes are maintained).
I install alfresco-rm-enterprise-repo-2.7.0.amp on repo and alfresco-rm-enterprise-share-2.7.0.amp on share, and restart the servers. The RM installation succeeds without any errors, but search does not work at all afterwards.
Is it possible that some indexes were corrupted by the RM installation and we need to reindex? Can that resolve this issue?
NOTE: The versions of alfresco and RM are already in the supported stack as per the alfresco documentation link: https://docs.alfresco.com/5.2/concepts/supported-platforms-ACS.html
Any help would be appreciated.
Finally, the problem is resolved.
The keystore and truststore certificate files were the culprit.
New keystore and truststore files had to be generated, as the communication between ACS and the index server was not happening, resulting in a GetModelsDiff 403 error in the logs.
Additionally, we ensured the following settings were in place in the ACS and index server files:
ACS alfresco-global.properties:
alfresco.host=alfresco-dev-repo.domain.com
alfresco.port=443
alfresco.protocol=https
share.host=alfresco-dev-repo.domain.com
share.port=443
share.protocol=https
db.ssl_params=&useSSL=true&requireSSL=true&verifyServerCertificate=true&trustCertificateKeyStoreUrl=file:///opt/alfresco-content-services/alf_data/keystore/ssl.truststore&trustCertificateKeyStoreType=JCEKS&trustCertificateKeyStorePassword=kT9X6oe68t
db.url=jdbc:mysql://${db.host}/${db.name}?${db.params}${db.ssl_params}
index.subsystem.name=solr6
dir.keystore=${dir.root}/keystore
solr.host=alfresco-dev-index.domain.com
solr.port.ssl=8983
solr.port=80
solr.secureComms=https
#ssl encryption
encryption.ssl.keystore.location=${dir.keystore}/ssl.keystore
encryption.ssl.keystore.type=JCEKS
encryption.ssl.keystore.keyMetaData.location=${dir.keystore}/ssl-keystore-passwords.properties
encryption.ssl.truststore.location=${dir.keystore}/ssl.truststore
encryption.ssl.truststore.type=JCEKS
encryption.ssl.truststore.keyMetaData.location=${dir.keystore}/ssl-truststore-passwords.properties
Solr Configuration:
solr.in.sh file:
SOLR_PORT=8983
SOLR_SSL_KEY_STORE=/opt/alfresco-search-services/solrhome/keystore/ssl.keystore
SOLR_SSL_KEY_STORE_PASSWORD=kT9X6oe68t
SOLR_SSL_TRUST_STORE=/opt/alfresco-search-services/solrhome/keystore/ssl.truststore
SOLR_SSL_TRUST_STORE_PASSWORD=kT9X6oe68t
SOLR_SSL_NEED_CLIENT_AUTH=true
SOLR_SSL_WANT_CLIENT_AUTH=false
alfresco core > solrcore.properties AND archive core > solrcore.properties
alfresco.secureComms=https
data.dir.root=/opt/alfresco-search-services/solrhome/
alfresco.port.ssl=8443
alfresco.encryption.ssl.keystore.passwordFileLocation=ssl-keystore-passwords.properties
alfresco.encryption.ssl.truststore.passwordFileLocation=ssl-truststore-passwords.properties
alfresco.baseUrl=/alfresco
alfresco.host=alfdevhostname.domain.com
alfresco.encryption.ssl.keystore.provider=
alfresco.encryption.ssl.truststore.type=JCEKS
alfresco.encryption.ssl.truststore.provider=
alfresco.encryption.ssl.keystore.type=JCEKS
alfresco.encryption.ssl.keystore.location=ssl.keystore
alfresco.port=80
alfresco.version=5.2.3
alfresco.encryption.ssl.truststore.location=ssl.truststore
There is no need to touch the files under this location:
/opt/alfresco-search-services/solrhome/templates/rerank/conf
And finally the most important part:
Latest/Updated Certificate files placed under:
/opt/alfresco-search-services/solrhome/keystore
And the same certificate files placed under:
/opt/alfresco-search-services/solrhome/alfresco/conf
and
/opt/alfresco-search-services/solrhome/archive/conf
and on ACS server:
/opt/alfresco-content-services/alf_data/keystore
On top of that, if the issue is still not resolved, you can try the following:
Set solr.secureComms=none in alfresco-global.properties and alfresco.secureComms=none in the archive core and alfresco core, and restart both to see whether a plain HTTP connection works without SSL/HTTPS.
Validate with the infra/network team whether the installed certificates are correct.
Try pointing alfresco and solr directly at each other's IP addresses instead of host names, as traffic might be coming through a load balancer.
Try telnet from the alfresco repo server to the solr host, and vice versa (see the connectivity sketch after this list).
Put -Djavax.net.debug=all in alfresco > tomcat/scripts/ctl.sh and see if you get any useful information.
Check not just alfresco.log and solr.log; look in the access logs for 404 or 200 status responses, or curl on the solr machine against the URL that is logged in the localhost-access logs.
Avoid starting/stopping solr as the root user; ideally a dedicated user should run solr.
Ideally, certificates should be copied from alfresco (alf_data/keystore) to the solr server, not from solr to the alfresco server; but if that is not working, you can try the other way around.
The alfresco.host, share.host, alfresco.port and share.port values in alfresco-global.properties should match the corresponding properties in solrhome/alfresco/conf/solrcore.properties and solrhome/archive/conf/solrcore.properties.
Turn on debug statements on both the alfresco repo side and the solr side to capture any unknown or hidden exceptions/errors.
You can also open the solr admin console page in a browser and check the logs from there.
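A quick connectivity sketch from the repo server (host names and ports are the ones from the configuration above; adjust them to yours). The raw TCP check separates network problems from certificate problems, and the HTTPS probe typically fails with a TLS error rather than a timeout when only the certificates are wrong:
telnet alfresco-dev-index.domain.com 8983
curl -vk https://alfresco-dev-index.domain.com:8983/solr/
Note that with SOLR_SSL_NEED_CLIENT_AUTH=true, curl would also need the client certificate, so a handshake failure here does not by itself mean the setup is broken.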
I faced a similar issue on Alfresco 6.2.2 with alfresco-insight-engine 2.0.0. I hit multiple errors like the ones below, one by one, as I changed the configuration:
If the certificates do not match between ACS and Solr, or between ACS, Solr and AWS, or the generated certificates are incorrect, or they are compatible only with a particular Java version, or they were not added to the truststore correctly, then you may get:
javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException ,
unable to find valid certification path to requested target ,
Caused by: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
I checked that the configuration (certificate) was imported correctly on the AWS side, and no restriction was applied there.
But I was finally able to resolve it with the following combination:
Alfresco side
Server.xml:
<Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol"
SSLEnabled="true" maxThreads="150" scheme="https"
keystoreFile="/app/tomcat/keystores/ssl.keystore"
keystorePass="pwd" keystoreType="JCEKS"
secure="true" connectionTimeout="240000"
truststoreFile="/app/tomcat/keystores/ssl.truststore"
truststorePass="pwd" truststoreType="JCEKS"
clientAuth="false" sslProtocol="TLS" />
alfresco-global.properties:
index.subsystem.name=solr6
solr.secureComms=https
solr.port=8984
solr.port.ssl=8984
solr.host=domainname
alfresco.context=alfresco
alfresco.host=host
alfresco.port=8443
alfresco.protocol=https
#
share.context=share
share.host=host
share.port=8443
share.protocol=https
#ssl encryption
encryption.ssl.keystore.location=/app/tomcat/keystores/ssl.keystore
encryption.ssl.keystore.type=JCEKS
encryption.ssl.keystore.keyMetaData.location=/app/tomcat/keystores/ssl-keystore-passwords.properties
encryption.ssl.truststore.location=/app/tomcat/keystores/ssl.truststore
encryption.ssl.truststore.type=JCEKS
encryption.ssl.truststore.keyMetaData.location=/app/tomcat/keystores/ssl-truststore-passwords.properties
solr side
solr.in.sh
SOLR_SOLR_HOST=domainname
SOLR_ALFRESCO_HOST=domainname
SOLR_SSL_CUSTOM="-Dsolr.ssl.checkPeerName=false -Dsolr.allow.unsafe.resourceloading=true"
SOLR_OPTS="$SOLR_SSL_CUSTOM"
SOLR_PORT=8984
SOLR_HOST=domainname
SOLR_SSL_KEY_STORE=/app/alfresco-insight-engine/solrhome/keystore/ssl.repo.client.keystore
SOLR_SSL_KEY_STORE_PASSWORD=pwd
SOLR_SSL_KEY_STORE_TYPE=JCEKS
SOLR_SSL_TRUST_STORE=/app/alfresco-insight-engine/solrhome/keystore/ssl.repo.client.truststore
SOLR_SSL_TRUST_STORE_PASSWORD=pwd
SOLR_SSL_TRUST_STORE_TYPE=JCEKS
SOLR_SSL_NEED_CLIENT_AUTH=false
SOLR_SSL_WANT_CLIENT_AUTH=true
solrcore.properties (both cores)
alfresco.encryption.ssl.truststore.location=ssl.repo.client.truststore
alfresco.encryption.ssl.keystore.provider=
alfresco.encryption.ssl.truststore.type=JCEKS
alfresco.host=ip-10-233-4-126.ap-east-1.compute.internal
alfresco.encryption.ssl.keystore.location=ssl.repo.client.keystore
alfresco.encryption.ssl.truststore.provider=
alfresco.port.ssl=8443
alfresco.encryption.ssl.truststore.passwordFileLocation=ssl-truststore-passwords.properties
alfresco.port=8080
alfresco.encryption.ssl.keystore.type=JCEKS
alfresco.secureComms=https
alfresco.encryption.ssl.keystore.passwordFileLocation=ssl-keystore-passwords.properties
solrcore.properties (under rerank/conf)
alfresco.host=domainname
alfresco.port=8080
alfresco.port.ssl=8443
alfresco.secureComms=https
alfresco.encryption.ssl.keystore.type=JCEKS
alfresco.encryption.ssl.keystore.provider=
alfresco.encryption.ssl.keystore.location=ssl.repo.client.keystore
alfresco.encryption.ssl.keystore.passwordFileLocation=ssl-keystore-passwords.properties
alfresco.encryption.ssl.truststore.type=JCEKS
alfresco.encryption.ssl.truststore.provider=
alfresco.encryption.ssl.truststore.location=ssl.repo.client.truststore
alfresco.encryption.ssl.truststore.passwordFileLocation=ssl-truststore-passwords.properties
The alfresco keystore files (used/pointed to by Alfresco) are under /app/tomcat/keystores.
And solr keystore files (used/pointed to by solr) are under /app/alfresco-insight-engine/solrhome/keystore.
NOTE: We copied the solr keystore files to the following locations as well: /app/alfresco-insight-engine/solrhome/alfresco/conf , /app/alfresco-insight-engine/solrhome/archive/conf , /app/alfresco-insight-engine/solrhome/templates/rerank/conf
NOTE: If it's just a certificate missing from the truststore cacerts, then you can add the certificate to cacerts as described in this link: Error - trustAnchors parameter must be non-empty
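A typical import into the JVM's cacerts (the alias and certificate file name are placeholders; the default cacerts password is changeit, and on Java 8 the path is $JAVA_HOME/jre/lib/security/cacerts) looks roughly like:
keytool -importcert -alias solr-ssl -file solr-cert.crt -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit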
Other points that can be checked if the above does not work:
Check whether the Java version is a supported one (in the supported stack) and the certificates are actually being added to that Java's truststore.
Check the Java version from alfresco's admin summary page and verify that the certificates get added into the correct Java installation.
Check that the solr host, port and SSL port are picked up correctly. Verify this location - http://domainname/alfresco/s/enterprise/admin/admin-searchservice - as the port might be picked up from here and might not match the one in the alfresco-global.properties file. In the case of mismatching properties between alfresco-global.properties and the admin-searchservice URL, you may get a "Connection refused" error in the alfresco logs when alfresco tries to connect to solr.
If the JKS certificate type has become obsolete, try generating a PKCS12 or JCEKS type certificate.
When solr is running on 8983 (http) as well as 8984 (https/ssl), you may get the error "Unsupported or unrecognized SSL message". Try stopping the one that is not used.
If the https solr URL on 8984 is not accessible from a browser, try importing the correct certificate at AWS, and also try adding the following entry in the /app/alfresco-insight-engine/solr/server/etc/jetty-ssl.xml file: FALSE

Resource temporarily unavailable. Authentication by key failed (Error -18). (Error #35)

I'm using an EC2 Amazon Web Services instance to run my server with NodeJS and MongoDB.
I was able to save and load data from my Android application through the NodeJS server and MongoDB, but when I tried to check the data using RoboMongo (Robo 3T), this error occurred:
Resource temporarily unavailable. Authentication by key (path of the .pem key) failed (Error -18). (Error #35)
(Screenshots: the Robo 3T connection settings and the error dialog.)
This is what I did in Robomongo; these settings are the result of searching Google... I think I did it right...
What is wrong?
I solved the problem myself.
When you have this problem,
1. Check /etc/mongod.conf: in the network interfaces section, bindIp must be 0.0.0.0, not 127.0.0.1 (see the snippet after this list).
2. Check the SSH User Name.
For an Amazon Linux AMI, the user name is ec2-user.
For the others, check out this link: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html
3. If that didn't help, try the build the developer uploaded (1.2 Beta):
https://github.com/Studio3T/robomongo/issues/1189#issuecomment-353279070
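For reference, a minimal net section of /etc/mongod.conf matching step 1 might look like this (binding to 0.0.0.0 exposes mongod on all interfaces, so make sure access is restricted by a security group or by authentication):
net:
  port: 27017
  bindIp: 0.0.0.0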

WSO2 ESB 4.9.0 fails to start with security vault enabled

I'm using wso2esb 4.9.0 and trying to configure the security vault to encrypt passwords, following what is described in the official guide.
I modified (commented out) lines in the secret-conf.properties file and specified the secret provider classes.
I kept the default values (in particular the password and JKS, for testing).
I ran the ciphertool from the bin folder.
The passwords in cipher-text.properties were encrypted,
and the references in the configuration files were modified with the attribute svns:secretAlias="[cipher-text.key]".
I restarted the server, entered the store/key password, and got the following error:
org.h2.jdbc.JdbcSQLException: Wrong user name or password [8004-140]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:327)
at org.h2.message.DbException.get(DbException.java:167)
at org.h2.message.DbException.get(DbException.java:144)
at org.h2.message.DbException.get(DbException.java:133)
at org.h2.engine.Engine.validateUserAndPassword(Engine.java:277)
at org.h2.engine.Engine.getSession(Engine.java:133)
at org.h2.engine.Session.createSession(Session.java:122)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:241)
at org.h2.engine.SessionRemote.createSession(SessionRemote.java:219)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:111)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:95)
at org.h2.Driver.connect(Driver.java:73)
at org.apache.tomcat.jdbc.pool.PooledConnection.connectUsingDriver(PooledConnection.java:278)
at org.apache.tomcat.jdbc.pool.PooledConnection.connect(PooledConnection.java:182)
at org.apache.tomcat.jdbc.pool.ConnectionPool.createConnection(ConnectionPool.java:701)
at org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:635)
at org.apache.tomcat.jdbc.pool.ConnectionPool.getConnection(ConnectionPool.java:188)
at org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection(DataSourceProxy.java:128)
at org.wso2.carbon.user.core.claim.dao.ClaimDAO.getDialectCount(ClaimDAO.java:158)
at org.wso2.carbon.user.core.common.DefaultRealm.populateProfileAndClaimMaps(DefaultRealm.java:429)
at org.wso2.carbon.user.core.common.DefaultRealm.init(DefaultRealm.java:105)
at org.wso2.carbon.user.core.common.DefaultRealmService.initializeRealm(DefaultRealmService.java:230)
at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:96)
at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:109)
at org.wso2.carbon.user.core.internal.Activator.startDeploy(Activator.java:68)
at org.wso2.carbon.user.core.internal.BundleCheckActivator.start(BundleCheckActivator.java:61)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:711)
at java.security.AccessController.doPrivileged(Native Method)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:702)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:683)
at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:381)
at org.eclipse.osgi.framework.internal.core.AbstractBundle.resume(AbstractBundle.java:390)
at org.eclipse.osgi.framework.internal.core.Framework.resumeBundle(Framework.java:1176)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.resumeBundles(StartLevelManager.java:559)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.resumeBundles(StartLevelManager.java:544)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.incFWSL(StartLevelManager.java:457)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.doSetStartLevel(StartLevelManager.java:243)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.dispatchEvent(StartLevelManager.java:438)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.dispatchEvent(StartLevelManager.java:1)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:340)
[2016-08-31 12:11:46,829] ERROR - Activator Cannot start User Manager Core bundle
org.wso2.carbon.user.core.UserStoreException: Cannot initialize the realm.
at org.wso2.carbon.user.core.common.DefaultRealmService.initializeRealm(DefaultRealmService.java:240)
at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:96)
I checked both files, ./repository/conf/datasources/master-datasources.xml and ./repository/conf/security/cipher-text.properties; the cipher key matches.
Can you tell me what I've missed?
In order to enable secure vault, you need to execute ./ciphertool.sh (for Linux; on Windows it is ciphertool.bat) with the parameter -Dconfigure, which encrypts the values in cipher-text.properties, adds the alias to each configuration file using the xpath mentioned in cipher-tool.properties, and creates the secret-conf.properties file. The newly created secret-conf.properties will contain the values for secretRepositories.file.location, etc.
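A minimal run, assuming a default distribution layout with JAVA_HOME already set (the exact script name may differ slightly between releases):
cd wso2esb-4.9.0/bin
./ciphertool.sh -Dconfigure
The tool prompts for the primary keystore password (wso2carbon by default, if you have not changed it) and rewrites cipher-text.properties, the referenced configuration files, and secret-conf.properties in place.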

Enabling wsadmin Console assistance

I enabled the "Log command assistance commands" option in Websphere > console preferences.
The documentation says the following:
Specifies whether to log all the command assistance wsadmin data to a file. This file is saved to ${LOG_ROOT}/server/commandAssistanceJythonCommands_user name.log:
server is the server process where the console runs, such as dmgr, server1, adminagent, or jobmgr.
user name is the administrative console user name.
When you manage a profile using an administrative agent, the command assistance log is put in the location of the profile that the administrative agent is managing. The ${LOG_ROOT} variable defines the profile location.
I am not able to find the default value of LOG_ROOT.
The actual value of LOG_ROOT depends on the values of other variables. The variables are defined in AdminConsole -> Environment -> WebSphere Variables. Because variables exist at different scopes (cell, node, cluster, server), finding the actual value can be a bit tricky. The most reliable solution is to use wsadmin and the AdminOperations.expandVariable operation.
For ND environment:
adminOperations = AdminControl.queryNames('WebSphere:*,type=AdminOperations,process=dmgr').splitlines()[0]
print AdminControl.invoke(adminOperations, 'expandVariable', ['${LOG_ROOT}/commandAssistance_ssdimmanuel.log'])
For standalone WAS (assuming that the server name is 'server1'):
adminOperations = AdminControl.queryNames('WebSphere:*,type=AdminOperations,process=server1').splitlines()[0]
print AdminControl.invoke(adminOperations, 'expandVariable', ['${LOG_ROOT}/commandAssistance_ssdimmanuel.log'])
Advertisement mode
Using WDR library (http://wdr.github.io/WDR/) you could do it in just one simple line:
For ND:
print getMBean1(type='AdminOperations', process='dmgr').expandVariable('${LOG_ROOT}/commandAssistance_ssdimmanuel.log')
For standalone WAS:
print getMBean1(type='AdminOperations', process='server1').expandVariable('${LOG_ROOT}/commandAssistance_ssdimmanuel.log')
