C:\domino-iam-service>npm start
> domino-iam-service@2.2.0 start
> cross-env NODE_ENV=production node iam-server.js
WARNING: NODE_ENV value of 'production' did not match any deployment config file names.
WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
[11:52:41][info][master][master]: IAM version: 2.2.0
Start to unlock config:
? Enter current IAM server password: ********
Config is unlocked.
[11:53:43][info][master][master]: Starts as cluster mode.
[11:53:43][info][stats][master]: IAM StatsClient enabled: false
[11:53:43][info][cluster][master]: Worker 1 is started
[11:53:43][info][cluster][master]: Worker 2 is started
WARNING: NODE_ENV value of 'production' did not match any deployment config file names.
WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
WARNING: NODE_ENV value of 'production' did not match any deployment config file names.
WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
[11:53:49][info][worker][worker-1]: Worker 1 starts to provide service, which process id is: 3752
[11:53:49][info][initServices][worker-1]: Start IAM service on allAddress:9443
[11:53:49][info][worker][worker-2]: Worker 2 starts to provide service, which process id is: 2772
[11:53:49][info][stats][worker-1]: IAM StatsClient enabled: false
[11:53:49][info][initServices][worker-2]: Start IAM service on allAddress:9443
[11:53:50][warn][DBConnector][worker-1]: dbConfig.dominoConfig.credential.CLIENT_KEY_PASSPHRASE setting is empty, it is NOT SECURE.
[11:53:50][info][stats][worker-2]: IAM StatsClient enabled: false
[11:53:50][warn][DBConnector][worker-1]: Please use openssl tool to add passphrase for your client key file.
[11:53:50][warn][DBConnector][worker-2]: dbConfig.dominoConfig.credential.CLIENT_KEY_PASSPHRASE setting is empty, it is NOT SECURE.
[11:53:50][warn][DBConnector][worker-2]: Please use openssl tool to add passphrase for your client key file.
[11:53:50][error][ClusterCache][worker-2]: Error occurred when constructing ClusterCache with error: timeout
[11:53:50][error][ClusterCache][worker-1]: Error occurred when constructing ClusterCache with error: timeout
[11:53:50][info][DBConnector][worker-2]: Domino isn't connected, retry after 30s
[11:53:50][info][DBConnector][worker-1]: Domino isn't connected, retry after 30s
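The CLIENT_KEY_PASSPHRASE warning above can be addressed as the log suggests. A minimal sketch with openssl, assuming an RSA client key (the filenames are placeholders, not from this setup):

openssl rsa -aes256 -in client.key -out client-encrypted.key

This prompts for a new passphrase and writes an encrypted copy of the key; point the dbConfig.dominoConfig.credential settings at the encrypted key and set CLIENT_KEY_PASSPHRASE accordingly. For an EC key, openssl ec -aes256 takes the same arguments.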
The Domino server console shows only one error message:
0554:0002-0594] 2022/07/14 12:06:17 PM AMgr: Error executing agent 'DeleteExpiredDocs' in 'iam-store.nsf'. Agent signer 'Domino Template Development/Domino': You are not authorized to perform that operation
Related
Failed to start up domino-iam-service. Following the tutorial at https://doc.cwpcollaboration.com/appdevpack/docs/en/iam_setup_prepare_part.html,
I have installed the adpconfig.nsf database for the HCL Domino AppDev Pack configuration.
On the access control page, I have granted read access to IAMAccessors as the tutorial describes.
I have created the self-signed SSL certificates, but the demo IAM service fails to start.
C:\iam>npm start
> domino-iam-service@2.2.0 start
> cross-env NODE_ENV=production node iam-server.js
WARNING: NODE_ENV value of 'production' did not match any deployment config file names.
WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
[17:40:58][info][master][master]: IAM version: 2.2.0
[17:40:58][warn][master][master]: IAM is in pilot mode. Please do not use this in production environment.
[17:40:58][warn][master][master]: To swith to production mode, delete config/local.properties then setup in production mode.
Start to unlock config:
? Enter current IAM server password: ************
Config is unlocked.
[17:41:09][info][master][master]: Starts as single node mode.
[17:41:09][info][initServices][master]: Start IAM service on allAddress:443
[17:41:09][info][stats][master]: IAM StatsClient enabled: false
[17:41:09][error][DBConnector][master]: Failed to obtain certificate content with: EISDIR: illegal operation on a directory, read
[17:41:09][error][adpConfig][master]: Error polling adpconfig. Error: EISDIR: illegal operation on a directory, read
at Object.readSync (node:fs:727:3)
at tryReadSync (node:fs:433:20)
at Object.readFileSync (node:fs:479:19)
at t (C:\iam\iam-server.js:1:57672)
at Object.g [as createCredentialOption] (C:\iam\iam-server.js:1:57795)
at Object.init (C:\iam\iam-server.js:1:9475)
at Object.init (C:\iam\iam-server.js:1:18954)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async e.exports (C:\iam\iam-server.js:1:87837)
at async C:\iam\iam-server.js:1:83053 {
errno: -4068,
syscall: 'read',
code: 'EISDIR'
}
[17:41:09][warn][IAMService][master]: LDAP has not been configured yet! Please go to Admin Service to configure it.
[17:41:09][error][initServices][master]: Exiting.. Error: keystore must be a JSON Web Key Set formatted object
[17:41:09][info][initServices][master]: IAM service is shutdown
C:\iam>
(Screenshot: https://i.stack.imgur.com/KIpue.png)
ADP config integration will likely not work in pilot mode: in pilot mode there is no connection set up to the Domino server. You will need to configure IAM for production mode for that to work.
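Per the startup warnings in the log above, a minimal sketch of the switch to production mode (the setup command follows the IAM documentation and is an assumption for your version):

C:\iam>del config\local.properties
C:\iam>npm run setup
C:\iam>npm start

During setup, choose production mode and make sure the certificate settings point at files rather than directories, which should also clear the EISDIR error above.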
I have a custom Databricks instance with a domain name that points to an AWS load balancer. When I enter that information using either the HTTP instructions here or the Databricks cluster instructions here, I get this response in the dbt CLI:
Connection:
host: https://subdomain.domain.com
port: 443
cluster: 123456-stuff00003
endpoint: None
schema: default
organization: 0
16:40:39.470091 [debug] [MainThread]: Acquiring new spark connection "debug"
16:40:39.471632 [debug] [MainThread]: Using spark connection "debug"
16:40:39.472524 [debug] [MainThread]: On debug: select 1 as id
16:40:39.472953 [debug] [MainThread]: Opening a new connection, currently in state init
Connection test: [ERROR]
1 check failed:
dbt was unable to connect to the specified database.
The database returned the following error:
>Runtime Error
Database Error
failed to connect
Unfortunately, dbt's debugging logs are terrible and I am not entirely sure why it is failing. I do know that when I connect to the cluster via IntelliJ I have to provide the CA file, the client certificate file, and the client key file, because I am using a self-signed SSL cert (unfortunately, the self-signed cert is required). Also, when defining my ~/.databrickscfg file I have to provide the argument insecure = true, as sketched below.
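For reference, a minimal ~/.databrickscfg along those lines (host from the connection block above; the token is a placeholder):

[DEFAULT]
host = https://subdomain.domain.com
token = <personal-access-token>
insecure = true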
I encountered this issue recently and fixed it by installing root certificates: execute the "Install Certificates.command" script in the Python home directory used to run dbt.
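On macOS, assuming a python.org installation (the version in the path is an assumption), that looks like:

/Applications/Python\ 3.10/Install\ Certificates.command

The script installs the certifi CA bundle so that Python's ssl module can verify server certificates.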
Laurent
We are trying to scan our Docker images using the Anchore Engine Jenkins plugin.
Currently we build our application Docker images, push them to our own private local registry, and then deploy them to our test environments.
Now we want to set up Docker image scanning in our CI/CD process to check for any vulnerabilities.
We installed Anchore Engine using the recommended Docker Compose YAML method given in the documentation:
https://anchore.freshdesk.com/support/solutions/articles/36000020729-install-on-docker-swarm
After installation, we installed the Anchore Container Image Scanner Plugin in Jenkins.
We configured the plugin as described in the documentation:
https://wiki.jenkins.io/display/JENKINS/Anchore+Container+Image+Scanner+Plugin
However, the scanning fails with the following error message:
2018-10-11T07:01:44.647 INFO AnchoreWorker Analysis request accepted, received image digest sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8
2018-10-11T07:01:44.647 INFO AnchoreWorker Waiting for analysis of 10.180.25.2:5000/hello-world:latest, polling status periodically
2018-10-11T07:01:44.647 DEBUG AnchoreWorker anchore-engine get policy evaluation URL: http://10.180.25.2:8228/v1/images/sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8/check?tag=10.180.25.2:5000/hello-world:latest&detail=true
2018-10-11T07:01:44.648 DEBUG AnchoreWorker Attempting anchore-engine get policy evaluation (1/300)
2018-10-11T07:01:44.675 DEBUG AnchoreWorker anchore-engine get policy evaluation failed. URL: http://10.180.25.2:8228/v1/images/sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8/check?tag=10.180.25.2:5000/hello-world:latest&detail=true, status: HTTP/1.1 404 NOT FOUND, error: {
"detail": {},
"httpcode": 404,
"message": "image is not analyzed - analysis_status: not_analyzed"
}
NOTE:
In the image tag 10.180.25.2:5000/hello-world:latest, 10.180.25.2:5000 is our local private registry and hello-world:latest is the latest hello-world image from Docker Hub, which we pulled and pushed into our registry to try out image scanning with Anchore Engine (the exact commands are sketched below).
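For reference, the pull/tag/push steps look like this (registry address from the note above):

docker pull hello-world:latest
docker tag hello-world:latest 10.180.25.2:5000/hello-world:latest
docker push 10.180.25.2:5000/hello-world:latest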
Unfortunately, we have not been able to find many resources online for resolving the issue described above.
If anyone has worked with Anchore Engine, please take a look and help us resolve this issue.
Also, any suggestions or alternatives to Anchore Engine, or detailed steps in case we have missed anything, would be really appreciated.
The end of the output is as follows:
2018-10-15T00:48:43.880 WARN AnchoreWorker anchore-engine get policy evaluation failed. HTTP method: GET, URL: http://10.180.25.2:8228/v1/images/sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8/check?tag=10.180.25.2:5000/hello-world:latest&detail=true, status: 404, error: {
"detail": {},
"httpcode": 404,
"message": "image is not analyzed - analysis_status: not_analyzed"
}
2018-10-15T00:48:43.880 WARN AnchoreWorker Exhausted all attempts polling anchore-engine. Analysis is incomplete for sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8
2018-10-15T00:48:43.880 ERROR AnchorePlugin Failing Anchore Container Image Scanner Plugin step due to errors in plugin execution
hudson.AbortException: Timed out waiting for anchore-engine analysis to complete (increasing engineRetries might help). Check above logs for errors from anchore-engine
at com.anchore.jenkins.plugins.anchore.BuildWorker.runGatesEngine(BuildWorker.java:480)
at com.anchore.jenkins.plugins.anchore.BuildWorker.runGates(BuildWorker.java:343)
at com.anchore.jenkins.plugins.anchore.AnchoreBuilder.perform(AnchoreBuilder.java:338)
at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
at hudson.model.Build$BuildExecution.build(Build.java:206)
at hudson.model.Build$BuildExecution.doRun(Build.java:163)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
I also checked the status and found the following:
docker run anchore/engine-cli:latest anchore-cli --u admin --p admin123 --url http://172.18.0.1:8228/v1 system status
Service analyzer (dockerhostid-anchore-engine, http://anchore-engine:8084): up
Service catalog (dockerhostid-anchore-engine, http://anchore-engine:8082): up
Service policy_engine (dockerhostid-anchore-engine, http://anchore-engine:8087): down (unavailable)
Service simplequeue (dockerhostid-anchore-engine, http://anchore-engine:8083): up
Service apiext (dockerhostid-anchore-engine, http://anchore-engine:8228): up
Service kubernetes_webhook (dockerhostid-anchore-engine, http://anchore-engine:8338): up
Engine DB Version: 0.0.7
Engine Code Version: 0.2.4
It seems the policy_engine service is down:
Service policy_engine (dockerhostid-anchore-engine, http://anchore-engine:8087): down (unavailable)
I also checked the Docker logs and found the following error:
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [DEBUG] service (policy_engine) starting in: 4
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [INFO] Registration complete.
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [INFO] Checking feeds client credentials
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [DEBUG] Initializing a feeds client
[service:policy_engine] 2018-10-15 09:37:47+0000 [-] [bootstrap] [DEBUG] init values: [None, None, None, (), None, None]
[service:policy_engine] 2018-10-15 09:37:47+0000 [-] [bootstrap] [DEBUG] using values: ['https://ancho.re/v1/service/feeds', 'https://ancho.re/oauth/token', 'https://ancho.re/v1/account/users', 'anon@ancho.re', 3, 60]
[service:policy_engine] 2018-10-15 09:37:47+0000 [-] [urllib3.connectionpool] [DEBUG] Starting new HTTPS connection (1): ancho.re
[service:policy_engine] 2018-10-15 09:37:50+0000 [-] [bootstrap] [ERROR] Preflight checks failed with error: HTTPSConnectionPool(host='ancho.re', port=443): Max retries exceeded with url: /v1/account/users/anon@ancho.re (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7ffa905f0b90>: Failed to establish a new connection: [Errno 113] No route to host',)). Aborting service startup
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/anchore_manager/cli/service.py", line 158, in startup_service
raise Exception("process exited: " + str(rc))
Exception: process exited: 1
[anchore-policy-engine] [anchore_manager.cli.service/startup_service()] [INFO] service process exited at (Mon Oct 15 09:37:50 2018): process exited: 1
[anchore-policy-engine] [anchore_manager.cli.service/startup_service()] [INFO] exiting service thread
Thanks and Regards,
Rohan Shetty
When images are added to anchore-engine, they are queued for analysis which moves them through a simple state machine that starts with ‘not_analyzed’, goes to ‘analyzing’ and finally ends in either ‘analyzed’ or ‘analysis_failed’. Only when an image has reached ‘analyzed’ will a policy evaluation be possible.
The anchore Jenkins plugin will add an image, then poll the engine for image status/evaluation for the configured number of tries (default 300). Once the image goes to ‘analyzed’ (where policy evaluation is possible), the plugin will then receive a policy evaluation result from the engine.
The plugin will fail the build (by default) if the max retries have been exhausted and the image has not reached ‘analyzed’, or if the image does reach ‘analyzed’ but the policy evaluation produces a ‘fail’ result (meaning the image didn’t pass your configured policy checks). Note that all build-failure behavior can be controlled in the plugin (i.e., there are options to allow the plugin to succeed even if the analysis or image eval fails).
You’ll need to look at the end of the output from your build run (instead of just the beginning from your post); combined with the information above, it should then be clear which scenario is causing the plugin to fail the build.
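To check where an image is in that state machine by hand, one option is anchore-cli, reusing the endpoint and credentials from the status command in the question (adjust to your own):

docker run anchore/engine-cli:latest anchore-cli --u admin --p admin123 --url http://172.18.0.1:8228/v1 image get 10.180.25.2:5000/hello-world:latest

The output includes an Analysis Status field showing not_analyzed, analyzing, analyzed, or analysis_failed.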
We have resolved the issue.
Root cause:
We were not able to establish a successful HTTPS connection to https://ancho.re from within the anchore-engine Docker container.
As a result, the policy_engine service was not able to start.
https://ancho.re is required to download policy feeds and sync them periodically. Without these feeds, anchore-engine cannot analyze Docker images.
Solution:
1) We passed an HTTPS_PROXY URL as an environment variable in the docker-compose.yaml of anchore-engine.
We used this proxy URL to bypass the restrictions in our environment and establish a connection with https://ancho.re (a sketch of the compose change is shown below).
2) Restarted the Docker containers.
Finally, we got all services up and running, including the Anchore policy_engine.
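A minimal sketch of the compose change mentioned in step 1 (the service name follows Anchore's published compose file, and the proxy host is a placeholder for your environment's proxy):

  anchore-engine:
    environment:
      - HTTPS_PROXY=http://proxy.example.com:3128

Depending on your network, a NO_PROXY entry for in-cluster hostnames such as anchore-db keeps internal traffic off the proxy.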
FYI:
It takes a while to download all the required feeds, depending on your internet speed.
Lastly, thanks to the Anchore community for the quick responses and support over Slack.
Hope this helps.
Warm Regards,
Rohan Shetty
I am trying to initialize fabric-ca following its user guide, using this config file, but when executing the following command:
fabric-ca-server init --cafiles fabric-ca-server-config.yaml
I am getting the following error:
2018/11/12 22:59:45 [DEBUG] Intializing nonce manager for issuer 'undercroft'
2018/11/12 22:59:45 [DEBUG] Closing server DBs
2018/11/12 22:59:45 [FATAL] Initialization failure: CA name 'undercroft' is used in '/home/paradox/hyperledger/fabric/undercroft/fabric-ca/server/fabric-ca-server-config.yaml' and '/home/paradox/hyperledger/fabric/undercroft/fabric-ca/server/fabric-ca-server-config.yaml'
While I am getting this error when using the config file, if I use the command-line flags of fabric-ca-server instead, I am able to initialize and launch the server successfully.
This is the full error log
As of now, the --cafiles flag is only used when there are multiple CAs; in the case of a single CA, the server only uses the config file in the $FABRIC_CA_SERVER_HOME directory.
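A minimal sketch of the single-CA flow under that assumption (the home path mirrors the one in the error message; the bootstrap identity is a placeholder):

export FABRIC_CA_SERVER_HOME=$HOME/hyperledger/fabric/undercroft/fabric-ca/server
cp fabric-ca-server-config.yaml "$FABRIC_CA_SERVER_HOME/"
fabric-ca-server init -b admin:adminpw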
Have you tried changing the CN of the fabric-ca-server, or changing the name of the CA?
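For reference, these are the fields in question in fabric-ca-server-config.yaml (values are illustrative):

ca:
  name: undercroft
csr:
  cn: fabric-ca-server

Renaming the CA means changing ca.name; the CN of the server's certificate comes from csr.cn.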
I'm running Puppet 2.7.26 because that's what the Red Hat package provides.
I'm trying to serve files that are NOT stored within any puppet modules. The files are maintained in another location on the puppet server, and that is where I need to serve them from.
I have this in my /etc/puppet/fileserver.conf
[files]
path /var/www/cobbler/pub
allow *
And then I have a class file like this:
class etchostfile
(
$hostfile /* declare that this class has one parameter */
)
{
File
{
owner => 'root',
group => 'root',
mode => '0644',
}
file { $hostfile :
ensure => file,
source => "puppet:///files/hosts-${hostfile}.txt",
path => '/root/hosts',
}
}
But when my node calls
class { 'etchostfile' :
hostfile => foo,
}
I get this error:
err: /Stage[main]/Etchostfile/File[foo]: Could not evaluate: Error 400
on SERVER: Not authorized to call find on
/file_metadata/files/hosts-foo.txt with {:links=>"manage"} Could not
retrieve file metadata for puppet:///files/hosts-foo.txt: Error 400 on
SERVER: Not authorized to call find on
/file_metadata/files/hosts-foo.txt with {:links=>"manage"} at
/etc/puppet/modules/etchostfile/manifests/init.pp:27
This post
https://viewsby.wordpress.com/2013/04/05/puppet-error-400-on-server-not-authorized-to-call-find/
indicates that this is all I need to do. But I must be missing something.
UPDATE
When I run the master in debug mode, I get no error.
The master responds thusly:
info: access[^/catalog/([^/]+)$]: allowing 'method' find
info: access[^/catalog/([^/]+)$]: allowing $1 access
info: access[^/node/([^/]+)$]: allowing 'method' find
info: access[^/node/([^/]+)$]: allowing $1 access
info: access[/certificate_revocation_list/ca]: allowing 'method' find
info: access[/certificate_revocation_list/ca]: allowing * access
info: access[^/report/([^/]+)$]: allowing 'method' save
info: access[^/report/([^/]+)$]: allowing $1 access
info: access[/file]: allowing * access
info: access[/certificate/ca]: adding authentication any
info: access[/certificate/ca]: allowing 'method' find
info: access[/certificate/ca]: allowing * access
info: access[/certificate/]: adding authentication any
info: access[/certificate/]: allowing 'method' find
info: access[/certificate/]: allowing * access
info: access[/certificate_request]: adding authentication any
info: access[/certificate_request]: allowing 'method' find
info: access[/certificate_request]: allowing 'method' save
info: access[/certificate_request]: allowing * access
info: access[/]: adding authentication any
info: Inserting default '/status' (auth true) ACL because none were found in '/etc/puppet/auth.conf'
info: Expiring the node cache of agent.redacted.com
info: Not using expired node for agent.redacted.com from cache; expired at Thu Aug 13 14:18:48 +0000 2015
info: Caching node for agent.redacted.com
debug: importing '/etc/puppet/modules/etchostfile/manifests/init.pp' in environment production
debug: Automatically imported etchostfile from etchostfile into production
debug: File[foo]: Adding default for selrange
debug: File[foo]: Adding default for group
debug: File[foo]: Adding default for seluser
debug: File[foo]: Adding default for selrole
debug: File[foo]: Adding default for owner
debug: File[foo]: Adding default for mode
debug: File[foo]: Adding default for seltype
notice: Compiled catalog for agent.redacted.com in environment production in 0.11 seconds
info: mount[files]: allowing * access
debug: Received report to process from agent.redacted.com
debug: Processing report from agent.redacted.com with processor Puppet::Reports::Store
and the agent responds thusly:
info: Caching catalog for agent.redacted.com
info: Applying configuration version '1439475588'
notice: /Stage[main]/Etchostfile/File[foo]/ensure: defined content as '{md5}75125a96a68a0ff0d42f91f10dca8336'
notice: Finished catalog run in 0.42 seconds
and the file is properly installed/updated.
So it works when the master is in debug mode, but it errors when the master is in standard (?) mode. I can go back and forth, in and out of debug mode at will, and it works every time in debug mode, and it fails every time in standard mode.
UPDATE 2
Running puppetmasterd from the command line, and everything works.
Running service puppetmaster start or /etc/init.d/puppetmaster start from the command line, and it fails. So at least I'm getting closer.
/etc/sysconfig/puppetmaster is entirely commented out. So as of now, I do not see any difference between just starting puppetmasterd and using the service script.
UPDATE 3
I think it's an SELinux problem.
With SELinux "enforcing" on the master, service puppetmaster restart, and I get the error.
I change SELinux to "Permissive" on the master, and I still get the error.
But now that SELinux is set to Permissive, if I service puppetmaster restart, my files get served properly.
But now that it's working, I set SELinux to Enforcing, and I get a different error:
err: /Stage[main]/Etchostfile/File[foo]: Could not evaluate: Could not
retrieve information from environment production source(s)
puppet:///files/hosts-foo.txt at
/etc/puppet/modules/etchostfile/manifests/init.pp:27
Then I do a service puppetmaster restart and I'm back to the original error.
So the situation changes depending on:
- how I started the service (puppetmasterd or service)
- what SELinux was set to when I started the service
- what SELinux is set to when the agent runs
The closer I get, the more confused I get.
UPDATE 4
I think I found it. Once I started looking at SELinux, I found the policy changes I needed to make (allowing ruby/puppet to access cobbler files) and now it appears to be working...
This turned out to be an SELinux problem. I eventually found this error message:
SELinux is preventing /usr/bin/ruby from read access
on the file /var/www/cobbler/pub/hosts-foo.txt.
That led me to the audit2allow rules I needed to apply to allow Puppet to access my cobbler files (see the sketch below).
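A minimal sketch of that audit2allow workflow (the module name is a placeholder):

grep denied /var/log/audit/audit.log | grep ruby | audit2allow -M puppet_cobbler
semodule -i puppet_cobbler.pp

audit2allow generates a local policy module from the logged denials, and semodule loads it.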
I was getting this error with Puppet Server on Ubuntu 20:
Error: /Stage[main]/Dvod_tocr/File[/install/wine-data.tar.gz]: Could not evaluate: Could not retrieve file metadata for puppet:///extra_files/wine-data.tar.gz: Error 500 on SERVER: Server Error: Not authorized to call find on /file_metadata/extra_files/wine-data.tar.gz with {:rest=>"extra_files/wine-data.tar.gz", :links=>"manage", :checksum_type=>"sha256", :source_permissions=>"ignore"}
My fileserver.conf file was in the wrong location. The correct location for this Puppet version on Ubuntu 20 is /etc/puppetlabs/puppet/fileserver.conf (a sketch of the fix follows).
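A minimal sketch of the fix, assuming the stock puppetserver package layout:

sudo mv /etc/puppet/fileserver.conf /etc/puppetlabs/puppet/fileserver.conf
sudo systemctl restart puppetserver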