When I start Hyperledger Explorer I get the error below. After searching I found that I need to provide a walletstore path, but I couldn't find where that path is supposed to be.
docker-compose.yaml file
volumes:
  - ./config.json:/opt/explorer/app/platform/fabric/config.json
  - ./connection-profile:/opt/explorer/app/platform/fabric/connection-profile
  - ./organizations:/tmp/crypto
  - walletstore:/opt/explorer/wallet
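For reference, walletstore in the last mapping above is a named Docker volume rather than a host path; Compose expects a matching top-level declaration in the same file, roughly like this (a minimal sketch of the usual declaration, not copied from the file):

volumes:
  walletstore: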
[2022-05-23T12:31:18.513] [ERROR] FabricGateway - Failed to create wallet, please check the configuration, and valid file paths: {
explorer.mynetwork.com | "errno": -2,
explorer.mynetwork.com | "syscall": "open",
explorer.mynetwork.com | "code": "ENOENT",
explorer.mynetwork.com | "path": "/tmp/crypto/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/signcerts/cert.pem"
In this case, check the read and write permissions on the folder path "/tmp/crypto/peerOrganizations/org1.example.com/".
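A quick way to verify this is to check, and if necessary relax, the permissions on the mounted crypto material from the host side (a minimal sketch; the host path is taken from the volume mapping above and may differ in your setup):

# Check whether the certificates are readable
ls -lR ./organizations/peerOrganizations/org1.example.com/users/

# If not, grant read access (acceptable for a local test network only)
chmod -R a+r ./organizations/peerOrganizations/org1.example.com/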
I'm really stuck here. I inherited a system that stores secrets in a HashiCorp Vault, and I'm getting this error: Authentication failed: ldap operation failed: unable to retrieve user bind DN
I am not sure how to resolve this issue; I have been Googling for hours and trying a lot of things.
I did see the post at ref. [A], but it isn't helpful.
The post at ref. [B] gives some information about setting the binddn, but, in the classic way that frustrates a new user, it doesn't say where or how, and gives no examples.
HashiCorp Vault v1.6.x
Vault is running in a Docker container on an AWS EC2 instance.
I have the .pem file and am able to SSH into the EC2 instance.
I am able to shell into the Docker container with root privileges, like so:
docker exec -it 123abc123abc sh
On the container, some vault commands work, e.g.:
vault version
--> Vault v1.6.0 (123asdf1234adsf1234adsf1234adsf13w4radsf1234asdff)
It is using LDAP configuration.
When trying to retrieve config and other info, I get this message:
"* missing client token"
How to proceed?
I'm not an expert with this, and would appreciate clear, full, command-line examples.
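For what it's worth, the "* missing client token" message just means the Vault CLI inside the container has no token to send; a minimal sketch of authenticating first (this assumes you still have a valid token, e.g. the initial root token, which is not shown in the post):

# Point the CLI at the local listener (TLS is disabled in this config)
export VAULT_ADDR=http://127.0.0.1:8200

# Authenticate with an existing token, e.g. the initial root token
vault login <your-token-here>

# Token-protected reads should then work, e.g. inspecting the LDAP auth config
vault read auth/ldap/config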
Thanks for your help.
Sincerely,
Keith
DOCKER COMPOSE FILE
$ cat docker-compose.yml
version: '3'
services:
  vault:
    image: vault:1.6.0
    cap_add:
      - IPC_LOCK
    environment:
      - VAULT_ADDR=http://127.0.0.1:8200
    command: vault server -config=/vault/config/config.json
    ports:
      - 80:8200
    volumes:
      - vault-data:/vault
      - ./config.json:/vault/config/config.json
volumes:
  vault-data:
VAULT CONFIG
/vault/config # cat config.json
{
"backend": {
"file": {
"path": "/vault/data"
}
},
"listener": {
"tcp":{
"address": "0.0.0.0:8200",
"tls_disable": 1
}
},
"default_lease_ttl": "30m",
"max_lease_ttl": "30m",
"log_level": "info",
"ui": true
}
A. https://discuss.hashicorp.com/t/ldap-operation-failed-unable-to-retrieve-user-bind-dn/12926
B. https://support.hashicorp.com/hc/en-us/articles/5289574376083-Receiving-ldap-operation-failed-failed-to-bind-as-user-error-when-logging-in-via-LDAP-authentication-method
C. https://discuss.hashicorp.com/t/authentication-failed-ldap-operation-failed-unable-to-retrieve-user-bind-dn/50123
This seems like a silly question to ask, but I'm struggling to create a compressed archive using Ansible and then copy it to a remote host. I'm receiving an error that the target directory/file doesn't exist during a copy task. I've verified that the /home/ansible-admin/app/certs directory exists, but from what I can tell, the zip file is never being created.
---
- hosts: example
  become: yes
  tasks:
    - name: Create cert archive
      archive:
        path:
          - /home/ansible-admin/app/certs
        dest: /home/ansible-admin/app/app_certs.zip
        format: zip
    - name: Copy certs to target servers
      copy:
        src: /home/ansible-admin/app/app_certs.zip
        dest: /home/ubuntu/app_certs.zip
        owner: ubuntu
        group: ubuntu
        mode: "0400"
This is the error message I'm consistently getting:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option fatal: [app.example.com]: FAILED! => {"changed": false, "msg": "Could not find or access '/home/ansible-admin/app_certs.zip' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"}
I'm hoping I'm just missing something trivial here, but looking at the docs and the YAML file, I'm not seeing where the issue is: https://docs.ansible.com/ansible/latest/collections/community/general/archive_module.html
From the documentation of the archive module:
The source and archive are on the remote host, and the archive is not
copied to the local host.
I think that is your problem.
Even if that path did exist on the remote host and the archive were created there, the copy task would still fail, because the archive would not be present on the controller.
As tink pointed out, I was trying to archive a directory on the remote host that didn't exist, rather than archiving a local directory. I resolved this by adding a localhost play that runs first.
- hosts: localhost
  tasks:
    - name: Create cert archive
      archive:
        path:
          - /home/ansible-admin/app/certs
        dest: /home/ansible-admin/app/app_certs.zip
        format: zip
Then copying it to the remote servers and extracting the archive worked as expected.
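Putting the two plays together, the resolved playbook looks roughly like this (a sketch assembled from the tasks above; the second play reuses the copy task from the original question):

---
- hosts: localhost
  tasks:
    - name: Create cert archive on the controller
      archive:
        path:
          - /home/ansible-admin/app/certs
        dest: /home/ansible-admin/app/app_certs.zip
        format: zip

- hosts: example
  become: yes
  tasks:
    - name: Copy certs to target servers
      copy:
        src: /home/ansible-admin/app/app_certs.zip
        dest: /home/ubuntu/app_certs.zip
        owner: ubuntu
        group: ubuntu
        mode: "0400"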
peer lifecycle chaincode install ../asset-transfer-basic/chaincode-external/asset-transfer-basic-external.tgz
Error: chaincode install failed with status: 500 - failed to invoke backing implementation of 'InstallChaincode': could not build chaincode: docker build failed: platform builder failed: Failed to generate a Dockerfile: Unknown chaincodeType: EXTERNAL
This can happen when, for one reason or another, your external builder is not executed at all, which causes the peer to fall back to installing the chaincode the traditional way. That code path does not support the chaincode type external, hence the error.
In my case the mistake was that my core.yaml was malformed: the externalBuilders definition was indented too far to the right, so it was no longer nested under the chaincode section. Because the external builder was effectively not defined at all due to the misformatting, it was never executed, and I got exactly the same error.
So, be careful how you format your YAML files.
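For illustration, externalBuilders has to sit under the top-level chaincode key in core.yaml; a minimal sketch of the expected nesting (the path and name here are placeholders, not taken from the post):

chaincode:
  externalBuilders:
    - path: /opt/external-builder   # placeholder: directory containing bin/build, bin/release, bin/detect
      name: my-external-builder     # placeholder name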
Provide the required permissions to the scripts in the builder's bin directory (build, release, detect):
chmod -R 777 bin/
Also make sure to map the chaincode directory into the peer containers properly:
core.yaml
externalBuilders:
  # This path has to be mapped to peer containers with a local chaincode directory
  - path: /opt/gopath/src/github.com/hyperledger/external-builder
    name: chaincode-external-builder
    propagateEnvironment:
      - CORE_PEER_TLS_ROOTCERT_FILE
      - CORE_PEER_TLS_CERT_FILE
      - CORE_PEER_TLS_KEY_FILE
docker-compose.yaml
....
volumes:
  # external builder chaincode path (local/path:container/path)
  - ../chaincode:/opt/gopath/src/github.com/hyperledger/external-builder
I'm trying to use Bitbucket Pipelines with my project. I use Node.js.
When I run gcloud app deploy manually from Mac or Windows, it works fine and the deploy finishes successfully. But from Bitbucket Pipelines it fails with Error Response: [13] An internal error occurred.
Here is the stack trace that I got by running gcloud app deploy --verbosity=debug:
Updating service [default]...
.DEBUG: Operation [apps/my-project/operations/a68a837e-edcf-4987-83db-d9b47f4309ae] not complete. Waiting to retry.
.......DEBUG: Operation [apps/my-project/operations/a68a837e-edcf-4987-83db-d9b47f4309ae] complete. Result: {
"metadata": {
"target": "apps/my-project/services/default/versions/20180915t180908",
"method": "google.appengine.v1.Versions.CreateVersion",
"user": "bitbucket-works#my-project.iam.gserviceaccount.com",
"insertTime": "2018-09-15T18:09:46.693Z",
"endTime": "2018-09-15T18:09:49.655Z",
"#type": "type.googleapis.com/google.appengine.v1.OperationMetadataV1"
},
"done": true,
"name": "apps/my-project/operations/a68a837e-edcf-4987-83db-d9b47f4309ae",
"error": {
"message": "An internal error occurred.",
"code": 13
}
}
failed.
DEBUG: (gcloud.app.deploy) Error Response: [13] An internal error occurred.
Traceback (most recent call last):
File "/tmp/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 839, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/tmp/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 770, in Run
resources = command_instance.Run(args)
File "/tmp/google-cloud-sdk/lib/surface/app/deploy.py", line 90, in Run
parallel_build=False)
File "/tmp/google-cloud-sdk/lib/googlecloudsdk/command_lib/app/deploy_util.py", line 625, in RunDeploy
flex_image_build_option=flex_image_build_option)
File "/tmp/google-cloud-sdk/lib/googlecloudsdk/command_lib/app/deploy_util.py", line 431, in Deploy
extra_config_settings)
File "/tmp/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/appengine_api_client.py", line 207, in DeployService
poller=done_poller)
File "/tmp/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/operations_util.py", line 315, in WaitForOperation
sleep_ms=retry_interval)
File "/tmp/google-cloud-sdk/lib/googlecloudsdk/api_lib/util/waiter.py", line 254, in WaitFor
sleep_ms, _StatusUpdate)
File "/tmp/google-cloud-sdk/lib/googlecloudsdk/api_lib/util/waiter.py", line 316, in PollUntilDone
sleep_ms=sleep_ms)
File "/tmp/google-cloud-sdk/lib/googlecloudsdk/core/util/retry.py", line 229, in RetryOnResult
if not should_retry(result, state):
File "/tmp/google-cloud-sdk/lib/googlecloudsdk/api_lib/util/waiter.py", line 310, in _IsNotDone
return not poller.IsDone(operation)
File "/tmp/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/operations_util.py", line 184, in IsDone
encoding.MessageToPyValue(operation.error)))
OperationError: Error Response: [13] An internal error occurred.
ERROR: (gcloud.app.deploy) Error Response: [13] An internal error occurred.
My bitbucket-pipelines.yml:
pipelines:
  default:
    - step:
        name: Test and deployment
        image: node:8.9
        script: # Modify the commands below to build your repository.
          - npm install
          - npm test
          - npm build
          # Downloading the Google Cloud SDK
          - curl -o /tmp/google-cloud-sdk.tar.gz https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-216.0.0-linux-x86_64.tar.gz
          - tar -xvf /tmp/google-cloud-sdk.tar.gz -C /tmp/
          - /tmp/google-cloud-sdk/install.sh -q
          - source /tmp/google-cloud-sdk/path.bash.inc
          # Setup
          - echo $GCLOUD_CLIENT_SECRET > client-secret.json
          - gcloud auth activate-service-account --key-file client-secret.json
          - gcloud config set project $GCLOUD_PROJECT
          - gcloud -q app deploy --verbosity=debug
My .gcloudignore:
.gcloudignore
.git
.gitignore
# Node.js dependencies:
node_modules/
design/
I found some solutions on Stack Overflow and Google, but nothing worked. I've tried different versions of the Google Cloud SDK, but the result is the same.
Thanks for any help in advance.
Finally, I've solved it.
TL;DR:
Add the Cloud Build Editor role for your Bitbucket member on the IAM & admin page.
I updated the Google Cloud SDK to version 218 and it started to show more information about the error (not just "an internal error occurred"):
OperationError: Error Response: [13] Permission to create cloud build is denied. 'Cloud Build Editor' role is required for the deployment: https://cloud.google.com/cloud-build/docs/securing-builds/configure-access-control#permissions.
The problem is a lack of permissions for the Bitbucket member on Google Cloud. It seems to be a recently added requirement, because no manual mentions it yet.
To solve the issue, add the Cloud Build Editor role for your Bitbucket member on the IAM & admin page.
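If you prefer the command line, the same grant can be made with gcloud; a minimal sketch, assuming the service account name shown in the log above and your own project ID:

# Grant the Cloud Build Editor role to the service account the pipeline deploys with
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:bitbucket-works@my-project.iam.gserviceaccount.com" \
    --role="roles/cloudbuild.builds.editor"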
Hope it helps.
I have created a custom module and I would like to keep it within a sub-directory (category) because there are several components that should logically fall under that category. So, to segregate things in a better way, I created the following structure:
- hieradata
- manifests
- modules
  - infra
    - git
      - files
      - manifests
        - init.pp
        - install.pp
        - configure.pp
    - monitoring
    - etc
- templates
$ cat modules/infra/git/manifests/init.pp
class infra::git {}
$ cat modules/infra/git/manifests/install.pp
class infra::git::install {
  file { 'Install Git':
    ...
    ...
  }
}
$ cat manifests/site.pp
node abc.com {
  include infra::git::install
}
Now, on the Puppet agent, when I run puppet agent -t, I get the following error:
ruby 2.1.8p440 (2015-12-16 revision 53160) [x64-mingw32]
C:\puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: {"message":"Server Error: Evaluation Error: Error while evaluating a Function Call, Could not find class ::infra::git::install for abc.com at /etc/puppetlabs/code/environments/production/manifests/site.pp:15:2 on node abc.com","issue_kind":"RUNTIME_ERROR"}
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
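For context, Puppet's autoloader maps a class name onto a single module named after the first namespace segment, so for a class called infra::git::install it looks for a file at a conventional path like the one sketched below (this describes the expected convention, not the structure shown above):

# class name            -> file the autoloader searches for (relative to a modulepath entry)
# infra                 -> infra/manifests/init.pp
# infra::git            -> infra/manifests/git.pp
# infra::git::install   -> infra/manifests/git/install.pp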
I have already read this link, but it suggests keeping the custom module directly under the main modules directory, which is not how I would like to structure the directories.
Any help will really be appreciated.