I am trying the Chaincode for Developers sample. Running docker-compose up for the runtime (Terminal 1 - Start the network) fails:
orderer | 2017-11-11 13:48:52.252 UTC [orderer/common/deliver] deliverBlocks -> DEBU 32c [channel: myc] Received seekInfo (0xc420a12e60) start: > stop: > from 172.18.0.3:33048
ERROR: compose.cli.errors.log_timeout_error: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
This is running on Ubuntu 16.04. I don't see anything that would block connections, and the compose file looks right. Everything was downloaded just yesterday, Nov 10.
This is not really a major error in terms of actually running the sample - it's an error occasionally thrown by Docker Compose itself. As the error says, you can simply set the variable prior to running the sample(s):
export COMPOSE_HTTP_TIMEOUT=600
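If you want the setting to persist beyond the current shell, you can also add it to your shell profile - a minimal sketch, assuming bash and the same 600-second value:
# set for the current shell only, then start the network
export COMPOSE_HTTP_TIMEOUT=600
docker-compose up
# or persist it for future sessions
echo 'export COMPOSE_HTTP_TIMEOUT=600' >> ~/.bashrc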
Related
When upgrading a smart contract the following error occurs:
Failed to invoke chaincode name:"lscc", error: timeout expired while starting chaincode couponcontract:8 for transaction
As this is an lscc error, could anyone help with how to debug it or identify what is causing it to break?
API version:
"dependencies": {
"fabric-contract-api": "~1.4.0",
"fabric-shim": "~1.4.0"
},
Environment:
AWS t2.micro
Your chaincode is likely taking too long to start/launch. Try increasing CORE_CHAINCODE_EXECUTETIMEOUT. The default is 30s, so try increasing it to 60s.
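If your peer runs under Docker Compose, the variable can be set in the peer service's environment section - a minimal sketch, where the service name and the 60s value are assumptions:
  peer0.org1.example.com:
    environment:
      - CORE_CHAINCODE_EXECUTETIMEOUT=60s   # default is 30s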
I've found the problem.
As the whole network was installed on the same t2.micro instance, the process was consuming all the CPU when upgrading a contract, so the elapsed time was causing a timeout.
I changed it to a t2.medium (2 vCPUs) and now it is working like a charm!
I am trying to set up my own network using Hyperledger Fabric. I have completed all the required steps and started the nodes. It worked fine and I got the response below.
Then I tried to create and join a channel using the commands stated below, and I got this error:
2019-02-28 13:43:07.440 UTC [main] InitCmd -> ERRO 001 Cannot run peer because cannot init crypto, folder "/app/HYPERLEDGER/fabric-samples/BaseNetwork/crypto-config/peerOrganizations/citizen.example.com/users/Admin@citizen.example.com/msp" does not exist
Try installing Docker version 18.03.1. If that doesn't help, try docker volume prune, clear the containers and images, and re-pull.
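A sketch of that cleanup sequence - note it removes all local containers, volumes, and images, so only run it on a disposable development host:
docker rm -f $(docker ps -aq)        # remove all containers
docker volume prune -f               # remove unused volumes
docker rmi -f $(docker images -q)    # remove all images, then re-pull what you need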
We are trying to scan our Docker images using the Anchore Engine Jenkins plugin.
Currently we create our application Docker images, push them to our own private local registry, and then deploy them in our test environments.
Now, we want to setup docker image scanning in our CI/CD process to check for any vulnerabilities.
We have installed Anchore Engine using the recommended docker-compose YAML method given in the documentation link:
https://anchore.freshdesk.com/support/solutions/articles/36000020729-install-on-docker-swarm
Post installation, we installed the Anchore Container Image Scanner Plugin in Jenkins.
We configured the plugin as mentioned in the document link:
https://wiki.jenkins.io/display/JENKINS/Anchore+Container+Image+Scanner+Plugin
However, the scanning fails. The error message is as follows:
2018-10-11T07:01:44.647 INFO AnchoreWorker Analysis request accepted, received image digest sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8
2018-10-11T07:01:44.647 INFO AnchoreWorker Waiting for analysis of 10.180.25.2:5000/hello-world:latest, polling status periodically
2018-10-11T07:01:44.647 DEBUG AnchoreWorker anchore-engine get policy evaluation URL: http://10.180.25.2:8228/v1/images/sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8/check?tag=10.180.25.2:5000/hello-world:latest&detail=true
2018-10-11T07:01:44.648 DEBUG AnchoreWorker Attempting anchore-engine get policy evaluation (1/300)
2018-10-11T07:01:44.675 DEBUG AnchoreWorker anchore-engine get policy evaluation failed. URL: http://10.180.25.2:8228/v1/images/sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8/check?tag=10.180.25.2:5000/hello-world:latest&detail=true, status: HTTP/1.1 404 NOT FOUND, error: {
"detail": {},
"httpcode": 404,
"message": "image is not analyzed - analysis_status: not_analyzed"
}
NOTE:
In the image tag 10.180.25.2:5000/hello-world:latest, 10.180.25.2:5000 is our local private registry and hello-world:latest is the latest hello-world image available on Docker Hub, which we pulled and pushed to our registry to try out image scanning using Anchore Engine.
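For reference, staging the image in the local registry looks roughly like this - a sketch using the registry address from the logs above:
docker pull hello-world:latest
docker tag hello-world:latest 10.180.25.2:5000/hello-world:latest
docker push 10.180.25.2:5000/hello-world:latest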
Unfortunately we are not able to find many resources online to help resolve the above issue.
If anyone has worked with Anchore Engine, could you please take a look and help us resolve this issue?
Also, any suggestions, alternatives to Anchore Engine, or detailed steps in case we have missed anything would be really appreciated.
The end of the output is as follows:
2018-10-15T00:48:43.880 WARN AnchoreWorker anchore-engine get policy evaluation failed. HTTP method: GET, URL: http://10.180.25.2:8228/v1/images/sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8/check?tag=10.180.25.2:5000/hello-world:latest&detail=true, status: 404, error: {
"detail": {},
"httpcode": 404,
"message": "image is not analyzed - analysis_status: not_analyzed"
}
2018-10-15T00:48:43.880 WARN AnchoreWorker Exhausted all attempts polling anchore-engine. Analysis is incomplete for sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8
2018-10-15T00:48:43.880 ERROR AnchorePlugin Failing Anchore Container Image Scanner Plugin step due to errors in plugin execution
hudson.AbortException: Timed out waiting for anchore-engine analysis to complete (increasing engineRetries might help). Check above logs for errors from anchore-engine
at com.anchore.jenkins.plugins.anchore.BuildWorker.runGatesEngine(BuildWorker.java:480)
at com.anchore.jenkins.plugins.anchore.BuildWorker.runGates(BuildWorker.java:343)
at com.anchore.jenkins.plugins.anchore.AnchoreBuilder.perform(AnchoreBuilder.java:338)
at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
at hudson.model.Build$BuildExecution.build(Build.java:206)
at hudson.model.Build$BuildExecution.doRun(Build.java:163)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
I also checked the status and found the following:
docker run anchore/engine-cli:latest anchore-cli --u admin --p admin123 --url http://172.18.0.1:8228/v1 system status
Service analyzer (dockerhostid-anchore-engine, http://anchore-engine:8084): up
Service catalog (dockerhostid-anchore-engine, http://anchore-engine:8082): up
Service policy_engine (dockerhostid-anchore-engine, http://anchore-engine:8087): down (unavailable)
Service simplequeue (dockerhostid-anchore-engine, http://anchore-engine:8083): up
Service apiext (dockerhostid-anchore-engine, http://anchore-engine:8228): up
Service kubernetes_webhook (dockerhostid-anchore-engine, http://anchore-engine:8338): up
Engine DB Version: 0.0.7
Engine Code Version: 0.2.4
It seems the policy_engine service is down:
Service policy_engine (dockerhostid-anchore-engine, http://anchore-engine:8087): down (unavailable)
I also checked the Docker logs and found the following error:
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [DEBUG] service (policy_engine) starting in: 4
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [INFO] Registration complete.
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [INFO] Checking feeds client credentials
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [DEBUG] Initializing a feeds client
[service:policy_engine] 2018-10-15 09:37:47+0000 [-] [bootstrap] [DEBUG] init values: [None, None, None, (), None, None]
[service:policy_engine] 2018-10-15 09:37:47+0000 [-] [bootstrap] [DEBUG] using values: ['https://ancho.re/v1/service/feeds', 'https://ancho.re/oauth/token', 'https://ancho.re/v1/account/users', 'anon@ancho.re', 3, 60]
[service:policy_engine] 2018-10-15 09:37:47+0000 [-] [urllib3.connectionpool] [DEBUG] Starting new HTTPS connection (1): ancho.re
[service:policy_engine] 2018-10-15 09:37:50+0000 [-] [bootstrap] [ERROR] Preflight checks failed with error: HTTPSConnectionPool(host='ancho.re', port=443): Max retries exceeded with url: /v1/account/users/anon@ancho.re (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7ffa905f0b90>: Failed to establish a new connection: [Errno 113] No route to host',)). Aborting service startup
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/anchore_manager/cli/service.py", line 158, in startup_service
raise Exception("process exited: " + str(rc))
Exception: process exited: 1
[anchore-policy-engine] [anchore_manager.cli.service/startup_service()] [INFO] service process exited at (Mon Oct 15 09:37:50 2018): process exited: 1
[anchore-policy-engine] [anchore_manager.cli.service/startup_service()] [INFO] exiting service thread
Thanks and Regards,
Rohan Shetty
When images are added to anchore-engine, they are queued for analysis which moves them through a simple state machine that starts with ‘not_analyzed’, goes to ‘analyzing’ and finally ends in either ‘analyzed’ or ‘analysis_failed’. Only when an image has reached ‘analyzed’ will a policy evaluation be possible.
The anchore Jenkins plugin will add an image, then poll the engine for image status/evaluation for the configured number of tries (default 300). Once the image goes to ‘analyzed’ (where policy evaluation is possible), the plugin will then receive a policy evaluation result from the engine.
The plugin will fail the build (by default) if the max retries have been performed and the image has not reached ‘analyzed’, or if the image does reach ‘analyzed’ but the policy evaluation produces a ‘fail’ result (meaning the image didn’t pass your configured policy checks). Note that all build failure behavior can be controlled in the plugin (i.e. there are options to allow the plugin to succeed even if the analysis or image eval fails).
You’ll need to look at the end of the output from your build run (instead of just the beginning from your post); combined with the information above, it should be clear which scenario is causing the plugin to fail the build.
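You can also poll the analysis state by hand with anchore-cli - a sketch reusing the engine URL and credentials from the status check quoted earlier in this thread:
docker run anchore/engine-cli:latest anchore-cli --u admin --p admin123 --url http://172.18.0.1:8228/v1 image get 10.180.25.2:5000/hello-world:latest
# the reported analysis status moves from not_analyzed through analyzing to analyzed (or analysis_failed)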
We have resolved the issue.
Root Cause:
We were not able to establish a successful HTTPS connection to the URL https://ancho.re from within the anchore-engine Docker container.
As a result the service:policy_engine was not able to start.
https://ancho.re is required to download policy feeds and sync them periodically. Without these feeds, anchore-engine is not able to analyze Docker images.
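A quick way to confirm this kind of failure is to test the connection from inside the container - a sketch, where the container name and the availability of curl in the image are assumptions:
docker exec -it anchore-engine curl -sv https://ancho.re
# 'No route to host' here matches the preflight error in the policy_engine logs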
Solution:
1) We passed an HTTPS_PROXY URL as an environment variable in the docker-compose.yaml of anchore-engine.
We used this proxy URL to bypass restrictions in our environment and establish a connection with the https://ancho.re URL.
2) Restarted the docker containers.
Finally we got all services up and running including Anchore policy-engine.
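For reference, the compose change looks roughly like this - a sketch, where the service name and proxy URL are placeholders:
  anchore-engine:
    environment:
      - HTTPS_PROXY=http://proxy.example.com:3128   # placeholder proxy URL
      - NO_PROXY=anchore-db                         # assumption: keep internal traffic off the proxy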
FYI:
It takes a while to download all the required feeds, depending on your internet speed.
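Feed sync progress can be checked with anchore-cli - a sketch reusing the same credentials as the status check above:
docker run anchore/engine-cli:latest anchore-cli --u admin --p admin123 --url http://172.18.0.1:8228/v1 system feeds list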
Lastly, thanks to the Anchore community for the quick responses and support over Slack.
Hope this helps.
Warm Regards,
Rohan Shetty
I am facing a couple of issues when running the e2e_cli example.
I was able to complete all the steps (mentioned in the documentation) before running this example.
LOGS
sudo ./network_setup.sh up sahil
Channel name - sahil
Building configtxgen
Makefile:72: *** "No go in PATH: Check dependencies". Stop.
Generating genesis block
2017/04/19 13:00:16 Loading configuration
2017/04/19 13:00:16 Could not find configtx.yaml in paths of [ ].Try setting ORDERER_CFG_PATH, PEER_CFG_PATH, or GOPATH correctly.
mv: cannot stat 'orderer.block': No such file or directory
Generating channel configuration transaction
2017/04/19 13:00:16 Loading configuration
2017/04/19 13:00:16 Could not find configtx.yaml in paths of [ ].Try setting ORDERER_CFG_PATH, PEER_CFG_PATH, or GOPATH correctly.
mv: cannot stat 'channel.tx': No such file or directory
Starting orderer0
peer0 is up-to-date
peer1 is up-to-date
peer2 is up-to-date
peer3 is up-to-date
Recreating cli
Channel name : sahil
2017-04-19 13:00:18.269 UTC [logging] InitFromViper -> DEBU 001 Setting default logging level to DEBUG for command 'channel'
2017-04-19 13:00:18.269 UTC [msp] GetLocalMSP -> DEBU 002 Returning existing local MSP
2017-04-19 13:00:18.269 UTC [msp] GetDefaultSigningIdentity -> DEBU 003 Obtaining default signing identity
Error connecting: rpc error: code = 14 desc = grpc: RPC failed fast due to transport failure
Error: rpc error: code = 14 desc = grpc: RPC failed fast due to transport failure
Usage:
peer channel create [flags]
!!!!!!!!!!!!!!! Channel creation failed !!!!!!!!!!!!!!!!
================== ERROR !!! FAILED to execute End-2-End Scenario ==================
Go is in the PATH:
sahil.kapoor@a1dvmcphdlt01:~/work/src/github.com/hyperledger/fabric/examples/e2e_cli $ go version
go version go1.8 linux/amd64
Regarding "Could not find configtx.yaml in paths of [ ]": this file is also present.
GOPATH=/home/sahil.kapoor/work
GOROOT=/usr/local/go
And my fabric folder is inside GOPATH:
/work/src/github.com/hyperledger/fabric/examples/e2e_cli
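A quick sanity check of the paths mentioned above - a sketch, where the configtx.yaml location inside examples/e2e_cli is an assumption based on the error message:
echo $GOPATH    # expected: /home/sahil.kapoor/work
ls $GOPATH/src/github.com/hyperledger/fabric/examples/e2e_cli/configtx.yaml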
Apparently, there's an issue in the latest e2e example that has not been fixed yet. See https://jira.hyperledger.org/browse/FAB-3042. You should be able to run it once that gets fixed.
If you want to set up a new network, you will need the config files orderer.block and channel.tx.
This script will create them for you:
cd $GOPATH/src/github.com/hyperledger/fabric/examples/e2e_cli
chmod +x generateCfgTrx.sh
./generateCfgTrx.sh <channel-ID>
in your case:
./generateCfgTrx.sh sahil
Please note that you have been cloning from the master branch of the git repo.
I followed these steps to get e2e_cli working (in the $GOPATH/src/github.com/hyperledger/fabric project folder):
Once you've cloned as per the instructions, run:
git checkout fa3d88cde177750804c7175ae000e0923199735c
Download the Docker images by executing the shell script:
sh examples/e2e_cli/download-dockerimages.sh
Rebuild the configtxgen binary in the root folder of the project:
make configtxgen
Now you can setup your network:
sh examples/e2e_cli/network_setup.sh
Try this out and let us know if this solution works for you!
RESULT (run docker ps):
[result screenshot of docker ps output]
If you still have problems, please let us know and share the logs!
I experienced the same problem. The solution that worked for me is to run this command before ./byfn.sh -m generate:
docker rm $(docker ps -a -q)
This command clears out the old Docker containers. If you still have this issue, let me know.
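If stale chaincode images are also left behind, a slightly broader cleanup sometimes helps - a sketch, where the dev-* filter assumes Fabric's default chaincode image naming:
docker rm -f $(docker ps -aq)            # remove all containers, running or stopped
docker rmi $(docker images -q 'dev-*')   # remove generated chaincode images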
I'm setting up a tiny cluster on GCE to play around with, but although the instances are created, some failures prevent it from working. I'm following the steps in https://cloud.google.com/hadoop/downloads
So far I'm using the (as of now) latest versions of gcloud (143.0.0) and bdutil (1.3.5), freshly installed.
./bdutil deploy -e extensions/spark/spark_env.sh
using debian-8 as the image (as bdutil still defaults to debian-7-backports).
At some point I got
Fri Feb 10 16:19:34 CET 2017: Command failed: wait ${SUBPROC} on line 326.
Fri Feb 10 16:19:34 CET 2017: Exit code of failed command: 1
The full debug output is in https://gist.github.com/jlorper/4299a816fc0b140575ed70fe0da1f272 (project ID and bucket names changed).
Instances are created, but Spark is not even installed. Digging a bit, I've managed to run the Spark installation and start Hadoop commands on the master after SSHing in. But it fails badly when starting the spark-shell:
17/02/10 15:53:20 INFO gcs.GoogleHadoopFileSystemBase: GHFS version: 1.4.5-hadoop1
17/02/10 15:53:20 INFO gcsio.FileSystemBackedDirectoryListCache: Creating '/hadoop_gcs_connector_metadata_cache' with createDirectories()...
java.lang.RuntimeException: java.lang.RuntimeException: java.nio.file.AccessDeniedException: /hadoop_gcs_connector_metadata_cache
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
and I am not able to import sparkSQL. From what I've read, everything should be started automatically.
Up to this point I'm a bit lost and don't know what else to do.
Am I missing any step? Are any of the commands faulty? Thanks in advance.
Update: solved
As pointed out in the accepted solution, I cloned the repo and the cluster was created without issues. When trying to start the spark-shell, though, it gave:
java.lang.RuntimeException: java.io.IOException: GoogleHadoopFileSystem has been closed or not initialized.
That sounded to me like the connectors were not initialized properly, so after running
./bdutil --env_var_files extensions/spark/spark_env.sh,bigquery_env.sh run_command_group install_connectors
it worked as expected.
The last version of bdutil on https://cloud.google.com/hadoop/downloads is a bit stale and I'd instead recommend using the version of bdutil at head on github: https://github.com/GoogleCloudPlatform/bdutil.
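A minimal sketch of deploying from head, reusing the same env file as in the question:
git clone https://github.com/GoogleCloudPlatform/bdutil.git
cd bdutil
./bdutil deploy -e extensions/spark/spark_env.sh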