AWS Elastic Beanstalk deploy always fails (uploading a zipfile) - python-3.x

I upload a new version of my app as a zipfile and click Deploy. The environment health status then changes to Severe.
This is the error trace:
WARN
Environment health has transitioned from Info to Degraded. Command failed on all instances. Incorrect application version found on all instances. Expected version "Sample" (deployment 2). Application update failed 10 seconds ago and took 4 minutes.
ERROR
During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version.
ERROR
Failed to deploy application.
ERROR
Unsuccessful command execution on instance id(s) 'i------'. Aborting the operation.
ERROR
[Instance: i-002326d7ceeba0ea9] Command failed on instance. Return code: 1 Output: nginx: [emerg] no host in upstream ":80" in /etc/nginx/conf.d/elasticbeanstalk-nginx-docker-upstream.conf:2
nginx: configuration file /etc/nginx/nginx.conf test failed Failed to start nginx, abort deployment.
Hook /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh failed.
For more detail, check /var/log/eb-activity.log using console or EB CLI.
ERROR
Failed to start nginx, abort deployment
Here are the errors in /var/log/eb-activity.log:
Installing dependencies from Pipfile.lock (5e00f3)…
Failed to load paths: /bin/sh: 1: /root/.local/share/virtualenvs/app-lp47FrbD/bin/python: not found
...
[2020-05-29T01:51:24.746Z] INFO [11395] - [Application update v1.3.3-1#3/AppDeployStage1/AppDeployEnactHook/00run.sh] : Completed activity. Result:
jq: error (at <stdin>:1): Cannot iterate over null (null)
a2f568b1c255eb9e0fdc6ceebdd29b9ec64b9ab4481a3e1c5bcb11828b0ac526
[2020-05-29T01:51:24.747Z] INFO [11395] - [Application update v1.3.3-1#3/AppDeployStage1/AppDeployEnactHook/01flip.sh] : Starting activity...
[2020-05-29T01:51:26.099Z] INFO [11395] - [Application update v1.3.3-1#3/AppDeployStage1/AppDeployEnactHook/01flip.sh] : Activity execution failed, because: nginx: [emerg] no host in upstream ":80" in /etc/nginx/conf.d/elasticbeanstalk-nginx-docker-upstream.conf:2
nginx: configuration file /etc/nginx/nginx.conf test failed
Failed to start nginx, abort deployment (ElasticBeanstalk::ExternalInvocationError)
caused by: nginx: [emerg] no host in upstream ":80" in /etc/nginx/conf.d/elasticbeanstalk-nginx-docker-upstream.conf:2
nginx: configuration file /etc/nginx/nginx.conf test failed
Failed to start nginx, abort deployment (Executor::NonZeroExitStatus)
...
[2020-05-29T01:51:26.099Z] INFO [11395] - [Application update v1.3.3-1#3/AppDeployStage1/AppDeployEnactHook/01flip.sh] : Activity failed.
[2020-05-29T01:51:26.099Z] INFO [11395] - [Application update v1.3.3-1#3/AppDeployStage1/AppDeployEnactHook] : Activity failed.
[2020-05-29T01:51:26.099Z] INFO [11395] - [Application update v1.3.3-1#3/AppDeployStage1] : Activity failed.
[2020-05-29T01:51:26.100Z] INFO [11395] - [Application update v1.3.3-1#3] : Completed activity. Result:
Application update - Command CMD-AppDeploy failed
Deployment has failed consistently for this environment across several attempts, even after reverting to an older version.

Afterwards, I resolved this by isolating the code and the error messages: I built a local Docker image from the same zipfile. Running the code on my machine outside of Docker did NOT reveal any problems, because the failure was in the pip/pipenv step, which was missing a dependency.
Steps for local Docker testing:
To build the image:
docker system prune
Go to the folder containing the Dockerfile, then:
docker image build -t <app_name>:<version_number> .
To run locally:
(run docker rm <app_name> first if you already have a stopped container with the same name from prior testing)
docker container run --publish 80:80 --name <app_name> <app_name>:<version_number>
NOTE:
this won't let you test AWS functions that rely on credentials such as ~/.aws, because they're not inside the image
(but you could add them with your Dockerfile).
Once the container is running, you'll see (I saw) error messages that never appeared when testing outside Docker, because they were caused by a missing package dependency and a pipenv error.
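Putting those steps together, here is a minimal end-to-end sketch; "myapp", the "1.0" tag, and the app path are placeholders, not names from the original post:
# clean out unused containers/images so the test starts from a known state
docker system prune
# build the image from the folder containing the Dockerfile
cd /path/to/your/app
docker image build -t myapp:1.0 .
# remove any stopped container left over from a previous run
docker rm myapp
# run the container, mapping port 80 in the container to localhost:80
docker container run --publish 80:80 --name myapp myapp:1.0
# in a second terminal, follow the logs to catch pip/pipenv errors at startup
docker logs -f myapp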

Related

WARNING: Uploading artifacts as "archive" to coordinator... failed id=1515 responseStatus=500 Internal Server Error status=500

I'm using the CI/CD function of a self-hosted GitLab Community Edition server. It had been running well, but suddenly one day the CI/CD jobs of all projects started failing with the errors below:
Uploading artifacts for successful job
Uploading artifacts...
promotion-api/my-boot-module-system/target/*.jar: found 1 matching files and directories
WARNING: Uploading artifacts as "archive" to coordinator... failed id=1515 responseStatus=500 Internal Server Error status=500 token=xrDFnLeB
WARNING: Retrying... context=artifacts-uploader error=invalid argument
WARNING: Uploading artifacts as "archive" to coordinator... failed id=1515 responseStatus=500 Internal Server Error status=500 token=xrDFnLeB
WARNING: Retrying... context=artifacts-uploader error=invalid argument
WARNING: Uploading artifacts as "archive" to coordinator... failed id=1515 responseStatus=500 Internal Server Error status=500 token=xrDFnLeB
FATAL: invalid argument
ERROR: Job failed: exit code 1
Below is the code in .gitlab-ci.yml
deploy-java:
  stage: deploy
  dependencies:
    - build-java
  image:
    name: docker/compose:latest
  before_script:
    - docker info
    - docker-compose -v
  script:
    - cd promotion-api
    - docker-compose build
    - docker images
    - docker ps -a
    - docker-compose up -d
  tags:
    - promotion
Update 2021-03-29: I switched the runner from the Linux version to Docker; everything seems fine so far.
You ran into a bug in GitLab itself: instead of a rather obscure HTTP status 500, it should say explicitly what the issue is. It is likely not a problem with your repository or CI settings.
Follow these issues to find out more:
Artifact is stopped in the transfer, possibly because it takes longer than some timeout to upload: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/26869
Artifact is larger than 1 GB: https://gitlab.com/gitlab-org/gitlab/-/issues/267111 (see the size check below)
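If you suspect the size limit, a quick check is to measure the artifact before the runner uploads it; a minimal sketch you could add to the job's script, reusing the path from the log above:
# print the total size of the artifact the runner is about to upload
du -sh promotion-api/my-boot-module-system/target/*.jar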
For me the problem was that GitLab had run out of disk space.
After deleting unnecessary artifacts and files, everything worked correctly again.
You can check your system status (if you host your own GitLab) here: www.gitlab-url.com/admin/system_info
(Go to: Admin Area > Monitoring > System Info)
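To confirm the disk-space theory from a shell on a self-hosted server, a rough sketch; the artifacts path is the Omnibus default and may differ in your install:
# overall disk usage on the GitLab host
df -h
# size of stored CI artifacts (Omnibus default location)
sudo du -sh /var/opt/gitlab/gitlab-rails/shared/artifacts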
Hope it helps!

502 Bad Gateway when hosting a Meteor app on AWS Elastic Beanstalk with Meteor-up

I have a Meteor app I've been trying to deploy to AWS with mup-aws-beanstalk.
Here is my repository: it is a basic "Hello world" equivalent of a MongoDB, Meteor, React, Node.js app, with install instructions.
It works perfectly fine locally and runs on http://localhost:3000/. When I try to deploy to AWS with the meteor-up mup-aws-beanstalk plugin, it deploys, but I get a 502 Bad Gateway error.
I'm pretty new to this, but I did some research and checked the logs.
They show that the start script isn't working properly:
> mup-meteor-example-deploy-aws@1.0.0 start /var/app/current
> bash ./start.sh
┌──────────────────────────────────────────────────┐
│ npm update check failed │
│ Try running with sudo or get access │
│ to the local update config store via │
│ sudo chown -R $USER:$(id -gn $USER) /tmp/.config │
└──────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────┐
│ npm update check failed │
│ Try running with sudo or get access │
│ to the local update config store via │
│ sudo chown -R $USER:$(id -gn $USER) /tmp/.config │
└──────────────────────────────────────────────────┘
Node version
v12.16.1
Npm version
6.14.0
=> Starting health check server
=> Starting App
/var/app/current/programs/server/node_modules/fibers/fibers.js:90
return fn.apply(this, arguments);
^
Error: $ROOT_URL, if specified, must be an URL
at packages/meteor.js:1328:13
at packages/meteor.js:1343:4
at packages/meteor.js:1508:3
at /var/app/current/programs/server/boot.js:401:38
at Array.forEach (<anonymous>)
at /var/app/current/programs/server/boot.js:226:21
at /var/app/current/programs/server/boot.js:464:7
at Function.run (/var/app/current/programs/server/profile.js:280:14)
at /var/app/current/programs/server/boot.js:463:13
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! mup-meteor-example-deploy-aws@1.0.0 start: `bash ./start.sh`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the mup-meteor-example-deploy-aws@1.0.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm WARN Local package.json exists, but node_modules missing, did you mean to install?
I looked into the error above; other StackOverflow questions mention making the URL start with http or https, which mine does.
There is also an error from nginx: the lines below keep repeating, and I am not sure whether the two problems are related.
This post suggests the problem might be that the app isn't running on the expected server/port combination.
HOWEVER, this other post suggests that Elastic Beanstalk might be reading the wrong file first, and therefore never opening the ports in the first place.
I am not sure how or where to change the port numbers, or whether this is a problem with npm.
-------------------------------------
/var/log/nginx/error.log
-------------------------------------
2020/07/13 04:02:02 [error] 4632#0: *148508 connect() failed (111: Connection refused) while connecting to upstream, client: 172.31.21.16, server: , request: "GET /aws-health-check-3984729847289743128904723 HTTP/1.1", upstream: "http://127.0.0.1:8039/aws-health-check-3984729847289743128904723", host: "172.31.46.135"
2020/07/13 04:02:06 [error] 4632#0: *148510 connect() failed (111: Connection refused) while connecting to upstream, client: 172.31.46.174, server: , request: "GET /aws-health-check-3984729847289743128904723 HTTP/1.1", upstream: "http://127.0.0.1:8039/aws-health-check-3984729847289743128904723", host: "172.31.46.135"
2020/07/13 04:02:17 [error] 4632#0: *148512 connect() failed (111: Connection refused) while connecting to upstream, client: 172.31.21.16, server: , request: "GET /aws-health-check-3984729847289743128904723 HTTP/1.1", upstream: "http://127.0.0.1:8039/aws-health-check-3984729847289743128904723", host: "172.31.46.135"
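Since the boot error points at ROOT_URL, one low-risk check is to confirm which environment variables Beanstalk actually applied; a minimal sketch, assuming the EB CLI is installed and initialized for this environment:
# print the environment properties Elastic Beanstalk passes to the app
eb printenv mup-env-meteor-example-deploy-aws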
This is my deployment output
PS G:\GitFolder\meteor-example-deploy-aws\.deploy> mup deploy
=> Setting up
=> Ensuring IAM Roles and Instance Profiles are setup
Building App Bundle Locally
WARNING: The output directory is under your source tree.
Your generated files may get interpreted as source code!
Consider building into a different directory instead
meteor build ../output
app/node_modules/semantic-ui-css/semantic.css: warn: There are some @import rules in the middle of a file. This might be a bug, as imports are only valid at the beginning of a file.
Browserslist: caniuse-lite is outdated. Please run next command `npm update`
Unable to resolve some modules:
"#babel/runtime/helpers/createSuper" in /G/GitFolder/meteor-example-deploy-aws/app/imports/ui/layouts/App.jsx (web.browser.legacy)
If you notice problems related to these missing modules, consider running:
meteor npm install --save @babel/runtime
=> Archiving Bundle
10% Archived
20% Archived
30% Archived
40% Archived
50% Archived
60% Archived
70% Archived
80% Archived
90% Archived
100% Archived
=> Uploading bundle
Uploaded 11%
Uploaded 23%
Uploaded 35%
Uploaded 46%
Uploaded 58%
Uploaded 70%
Uploaded 81%
Uploaded 93%
Uploaded 100%
Finishing upload. This could take a couple minutes
=> Creating Version
=> Configuring Beanstalk Environment
Updated Environment
=> Waiting for Beanstalk Environment to finish updating
Env Event: Updating environment mup-env-meteor-example-deploy-aws's configuration settings.
Env Event: Rolling with Additional Batch deployment policy enabled. Launching a new batch of 1 additional instance(s).
Env Event: Batch 1: 1 EC2 instance(s) [i-0b017a1b7f1c7151a] launched. Deploying application version.
Env Event: Environment health has transitioned from Severe to Degraded. 100.0 % of the requests are failing with HTTP 5xx. ELB processes are not healthy on 1 out of 2 instances. Configuration update in progress on 1 instance. 0 out of 2 instances completed (running for 2 minutes). ELB health is failing or not available for 1 out of 2 instances. Impaired services on 1 out of 2 instances.
Env Event: Added instance [i-0b017a1b7f1c7151a] to your environment.
Env Event: Failed to run npm install. Snapshot logs for more details.
Env Event: Retrieving logs prior to instance(s) termination. Logs will be available for an hour in the environment management console and at elasticbeanstalk-us-east-1-966889535256/resources/environments/logs/bundle/e-kvnxyajrem.
Env Event: During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version.
Env Event: Failed to deploy configuration.
Env Event: Terminating excess instance(s): [i-0b017a1b7f1c7151a].
Env Event: Command execution completed on all instances successfully.
Env Event: [Instance: i-0b017a1b7f1c7151a] Successfully finished bundling 16 log(s)
=> Deploying new version
=> Waiting for Beanstalk Environment to finish updating
Env Event: Environment health has transitioned from Degraded to Severe. 100.0 % of the requests are failing with HTTP 5xx. Command failed on 1 out of 2 instances. Incorrect application version found on 1 out of 2 instances. Expected version "5" (deployment 10). ELB processes are not healthy on all instances. Application update in progress (running for 42 seconds). ELB health is failing or not available for all instances. Impaired services on 1 out of 2 instances.
Env Event: Rolling with Additional Batch deployment policy enabled. Launching a new batch of 1 additional instance(s).
Env Event: Removed instance [i-0b017a1b7f1c7151a] from your environment.
Env Event: Added instance [i-0733205831e115a28] to your environment.
Env Event: Batch 1: 1 EC2 instance(s) [i-0733205831e115a28] launched. Deploying application version '5'.
Env Event: Unsuccessful command execution on instance id(s) 'i-0733205831e115a28'. Aborting the operation.
Env Event: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
Env Event: [Instance: i-0733205831e115a28] Command failed on instance. Return code: 127 Output: (TRUNCATED)...
/opt/elasticbeanstalk/hooks/appdeploy/pre/45node.sh: line 12: nvm: command not found
/opt/elasticbeanstalk/hooks/appdeploy/pre/45node.sh: line 13: nvm: command not found
/opt/elasticbeanstalk/hooks/appdeploy/pre/45node.sh: line 14: npm: command not found.
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/45node.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
Env Event: Retrieving logs prior to instance(s) termination. Logs will be available for an hour in the environment management console and at elasticbeanstalk-us-east-1-966889535256/resources/environments/logs/bundle/e-kvnxyajrem.
Env Event: Excess instance(s) terminated.
Env Event: Terminating excess instance(s): [i-0733205831e115a28].
Env Event: Command execution completed on all instances successfully.
Env Event: [Instance: i-0733205831e115a28] Successfully finished bundling 15 log(s)
Env Event: Environment health has transitioned from Severe to Degraded. 100.0 % of the requests are failing with HTTP 5xx. Command failed on 1 out of 2 instances. Incorrect application version found on 1 out of 2 instances. Expected version "1" (deployment 1). ELB processes are not healthy on 1 out of 2 instances. Application update is aborting. 1 out of 2 instances completed (running for 4 minutes). ELB health is failing or not available for 1 out of 2 instances. Impaired services on 1 out of 2 instances.
App is running at mup-env-meteor-example-deploy-aws.eba-cah6ppkm.us-east-1.elasticbeanstalk.com
=> Finding old versions
=> Removing old versions
=> Updating Beanstalk SSL Config
and this is everything in my mup.js file:
module.exports = {
  app: {
    // Tells mup that the AWS Beanstalk plugin will manage the app
    type: 'aws-beanstalk',
    name: 'meteor-example-deploy-aws',
    path: 'G:/GitFolder/meteor-example-deploy-aws/app',
    env: {
      ROOT_URL: 'http://mup-env-meteor-example-deploy-aws.eba-cah6ppkm.us-east-1.elasticbeanstalk.com/',
      MONGO_URL: 'mongodb://MYUSERNAME:MYPASSREDACTED@docdb-2020-07-06-07-57-38.cluster-c9vs8fwnppko.us-east-1.docdb.amazonaws.com:27017/?ssl=true&ssl_ca_certs=rds-combined-ca-bundle.pem&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false'
    },
    auth: {
      id: 'AKIA6C...',
      secret: 'xCBpL....'
    },
    minInstances: 1
  },
  plugins: ['mup-aws-beanstalk']
};

Unable to run nvidia-docker. docker: Error response from daemon: OCI runtime create failed:

I was trying to re-implement this code from GitHub, and it requires me to install nvidia-docker and run it. The installation of nvidia-docker seemed successful. However, when I run the command nvidia-docker run -it --ipc=host deep-colorization, it throws the following error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request\\\\n\\\"\"": unknown.
ERRO[0002] error waiting for container: context canceled
I am not sure what the error means, as I don't have any previous experience with the Docker ecosystem. Any kind of assistance is appreciated. I am running Ubuntu 18 by the way.
Thanks in advance.
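The "driver error: failed to process request" part of that message usually implicates the host NVIDIA driver rather than Docker itself; a first diagnostic sketch, assuming the standard NVIDIA tooling is installed on the host:
# confirm the kernel driver is loaded and the GPU is visible
nvidia-smi
# confirm the container runtime hook can talk to the driver
nvidia-container-cli info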

Anchore Engine - Jenkins CI plugin

We are trying to scan our docker images using Anchore Engine Jenkins plugin.
Currently we create our application docker images, push it in our own private local registry and then deploy it in our test environments.
Now, we want to setup docker image scanning in our CI/CD process to check for any vulnerabilities.
We have installed Anchore Engine using the recommended Docker-Compose yaml method given in the Documentation link:
https://anchore.freshdesk.com/support/solutions/articles/36000020729-install-on-docker-swarm
Post installation, we installed the
Anchore Container Image Scanner Plugin in Jenkins.
We configured the plugin as mentioned in the document link:
https://wiki.jenkins.io/display/JENKINS/Anchore+Container+Image+Scanner+Plugin
However, the scanning fails with the following error message:
2018-10-11T07:01:44.647 INFO AnchoreWorker Analysis request accepted, received image digest sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8
2018-10-11T07:01:44.647 INFO AnchoreWorker Waiting for analysis of 10.180.25.2:5000/hello-world:latest, polling status periodically
2018-10-11T07:01:44.647 DEBUG AnchoreWorker anchore-engine get policy evaluation URL: http://10.180.25.2:8228/v1/images/sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8/check?tag=10.180.25.2:5000/hello-world:latest&detail=true
2018-10-11T07:01:44.648 DEBUG AnchoreWorker Attempting anchore-engine get policy evaluation (1/300)
2018-10-11T07:01:44.675 DEBUG AnchoreWorker anchore-engine get policy evaluation failed. URL: http://10.180.25.2:8228/v1/images/sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8/check?tag=10.180.25.2:5000/hello-world:latest&detail=true, status: HTTP/1.1 404 NOT FOUND, error: {
"detail": {},
"httpcode": 404,
"message": "image is not analyzed - analysis_status: not_analyzed"
}
NOTE:
In Image TAG 10.180.25.2:5000/hello-world:latest, 10.180.25.2:5000 is our local private registry and hello-world:latest is latest hello-world image available in docker hub which we pulled and pushed in our registry to try out image scanning using Anchore-Engine.
Unfortunately we have not been able to find many resources online to help resolve this issue.
If anyone has worked with Anchore Engine, please could you take a look and help us resolve it?
Also, any suggestions or alternatives to anchore-engine, or detailed steps in case we have missed anything, would be really appreciated.
End of the output is as follows:
2018-10-15T00:48:43.880 WARN AnchoreWorker anchore-engine get policy evaluation failed. HTTP method: GET, URL: http://10.180.25.2:8228/v1/images/sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8/check?tag=10.180.25.2:5000/hello-world:latest&detail=true, status: 404, error: {
"detail": {},
"httpcode": 404,
"message": "image is not analyzed - analysis_status: not_analyzed"
}
2018-10-15T00:48:43.880 WARN AnchoreWorker Exhausted all attempts polling anchore-engine. Analysis is incomplete for sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8
2018-10-15T00:48:43.880 ERROR AnchorePlugin Failing Anchore Container Image Scanner Plugin step due to errors in plugin execution
hudson.AbortException: Timed out waiting for anchore-engine analysis to complete (increasing engineRetries might help). Check above logs for errors from anchore-engine
at com.anchore.jenkins.plugins.anchore.BuildWorker.runGatesEngine(BuildWorker.java:480)
at com.anchore.jenkins.plugins.anchore.BuildWorker.runGates(BuildWorker.java:343)
at com.anchore.jenkins.plugins.anchore.AnchoreBuilder.perform(AnchoreBuilder.java:338)
at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
at hudson.model.Build$BuildExecution.build(Build.java:206)
at hudson.model.Build$BuildExecution.doRun(Build.java:163)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
I also checked the status and found the following:
docker run anchore/engine-cli:latest anchore-cli --u admin --p admin123 --url http://172.18.0.1:8228/v1 system status
Service analyzer (dockerhostid-anchore-engine, http://anchore-engine:8084): up
Service catalog (dockerhostid-anchore-engine, http://anchore-engine:8082): up
Service policy_engine (dockerhostid-anchore-engine, http://anchore-engine:8087): down (unavailable)
Service simplequeue (dockerhostid-anchore-engine, http://anchore-engine:8083): up
Service apiext (dockerhostid-anchore-engine, http://anchore-engine:8228): up
Service kubernetes_webhook (dockerhostid-anchore-engine, http://anchore-engine:8338): up
Engine DB Version: 0.0.7
Engine Code Version: 0.2.4
It seems the policy_engine service is down:
Service policy_engine (dockerhostid-anchore-engine, http://anchore-engine:8087): down (unavailable)
I also checked the Docker logs and found the following error:
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [DEBUG] service (policy_engine) starting in: 4
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [INFO] Registration complete.
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [INFO] Checking feeds client credentials
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [DEBUG] Initializing a feeds client
[service:policy_engine] 2018-10-15 09:37:47+0000 [-] [bootstrap] [DEBUG] init values: [None, None, None, (), None, None]
[service:policy_engine] 2018-10-15 09:37:47+0000 [-] [bootstrap] [DEBUG] using values: ['https://ancho.re/v1/service/feeds', 'https://ancho.re/oauth/token', 'https://ancho.re/v1/account/users', 'anon@ancho.re', 3, 60]
[service:policy_engine] 2018-10-15 09:37:47+0000 [-] [urllib3.connectionpool] [DEBUG] Starting new HTTPS connection (1): ancho.re
[service:policy_engine] 2018-10-15 09:37:50+0000 [-] [bootstrap] [ERROR] Preflight checks failed with error: HTTPSConnectionPool(host='ancho.re', port=443): Max retries exceeded with url: /v1/account/users/anon@ancho.re (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7ffa905f0b90>: Failed to establish a new connection: [Errno 113] No route to host',)). Aborting service startup
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/anchore_manager/cli/service.py", line 158, in startup_service
raise Exception("process exited: " + str(rc))
Exception: process exited: 1
[anchore-policy-engine] [anchore_manager.cli.service/startup_service()] [INFO] service process exited at (Mon Oct 15 09:37:50 2018): process exited: 1
[anchore-policy-engine] [anchore_manager.cli.service/startup_service()] [INFO] exiting service thread
Thanks and Regards,
Rohan Shetty
When images are added to anchore-engine, they are queued for analysis which moves them through a simple state machine that starts with ‘not_analyzed’, goes to ‘analyzing’ and finally ends in either ‘analyzed’ or ‘analysis_failed’. Only when an image has reached ‘analyzed’ will a policy evaluation be possible.
The anchore Jenkins plugin will add an image, then poll the engine for image status/evaluation for the configured number of tries (default 300). Once the image goes to ‘analyzed’ (where policy evaluation is possible), the plugin will then receive a policy evaluation result from the engine.
The plugin will fail the build (by default) either if the max retries have been exhausted and the image has not reached ‘analyzed’, or if the image does reach ‘analyzed’ but the policy evaluation produces a ‘fail’ result (meaning the image didn’t pass your configured policy checks). Note that all build-failure behavior can be controlled in the plugin (i.e. there are options to allow the plugin to succeed even if the analysis or image eval fails).
You’ll need to look at the end of the output from your build run (instead of just the beginning from your post), and combined with the information above, it should be clear which scenario is causing the plugin to fail the build.
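To see which scenario you are in, you can drive the same flow by hand with anchore-cli; a sketch reusing the credentials, engine URL, and image tag from the question:
# queue the image for analysis (a no-op if it was already added)
docker run anchore/engine-cli:latest anchore-cli --u admin --p admin123 --url http://172.18.0.1:8228/v1 image add 10.180.25.2:5000/hello-world:latest
# block until the image reaches 'analyzed' (or analysis fails)
docker run anchore/engine-cli:latest anchore-cli --u admin --p admin123 --url http://172.18.0.1:8228/v1 image wait 10.180.25.2:5000/hello-world:latest
# once analyzed, ask for the policy evaluation the Jenkins plugin would receive
docker run anchore/engine-cli:latest anchore-cli --u admin --p admin123 --url http://172.18.0.1:8228/v1 evaluate check 10.180.25.2:5000/hello-world:latest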
We have resolved the issue.
Root Cause:
We were not able to establish a successful HTTPS connection to the URL https://ancho.re from within the anchore-engine Docker container.
As a result, the service:policy_engine was not able to start.
https://ancho.re is required to download policy feeds and sync up periodically. Without these feeds, anchore-engine cannot analyze Docker images.
Solution:
1) We passed a HTTPS_PROXY URL as an environment variable in the docker-compose.yaml of anchore-engine.
We used this proxy URL to bypass restrictions in our environment and establish a connection with https://ancho.re.
2) Restarted the docker containers.
Finally we got all services up and running including Anchore policy-engine.
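A quick way to verify a fix like this, assuming the engine container is named anchore-engine and has curl available in its image:
# confirm outbound HTTPS connectivity to the feed service from inside the container
docker exec anchore-engine curl -sSv https://ancho.re
# then re-check the services; policy_engine should now report "up"
docker run anchore/engine-cli:latest anchore-cli --u admin --p admin123 --url http://172.18.0.1:8228/v1 system status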
FYI:
It takes a while to download all the required feeds, depending on your internet speed.
Lastly, Thanks to the Anchore community for quick responses and support over slack.
Hope this helps.
Warm Regards,
Rohan Shetty

Monitoring JHipster error starting jhipster-alerter

I have installed monitoring out of the box according to this link:
http://www.jhipster.tech/monitoring/
When I start with:
docker-compose up -d
Everything starts but not Elastalert:
First log:
ERROR: for monitoring_jhipster-alerter_1 Cannot start service jhipster-alerter: OCI runtime create failed: container_linux.go:296: starting container process caused "process_linux.go:398: container init caused \"rootfs_linux.go:58: mounting \\"/Users/john/source/intellij/company/app/myservice/alerts/config.yaml\\" to rootfs \\"/var/lib/docker/overlay2/5657c6e9e7bb2be5cf4fa9860c04269e34be15641f4e3f0c1449af7cbf82ced5/merged\\" at \\"/var/lib/docker/overlay2/5657c6e9e7bb2be5cf4fa9860c04269e34be15641f4e3f0c1449af7cbf82ced5/merged/opt/elastalert/config.yaml\\" caused \\"not a directory\\"\"": unknown: Are you trying to mount a dire
Starting monitoring_jhipster-import-dashboards_1
Second log:
ERROR: for jhipster-alerter Cannot start service jhipster-alerter: OCI runtime create failed: container_linux.go:296: starting container process caused "process_linux.go:398: container init caused \"rootfs_linux.go:58: mounting \\"/Users/john/source/intellij/company/app/myservice/alerts/config.yaml\\" to rootfs \\"/var/lib/docker/overlay2/5657c6e9e7bb2be5cf4fa9860c04269e34be15641f4e3f0c1449af7cbf82ced5/merged\\" at \\"/var/lib/docker/overlay2/5657c6e9e7bb2be5cf4fa9860c04269e34be15641f4e3f0c1449af7cbf82ced5/merged/opt/elastalert/config.yaml\\" caused \\"not a directory\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: Encountered errors while bringing up the project.
Using the default docker-compose.yml file that I got with:
curl -O https://raw.githubusercontent.com/jhipster/jhipster-console/master/bootstrap/docker-compose.yml
I'm not sure what this message means.
This is because the volumes path for JHipster Alerter is incorrect. Change

jhipster-alerter:
  image: jhipster/jhipster-alerter:latest
  environment:
    - ES_HOST=jhipster-elasticsearch
    - ES_PORT=9200
  volumes:
    - ../jhipster-alerter/rules/:/opt/elastalert/rules/
    - ../alerts/config.yaml:/opt/elastalert/config.yaml

to

  volumes:
    - ../alerts/rules/:/opt/elastalert/rules/
    - ../jhipster-alerter/config.yaml:/opt/elastalert/config.yaml
As shown in https://github.com/jhipster/jhipster-console/pull/102/commits/fa5bc75ec29ca357477ac1a22203ae6cbe2af2f7.
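Docker raises "not a directory" when the type of a host path does not match the type of the container path it is mounted onto, so a quick way to confirm the fix is to check what actually exists on the host; a minimal sketch, run from the directory containing docker-compose.yml:
# config.yaml should be a regular file, rules/ should be a directory
ls -ld ../jhipster-alerter/config.yaml ../alerts/rules/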
