Packer option to avoid warning while shell provisioning - Linux

Is there a way to avoid a warning during Packer shell provisioning? My Packer build exits with this warning:
googlecompute: /usr/local/lib/python2.7/dist-packages/pip/vendor/urllib3/util/ssl.py:160: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
googlecompute: InsecurePlatformWarning
==> googlecompute: Deleting instance...
googlecompute: Instance has been deleted!
==> googlecompute: Deleting disk...
googlecompute: Disk has been deleted!
Build 'googlecompute' errored: Script exited with non-zero exit status: 1

That's not a warning; it's an error.
You could suppress it by forcing your script to exit with 0, but you probably want to fix the underlying error instead.
If you provide your script, I can give more detailed guidance.
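For completeness, both options as shell sketches (your-failing-command is a placeholder for whatever your provisioning script runs; the pip lines assume the warning comes from pip on Python older than 2.7.9, which is what that urllib3 message usually indicates):
# Option 1: suppress -- force a zero exit status (hides real failures too, use with care)
your-failing-command || true
# Option 2: fix -- give Python a proper SSL backend so the InsecurePlatformWarning
# (and the failure behind it) goes away
pip install --upgrade pip
pip install pyopenssl ndg-httpsclient pyasn1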

Related

Getting error in Chef - ERROR: shard_seed: Failed to get dmi property serial_number: is dmidecode installed?

I am getting an error while running a recipe in Chef:
chef-client -zr "recipe[test-cookbook::test-recipe1]"
[2020-08-05T16:01:06+00:00] WARN: No config file found or specified on command line. Using command line options instead.
Starting Chef Infra Client, version 16.3.45
[2020-08-05T16:01:08+00:00] ERROR: shard_seed: Failed to get dmi property serial_number: is dmidecode installed?
resolving cookbooks for run list: ["test-cookbook::test-recipe1"]
Synchronizing Cookbooks:
test-cookbook (0.1.0)
Installing Cookbook Gems:
Compiling Cookbooks...
Converging 0 resources
Running handlers:
Running handlers complete
Chef Infra Client finished, 0/0 resources updated in 01 seconds
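For what it's worth, that shard_seed message comes from Ohai trying to read the machine's serial number from DMI data; a likely fix (an assumption, not confirmed in this thread) is to install dmidecode on the node:
# assumption: Ohai shells out to dmidecode for DMI properties such as
# serial_number, and minimal images often ship without it
sudo yum install -y dmidecode        # RHEL/CentOS family
sudo apt-get install -y dmidecode    # Debian/Ubuntu family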

Anchore Engine - Jenkins CI plugin

We are trying to scan our Docker images using the Anchore Engine Jenkins plugin.
Currently we create our application Docker images, push them to our own private local registry, and then deploy them to our test environments.
Now we want to set up Docker image scanning in our CI/CD process to check for vulnerabilities.
We installed Anchore Engine using the recommended Docker Compose YAML method from the documentation:
https://anchore.freshdesk.com/support/solutions/articles/36000020729-install-on-docker-swarm
After installation, we installed the Anchore Container Image Scanner plugin in Jenkins.
We configured the plugin as described in the documentation:
https://wiki.jenkins.io/display/JENKINS/Anchore+Container+Image+Scanner+Plugin
However, the scan fails with the following error message:
2018-10-11T07:01:44.647 INFO AnchoreWorker Analysis request accepted, received image digest sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8
2018-10-11T07:01:44.647 INFO AnchoreWorker Waiting for analysis of 10.180.25.2:5000/hello-world:latest, polling status periodically
2018-10-11T07:01:44.647 DEBUG AnchoreWorker anchore-engine get policy evaluation URL: http://10.180.25.2:8228/v1/images/sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8/check?tag=10.180.25.2:5000/hello-world:latest&detail=true
2018-10-11T07:01:44.648 DEBUG AnchoreWorker Attempting anchore-engine get policy evaluation (1/300)
2018-10-11T07:01:44.675 DEBUG AnchoreWorker anchore-engine get policy evaluation failed. URL: http://10.180.25.2:8228/v1/images/sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8/check?tag=10.180.25.2:5000/hello-world:latest&detail=true, status: HTTP/1.1 404 NOT FOUND, error: {
"detail": {},
"httpcode": 404,
"message": "image is not analyzed - analysis_status: not_analyzed"
}
NOTE:
In the image tag 10.180.25.2:5000/hello-world:latest, 10.180.25.2:5000 is our local private registry and hello-world:latest is the latest hello-world image from Docker Hub, which we pulled and pushed into our registry to try out image scanning with Anchore Engine.
Unfortunately we have not been able to find many resources online to help resolve this issue.
If anyone has worked with Anchore Engine, please take a look and help us resolve it.
Also, any suggestions, alternatives to Anchore Engine, or detailed steps in case we missed anything would be much appreciated.
End of the output is as follows:
2018-10-15T00:48:43.880 WARN AnchoreWorker anchore-engine get policy evaluation failed. HTTP method: GET, URL: http://10.180.25.2:8228/v1/images/sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8/check?tag=10.180.25.2:5000/hello-world:latest&detail=true, status: 404, error: {
"detail": {},
"httpcode": 404,
"message": "image is not analyzed - analysis_status: not_analyzed"
}
2018-10-15T00:48:43.880 WARN AnchoreWorker Exhausted all attempts polling anchore-engine. Analysis is incomplete for sha256:7d6fb7e5e7a74a4309cc436f6d11c29a96cbf27a4a8cb45a50cb0a326dc32fe8
2018-10-15T00:48:43.880 ERROR AnchorePlugin Failing Anchore Container Image Scanner Plugin step due to errors in plugin execution
hudson.AbortException: Timed out waiting for anchore-engine analysis to complete (increasing engineRetries might help). Check above logs for errors from anchore-engine
at com.anchore.jenkins.plugins.anchore.BuildWorker.runGatesEngine(BuildWorker.java:480)
at com.anchore.jenkins.plugins.anchore.BuildWorker.runGates(BuildWorker.java:343)
at com.anchore.jenkins.plugins.anchore.AnchoreBuilder.perform(AnchoreBuilder.java:338)
at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
at hudson.model.Build$BuildExecution.build(Build.java:206)
at hudson.model.Build$BuildExecution.doRun(Build.java:163)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
I also checked the service status and found the following:
docker run anchore/engine-cli:latest anchore-cli --u admin --p admin123 --url http://172.18.0.1:8228/v1 system status
Service analyzer (dockerhostid-anchore-engine, http://anchore-engine:8084): up
Service catalog (dockerhostid-anchore-engine, http://anchore-engine:8082): up
Service policy_engine (dockerhostid-anchore-engine, http://anchore-engine:8087): down (unavailable)
Service simplequeue (dockerhostid-anchore-engine, http://anchore-engine:8083): up
Service apiext (dockerhostid-anchore-engine, http://anchore-engine:8228): up
Service kubernetes_webhook (dockerhostid-anchore-engine, http://anchore-engine:8338): up
Engine DB Version: 0.0.7
Engine Code Version: 0.2.4
It seems the policy_engine service is down:
Service policy_engine (dockerhostid-anchore-engine, http://anchore-engine:8087): down (unavailable)
I also checked the Docker logs and found this error:
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [DEBUG] service (policy_engine) starting in: 4
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [INFO] Registration complete.
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [INFO] Checking feeds client credentials
[service:policy_engine] 2018-10-15 09:37:46+0000 [-] [bootstrap] [DEBUG] Initializing a feeds client
[service:policy_engine] 2018-10-15 09:37:47+0000 [-] [bootstrap] [DEBUG] init values: [None, None, None, (), None, None]
[service:policy_engine] 2018-10-15 09:37:47+0000 [-] [bootstrap] [DEBUG] using values: ['https://ancho.re/v1/service/feeds', 'https://ancho.re/oauth/token', 'https://ancho.re/v1/account/users', 'anon#ancho.re', 3, 60]
[service:policy_engine] 2018-10-15 09:37:47+0000 [-] [urllib3.connectionpool] [DEBUG] Starting new HTTPS connection (1): ancho.re
[service:policy_engine] 2018-10-15 09:37:50+0000 [-] [bootstrap] [ERROR] Preflight checks failed with error: HTTPSConnectionPool(host='ancho.re', port=443): Max retries exceeded with url: /v1/account/users/anon#ancho.re (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7ffa905f0b90>: Failed to establish a new connection: [Errno 113] No route to host',)). Aborting service startup
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/anchore_manager/cli/service.py", line 158, in startup_service
raise Exception("process exited: " + str(rc))
Exception: process exited: 1
[anchore-policy-engine] [anchore_manager.cli.service/startup_service()] [INFO] service process exited at (Mon Oct 15 09:37:50 2018): process exited: 1
[anchore-policy-engine] [anchore_manager.cli.service/startup_service()] [INFO] exiting service thread
Thanks and Regards,
Rohan Shetty
When images are added to anchore-engine, they are queued for analysis which moves them through a simple state machine that starts with ‘not_analyzed’, goes to ‘analyzing’ and finally ends in either ‘analyzed’ or ‘analysis_failed’. Only when an image has reached ‘analyzed’ will a policy evaluation be possible.
The anchore Jenkins plugin will add an image, then poll the engine for image status/evaluation for the configured number of tries (default 300). Once the image goes to ‘analyzed’ (where policy evaluation is possible), the plugin will then receive a policy evaluation result from the engine.
The plugin will fail the build (by default) if the maximum number of retries has been exhausted and the image has not reached ‘analyzed’, or if the image does reach ‘analyzed’ but the policy evaluation produces a ‘fail’ result (meaning the image didn’t pass your configured policy checks). Note that all build-failure behavior can be controlled in the plugin (i.e. there are options to allow the plugin to succeed even if the analysis or the image evaluation fails).
You’ll need to look at the end of the output from your build run (instead of just the beginning from your post), and combined with the information above, it should be clear which scenario is causing the plugin to fail the build.
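To see which scenario you're in, you can poll the image status yourself with anchore-cli, mirroring what the plugin does (a sketch reusing the credentials, URL, and image tag from the question):
docker run anchore/engine-cli:latest anchore-cli --u admin --p admin123 \
  --url http://10.180.25.2:8228/v1 image get 10.180.25.2:5000/hello-world:latest
The output should include the analysis status; a policy evaluation will only succeed once it reads ‘analyzed’.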
We have resolved the issue.
Root Cause:
We were not able to establish an HTTPS connection to https://ancho.re from within the anchore-engine Docker container.
As a result, the policy_engine service was not able to start.
https://ancho.re is required to download the policy feeds and sync them periodically. Without these feeds, anchore-engine is not able to analyse Docker images.
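A quick way to confirm this kind of connectivity failure is to try the feed endpoint from inside the container (a sketch; the container name and the presence of curl inside the image are assumptions):
docker exec anchore-engine curl -v https://ancho.re/v1/service/feeds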
Solution:
1) We passed an HTTPS_PROXY URL as an environment variable in the docker-compose.yaml of anchore-engine (see the sketch after this list).
We used this proxy to bypass restrictions in our environment and establish a connection to https://ancho.re.
2) Restarted the docker containers.
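A sketch of the change from step 1 (the exact service name and file layout depend on your docker-compose.yaml; the proxy URL is a placeholder for your environment's proxy):
services:
  anchore-engine:
    environment:
      - HTTPS_PROXY=http://your-proxy.example.com:3128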
Finally, we got all services up and running, including the Anchore policy engine.
FYI:
It takes a while to download all the required feeds, depending on your internet speed.
Lastly, thanks to the Anchore community for the quick responses and support over Slack.
Hope this helps.
Warm Regards,
Rohan Shetty

Puppet can't deactivate nodes

I'm using Puppet with PuppetDB. The two are connected, and I can see PuppetDB update whenever I add or update a node.
But when I try to deactivate a node with puppet node deactivate nodeName I get back:
Warning: Error connecting to puppetdb on 8081 at route /pdb/cmd/v1?checksum=36a4313be5bac718badc45495f0266bf87c7a806&version=3&certname=v-hub-1.5659710c-33d5-45f2-a477-6ccf1357e1ac.local.dockerapp.io&command=deactivate_node, error message received was 'SSL_connect SYSCALL returned=5 errno=0 state=unknown state'. Failing over to the next PuppetDB server_url in the 'server_urls' list
Error: Failed to execute '/pdb/cmd/v1?checksum=36a4313be5bac718badc45495f0266bf87c7a806&version=3&certname=v-hub-1.5659710c-33d5-45f2-a477-6ccf1357e1ac.local.dockerapp.io&command=deactivate_node' on at least 1 of the following 'server_urls': https://puppetdb:8081
Error: undefined method `[]' for #<Puppet::Util::Log:0x00000003a15178>
Error: Try 'puppet help node deactivate' for usage
Any suggestions on how to debug this? I've tried deleting and regenerating the certificate with puppet cert generate puppetdb. As mentioned, there is no problem creating or updating nodes in PuppetDB.
Puppetserver version: 2.7.2
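One way to debug the handshake independently of Puppet is to connect with openssl using the agent's own certificates (a sketch; run it from the machine where the deactivate command fails, with the paths taken from puppet config print):
openssl s_client -connect puppetdb:8081 \
  -cert "$(puppet config print hostcert)" \
  -key "$(puppet config print hostprivkey)" \
  -CAfile "$(puppet config print localcacert)"
If the handshake fails here too, the problem lies with the certificates or the PuppetDB SSL configuration rather than with the node deactivate command itself.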

Issue deploying to Elastic Beanstalk: "50npm.sh failed"

I'm having an issue deploying some code to one of my environments.
Creating application version archive "app-aa68e-170213_103330".
Uploading PAS-API/app-aa68e-170213_103330.zip to S3. This may take a while.
Upload Complete.
INFO: Environment update is starting.
INFO: Deploying new version to instance(s).
ERROR: Failed to run npm install. Snapshot logs for more details.
ERROR: [Instance: i-0ee97a5c7bcab8d51] Command failed on instance. Return code: 1 Output: (TRUNCATED)..."/opt/elasticbeanstalk/containerfiles/ebnode.py", line 180, in npm_install
raise e
subprocess.CalledProcessError: Command '['/opt/elasticbeanstalk/node-install/node-v4.4.6-linux-x64/bin/npm', '--production', 'install']' returned non-zero exit status 1.
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/50npm.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
ERROR: Unsuccessful command execution on instance id(s) 'i-0ee97a5c7bcab8d51'. Aborting the operation.
It seems that the deployment is failing when npm install is running on the server.
When I checked the package.json, it turned out some of the dependencies had been added as tarball URLs instead of version numbers, so after install-and-save the entries looked like this:
"basic-auth": "https://registry.npmjs.org/basic-auth/-/basic-auth-1.0.4.tgz"
rather than:
"basic-auth": "^1.1.0",
This was causing npm install to fail on the EB instance; after restoring the version ranges, it works now.
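For anyone hitting the same symptom: reinstalling the dependency by version rewrites the tarball entry in package.json (a sketch, using the basic-auth package from above):
npm uninstall basic-auth --save
npm install "basic-auth@^1.1.0" --save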
If it was OK before and you didn't change deployment settings, you can reboot the instance; that usually solves the problem.

OpenStack TripleO undercloud installation: "Could not find class ::ironic::drivers::deploy"

My host is:
cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
The host setup was done as described here: http://docs.openstack.org/developer/tripleo-docs/environments/environments.html#virtual-environment up to the "Continue with Undercloud ..." step
The result:
sudo virsh list --all
Id Name State
----------------------------------------------------
3 baremetalbrbm_0 running
4 instack running
- baremetalbrbm_1 shut off
The undercloud setup was done as described here: http://docs.openstack.org/developer/tripleo-docs/installation/installation.html
The installation was attempted on the instack VM. I did the SSL setup as well.
Running
openstack undercloud install
fails with
+ puppet apply --detailed-exitcodes /etc/puppet/manifests/puppet-stack-config.pp
Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.
Warning: Scope(Class[Swift]): swift_hash_suffix has been deprecated and should be replaced with swift_hash_path_suffix, this will be removed
Warning: Scope(Class[Nova::Keystone::Auth]): Note that service_name parameter default value will be changed to "Compute Service" (according future release. In case you use different value, please update your manifests accordingly.
Warning: Scope(Class[Nova::Keystone::Auth]): Note that service_name_v3 parameter default value will be changed to "Compute Service v3" (acco in a future release. In case you use different value, please update your manifests accordingly.
Warning: Scope(Class[Glance::Api]): The known_stores parameter is deprecated, use stores instead
Warning: Scope(Class[Glance::Api]): default_store not provided, it will be automatically set to glance.store.filesystem.Store
Warning: Scope(Class[Nova::Api]): In N cycle, enabled_apis will have to be an array of APIs to enable.
Warning: Scope(Class[Neutron::Server]): identity_uri, auth_tenant, auth_user, auth_password, auth_region configuration options are deprecateted options
Warning: Scope(Class[Neutron::Agents::Dhcp]): The dhcp_domain parameter is deprecated and will be removed in future releases
Warning: Scope(Class[Heat]): Default value for rabbit_heartbeat_timeout_threshold parameter is different from OpenStack project defaults
Warning: Scope(Class[Heat]): "admin_user", "admin_password", "admin_tenant_name" configuration options are deprecated in favor of auth_plugi
Warning: Scope(Class[Nova::Network::Neutron]): neutron_auth_plugin parameter is deprecated and will be removed in a future release, use neut
Error: Could not find class ::ironic::drivers::deploy for instack on node instack
Error: Could not find class ::ironic::drivers::deploy for instack on node instack
+ rc=1
+ set -e
+ echo 'puppet apply exited with exit code 1'
puppet apply exited with exit code 1
+ '[' 1 '!=' 2 -a 1 '!=' 0 ']'
+ exit 1
[2016-05-19 15:32:29,361] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/cot status 1]
[2016-05-19 15:32:29,362] (os-refresh-config) [ERROR] Aborting...
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 987, in install
    _run_orc(instack_env)
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 866, in _run_orc
    _run_live_command(args, instack_env, 'os-refresh-config')
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 444, in _run_live_command
    raise RuntimeError('%s failed. See log for details.' % name)
RuntimeError: os-refresh-config failed. See log for details.
Command 'instack-install-undercloud' returned non-zero exit status 1
I tried to install the Ironic API as described here: http://docs.openstack.org/developer/ironic/deploy/install-guide.html, although to my understanding this should not be necessary, since the undercloud was not installed on a bare-metal machine.
Same result.
Some hours of Puppet reading later, I went into the /etc/puppet/modules/ironic/manifests/drivers folder and found, to no surprise, that the deploy class was not there. Perhaps it should not have been needed? I copied it from https://github.com/openstack/puppet-ironic/blob/master/manifests/drivers/deploy.pp and it got past the error originally reported. Fingers crossed.
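For reference, the manual copy described above can be done with curl (a sketch; the module path assumes the default undercloud layout, and the raw URL is the GitHub raw form of the link above):
sudo curl -o /etc/puppet/modules/ironic/manifests/drivers/deploy.pp \
  https://raw.githubusercontent.com/openstack/puppet-ironic/master/manifests/drivers/deploy.pp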
