I am running Packer via a Jenkins pipeline and want to delete the AMI afterwards.
I am using a small python3/boto3 script to do that.
However, when calling describe_images I get an empty list, and no errors (checked with debug logging).
If I run the same script via the same Docker-based agent (on an EC2 Jenkins node) but from a different pipeline, it works.
I also have no issues on another project with similar settings.
Sometimes it works, but only intermittently and seldom.
I can rule out a general config issue, as the same script works perfectly on the same systems (just from a different Jenkins pipeline).
I can also rule out a general issue with the Jenkins pipeline, as it will intermittently work without any changes.
What am I missing?
Yikes, this was a stupid mistake on my side. My script that fetches the AMI ID from Packer's manifest.json was not returning the correct AMI ID (I had assumed I would only find one AMI ID in that file).
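For anyone who hits the same thing: Packer's manifest post-processor records one entry per build, so manifest.json can contain several artifact_id values. A minimal sketch of grabbing the most recent build's AMI, shown here with jq and the AWS CLI rather than my boto3 script:

    # artifact_id has the form "<region>:<ami-id>", e.g. "eu-west-1:ami-0123456789abcdef0"
    AMI_ID=$(jq -r '.builds[-1].artifact_id' manifest.json | cut -d: -f2)

    aws ec2 describe-images --image-ids "$AMI_ID"   # sanity check: should no longer come back empty
    aws ec2 deregister-image --image-id "$AMI_ID"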
I think I'm fundamentally missing something. I'm new to CI/CD and trying to set up my first pipeline ever with GitLab.
The project is a pre-existing PHP project.
I don't want to clean it up just yet. For the moment I've pushed the whole thing into a Docker container, and it runs fine, talking to Google Cloud's MySQL databases etc. as it should, both locally and on a remote Google Cloud test VM.
The dream is to be able to push to the development branch, then merge the dev branch into the test branch, which TRIGGERS the automated tests (the easy part) and also causes the remote test VM (hosted on Google Cloud) to PULL the newest changes, rebuild the image from the latest Dockerfile (or pull the latest image from the GitLab image registry), and then rebuild the container with the newest image.
I'm playing around with GitLab's runner, but I don't understand what it's actually for, despite looking through almost all of the online content about it.
Do I just install it on the Google Cloud VM, and then, when I push to GitLab from my development machine, the repo will 'signal' the runner (which is running on the VM) to execute a bunch of scripts (which might include a git pull of the newest changes)?
Because I already pre-package my app into a container locally (and push the image to the image registry), do I need to use Docker as the executor on the runner, or can I just use shell and run the commands in a shell?
What am I missing?
TLDR and extra:
Questions:
What is the runner actually for, and where is it meant to be installed?
Does it care which directory it is run in? If not, where does it execute its script commands? At root?
If I am locally building my own images and uploading them to GitLab's registry, do I need to set my executor to docker? Shouldn't I just set it to shell, pull the image, and build it? (Assuming the runner is running on the remote VM.)
What is the runner actually for?
You have your project along with a .gitlab-ci.yml file. .gitlab-ci.yml defines the stages of your CI/CD pipeline and what to do in each stage. This typically consists of build, test, and deploy stages. Within each stage you can define multiple jobs. For example, in the build stage you may have three jobs to build on Debian, CentOS, and Windows (in GitLab glossary: build:debian, build:centos, build:windows). A GitLab runner clones the project, reads the .gitlab-ci.yml file, and does what it is instructed to do. So basically a GitLab runner is a Go process that executes the instructed tasks.
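As an illustration only (the stage layout follows the example above, the commands are placeholders), such a .gitlab-ci.yml could look like:

    stages:
      - build
      - test
      - deploy

    build:debian:
      stage: build
      script:
        - make build-debian    # placeholder build command

    test:
      stage: test
      script:
        - make test            # placeholder test command

    deploy:
      stage: deploy
      script:
        - ./deploy.sh          # placeholder deploy command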
Where is it meant to be installed?
You can install a runner in any of the environments listed here: https://docs.gitlab.com/runner/install/
Or you can use a shared runner that is already installed on GitLab's infrastructure.
Does it care which directory it is run in?
Yes. Every task executed by the runner runs relative to CI_PROJECT_DIR, which is defined in https://gitlab.com/help/ci/variables/README. But you can alter this behaviour.
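For example, a job like the following (illustrative only) runs its commands inside the checked-out project directory:

    show-workdir:
      script:
        - pwd                     # the runner's checkout directory for this project
        - echo "$CI_PROJECT_DIR"  # the same path, exposed as a predefined variable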
Where does it execute its script commands? At root?
Do I need to set my executor to docker? Shouldn't I just set it to shell, pull the image, and build it?
A runner can have multiple executors such as docker, shell, virtualbox, etc., with docker being the most common one. If you use docker as the executor you can pull any image from Docker Hub or your configured registry, and you can do a lot with Docker images. In a Docker environment you normally run as the root user.
https://docs.gitlab.com/runner/executors/README.html
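If you install the runner on your own VM, registration is the point where you choose the executor; a rough non-interactive example (URL, token, and description are placeholders) would be:

    gitlab-runner register \
      --non-interactive \
      --url https://gitlab.com/ \
      --registration-token "<project-registration-token>" \
      --description "gcp-test-vm" \
      --executor shell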
Also, if you check the GitLab access logs, you'll see that the runner is constantly polling the server.
I am trying to debug a shell script that is executed via a Jenkins job. The first thing the script does is include another script that lives in a completely different repo. My instinct tells me that the user Jenkins executes the script as has access to the directory of the other repo through $PATH or some other similar mechanism, but nothing I'm seeing indicates this.
I've looked over the variables in http://$host/systemInfo, tried logging on to the Linux box, switched to various users and searched through the command history for each, looked at the $PATH variable for each, and even tried executing a test shell script with the same include as different users. I'm still not seeing anything that indicates how Jenkins is able to include a file from a different repo, and I have not been able to get the include to work in my test script.
My main questions are:
How can I determine which user Jenkins executes the original shell script as? I would assume the 'jenkins' user, but I'm not able to get the include to work in my test script when executing as that user.
How is Jenkins able to include a script from a different repo?
I'm sure I'm just running into some fundamental Jenkins ignorance on my part but not finding answers. Thanks in advance for any insight.
Finally found the answer and it seems really obvious now that I see it. The Jenkins server that the job runs from has a PATH environment variable defined in the server config in the Jenkins interface. This PATH points to the directory containing the external script.
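If you want to confirm this from inside the job itself, a throwaway build step along these lines shows both the executing user and the PATH the job actually sees:

    whoami                       # the user Jenkins runs the build as
    echo "$PATH" | tr ':' '\n'   # the effective PATH, including anything set in Jenkins' server config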
We are successfully using CodeDeploy for deployments. However, we have a request from the client to separate the deployment script repository from the code repository; right now the code repository contains the appspec.yml and the other scripts that need to be run, and it is available to the coders too.
I tried searching Google and Stack Overflow but found nothing :(
Do we need to use another tool like Chef, Puppet, etc.? The client, however, wants a solution using AWS only.
Kindly help.
I've accomplished this by adding an extra step to my build process.
During the build, my CI tool checks out a second repository which contains the deployment-related scripts and the appspec.yml file. After that we zip up the code + scripts and ship the bundle to CodeDeploy.
Don't forget that appspec.yml has to be in the root directory of the bundle.
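As a rough sketch (the repository URL, application name, and bucket are made up), the extra build step could look like this:

    # fetch the deployment repo next to the code that is being built
    git clone https://example.com/acme/deploy-scripts.git /tmp/deploy-scripts

    # appspec.yml (and the hook scripts it references) must end up at the bundle root
    cp /tmp/deploy-scripts/appspec.yml .
    cp -r /tmp/deploy-scripts/scripts ./scripts

    # bundle code + scripts, upload to S3, and register the revision with CodeDeploy
    aws deploy push \
      --application-name my-app \
      --s3-location s3://my-deploy-bucket/my-app.zip \
      --source .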
I hope it helps.
I have a custom deployment script (a *.sh script) defined for my Azure deployment.
Just today, I found that I am unable to publish. I updated my Bitbucket repository, and after a while I get an error similar to the following:
Command 'starter.cmd deploy_pvl_cont ...' was aborted due to no output nor CPU activity for 180 seconds. You can increase the SCM_COMMAND_IDLE_TIMEOUT app setting (or WEBJOBS_IDLE_TIMEOUT if this is a WebJob) if needed.\r\nstarter.cmd deploy_pvl_content.sh
I have tried a number of things to diagnose the problem:
Increased SCM_COMMAND_IDLE_TIMEOUT to 300
Ran the script locally (works)
Set up a fresh deployment slot and tried publishing the same commit (same error)
Tried publishing the previously successful commit (same error)
Looked for useful error messages in a diagnostic log dump (couldn't find anything more useful)
Tried running the deployment script from the Kudu console (no output returned, as if it didn't actually run)
Tried reverting git to a previous version, as suggested by @david-ebbo
Tried simplifying my script to a single echo command with the same results
I'm not sure what I can do to debug this further. Ideally I would like to get the output of the shell script on the Azure host, but I don't know how to get it. Any ideas?
Updated answer
This is a regression caused by the move to git 2.8.x in Azure. The issue is tracked by https://github.com/projectkudu/kudu/issues/2041.
Here is a very simple workaround (and you don't need to bring in the old git tools): instead of setting your COMMAND to deploy_pvl_content.sh, set it to bash deploy_pvl_content.sh
We'll address the issue, but this workaround will get you going.
Original answer (only leaving for context)
You could be running into some flavor of this issue, which is caused by the upgrade to git 2.8.1 that we just did.
While we're trying to get to the bottom of it, please try this workaround to see if that helps:
Go to Kudu Console
Create a d:\home\bin folder
Copy the old Windows git 1.8.x folder in there. You can get the content from here. If you drag and drop the zip onto the Kudu console, there is a special unzip drop area that will expand it.
Try your deployment again
I have a master Jenkins running on a Windows machine. I'm trying to add a Linux slave, and so far it seems fine. For some reason, if a project runs on the Linux machine and triggers another project due to dependencies, the build fails because it is unable to see the artifacts from the first project.
If I run everything on the master (i.e. disabling the slave) it works fine. I suspect some kind of artifact sharing needs to take place, but I haven't been able to find examples or figure out how to do this synchronization.
Thanks in advance!