Packer failed when executed on GitLab Runner

I have a Packer file to deploy CentOS 7 using the vsphere-iso builder. It works fine when executed directly on a Linux server, but when I try to run the same Packer file using a gitlab-runner it fails because it does not wait until the OS is installed: it gives up after waiting for one minute. If I run the packer command with -on-error=run-cleanup-provisioner, the OS install finishes successfully, so clearly the issue is that Packer is just not waiting.
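For reference, the invocation that completes looks like this (a sketch; the template filename is a placeholder):

packer build -on-error=run-cleanup-provisioner centos7.pkr.hcl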
2021/07/20 12:02:40 packer.io plugin: [INFO] Waiting for IP, up to total timeout: 30m0s, settle timeout: 5m0s
==> vsphere-iso.autogenerated_1: Waiting for IP...
==> vsphere-iso.autogenerated_1: Clear boot order...
==> vsphere-iso.autogenerated_1: Power off VM...
==> vsphere-iso.autogenerated_1: Destroying VM...
2021/07/20 12:03:12 [INFO] (telemetry) ending
==> Wait completed after 1 minute 2 seconds
2021/07/20 12:03:12 machine readable: error-count []string{"1"}
==> Some builds didn't complete successfully and had errors:
My boot command is the following as I do not use DHCP.
boot_command = ["<up><tab> text inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/vmware-ks.cfg ip=10.118.12.117::10.118.12.1:255.255.255.0:{{ .Name }}.localhost:ens192:none<enter><wait>"]
I have tested options like ssh_host, ip_wait_address, ip_settle_timeout, ssh_wait_timeout and pause_before_connecting, but nothing seems to work.
As I said, the same pkr.hcl file works fine if I run it manually on a regular Linux box, but not on my gitlab-runner, which is installed directly on my GitLab server (yes, I know that is not best practice, but I only use this runner for this task).
Packer versions 1.7.2 and 1.7.3 tested, gitlab-runner 14.0.0 and 14.0.1 tested.

I managed to make it work by changing the last <wait> in my boot command to <wait5m>. This gives the OS enough time to get installed and the VM rebooted.
The new boot command:
boot_command = ["<up><tab> text inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/vmware-ks.cfg ip=10.118.12.117::10.118.12.1:255.255.255.0:{{ .Name }}.localhost:ens192:none<enter><wait5m>"]
All the other wait options from packer are no longer needed with this boot command.
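After editing the boot command, a quick sanity check that the template still parses (a sketch; the filename is a placeholder):

packer validate centos7.pkr.hcl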
Doing some tests, I also managed to make it work by creating a firewall drop rule for the VM just after the kickstart file was loaded and removing the rule once the OS was installed. Definitely, Packer is just ignoring all of its native wait mechanisms when running on the gitlab-runner.
EDIT: After having the same issue with my Windows templates, I tested using a different gitlab-runner installed on a different server instead of the one on the GitLab server itself, and it worked perfectly with my initial configuration for both Windows and CentOS.

Related

gitlab-runner: Pipeline is pending infinitely

I installed a specific runner, and its status is active.
My .gitlab-ci.yml file:
stages:
  - build

build_maven:
  stage: build
  only:
    - master
  script:
    - echo "hello CI/CD"
  tags:
    - vue-dev-pub
When I push to the master branch the gitlab-runner is running, but the job is pending infinitely. The job page shows:
This job has not started yet
This job is in pending state and is waiting to be picked by a runner
If I execute the runner manually, the job passes.
The output of gitlab-runner verify shows:
Runtime platform arch=amd64 os=linux pid=24616 revision=d0b76032 version=12.0.2
WARNING: Running in user-mode.
WARNING: The user-mode requires you to manually start builds processing:
WARNING: $ gitlab-runner run
WARNING: Use sudo for system-mode:
WARNING: $ sudo gitlab-runner...
Verifying runner... is alive runner=T4iKvsT3
I am waiting for your response, thanks!
If you run the runner manually in debug mode (gitlab-runner --debug run) you may see the actual error message. In my case it was:
WARNING: Failed to process runner builds=0 error=failed to update executor: missing Machine options executor=docker+machine runner=pSUsX4yR
That's because on runner creation I selected the docker+machine executor rather than docker.
After amending /etc/gitlab-runner/config.toml to docker and running gitlab-runner restart followed by gitlab-runner verify, the pipeline started running again.
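A minimal sketch of that fix, assuming the default config path:

# in /etc/gitlab-runner/config.toml, change:
#   executor = "docker+machine"
# to:
#   executor = "docker"
sudo gitlab-runner restart
sudo gitlab-runner verify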
I had a similar problem with my (shell) runners on Linux. It would work fine on runners installed and registered on one of my computers but not another (even though the tags matched correctly in the runner and the job).
After gitlab-runner register I would get:
New runner. Has not connected yet
After gitlab-runner verify that error would go away, but I would still get This job is in pending state and is waiting to be picked by a runner.
After gitlab-runner restart it would all work:
gitlab-runner status
gitlab-runner: Service is running!
Maybe you have tagged your runner but your job has no tags. Refer to: how to run untagged jobs
https://stackoverflow.com/a/53371027/10570524
The tags section in your .gitlab-ci.yml file specifies this job has to be picked by a runner that has the same tags (reference).
tags:
  - vue-dev-pub
So unless there is actually a runner available for your project that has the vue-dev-pub tag it will keep waiting for one to become available.
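If your runner is missing the tag, a sketch of registering one with it non-interactively (the URL and token are placeholders):

gitlab-runner register \
  --non-interactive \
  --url https://gitlab.com/ \
  --registration-token <token> \
  --executor shell \
  --tag-list vue-dev-pub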
First, remove the old systemd unit file:
rm /etc/systemd/system/gitlab-runner.service
Then install gitlab-runner as the gitlab-runner user:
gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner
Root installations fail.
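A sketch of the full sequence as that user, assuming a systemd host where the service was previously installed as root:

sudo gitlab-runner stop
sudo gitlab-runner uninstall
sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner
sudo gitlab-runner start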

GitLab-Runner "listen_address not defined" error

I'm running a Laravel API on my server, and I wanted to use gitlab-runner for CD. The first two runs were good, but then I started to see this problem: listen_address not defined, session endpoints disabled builds=0
I'm running a Linux server on shared web hosting, so I can access a terminal and have some privileges, but I can't do sudo things like installing a service. That's why I've been running gitlab-runner in user-mode.
Error info
Configuration loaded builds=0
listen_address not defined, metrics & debug endpoints disabled builds=0
[session_server].listen_address not defined, session endpoints disabled builds=0
.gitlab-runner/config.toml
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "CD API REST Sistema SIGO"
  url = "https://gitlab.com/"
  token = "blablabla"
  executor = "shell"
  listen_address = "my.server.ip.address:8043"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
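(Note: the warning refers to [session_server].listen_address, while the listen_address above sits inside [[runners]]. A minimal sketch of where it normally lives, assuming the default session-server port; the addresses are placeholders, and you would edit the existing [session_server] section rather than appending a second one:)

[session_server]
  listen_address = "0.0.0.0:8093"
  advertise_address = "my.server.ip.address:8093"
  session_timeout = 1800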
I have literally wasted 2 days on this subject. I followed the steps below to get the runners configured and jobs executing successfully.
I am using Mac OS X 10.13 and GitLab 12; however, people on other OSes can also check this out.
I stopped the runners and uninstalled them, then deleted all references and files related to gitlab-runner, including the executable itself.
I got the GitLab Runner executable paths from https://docs.gitlab.com/runner/configuration/advanced-configuration.html
I installed the runners again using the official GitLab documentation.
The runners then showed as online in the GitLab portal; however, jobs were not getting executed and simply showed as stuck. I tried to get information from the logs using
gitlab-runner --debug run
That is how I learned that listen_address was not defined. After a long while I found that simply enabling Run untagged jobs did the trick; the jobs started and completed successfully. I still see listen_address not defined in the debug output, so that message misled me.
Though it seems that the last step alone solved my problem, doing all the steps together did the trick.
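For completeness, a sketch of applying the same Run untagged jobs setting from the CLI at registration time (the URL and token are placeholders):

gitlab-runner register \
  --non-interactive \
  --url https://gitlab.com/ \
  --registration-token <token> \
  --executor shell \
  --run-untagged=true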
Alternatively to Avinash's solution, you can include the tags you created when registering the runner in your .gitlab-ci.yml file:
stages:
  - testing

testing:
  stage: testing
  script:
    - echo 'Hello world'
  tags:
    - my-tags

How do I run Linux tasks without Docker (on the underlying system)?

A task's image_resource property is marked as optional in the documentation, but GNU/Linux tasks fail without it.
Also, the docs for the type property of image_resource say:
Required. The type of the resource. Usually docker-image
But I couldn't find any information about other supported types.
How can I run tasks on the underlying system without any container technology, like in my Windows and macOS workers?
In Concourse, you really are not supposed to do anything outside of Docker; that is one of the main features. Concourse runs in Docker containers and starts new containers for each build. If you want to run one or more Linux commands in sh or bash in the container, you can try something like the task config below.
- task: linux
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: {repository: ubuntu, tag: '18.04'}
    run:
      dir: /<path-to-dir>
      path: sh
      user: root
      args:
      - -exc
      - |
        echo "Running in Linux!"
        ls
        scp <you#your-host-machine:file> .
        telnet <your-host-machine>
        <whatever>
        ...
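If you want to try a task config like this on its own, fly execute can run it against your Concourse target (a sketch; the target name ci and the filename task.yml are placeholders):

fly -t ci execute -c task.yml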

Selenium::WebDriver::Error with Firefox & Chrome

Problems
1. Browser = Firefox (Non Geckodriver, Selenium v2.53.4)
(Works on one linux thin client but not on another...)
$ bundle exec rake parallel:spec
Selenium::WebDriver::Error::WebDriverError:
unable to bind to locking port 7054 within 120 seconds
2. Browser = Firefox (Geckodriver v0.14.0, Selenium-webdriver v3.1.0)
$ bundle exec rake parallel:spec
Net::ReadTimeout:
Net::ReadTimeout
3. Browser = Chrome (Chromedriver v2.27, Selenium-webdriver v3.1.0)
$ bundle exec rake parallel:spec
Selenium::WebDriver::Error::NoSuchDriverError:
no such session
(Driver info: chromedriver=2.27.440175 ,platform=Linux 3.16.0-0.bpo.4-amd64 x86_64)
My Setup
Server with the following installed:
-Linux - Debian x86_64 Wheezy
-ruby 2.2.5p319 (2016-04-26 revision 54774)
-Firefox v46.0.3
-Chrome 56.0.2924.87 (64-bit)
-ChromeDriver 2.27.440175
-Xvfb (x11-xserver-utils v 7.7~3 through headless gem)
Gems
-Selenium v3.1.0 (was 2.53.4)
-parallel_tests v2.10.0
-capybara (2.7.1)
-rspec-activemodel-mocks (1.0.3)
-rspec-core (3.4.4)
-rspec-expectations (3.4.0)
-rspec-mocks (3.4.1)
-rspec-rails (3.4.2)
-rspec-support (3.4.1)
-headless (2.2.3) (Xvfb)
Multiple thin clients running their software off the mentioned server setup.
My computer is one of many...
Important: There is another computer that does NOT encounter the mentioned problem running the same software and same versions off the same server!
Things that are NOT the problem
It is not an incompatibility between my firefox browser version and Selenium.
Why not?
a) Firefox v46.0.3 and Selenium v2.53.4 are currently installed on our server, and another client of this server successfully runs parallel_tests using the mentioned versions of Firefox & Selenium.
b) Which Firefox version is compatible with Selenium 2.53.0?
There are no zombie processes (still-running firefox) causing firefox to lock port 7054.
This is specifically after each failure has occurred and prior to starting a new $ bundle exec rake parallel:spec run.
Why not?
refer to items 1 & 2 in 'Things I Have Tried'
This turned out not to be entirely the case: databases were not always properly killed (see Update 5).
However, the unkilled databases were an outcome of the problem, not its cause; refer to the solution section.
Side note
For those wishing to install the mentioned versions to get Selenium / Firefox working: installing a previous version doesn't fix most problems.
Things I have tried
Removed any processes still running
$ killall ruby; killall rspec; killall firefox
Result: Failed...
Discovered that completing step 1 is not enough to kill all zombie processes.
After logging out and into a different tty I discovered that there were still rspec, ruby and firefox processes running!
So I logged out of my user, logged into a new tty, and killed all zombie processes using:
$ kill -9 process_id
I then reran $ ps aux to ensure all processes were cleaned up.
Result: Failed...
Gained insight into the problem.
Ran $ lsof -i TCP:7054 to see what was holding that port.
Result: It was my firefox test; no surprise, no real insight gained.
Ensured parallel test databases were running correctly.
Dropped all databases, recreated databases, reloaded all schemas, reseeded (development), reprepared.
Result: Failed... I doubted this was the cause, but doing this certainly eliminated it.
Deleted the firefox cache, all persisted settings, everything, for a clean start.
Result: Failed...
Tried to eliminate any local environment variables obtained from the project.
Did this by copying the project directory from the working computer.
Then reran $ bundle exec rake parallel:spec.
Result: Failed...
Tried to eliminate all local environment variables (project and Linux).
Did this by creating a new Linux user.
Then switched into the new user:
$ su new_user -l
Copied over the minimum zsh items needed.
Then ran $ bundle exec rake parallel:spec
Result: Failed...
Ensured that /etc/hosts contained:
127.0.0.1 localhost
Result: Failed...
Running the tests in a single thread (not parallel).
$ rspec spec
Result: Successfully runs (does not hit the problem)
See Update 1
See Update 2
See Update 3
See Update 4
See Update 5
Partial Solution
See Update 6
Debugged Selenium & Parallel_tests gems
Result: Identified that the issue is NOT in Selenium
See Update 7
Result: Running tests in parallel worked. But why?
See Update 8
Result:
Discovered Selenium 3.1.0 changed the way files are automatically downloaded.
This caused tests to hang indefinitely whilst running parallel tests.
Which caused the databases to be held open.
Things I am going to try (Updates)
Run tests with chromedriver in chrome browser and see if it passes after the fix.
Update 1
I replaced firefox with chrome.
When I run a single test, the test successfully completes with chrome.
It did so with firefox as well.
However running $ bundle exec rake parallel:spec
Result: Failed...
Selenium::WebDriver::Error::NoSuchDriverError:
no such session
(Driver info: chromedriver=2.27.440175 ,platform=Linux 3.16.0-0.bpo.4-amd64 x86_64)
Update 2
I updated selenium-webdriver gem to the latest gem (was v2.53.4 now 3.2.2)
Result: Failed...
Selenium::WebDriver::Error::NoSuchDriverError:
no such session
(Driver info: chromedriver=2.27.440175 ,platform=Linux 3.16.0-0.bpo.4-amd64 x86_64)
Update 3
Located lock files for parallel test (~/.config/google-chrome).
Identified 3 persisting lock files; other users only had 1.
Deleted these and reran the tests.
Result: Failed...
Selenium::WebDriver::Error::NoSuchDriverError:
no such session
(Driver info: chromedriver=2.27.440175 ,platform=Linux 3.16.0-0.bpo.4-amd64 x86_64)
Update 4
Upgraded selenium-webdriver to v3.1.0 (latest stable)
Upgraded parallel_tests to v2.13.0 (latest)
Installed Geckodriver v0.14.0 (latest)
Then ran $ bundle exec rake parallel:spec
Result: Failed...
Failure/Error: visit "#/login"
Net::ReadTimeout:
Net::ReadTimeout
Update 5
Whilst in the firefox (Geckodriver v0.14.0, Selenium-webdriver v3.1.0) branch.
I only realised when I had to drop all my parallel_test databases that some were still open.
#ltsp:~/ap$ bundle exec rake parallel:drop[32]
Couldn't drop ap_test_andre32 : #<ActiveRecord::StatementInvalid: PG::ObjectInUse: ERROR: database "ap_test_andre32" is being accessed by other users
DETAIL: There are 3 other sessions using the database.
: DROP DATABASE IF EXISTS "ap_test_andre32">
Couldn't drop ap_test_andre25 : #<ActiveRecord::StatementInvalid: PG::ObjectInUse: ERROR: database "ap_test_andre25" is being accessed by other users
DETAIL: There are 3 other sessions using the database.
: DROP DATABASE IF EXISTS "ap_test_andre25">
When rake parallel:spec does not complete (hangs indefinitely), the process must be killed manually.
Doing so leaves databases locked to the parallel_tests processes that were using them at the time,
so they must be identified and cleaned up.
postgres 743 0.0 0.0 222364 33628 ? Ss 15:30 0:00 postgres: andre ap_test_andre32 [local] idle in transaction
andre 24581 0.0 0.0 7852 2028 pts/36 S+ 15:49 0:00 grep andre32
postgres 26822 0.0 0.0 220032 23400 ? Ss 15:35 0:00 postgres: andre ap_test_andre32 [local] ALTER TABLE waiting
postgres 29684 0.0 0.0 220032 24064 ? Ss 15:40 0:00 postgres: andre ap_test_andre32 [local] ALTER TABLE waiting
Update 5 Solution:
Search for the database processes and kill all of them:
ps aux | grep test_andre
andre#ltsp:~/ap$ sudo kill -9 743 26822 29684
I was then able to drop my databases:
bundle exec rake parallel:drop[32]
Update 6
Whilst in the firefox (Geckodriver v0.14.0, Selenium-webdriver v3.1.0) branch.
Cloned parallel_tests & Selenium projects locally.
Replaced my gems with a path to the locally cloned projects.
Debugged starting with the error stack trace.
Results
Updated to selenium 3.1.0 and loaded geckodriver (marionette).
I discovered that my firefox profile was not set up correctly with Capybara.
This broke my local single thread tests.
Fixed this.
Discovered that geckodriver is not to be used for FF<48.
Also discovered that the capybara, selenium 3+ & FF48+ combo is not yet ready for use.
Some vital functions are not working. (Right clicking, window resizing...)
Refer here for full details
After investigating parallel_tests, I was able to rule it out.
Continued to debug in the firefox test case.
Used the locking port error as my guide.
Ruled out Selenium as the cause of the error.
After debugging the stack trace, it was proving to be very likely that the error state was inherited.
This was just a strong hunch at the time.
It later proved to be correct...
So the summary here was that firefox had processes that were being locked,
and they were not being locked by Selenium.
Update 7
Whilst in the firefox (Selenium-webdriver v2.53.4) branch.
Went back to the new linux user that was created.
In light of Update 5, I cleaned up all running processes.
Dropped all databases.
$ bundle exec rake parallel:spec
Result: Parallel tests worked
But why?
The databases were not the cause of the issue.
There was something else.
Update 8
Whilst in the firefox (Geckodriver v0.14.0, Selenium-webdriver v3.1.0) branch.
Identified the reason why the tests were failing and hanging indefinitely.
This caused the issues described in Update 5 & 6.
It was caused by a change in the way Selenium accepts firefox profile settings.
I identified that the integration tests that were failing were the ones that launched a pdf download.
Previously, I had this automated so that the download modal would not appear.
Instead it would automatically download the file to a specified folder.
Updating to Selenium 3.1.0 broke this:
tests hung indefinitely and databases were held open.
The problems identified in the updates were not the root cause.
The root cause was that firefox/chrome browser ports were not closed and were held open.
After looking at htop, polkitd was seen to be taking up 16.5 GB of RAM!
This was caused by a memory leak in polkitd.
After checking the issues it was confirmed that the polkitd memory leak is a known issue.
The issue has been fixed, but only in later Debian releases, not for Wheezy.
After restarting polkitd and rerunning the tests in parallel, they worked!
This explains why the parallel test issues still occurred the first time I created a new Linux user with a clean profile: memory leaks are unpredictable.
It also explains why another computer did not run into the issue, and why the second time I created a new user the parallel tests worked!
Phew, that took a lot of effort!
polkitd was uninstalled as it was not needed for any printers or other software that we run.
Overall, if anyone else has the locking issue, it would be helpful to follow some of the process-detection steps I used, as some of the issues are common to all OSes.
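For anyone retracing that process detection, a condensed sketch of the commands used above (the test_andre pattern comes from my database names):

lsof -i TCP:7054                              # what is holding Firefox's locking port
ps aux | grep -E 'firefox|chrome|rspec|ruby'  # leftover browser and test processes
ps aux | grep test_andre                      # postgres sessions pinning test databases
htop                                          # watch for runaway daemons (polkitd here)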

Does 'docker run' modify image state?

I have a Dockerfile that uses an ubuntu base image and installs a bunch of dependencies with apt-get and dpkg. Then it copies some javascript files and runs a node app. The node app spawns a child process and executes xvfb-run selenium-standalone start.
If I build the docker image with --no-cache and run it using docker run -i -t <image id>, my app starts and connects to the selenium server immediately. If I kill the container using CTRL-C or docker stop <container id> and then run the exact same docker run command as above, my app starts as normal but cannot connect to the selenium server. If I leave it alone, five minutes later it will connect properly on its own. It behaves this way every time I run docker run until I do a clean image build.
Changing a node source file and rebuilding mostly from cache does not alter this behavior. I've repeated the process several times and it's always the same.
I can't figure out how the behavior can change from one docker run to the next, if the same image is used. Where is the shared state?
Log when working:
gulp run
22:42:31.541 INFO - Launching a standalone Selenium Server
Setting system property webdriver.chrome.driver to /usr/lib/node_modules/selenium-standalone/.selenium/chromedriver/2.16-x64-chromedriver
22:42:31.579 INFO - Java: Oracle Corporation 24.79-b02
22:42:31.579 INFO - OS: Linux 3.18.5-tinycore64 amd64
22:42:31.594 INFO - v2.46.0, with Core v2.46.0. Built from revision 87c69e2
22:42:31.676 INFO - Driver provider org.openqa.selenium.ie.InternetExplorerDriver registration is skipped:
registration capabilities Capabilities [{platform=WINDOWS, ensureCleanSession=true, browserName=internet explorer, version=}] does not match the current platform LINUX
22:42:31.676 INFO - Driver class not found: com.opera.core.systems.OperaDriver
22:42:31.677 INFO - Driver provider com.opera.core.systems.OperaDriver is not registered
[22:42:31] Using gulpfile /opt/app/gulpfile.js
[22:42:31] Starting 'run'...
[22:42:31] Finished 'run' after 1.29 ms
Started App.
22:42:31.764 INFO - RemoteWebDriver instances should connect to: http://127.0.0.1:4444/wd/hub
22:42:31.764 INFO - Selenium Server is up and running
Selenium started
2015-08-19T22:42:32.445Z Starting app on port: 8000
Logs when not working are exactly the same except missing the RemoteWebDriver, 'Selenium Server is up and running', and 'Selenium started.' lines.
Try removing the container instead of just stopping it:
docker stop <container id>
docker rm <container id>
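Alternatively, a sketch using --rm so Docker removes the container automatically when it exits and each run starts fresh:

docker run --rm -i -t <image id>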
