I'm writing a Jenkins job using the Job DSL. It looks like this:
job(jobName) {
    description("This is my Jenkins job.")
    steps {
        // Executing some shell here.
    }
    scm {
        // Checking out some branch from Git.
    }
    triggers {
        bitbucketPush()
        scm ''
    }
}
It works fine, but for some reason executing my shell script fails with errors such as:
/usr/lib/git-core/git-pull: 83: /usr/lib/git-core/git-sh-setup: sed: not found
basename: write error: Broken pipe
/usr/lib/git-core/git-pull: 299: /usr/lib/git-core/git-sh-setup: uname: not found
etc.
As far as I understand, the issue is with the PATH variable. When I fix it in Jenkins from the UI (in the Configure section), it works fine (adding something like PATH=/usr/local/bin:/usr/bin).
As I'm creating a lot of jobs, it would be great to set this PATH during the creation process in my DSL scripts.
How can this be added to my DSL?
The problem is not related to Job DSL. Try to configure the job manually and fix all problems, then translate your configuration to Job DSL.
In this case there is something wrong with the environment on your build agent, e.g. Git is not installed properly.
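That said, if you do want to set PATH at job-creation time, Job DSL exposes an environmentVariables block (backed by the EnvInject plugin) that injects variables into every build. A minimal sketch, assuming that plugin is installed:

job(jobName) {
    // Injected into the build environment before any steps run (EnvInject plugin).
    environmentVariables {
        env('PATH', '/usr/local/bin:/usr/bin')
    }
    steps {
        // Executing some shell here.
    }
}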
I want to create a simple job using NodeJS, Github and Jenkins.
There is an exchange that runs at two server addresses,
for example, us.exchange.com and eu.exchange.com.
I created an environment variable named SERVERS_LOCATION, used in the code like this:
browser.get(`http://${process.env.SERVERS_LOCATION}.exchange.com`);
and a Jenkins parameter named SERVERS_LOCATION_JEN which can take two options, US and EU.
I also created a pipeline in Jenkins where I want to run a parameterized build by choosing one option or the other. For that I use a pipeline script in a Jenkinsfile that looks like this:
pipeline {
    agent any
    options {
        disableConcurrentBuilds()
    }
    stages {
        stage("install npm") {
            steps {
                bat "npm install"
                bat "npx webdriver-manager update --versions.chrome 76.0.3809.68"
            }
        }
        stage("executing job") {
            steps {
                bat "SERVERS_LOCATION=%SERVERS_LOCATION_JEN% npx protractor config/conf.js"
            }
        }
    }
}
The main idea is to take the chosen value from the Jenkins parameter SERVERS_LOCATION_JEN and put it into the environment variable SERVERS_LOCATION, which can be used in the code for further calls.
But when I run this job I get an error:
'SERVERS_LOCATION' is not recognized as an internal or external command,operable program or batch file.
P.S. Running that job from git-bash works fine (Win10, Chrome browser).
Could you please point out what I am doing wrong?
You have to use the set command to assign a value to a variable in batch, and the assignment and the command that uses it must run in the same bat step, so use the code below:
bat 'set "SERVERS_LOCATION=%SERVERS_LOCATION_JEN%" && npx protractor config/conf.js'
I want to display colored output in Jenkins which is produced by Node.js.
Both work separately, but not combined:
Node Script
My test script test.js:
console.log(require("chalk").red("Node Red"))
Calling the test script in the shell works:
node test.js => OK
Calling a colored shell script in Jenkins works:
echo -e "\033[31mShell Red\033[0m" => OK
But calling the Node script in Jenkins does not display any colors:
node test.js => No color when executed in Jenkins
For me it worked when putting
export FORCE_COLOR=1
at the top of my script.
See https://github.com/chalk/supports-color#info
Raphael's answer pointed me in the right direction. Here is my complete solution for a Jenkins pipeline script (Scripted Pipeline):
node {
    ansiColor('xterm') {
        withEnv(['FORCE_COLOR=3']) {
            ...
            sh "some-node-script-using-chalk.js"
            ...
        }
    }
}
If you are using a Declarative Pipeline, see https://jenkins.io/doc/pipeline/tour/environment/ for how to set environment variables in a Declarative Pipeline script.
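For reference, a minimal Declarative sketch of the same idea, assuming the AnsiColor plugin is installed (the stage name and script are placeholders):

pipeline {
    agent any
    environment {
        // Force chalk to emit color codes even without a TTY.
        FORCE_COLOR = '3'
    }
    stages {
        stage('test') {
            steps {
                // Render the ANSI codes in the Jenkins console log.
                ansiColor('xterm') {
                    sh 'node test.js'
                }
            }
        }
    }
}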
I just found the problem in my case:
In the job configuration, look at the Bindings.
Check the checkbox named "Color ANSI Console Output".
And it works (for me...)
The following is a simplified manifest I am running:
package { 'ruby2.4':
  ensure => installed,
}

exec { 'gem2.4_install_bundler':
  command => '/usr/bin/gem2.4 install bundler',
  require => Package['ruby2.4'],
}
puppet apply runs this manifest correctly, i.e. it:
installs the ruby2.4 package (which includes gem2.4)
installs bundler using gem2.4
However, puppet apply --noop FAILS because Puppet cannot find the executable /usr/bin/gem2.4, since ruby2.4 does not get installed under --noop.
My question is whether there is a standard way to test a scenario like this with puppet apply --noop, to validate that my Puppet manifest executes correctly.
It occurs to me that I may have to parse the output and validate the order of the executions. If this is the case, is there a standard way or tool for this?
A last resort is a very basic check that the Puppet run at least succeeds, which can be determined with the --detailed-exitcodes option (a code different from 1).
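For example (exit-code meanings as documented for puppet apply --detailed-exitcodes):

puppet apply --noop --detailed-exitcodes manifest.pp
echo $?
# 0: succeeded, no changes; 2: succeeded, changes would be applied
# 4: some resources failed; 6: both changes and failures; 1: the run itself failed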
Thank you in advance
rspec-puppet is the standard tool for that level of verification. It can build a catalog from the manifest (e.g. for a class, defined type, or host) and then you can write tests to verify the contents.
In your case you could verify that the package resource exists, that the exec resource exists, and verify the ordering between them. This would be just as effective as running the agent with --noop mode and parsing the output - but easier and cheaper to run.
rspec-puppet works best with modules, so assuming you follow the setup for your module from the website (adding rspec-puppet to your Gemfile, running rspec-puppet-init), and let's say this is in a class called ruby24, a simple spec in spec/classes/ruby24_spec.rb would be:
require 'spec_helper'

describe 'ruby24' do
  it { is_expected.to compile.with_all_deps }
  it { is_expected.to contain_package('ruby2.4').with_ensure('installed') }
  it { is_expected.to contain_exec('gem2.4_install_bundler').with_command('/usr/bin/gem2.4 install bundler') }
  it { is_expected.to contain_exec('gem2.4_install_bundler').that_requires('Package[ruby2.4]') }
end
I wrote a simple module to install a package (BioPerl) on a Ubuntu VM. The whole init.pp file is here:
https://gist.github.com/anonymous/17b4c31bf7309aff14dfdcd378e44f40
The problem is it doesn't work, and it gives me no feedback to let me know why. There are three simple steps in the module; I checked, and it didn't do any of them. Here are the first two:
Step 1: Download an archive and save it to /usr/local/lib
exec { 'bioperl-download':
  command => "sudo /usr/bin/wget --no-check-certificate -O ${archive_path} ${package_uri}",
  require => Package['wget'],
}
Step 2: Extract the archive
exec { 'bioperl-extract':
  command => "sudo /usr/bin/tar zxvf ${archive_path} --directory ${install_path}; sudo rm ${archive_path}",
  require => Exec['bioperl-download'],
}
Pretty simple. But I have no idea where the problem is because I can't see what it's doing. The provisioner is set to verbose mode, and here are the output lines for my module:
==> default: Notice: /Stage[main]/Bioperl/Exec[bioperl-download]/returns: executed successfully
==> default: Notice: /Stage[main]/Bioperl/Exec[bioperl-extract]/returns: executed successfully
==> default: Notice: /Stage[main]/Bioperl/Exec[bioperl-path]/returns: executed successfully
So all I know is that it executed these three steps successfully. It doesn't tell me anything about whether the steps actually did their job. I know that it didn't download the archive to /usr/local/lib, and that it didn't add an environment variable file to /usr/profile.d. Maybe the variables containing the directories are wrong. Maybe the variable containing the archive's download URI is wrong. How can I find these things out?
UPDATE:
It turns out the module does work. But to improve it (since I want to upload it to forge.puppetlabs.com), I tried implementing the changes suggested by Matt. Here's the new code:
file { 'bioperl-download':
  ensure => present,
  path   => $archive_path,
  source => "http://cpan.metacpan.org/authors/id/C/CJ/CJFIELDS/${archive_name}",
}

exec { 'bioperl-extract':
  command => "sudo /bin/tar zxvf ${archive_name}",
  cwd     => $bioperl_target_dir,
  require => File['bioperl-download'],
}
A problem: it gives me an error telling me that the source cannot be http://. I see in the docs that http:// sources are indeed allowed for the file resource. Maybe I'm using an older version of Puppet?
I want to try out the puppet-archive module, but I'm not sure how I can set it as a required dependency. By that I mean: how can I make sure it is installed first? Do I need to have my module download it from GitHub and save it to the modules directory, or is there a way to let Puppet install it automatically? I added it as a dependency in the metadata.json file, but that doesn't install it. I know I can just have my module download the package itself, but I was wondering what the best practice for this is.
The initial problem you describe is acceptance testing. Verifying that the Puppet resources and code you wrote actually resulted in the desired end state is normally accomplished with Serverspec: http://serverspec.org/. For example, you can write a Puppet module to deploy an application, but you only know that Puppet did what you told it to, not that the application actually deployed successfully. Note that Serverspec is also what people generally use to solve this problem for Ansible and Chef.
You can write a Serverspec test similar to the following to help test your module's end state:
describe file('/usr/local/lib/bioperl.tar.gz') do
  it { expect(subject).to be_file }
end

describe file('/usr/profile.d/env_file') do
  it { expect(subject).to be_file }
  its(:content) { is_expected.to match(/env stuff/) }
end
However, your problem also seems to deal with debugging why your acceptance tests failed. For that, you need unit testing. This is normally solved with RSpec-Puppet: http://rspec-puppet.com/. I would show you how to write some tests for your situation, but I don't think you should be writing your Puppet module the way that you did, so it would render the unit tests irrelevant.
Instead, consider using a file resource with the source attribute and an HTTP URI to grab the tarball instead of an exec with wget: https://docs.puppet.com/puppet/latest/type.html#file-attribute-source. Also, you might want to consider using the Puppet archive module to assist you: https://forge.puppet.com/puppet/archive.
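For example, a sketch of the download-and-extract steps using puppet/archive, reusing the variable names from your manifest (the creates path is a placeholder for whatever directory the tarball actually unpacks to):

archive { $archive_path:
  source       => "http://cpan.metacpan.org/authors/id/C/CJ/CJFIELDS/${archive_name}",
  extract      => true,
  extract_path => $bioperl_target_dir,
  creates      => "${bioperl_target_dir}/<extracted-dir>",  # placeholder: the unpacked directory
  cleanup      => true,
}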
If you have questions on how to use these tools to provide unit and acceptance testing, or on how to refactor your module, then don't hesitate to write follow-up questions on Stack Overflow and we can help you.
I have the following code in my script:
def ant_fs = new AntBuilder()
def fs = ant_fs.fileset(dir: <path>)
fs.each {
    println("Fileset item: $it")
}
When I launch it from Maven (mvn ... on the command line) or from IntelliJ IDEA, I see that the fileset object is initialized successfully (I see the correct file paths).
When I launch this code via Jenkins, the fs object is not created, but I do not see any exception in the output.
Could you please help me resolve the issue?
Thanks in advance!
Note: I am using the Surefire plugin for Maven 2.
Looks like this issue was caused by incorrect user settings for the Jenkins agent.
I set the user for the Jenkins service (on a Windows host) to Administrator and my script started to work. The problem occurred because I work with a shared folder on another host which requires authentication. I had set up authentication on that host for the Administrator account, but by default Jenkins runs tests as the System account.