Jenkins pipeline sh step fails - Linux

I'm learning Jenkins Pipelines and I'm trying to execute anything on a Linux build server, but I get errors about it being unable to create a folder.
Here is the pipeline code:
node('server') {
    stage("Build-Release-Linux64-${NODE_NAME}") {
        def ws = pwd()
        sh "ls -lha ${ws}"
    }
}
This is the error I get:
sh: 1: cannot create /opt/perforce/workspace/Dels-Testing-Area/MyStream-main#tmp/durable-07c26e68/pid; jsc=durable-8c9234a2eb6c2feded950bac03c8147a;JENKINS_SERVER_COOKIE=$jsc /opt/perforce/workspace/Dels-Testing-Area/MyStream-main#tmp/durable-07c26e68/script.sh: Directory nonexistent
I've checked the server while this is running, and I can see that it does create the file "/opt/perforce/workspace/Dels-Testing-Area/MyStream-main#tmp/durable-07c26e68/script.sh". The file is created by Jenkins, not by me, and contains the following:
#!/bin/sh -xe
It does not matter what I try to execute using the sh step; I get the same error.
Can anyone shed some light on why this is happening?
-= UPDATE =-
I'm currently using Jenkins 2.46.2 LTS and there are a number of updates available. I'm going to wait for a quiet period, perform a full update, and try this again in case it fixes anything.

I found out that the problem was caused by a single quote in my folder name. As soon as I removed the single quote, it ran perfectly. This links to the Jenkins issue [https://issues.jenkins-ci.org/browse/JENKINS-44341], where I added a comment and voted for a fix.
So the fix is to use only the characters [0-9a-zA-Z_-] (not including the square brackets) in folder and job names, and to avoid spaces.
I can confirm that using special characters and spaces in the "display name" field of a folder's configuration works fine.
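For illustration, a minimal sketch of why an unescaped single quote in a path breaks commands that interpolate it into a shell string (the folder name here is hypothetical, since the exact name does not appear in the log):

$ mkdir "Del's-Testing-Area"
$ sh -c "echo test > Del's-Testing-Area/out.txt"
sh: 1: Syntax error: Unterminated quoted string

Depending on where the quote lands in the wrapper command Jenkins generates for the sh step, the shell either aborts like this or reassembles a mangled path, which is consistent with the "Directory nonexistent" error above.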

Related

GitLab CI variables return empty string?

It's been two days since one of my project's builds started failing on GitLab CI. The main error was E_MISSING_APP_KEY, and when I checked other variables just by echoing $HOST and $PORT from my .gitlab-ci.yml config, like this:
tests:
  script:
    - echo "${HOST} ${PORT}"
    - node -e "console.log(process.env.HOST, process.env.PORT)"
    - node_modules/.bin/nyc node ace test -t 0
I got nothing.
The build failed because it couldn't read the environment variables I set in its CI settings.
Is anyone experiencing the same issue, and how can I solve it?
Update:
I tried creating a new project containing only a .gitlab-ci.yml file, and it seems to work just fine.
But why in the world is it still failing on my main project?
For anyone else having a similar problem: check your variable. If it is protected, your branch has to be protected as well, or you can remove the protected option on the variable.
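As a quick sanity check, a throwaway job along these lines (a sketch; the job name is arbitrary) shows whether the runner receives the variables at all:

debug-vars:
  script:
    - env | grep -E '^(HOST|PORT)=' || echo "HOST/PORT not injected - check the Protected flag against branch protection"

If the grep prints nothing on your branch but works on a protected branch, the Protected flag on the variables is the culprit.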
The issue was solved by deleting all of the variables I had and setting them back in the CI settings. The build pipeline now runs without any errors (except the actual tests still fail, lol).
Honestly, I'm still wondering why this happened, and hopefully no one else will experience the same kind of issue.

Gradle Copy Task causes file permission issue

Background
I have a JetBrains plugin that I am developing. Recently, I moved from Windows to an Ubuntu system. I am trying to set up everything correctly as before. Note: I am fairly new to Linux.
Issue
I am having an apparent file permission issue (as can be seen in the error section of this question) when I run the following Gradle script. Note: whenever I build the project, this Gradle script is automatically called. It also worked correctly on Windows.
If I comment out the copy {...} closure, then everything works correctly; I just need to manually copy over the required file.
tasks.create(name: "copyJar_v${project['version']}") {
    group GROUP_CHROMATERIAL
    def mostCurrentJarFile = "ChroMATERIAL-${project['version']}.jar"
    // comment this out and there are no errors, but then I need to do the copy manually
    copy {
        into '/' // Copy into project's root folder
        from 'build/libs', {
            include mostCurrentJarFile
            rename mostCurrentJarFile, 'ChroMATERIAL.jar'
        }
    }
}
Error
FAILURE: Build failed with an exception.
* Where:
Build file
'/home/ciscorucinski/IdeaProjects/ChroMATERIAL/ChroMATERIAL/build.gradle' line: 100
* What went wrong:
A problem occurred evaluating project ':ChroMATERIAL'.
> Could not copy file '/home/ciscorucinski/IdeaProjects/ChroMATERIAL/ChroMATERIAL/build/libs/ChroMATERIAL-2.5.1.jar' to '/ChroMATERIAL.jar'.
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
BUILD FAILED
Total time: 0.139 secs
/ChroMATERIAL.jar (Permission denied)
12:20:39 PM: Task execution finished 'copyJar_v2.5.1'.
What I tried
I have been using the UI to modify the file permissions of the Plugin Project Folder within IdeaProjects.
I gave everyone "Create and delete files" permissions and clicked "Change Permissions for Enclosing Files...".
Everyone has "Read and write" access to files and "Create and delete files" access to folders.
Clicked "Change".
Ran the Gradle script again ... same error message.
Thoughts
It appears that Linux is not letting Gradle modify these files. I can comment out the code and do everything myself, but I need to allow Gradle more control; I don't know how, though.
I notice that when I go back to "Change Permissions for Enclosing Files..." the shown permissions are not what I selected! Others have "Read-only" access to files and "Access files" access to folders. I don't know if this is common Ubuntu behavior, a bug, or something else.
It would be nice to know how to grant access that is as restrictive as possible while fixing this.
Well, it is true -- you cannot (and should not) copy files into the root folder on a Linux box: into '/' is the filesystem root, not the project root. You could if you ran the script as sudo, but that is a bad idea.
Edited
Since you want to copy to the root of the project, you can use ${projectDir} or ${rootDir}.
Also, you should be able to do this without the hassle of a closure -- and it makes your script cleaner, IMHO -- by using the built-in Copy task:
task copyClientLoc(type: Copy) {
    from "build/libs/"
    into "${rootDir}"
    include "ChroMATERIAL-${project['version']}.jar"
    fileMode = 0644
}
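If you also want the rename from your original snippet, the Copy task supports that as well; a minimal sketch (the task name is arbitrary):

task copyAndRenameJar(type: Copy) {
    from "build/libs"
    into "${rootDir}"
    include "ChroMATERIAL-${project['version']}.jar"
    // rename(sourceRegEx, replaceWith) matches the source file name as a regex
    rename "ChroMATERIAL-${project['version']}.jar", 'ChroMATERIAL.jar'
    fileMode = 0644
}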

Puppet - How to debug and test to see if your module is working properly

I wrote a simple module to install a package (BioPerl) on a Ubuntu VM. The whole init.pp file is here:
https://gist.github.com/anonymous/17b4c31bf7309aff14dfdcd378e44f40
The problem is it doesn't work, and it gives me no feedback to let me know why. There are three simple steps in the module; I checked, and it didn't do any of them. Here are the first two:
Step 1: Download an archive and save it to /usr/local/lib
exec { 'bioperl-download':
  command => "sudo /usr/bin/wget --no-check-certificate -O ${archive_path} ${package_uri}",
  require => Package['wget'],
}
Step 2: Extract the archive
exec { 'bioperl-extract':
  command => "sudo /usr/bin/tar zxvf ${archive_path} --directory ${install_path}; sudo rm ${archive_path}",
  require => Exec['bioperl-download'],
}
Pretty simple. But I have no idea where the problem is, because I can't see what it's doing. The provisioner is set to verbose mode, and here are the output lines for my module:
==> default: Notice: /Stage[main]/Bioperl/Exec[bioperl-download]/returns: executed successfully
==> default: Notice: /Stage[main]/Bioperl/Exec[bioperl-extract]/returns: executed successfully
==> default: Notice: /Stage[main]/Bioperl/Exec[bioperl-path]/returns: executed successfully
So all I know is that it executed these three steps successfully. It doesn't tell me anything about whether the steps did their job properly or not. I know that it didn't download the archive to /usr/local/lib, and that it didn't add an environment variable file to /usr/profile.d. Maybe the variables containing the directories are wrong. Maybe the variable containing the archive's download URI is wrong. How can I find these things out?
UPDATE:
It turns out the module does work. But to improve the module (since I want to upload it to forge.puppetlabs.com), I tried implementing the changes suggested by Matt. Here's the new code:
file { 'bioperl-download':
  path   => "${archive_path}",
  source => "http://cpan.metacpan.org/authors/id/C/CJ/CJFIELDS/${archive_name}",
  ensure => "present",
}

exec { 'bioperl-extract':
  command => "sudo /bin/tar zxvf ${archive_name}",
  cwd     => "${bioperl_target_dir}",
  require => File['bioperl-download'],
}
A problem: it gives me an error telling me that the source cannot be http://. I see in the docs that http:// files are indeed allowed as the source for the file resource. Maybe I'm using an older version of Puppet?
I want to try out the puppet-archive module, but I'm not sure how to set it as a required dependency. By that, I mean: how can I make sure it's installed first? Do I need my module to download it from GitHub and save it to the modules directory, or is there a way to let Puppet install it automatically? I added it as a dependency in the metadata.json file, but that doesn't install it. I know I can just have my module download the package, but I was wondering what the best practice for this is.
The initial problem you describe is acceptance testing. Verifying that the Puppet resources and code you wrote actually resulted in the desired end state is normally accomplished with Serverspec: http://serverspec.org/. For example, you can write a Puppet module to deploy an application, but you only know that Puppet did what you told it to, not that the application actually deployed successfully. Note that Serverspec is also what people generally use to solve this problem for Ansible and Chef.
You can write a Serverspec test similar to the following to help test your module's end state:
describe file('/usr/local/lib/bioperl.tar.gz') do
  it { expect(subject).to be_file }
end

describe file('/usr/profile.d/env_file') do
  it { expect(subject).to be_file }
  its(:content) { is_expected.to match(/env stuff/) }
end
However, your problem also seems to deal with debugging why your acceptance tests failed. For that, you need unit testing. This is normally solved with RSpec-Puppet: http://rspec-puppet.com/. I would show you how to write some tests for your situation, but I don't think you should be writing your Puppet module the way that you did, so it would render the unit tests irrelevant.
Instead, consider using a file resource with the source attribute and a HTTP URI to grab the tarball instead of an exec with wget: https://docs.puppet.com/puppet/latest/type.html#file-attribute-source. Also, you might want to consider using the Puppet archive module to assist you: https://forge.puppet.com/puppet/archive.
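For example, a sketch of the download-and-extract step using the archive module (the file name and version here are assumptions; substitute your ${archive_name} and paths):

include archive

archive { '/usr/local/lib/BioPerl-1.7.1.tar.gz':
  ensure       => present,
  source       => 'http://cpan.metacpan.org/authors/id/C/CJ/CJFIELDS/BioPerl-1.7.1.tar.gz',
  extract      => true,
  extract_path => '/usr/local/lib',
  cleanup      => true,
}

This replaces both exec resources: the archive type downloads the tarball, extracts it, and removes the archive afterwards, and it is idempotent where the execs are not.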
If you have questions on how to use these tools to provide unit and acceptance testing, or on how to refactor your module, then don't hesitate to write follow-up questions on Stack Overflow and we can help you.

Problems with Exec PeopleCode from PeopleSoft Application Engine

On a Unix server, I am running an Application Engine via the process scheduler.
In it, I am attempting to use the "zip" Unix command from within an Exec PeopleCode function.
However, I only get the error:
PS_Exec(P): Error executing batch command with reason: No such file or directory (2)
I have tried it several ways. The most logical approach, I thought, was to change directory back to the root, then change to the specified directory so that I could easily use the zip command, such as the following:
Exec("cd / && cd /opt/psfin/pt850/dat/PSFIN1/PYMNT && zip INVREND INVREND.XML");
1643 12.20.34 0.000048 72: Exec("cd /opt/psfin/pt850/dat/PSFIN1/PYMNT");
1644 12.20.34 0.001343 PS_Exec(P): Error executing batch command with reason: No such file or directory (2)
I've even tried the following, just to see if anything works from within an Exec:
Exec("ls");
Sure enough, it gave the same error.
Now, some of you may be wondering: does the account associated with the process scheduler actually have authority on this particular directory path on the server? Well, I was able to create the XML file given in the previous command with no problems.
I just cannot seem to modify it by issuing Unix commands through Exec.
I'm wondering if this is a rights-and-permissions issue on the Unix server with regard to the operator ID that the process scheduler runs under. However, given that it can create and write to a file there, I cannot understand why the Exec command would meet any resistance... just my gut shot in the dark.
Any help would be GREATLY appreciated!!!
Thanks,
Flynn
Not sure if you're still having an issue, but in your Exec code, adding the optional %FilePath_Absolute constant should help. When that constant is left off, PeopleSoft automatically prefixes all commands with <PS_HOME>. You'll have to specify absolute paths with this flag on, though. I've changed the command to something that should work:
Exec("zip /opt/psfin/pt850/dat/PSFIN1/PYMNT/INVREND /opt/psfin/pt850/dat/PSFIN1/PYMNT/INVREND.XML", %FilePath_Absolute);
The documentation at PeopleBooks is a little confusing sometimes, but it explains it fairly well in this case.
You can always store the absolute location in a variable and prefix that to your commands so you don't have to keep typing out /opt/psfin/pt850/dat/PSFIN1/PYMNT/.
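For example (a sketch; the variable name is arbitrary):

Local string &sDir;
&sDir = "/opt/psfin/pt850/dat/PSFIN1/PYMNT/";
Exec("zip " | &sDir | "INVREND " | &sDir | "INVREND.XML", %FilePath_Absolute);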

Add-AzureNodeWebRole

I'm trying to go over the tutorial/presentation at http://learnwindowsazure.sourceforge.net/, and I cannot reproduce the steps related to Node.
When I run the Add-AzureNodeWebRole command, I get: Add-AzureNodeWebRole : service root is invalid or empty
Also, when running the previous command, New-AzureService, there aren't any files created in the folder.
What exactly does this mean?
You need to run the cmdlet New-AzureServiceProject, not New-AzureService.
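A minimal sequence under that assumption (the service and role names are placeholders):

New-AzureServiceProject MyService
Add-AzureNodeWebRole MyWebRole

New-AzureServiceProject scaffolds the service root that Add-AzureNodeWebRole expects, which is why the "service root is invalid or empty" error goes away once it has run.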
