Jenkins build on temp "#" directory - Linux

I'm having an issue with Jenkins where my job keeps building under a temp directory, e.g. build_job_name##.
I tried deleting all of the folders that were created in the Workspace/, deleted all of the past builds under Jobs/, added a "delete workspace" step before and after the job runs, and even recreated the job from scratch.
The name of the job is important, so I need build_job_name to stop building as build_job_name##.
Does anyone know what needs to be done to stop it from creating these temp folders?


Can I save files directly to $(Build.ArtifactStagingDirectory) and publish them?

I keep saving a file to $(Build.ArtifactStagingDirectory), but when I check with Get-ChildItem, the directory is empty. Publishing also produces nothing, just an empty directory.
Is using the copy task the only way to save files to that directory?
You can use these variables only in a running build or release pipeline. Each build uses its own $(Build.ArtifactStagingDirectory), $(Build.BinariesDirectory) and $(Build.SourcesDirectory) on the build agent, and these folders are accessible to any cmd file or PowerShell script in the current build run.
The copy task is not the only way to save files to that directory; you can also use a Bash, cmd, or PowerShell script in the pipeline.
As for why your listing came back empty:
1. You need to select a folder instead of a file in Source Folder.
2. $(Build.ArtifactStagingDirectory) is purged before each new build, so the files only exist while the pipeline is running.
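As a minimal sketch of the script-step approach (assuming a Linux agent; the /tmp fallback and the output.txt file name are only illustrative), a script can write straight into the staging directory through the environment-variable form of the pipeline variable:

```shell
# $(Build.ArtifactStagingDirectory) surfaces to scripts as the environment
# variable BUILD_ARTIFACTSTAGINGDIRECTORY; the /tmp fallback only makes this
# sketch runnable outside a pipeline.
STAGING="${BUILD_ARTIFACTSTAGINGDIRECTORY:-/tmp/staging}"
mkdir -p "$STAGING"
echo "build output" > "$STAGING/output.txt"   # hypothetical file name
ls -l "$STAGING"                              # list it before the build ends
```

Anything you want to inspect after the run still has to be published as an artifact, since the directory is purged before the next build.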

dbt crontab copies old logs and doesn't run

My coworker set up a virtual machine that is running Linux and dbt. The dbt run is scheduled with crontab like this:
0 3 * * * . /home/user/copydbtrel_and_run.sh
The script is really simple itself:
cd
cd .dbt
cd folder1
dbt run --target dev
cd
cd .dbt
cd folder2
dbt run --target dev
The problem is that the cron scheduling fires as expected, but the script doesn't do what it's supposed to. I'm not sure whether the dbt run actually starts at all, but this definitely happens:
all files in .dbt/folder1/logs get deleted
old log files from a week ago are copied from somewhere into the log folder
the same (or similar) happens in .dbt/folder1/target; the files there refer to the same week-old run, as if nothing was run in between
the actual dbt job doesn't do its work: no database tables get loaded
If I run the script manually, it does what it's supposed to, i.e. it runs the job and appends results to the log files.
So, what's going on here? I haven't used Linux in a long time and dbt isn't familiar to me, so I don't know where to start debugging. Also, my coworker is on vacation, so he can't help.
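One common starting point for debugging this kind of works-interactively-but-not-from-cron behaviour is cron's minimal environment: PATH, HOME, and the working directory differ from an interactive shell, so dbt may not even be found. A small debugging wrapper, a sketch only (the log path is an assumption for illustration), can capture what cron actually sees:

```shell
#!/bin/sh
# Hypothetical debugging wrapper: log the environment cron provides before
# blaming dbt itself. The log path is only an assumption for illustration.
LOG=/tmp/cron_debug.log
{
  echo "=== $(date) ==="
  echo "PATH=$PATH"
  echo "HOME=$HOME"
  echo "cwd=$(pwd)"
  command -v dbt || echo "dbt not on PATH"
} >> "$LOG" 2>&1
```

If "dbt not on PATH" shows up in the log, exporting the full PATH (or calling dbt by absolute path) at the top of copydbtrel_and_run.sh is the usual fix; redirecting the real script's output the same way (appending `>> /home/user/cron.log 2>&1` to the crontab line) will surface any remaining errors.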

git process seems to be running in this repository after deleting index.lock

Another git process seems to be running in this repository, e.g.
an editor opened by 'git commit'. Please make sure all processes
are terminated then try again. If it still fails, a git process
may have crashed in this repository earlier:
remove the file manually to continue.
This error still appears after deleting the index.lock file.
The first test would be a simple reboot, to check whether the issue persists.
If you cannot reboot right away, try a tool like Process Explorer, which lets you search for any process keeping a handle to your repository folder 'rmc_backend' (or any file within that folder).
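On Linux the same idea can be sketched without Process Explorer. Assuming the repository lives at ~/rmc_backend (the path from the question; adjust as needed), this only removes the lock when no git process is actually alive:

```shell
#!/bin/sh
# Sketch: delete the stale lock only when no git process is running.
# The repository path is assumed from the question.
REPO="$HOME/rmc_backend"
LOCK="$REPO/.git/index.lock"
if pgrep -x git >/dev/null; then
  echo "a git process is still running; not touching $LOCK"
else
  rm -f "$LOCK"   # rm -f stays silent if the lock is already gone
  echo "removed stale lock (if it existed)"
fi
```

If the lock immediately reappears, some background tool (an IDE's git integration, a backup agent) is likely re-running git against the repository.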

Issue when using "Workspace Clean" in Azure devops

I've got privately hosted pipelines using Azure DevOps and want to be able to clean the directory before each deployment. I found that I could use workspace: clean all, which works and deletes all of the content in the directory; the issue is that the task immediately fails afterwards with this error:
##[error]The directory is not empty. :XXX
I don't understand why this error is happening: it clearly deletes the content but then immediately fails. Has anybody else encountered this?
When other processes are using the contents of the target folder, I run into a similar issue.
For example, when I set clean: all and keep the folder open in File Explorer, I get the error.
To solve this issue, you could try the following:
Check whether the target folder is being used by another process.
Restart the local machine to stop the background process.
Try to manually delete the folder D:\azagent\A5\_work\15\client_build and check whether that works.

Jenkins tfs plug-in and checkout source on remote node

First, I'm a Jenkins neophyte. I have made a free-style software project in Jenkins to perform my Linux build. The Jenkins server is running on Windows so there are slave nodes configured for doing this Linux build. The sources are kept in a TFS server.
I updated our TFS plugin to the latest version, 4.0.0. The plugin says it is no longer necessary for slave nodes to have the Team Explorer Everywhere package installed, as it now uses the Java API. However, when I kick off my build, I get this:
Started by user Andy Falanga (afalanga)
[EnvInject] - Loading node environment variables.
Building remotely on dmdevlnx64-01 (PY27-64 CENTOS6-64 LOG4CPLUS PY26-64) in workspace /home/builder/jenkins/workspace/Linux Autotools Build
Deleting project workspace... done
Querying for remote changeset at '$/Sources/Branches/Andy/AutotoolsMigration' as of 'D2015-10-05T18:26:27Z'...
Query result is: Changeset #4872 by 'WINNTDOM\afalanga' on '2015-09-25T23:36:24Z'.
Listing workspaces from http://ets-tfs:8080/tfs/SoftwareCollection...
... Long list of workspaces
Workspace Created by Team Build
Getting version 'C4872' to '/home/builder/jenkins/workspace/Linux Autotools Build'...
Finished getting version 'C4872'.
[Linux Autotools Build] $ /bin/bash /tmp/hudson7081873611439714406.sh
Bootstrapping autotools
/tmp/hudson7081873611439714406.sh: line 4: ./bootstrap: No such file or directory
Build step 'Execute shell' marked build as failure
Notifying upstream projects of job completion
Finished: FAILURE
I log into that system and look in the directory /home/builder/jenkins/workspace/Linux Autotools Build and sure enough, there's nothing there. My configuration is pretty simple.
I have discard old builds checked and a simple rotation (this is just me learning how to use it).
I have it set to "Restrict where the build is done" and a label which associates to the 3 slave nodes for doing this build.
All TFS credentials are input and correct.
No build triggers
A simple shell script for Build->Execute Shell which bootstraps the autotools and calls configure and then make.
What am I doing incorrectly?
I found the answer and am posting it here in case someone else runs into this; that seems better than simply deleting the question. The TFS plugin doesn't seem to like spaces in the project name. The name before, Linux Autotools Build, didn't work; the name now, LinuxAutotoolsBuild, does.
The errors provided by the Jenkins system didn't provide enough information for this to be apparent. After trying a few other things the thought occurred, "Perhaps the spaces are causing grief."
Hope this helps someone.
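While the root cause here was on the plugin side, workspace paths with spaces also bite plain shell build steps, and produce similarly confusing failures. A minimal illustration (using a temp directory rather than the real workspace path) of why unquoted expansion breaks:

```shell
#!/bin/sh
# Illustration only: a workspace path containing spaces, echoing the original
# job name "Linux Autotools Build".
BASE=$(mktemp -d)
WORKSPACE="$BASE/Linux Autotools Build"
mkdir -p "$WORKSPACE"

cd $WORKSPACE 2>/dev/null \
  || echo "unquoted: the path splits into three words and cd fails"
cd "$WORKSPACE" && echo "quoted: cd succeeds"
```

Renaming the job, as above, avoids the problem entirely; quoting "$WORKSPACE" (and every derived path) in shell steps is the general defence.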
