Perforce trigger won't run Ruby script - perforce

jenkins change-submit //... "ruby %quote%//HVS/Main/BuildScripts/notify_jenkins.rb%quote%"
So I have made the above p4 trigger in my triggers file, and I'm trying to run a build script that I wrote in Ruby, but when I try to submit a file, I get this error:
'jenkins' validation failed: ruby: No such file or directory -- //HVS/Main/BuildScripts/notify_jenkins.rb (LoadError)
Is there no way to make a p4 trigger run a file that's inside a stream? The documentation says you can do this, but when I try it, it says it can't find the file.

Per the doc:
https://www.perforce.com/perforce/r14.2/manuals/p4sag/chapter.scripting.html#basics.scripts.depot
the format you want is:
jenkins change-submit //... "ruby %//HVS/Main/BuildScripts/notify_jenkins.rb%"
Surrounding the path in %quote% characters means you're expecting the OS to interpret it as a local filesystem path; the unquoted %//depot/path% form tells the server to supply the script from the depot instead.
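For contrast, here are the two trigger entries side by side (both taken from above); per the linked chapter, only the depot-path form makes the server hand ruby a real copy of the script:
# fails: the OS looks for //HVS/Main/BuildScripts/notify_jenkins.rb on the local filesystem
jenkins change-submit //... "ruby %quote%//HVS/Main/BuildScripts/notify_jenkins.rb%quote%"
# works: the server resolves the depot path and passes a local copy of the script to ruby
jenkins change-submit //... "ruby %//HVS/Main/BuildScripts/notify_jenkins.rb%"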

Related

Execute shell script for database backup

I have a ReactJS-Neo4j application deployed on a cloud server. Currently, I create backups of my databases manually.
Now I want to automate this process and execute the above query automatically every day.
Can anyone tell me how to automate this?
You need to change your Neo4j configuration file, found in <HOME_neo4j>/conf/neo4j.conf, as below. The location of the file is different if you are not on a Linux server such as Debian.
apoc.export.file.enabled=true
apoc.import.file.use_neo4j_config=false
The second line lets you save the JSON file to any folder you want instead of the default "import" folder.
Then open a terminal (or SSH session) connected to your cloud server, go to the <HOME_neo4j> directory where cypher-shell is installed, and run the one-liner below.
echo "CALL apoc.export.json.all(\"/home/backups/deploymentName/backup_mydeployment.json\", { useTypes: true } )" | bin/cypher-shell -u neo4j -p <awesome_psw> --format plain
This will save the JSON file in /home/backups/deploymentName, just like doing the export from the Neo4j Browser.
I will leave it to you to 1) add the timestamp YYMMDD0000_ to the filename via a Linux command and 2) schedule the job every midnight via crontab. Good luck!
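As a minimal sketch of those two steps (assuming the same paths and placeholders as above; the backup.sh name and location are just an illustration), the export can be wrapped in a small script and scheduled with cron:
#!/bin/bash
# backup.sh - run from the <HOME_neo4j> directory; prefixes the file name with a YYMMDD0000 timestamp
STAMP=$(date +%y%m%d0000)
echo "CALL apoc.export.json.all(\"/home/backups/deploymentName/${STAMP}_backup_mydeployment.json\", { useTypes: true } )" \
  | bin/cypher-shell -u neo4j -p <awesome_psw> --format plain
Then schedule it from crontab -e so it runs every midnight, for example:
0 0 * * * cd <HOME_neo4j> && /home/backups/backup.sh >> /home/backups/backup.log 2>&1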

Gradle Copy Task causes file permission issue

Background
I have a JetBrains plugin that I am developing. Recently, I moved from a Windows to an Ubuntu system. I am trying to set up everything correctly as before. Note: I am fairly new to Linux.
Issue
I am having an apparent file permission issue (as can be seen in the error section of this question) when I run the following Gradle script. Note: Whenever I build the project, this Gradle script is automatically called. It also worked correctly on Windows.
If I comment out the copy {...} closure, everything works correctly; I just have to copy the required file over manually.
tasks.create(name: "copyJar_v${project['version']}") {
    group GROUP_CHROMATERIAL
    def mostCurrentJarFile = "ChroMATERIAL-${project['version']}.jar"
    // comment this out and there are no errors, but I need to do this copy manually
    copy {
        into '/' // Copy into project's root folder
        from 'build/libs', {
            include mostCurrentJarFile
            rename mostCurrentJarFile, 'ChroMATERIAL.jar'
        }
    }
}
Error
FAILURE: Build failed with an exception.
* Where:
Build file
'/home/ciscorucinski/IdeaProjects/ChroMATERIAL/ChroMATERIAL/build.gradle' line: 100
* What went wrong:
A problem occurred evaluating project ':ChroMATERIAL'.
> Could not copy file '/home/ciscorucinski/IdeaProjects/ChroMATERIAL/ChroMATERIAL/build/libs/ChroMATERIAL-2.5.1.jar' to '/ChroMATERIAL.jar'.
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
BUILD FAILED
Total time: 0.139 secs
/ChroMATERIAL.jar (Permission denied)
12:20:39 PM: Task execution finished 'copyJar_v2.5.1'.
What I tried
I have been using the UI to modify the file permissions of the plugin project folder within IdeaProjects.
I gave everyone Create and delete files permission and clicked Change Permissions for Enclosing Files...
Everyone has Read and write access to files and Create and delete files access to folders.
Clicked Change.
Ran the Gradle script again ... same error message.
Thoughts
It appears that Linux is not letting Gradle modify these files. I can comment out the code and do everything myself, but I need to give Gradle more control. I don't know how, though.
I notice that when I go back to Change Permissions for Enclosing Files..., the shown permissions are not what I selected! Others now shows Read-only access to files and Access files for folders. I don't know if this is common Ubuntu behavior, a bug, or something else.
It would be nice to know how to grant the most restrictive access possible while still fixing this.
Well, it is true: you cannot (and should not) copy files into the root folder of a Linux box. You could if you ran the script with sudo, but that is a bad idea.
Edited
Since you want to copy to the root of the project, you can use ${projectDir} or ${rootDir}.
Also, you should be able to do this without the hassle of a copy {} closure (and it makes your script cleaner, IMHO) by using the built-in Copy task type:
task copyClientLoc(type: Copy) {
    from "build/libs/"
    into "${rootDir}"
    include "ChroMATERIAL-${project['version']}.jar"
    fileMode = 0644
}

exec sh from PySpark

I'm trying to run a .sh file from a .py file in a PySpark job, but I always get a message saying that the .sh file is not found.
This is my code:
test.py:
import os,sys
os.system("sh ./check.sh")
and my gcloud command:
gcloud beta dataproc jobs submit pyspark --cluster mserver file:///home/myuser/test.py
The test.py file loads fine, but the system can't find the check.sh file.
I figure it's something related to the file's path, but I'm not sure.
I also tried os.system("sh home/myuser/check.sh") with the same result.
I think this should be easy to do, so... any ideas?
The "current working directory" used by Dataproc jobs submitted through the API is a temporary directory with a unique name for each job; if the file wasn't uploaded with the job itself, you'll have to access it using your absolute path.
If you indeed added the check.sh file manually to /home/myuser/check.sh, then you should be able to call it using the fully qualified path, os.system("sh /home/myuser/check.sh"); make sure to start your absolute path with a /.
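A minimal sketch of that approach (assuming you can SSH into the cluster; the master node name mserver-m is only an illustration of the usual <cluster>-m convention): stage check.sh at a fixed absolute path on the master node, which is where the driver runs, then submit as before and call it from test.py with os.system("sh /home/myuser/check.sh").
# copy the helper script onto the master node (assumed name mserver-m) and make it executable
gcloud compute scp ./check.sh myuser@mserver-m:/home/myuser/check.sh
gcloud compute ssh myuser@mserver-m --command "chmod +x /home/myuser/check.sh"
# submit the job; test.py now references the script by its absolute path
gcloud beta dataproc jobs submit pyspark --cluster mserver file:///home/myuser/test.py
Alternatively, gcloud's --files flag can ship auxiliary files alongside the job if you would rather keep a relative path, but the absolute-path route is the one described above.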

Problems with Exec PeopleCode from PeopleSoft Application Engine

On a Unix server, I am running an Application Engine program via the Process Scheduler.
In it, I am attempting to run the Unix "zip" command from within an "Exec" PeopleCode function.
However, I only get the error
PS_Exec(P): Error executing batch command with reason: No such file or directory (2)
I have tried it several ways. The most logical approach, I thought, was to change directory back to the root, then change to the specified directory so that I could easily use the zip command, as in the following...
Exec("cd / && cd /opt/psfin/pt850/dat/PSFIN1/PYMNT && zip INVREND INVREND.XML");
1643 12.20.34 0.000048 72: Exec("cd /opt/psfin/pt850/dat/PSFIN1/PYMNT");
1644 12.20.34 0.001343 PS_Exec(P): Error executing batch command with reason: No such file or directory (2)
I've even tried the following, just to see if anything works from within an Exec...
Exec("ls");
Sure enough, it gave the same error.
Now, some of you may be wondering: does the account associated with the Process Scheduler actually have authority on this particular directory path on the server? Well, I was able to create the XML file referenced in the previous command with no problems.
I just cannot seem to modify it by issuing Unix commands through Exec.
I'm wondering if this is a rights-and-permissions issue on the Unix server with regard to the operator ID the Process Scheduler runs under. However, given that it can create and write a file there, I cannot understand why the Exec command would meet any resistance... just my shot in the dark...
Any help would be GREATLY appreciated!!!
Thanks,
Flynn
Not sure if you're still having an issue, but in your Exec code, adding the optional %FilePath_Absolute constant should help. When that constant is left off, PeopleSoft automatically prefixes all commands with <PS_HOME>. You'll have to specify absolute paths with this flag on, though. I've changed the command to something that should work.
Exec("zip /opt/psfin/pt850/dat/PSFIN1/PYMNT/INVREND /opt/psfin/pt850/dat/PSFIN1/PYMNT/INVREND.XML", %FilePath_Absolute);
The documentation at PeopleBooks is a little confusing sometimes, but it explains it fairly well in this case.
You can always store the absolute location in a variable and prefix that to your commands so you don't have to keep typing out /opt/psfin/pt850/dat/PSFIN1/PYMNT/.

TortoiseProc CruiseControl.NET: Unable to execute file problem

I am new to CruiseControl and automated builds. My problem is that the ccnet service always prompts me with "unable to execute file TortoiseProc.exe /command ...". My config file looks like this:
TortoiseProc.exe /command:update /path:C:\Work\global.ad.lib.objectmanagement /closeonend:1
This command (TortoiseProc...) works well in a CMD window. The CCNet service is executed with an admin account. "C:\Program Files\TortoiseSVN\bin" is in the environment variables, so TortoiseProc can be executed from anywhere. If I force a build from the Dashboard, it builds perfectly. I have the feeling this is just a simple, stupid thing...
Thanks
You will need to specify the TortoiseProc.exe parameters separately from the executable name, inside the "buildArgs" element. Here is the right ccnet.config fragment for your situation:
<exec>
  <description>Execute TortoiseProc.exe</description>
  <baseDirectory>c:\path\to\tortoiseproc\folder</baseDirectory>
  <executable>TortoiseProc.exe</executable>
  <buildArgs>/command:update /path:C:\Work\global.ad.lib.objectmanagement /closeonend:1</buildArgs>
</exec>
Also, you can create a cmd file with your commands and use exec without parameters, if that would be easier for you.
