Problems with Exec PeopleCode from PeopleSoft Application Engine - Linux

On a Unix server, I am running an application engine via the process scheduler.
In it, I am attempting to run the Unix "zip" command from within an "Exec" PeopleCode function.
However, I only get the error:
PS_Exec(P): Error executing batch command with reason: No such file or directory (2)
I have tried it several ways. The most logical approach, I thought, was to change directory back to the root and then to the specified directory so that I could easily use the zip command, such as the following...
Exec("cd / && cd /opt/psfin/pt850/dat/PSFIN1/PYMNT && zip INVREND INVREND.XML");
1643 12.20.34 0.000048 72: Exec("cd /opt/psfin/pt850/dat/PSFIN1/PYMNT");
1644 12.20.34 0.001343 PS_Exec(P): Error executing batch command with reason: No such file or directory (2)
I've even tried the following... just to see if anything works from within an Exec:
Exec("ls");
Sure enough, it gave the same error.
Now, some of you may be wondering: does the account associated with the Process Scheduler actually have authority on this particular directory path on the server? Well, I was able to create the XML file given in the previous command with no problems.
I just cannot seem to modify it by issuing Unix commands through Exec.
I'm wondering if this is a rights-and-permissions issue on the Unix server with regard to the operator ID that the Process Scheduler runs under. However, given that it can create and write to a file there, I cannot understand why the Exec command would meet any resistance... just my shot in the dark...
Any help would be GREATLY appreciated!!!
Thanks,
Flynn

Not sure if you're still having an issue, but adding the optional %FilePath_Absolute constant to your Exec call should help. When that constant is left off, PS automatically prefixes all commands with <PS_HOME>. With this flag on, though, you'll have to specify absolute paths. I've changed the command to something that should work:
Exec("zip /opt/psfin/pt850/dat/PSFIN1/PYMNT/INVREND /opt/psfin/pt850/dat/PSFIN1/PYMNT/INVREND.XML", %FilePath_Absolute);
The PeopleBooks documentation is a little confusing sometimes, but it explains this case fairly well.
You can always store the absolute location in a variable and prefix it to your commands so you don't have to keep typing out /opt/psfin/pt850/dat/PSFIN1/PYMNT/.
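For example, a minimal sketch along those lines (the variable name is illustrative, not from the original answer):
Local string &dir = "/opt/psfin/pt850/dat/PSFIN1/PYMNT/";
Exec("zip " | &dir | "INVREND " | &dir | "INVREND.XML", %FilePath_Absolute);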


Gradle Copy Task causes file permission issue

Background
I have a JetBrains plugin that I am developing. Recently, I moved from Windows to an Ubuntu system. I am trying to set up everything correctly as before. Note: I am fairly new to Linux.
Issue
I am having an apparent file permission issue (as can be seen in the error section of this question) when I run the following Gradle script. Note: this Gradle script is automatically called whenever I build the project, and it worked correctly on Windows.
If I comment out the copy {...} closure then everything works correctly; I just have to copy the required file over manually.
tasks.create(name: "copyJar_v${project['version']}") {
    group GROUP_CHROMATERIAL
    def mostCurrentJarFile = "ChroMATERIAL-${project['version']}.jar"
    // comment this out and there are no errors, but I need to do this copy manually
    copy {
        into '/' // Copy into project's root folder
        from 'build/libs', {
            include mostCurrentJarFile
            rename mostCurrentJarFile, 'ChroMATERIAL.jar'
        }
    }
}
Error
FAILURE: Build failed with an exception.
* Where:
Build file
'/home/ciscorucinski/IdeaProjects/ChroMATERIAL/ChroMATERIAL/build.gradle' line: 100
* What went wrong:
A problem occurred evaluating project ':ChroMATERIAL'.
> Could not copy file '/home/ciscorucinski/IdeaProjects/ChroMATERIAL/ChroMATERIAL/build/libs/ChroMATERIAL-2.5.1.jar' to '/ChroMATERIAL.jar'.
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
BUILD FAILED
Total time: 0.139 secs
/ChroMATERIAL.jar (Permission denied)
12:20:39 PM: Task execution finished 'copyJar_v2.5.1'.
What I tried
I have been using the UI to modify the file permissions of the plugin project folder within IdeaProjects:
I gave everyone Create and delete files permission and chose Change Permissions for Enclosing Files...
Everyone now has Read and write access to files and Create and delete files access to folders.
Clicked Change.
Ran the Gradle script again... same error message.
Thoughts
It appears that Linux is not letting Gradle modify these files. I can comment out the code and do everything myself, but I need to allow Gradle this level of control; I don't know how, though.
I notice that when I go back to Change Permissions for Enclosing Files..., the shown permissions are not what I selected! Others has Read-only access to files and Access files access to folders. I don't know if this is normal Ubuntu behavior, a bug, or something else.
It would be nice to know how to fix this while keeping the access as restrictive as possible.
Well, it is true -- you cannot (and should not) copy files into the root folder of a Linux box. You could if you ran the script with sudo, but that is a bad idea.
Edited
Since you want to copy to the root of the project, you can use ${projectDir} or ${rootDir}.
Also, you should be able to do this without the hassle of a closure -- and it makes your script cleaner, IMHO -- by using the built-in Copy task:
task copyClientLoc(type: Copy) {
    from "build/libs/"
    into "${rootDir}"
    include "ChroMATERIAL-${project['version']}.jar"
    fileMode = 0644
}
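If you also want to keep the original rename behavior and have the copy run on every build, something like the following should work (a sketch; this wiring is an assumption, not part of the original answer):
copyClientLoc {
    // deliver the versioned jar under a stable name, as the original closure did
    rename { String name -> 'ChroMATERIAL.jar' }
}
// run the copy automatically after each build
build.finalizedBy copyClientLoc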

Jenkins pipeline sh step fails

I'm learning Jenkins Pipelines and I'm trying to execute anything on a Linux build server, but I get errors about Jenkins being unable to create a folder.
Here is the pipeline code:
node('server') {
    stage("Build-Release-Linux64-${NODE_NAME}") {
        def ws = pwd()
        sh "ls -lha ${ws}"
    }
}
This is the error I get:
sh: 1: cannot create /opt/perforce/workspace/Dels-Testing-Area/MyStream-main#tmp/durable-07c26e68/pid; jsc=durable-8c9234a2eb6c2feded950bac03c8147a;JENKINS_SERVER_COOKIE=$jsc /opt/perforce/workspace/Dels-Testing-Area/MyStream-main#tmp/durable-07c26e68/script.sh: Directory nonexistent
I've checked the server while this is running and I can see that it does create the file "/opt/perforce/workspace/Dels-Testing-Area/MyStream-main#tmp/durable-07c26e68/script.sh".
The file contains the following and is created by Jenkins, not by me:
#!/bin/sh -xe
It does not matter what I try to execute using the sh step; I get the same error.
Can anyone shed some light on why this is happening?
-= UPDATE =-
I'm currently using Jenkins 2.46.2 LTS and there are a number of updates available. I'm going to wait for a quiet period, perform a full update, and try this again in case it fixes anything.
I found out that the problem was a single quote in my folder name. As soon as I removed the single quote it ran perfectly. This relates to the Jenkins issue [https://issues.jenkins-ci.org/browse/JENKINS-44341], where I added a comment and voted for a fix.
So the fix is: use only the characters [0-9a-zA-Z_-] (excluding the square brackets) in folder and job names, and don't use spaces.
I can confirm that using special characters and spaces in the "display name" field of a folder's configuration works fine.
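As an illustrative sanity check (the workspace path is taken from the error above), something like this would list any workspace folders whose names contain characters outside that safe set:
ls /opt/perforce/workspace | grep -E '[^0-9a-zA-Z_-]'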

OpenMPI: ORTE was unable to reliably start one or more daemons

I've been at it for days but could not solve my problem.
I am running:
mpiexec -hostfile ~/machines -nolocal -pernode mkdir -p $dstpath
where $dstpath points to the current directory and "machines" is a file containing:
node01
node02
node03
node04
This is the error output:
Failed to parse XML input with the minimalistic parser. If it was not
generated by hwloc, try enabling full XML support with libxml2.
[node01:06177] [[6421,0],0] ORTE_ERROR_LOG: Error in file base/plm_base_launch_support.c at line 891
--------------------------------------------------------------------------
ORTE was unable to reliably start one or more daemons.
This usually is caused by:
* not finding the required libraries and/or binaries on
one or more nodes. Please check your PATH and LD_LIBRARY_PATH
settings, or configure OMPI with --enable-orterun-prefix-by-default
* lack of authority to execute on one or more specified nodes.
Please verify your allocation and authorities.
* the inability to write startup files into /tmp (--tmpdir/orte_tmpdir_base).
Please check with your sys admin to determine the correct location to use.
* compilation of the orted with dynamic libraries when static are required
(e.g., on Cray). Please check your configure cmd line and consider using
one of the contrib/platform definitions for your system type.
* an inability to create a connection back to mpirun due to a
lack of common network interfaces and/or no route found between
them. Please check network connectivity (including firewalls
and network routing requirements).
--------------------------------------------------------------------------
[node01:06177] 1 more process has sent help message help-errmgr-base.txt / failed-daemon-launch
[node01:06177] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
Failed to parse XML input with the minimalistic parser. If it was not
generated by hwloc, try enabling full XML support with libxml2.
[node01:06181] [[6417,0],0] ORTE_ERROR_LOG: Error in file base/plm_base_launch_support.c at line 891
I have 4 machines, node01 to node04. In order to log in to these 4 nodes, I have to first log in to node00. I am trying to run some distributed graph functions. The graph software is installed on node01 and is supposed to be synchronised to the other nodes using mpiexec.
What I've done:
Made sure all passwordless logins are set up; every machine can ssh to any other machine with no issues.
Have a hostfile in the home directory.
echo $PATH gives /home/myhome/bin:/home/myhome/.local/bin:/usr/include/openmpi:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
echo $LD_LIBRARY_PATH gives /usr/lib/openmpi/lib
This previously worked, but it suddenly started giving these errors. I got my administrator to set up fresh machines, but it still gave the same errors. I've tried doing it one node at a time, with the same result. I'm not very familiar with the command line, so please give me some suggestions. I've tried reinstalling OpenMPI both from source and via sudo apt-get install openmpi-bin. I'm on Ubuntu 16.04 LTS.
You should focus on fixing this first:
Failed to parse XML input with the minimalistic parser. If it was not
generated by hwloc, try enabling full XML support with libxml2.
[node01:06177] [[6421,0],0] ORTE_ERROR_LOG: Error in file base/plm_base_launch_support.c at line 891
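That hwloc XML complaint often points to mismatched hwloc/Open MPI builds between the nodes. As a hedged first diagnostic (node names taken from the question; lstopo-no-graphics ships with hwloc's utilities and may need installing), comparing versions across the nodes can rule that out:
for n in node01 node02 node03 node04; do
    echo "== $n =="
    ssh $n 'which mpiexec; mpiexec --version; lstopo-no-graphics --version'
done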

puppet: Could not back up <file>: Got passed new contents for sum

I have a question I'm hoping someone might have an answer to. Essentially what I'm doing is trying to ensure I'm always using a fixed, slightly older version of phpunit, which I've placed in my module's file resources.
The manifest:
file { "/usr/bin/phpunit":
    ensure => file,
    owner  => 'root',
    group  => 'root',
    mode   => 0755,
    source => "puppet:///modules/php/phpunit",
}
Preparation: I download the current ('wrong') version of phpunit and place it in /usr/bin.
So the first run puppet succeeds:
Notice: Compiled catalog for <hostname> in environment production in 3.06 seconds
Notice: /Stage[main]/Php/File[/usr/bin/phpunit]/content: content changed '{md5}9f61f732829f4f9e3d31e56613f1a93a' to '{md5}38789acbf53196e20e9b89e065cbed94'
Notice: /Stage[main]/Httpd/Service[httpd]: Triggered 'refresh' from 1 events
Notice: Finished catalog run in 15.86 seconds
Then I download the current (still 'wrong') version of phpunit and place it in /usr/bin again.
This time the puppet run fails.
Notice: Compiled catalog for <hostname> in environment production in 2.96 seconds
Error: Could not back up /usr/bin/phpunit: Got passed new contents for sum {md5}9f61f732829f4f9e3d31e56613f1a93a
Error: Could not back up /usr/bin/phpunit: Got passed new contents for sum {md5}9f61f732829f4f9e3d31e56613f1a93a
Error: /Stage[main]/Php/File[/usr/bin/phpunit]/content: change from {md5}9f61f732829f4f9e3d31e56613f1a93a to {md5}38789acbf53196e20e9b89e065cbed94 failed: Could not back up /usr/bin/phpunit: Got passed new contents for sum {md5}9f61f732829f4f9e3d31e56613f1a93a
What gives? If I delete the file ( /var/lib/puppet/clientbucket/9/f/6/1/f/7/3/2/9f61f732829f4f9e3d31e56613f1a93a/ ) from my filebucket it will work again... for the next run, but not the one after that.
What am I doing wrong?
I'd appreciate any input and thanks in advance.
I've been having this error as well. I solved it with a combination of two previous answers.
Firstly I had to delete /var/lib/puppet/clientbucket on the client node by running:
sudo rm -r /var/lib/puppet/clientbucket
Just doing this will only let it run once more.
Then I had to set backup => false to stop it recreating the file; missing out either step failed to solve it for me. The accepted answer is incorrect in saying there is "no solution other than upgrading".
I was able to fix the same problem by removing /var/lib/puppet/clientbucket on the client node.
This node has been running out of disk space, so puppet has probably incorrectly stored empty files there.
As a workaround, you can set backup => false in the file resource. This is a little unsafe, of course.
This has no solution other than to upgrade, since there's a bug in certain versions of puppet where files containing both UTF8 and binary characters are treated wrongly, resulting in an error message.
https://tickets.puppetlabs.com/browse/PUP-1038
The ridiculously overcomplicated workaround I used is to have a .tar file in the file resource which notifies an exec, which in turn untars it and places the actual executable in the correct directory, making sure the timestamp of the latter is newer than that of the former.
It's far from ideal but it works in cases like mine where upgrading puppet to the most current version isn't an attractive option.
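A rough sketch of that pattern (all paths and resource names here are hypothetical, not from the original answer):
# ship the tarball, then unpack it only when the tarball changes
file { '/usr/local/src/phpunit.tar':
    ensure => file,
    source => 'puppet:///modules/php/phpunit.tar',
    notify => Exec['untar-phpunit'],
}
exec { 'untar-phpunit':
    # extract the executable and bump its mtime past the tarball's
    command     => '/bin/tar -xf /usr/local/src/phpunit.tar -C /usr/bin phpunit && /bin/touch /usr/bin/phpunit',
    refreshonly => true,
}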

Runtime.exec() in Hadoop on Azure environment

This question is related to the Hadoop on Azure environment.
I am trying to use Runtime.exec() to execute a batch script in the reduce function. I could not get this running in the Hadoop on Azure environment, while it runs fine in Hadoop on Linux. I tested the Runtime.exec() code snippet in my desktop (Windows 7) environment and it runs fine there. I have made sure that I consume the output and error streams of the sub-process after Runtime.exec().
The batch script contains the below (a single command):
c:\hdfs\mapred\local\taskTracker\nabeel\jobcache\job_201207121317_0024\attempt_201207121317_0024_r_000001_0\work\tool.exe
-f c:\hdfs\mapred\local\taskTracker\nabeel\jobcache\job_201207121317_0024\work\11_task_201207121317_0024_r_000001.out
-i c:\hdfs\mapred\local\taskTracker\nabeel\jobcache\job_201207121317_0024\attempt_201207121317_0024_r_000001_0\work\input.txt
I distribute the tool.exe and input.txt files using the distributed cache, and it creates symlinks from the working directory; tool.exe and input.txt point to the actual files in the jobcache directory.
2012-07-16 04:31:51,613 INFO org.apache.hadoop.mapred.TaskRunner: Creating symlink: /hdfs/mapred/local/taskTracker/distcache/-978619214658189372_-1497645545_209290723/10.73.50.78tool.exe <- \hdfs\mapred\local\taskTracker\nabeel\jobcache\job_201207121317_0024\attempt_201207121317_0024_r_000001_0\work\tool.exe
2012-07-16 04:31:51,644 INFO org.apache.hadoop.mapred.TaskRunner: Creating symlink: /hdfs/mapred/local/taskTracker/distcache/-4944695173898834237_1545037473_2085004342/10.73.50.78input.txt <- \hdfs\mapred\local\taskTracker\nabeel\jobcache\job_201207121317_0024\attempt_201207121317_0024_r_000001_0\work\input.txt
The reducer gives the below error when it runs.
Command Execution Error: Cannot run program
"cmd /q /c c:\hdfs\mapred\local\taskTracker\nabeel\jobcache\job_201207121317_0024\work\11_task_201207121317_0024_r_0000011513543720767963399.bat":
CreateProcess error=2, The system cannot find the file specified
In another case, I tried running the same command but without using the absolute paths. The output stream from the sub-process is shown below:
c:\hdfs\mapred\local\taskTracker\nabeel\jobcache\job_201207121317_0022\attempt_201207121317_0022_r_000000_0\work>tool.exe -f /hdfs/mapred/local/taskTracker/nabeel/jobcache/job_201207121317_0022/work/1_task_201207121317_0022_r_000000.out
-i input.txt
I do not know how the job working directory paths and the distributed cache work in the Hadoop on Azure environment. Could you please let me know if I am missing something here, or if there is something I need to take care of while using Runtime.exec() in the Hadoop on Azure environment?
Thanks,
I am not familiar with Hadoop, but the error message seems obvious. It would be better if you checked whether the file exists:
c:\hdfs\mapred\local\taskTracker\nabeel\jobcache\job_201207121317_0024\work\11_task_201207121317_0024_r_0000011513543720767963399.bat
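A minimal sketch of that check, to be run before (or logged from) the Runtime.exec() call; the path is copied from the error message above:
import java.io.File;

public class CheckScript {
    public static void main(String[] args) {
        // verify the generated batch script actually exists at the expected path
        File bat = new File("c:\\hdfs\\mapred\\local\\taskTracker\\nabeel\\jobcache\\job_201207121317_0024\\work\\11_task_201207121317_0024_r_0000011513543720767963399.bat");
        System.out.println(bat.getAbsolutePath() + " exists: " + bat.exists());
    }
}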
Best Regards,
Ming Xu
