Puppet multi-package installation error: vim-enchanced

Any idea how to solve this error?
Error: Execution of '/bin/yum -d 0 -e 0 -y install vim-enchanced' returned 1: Error: Nothing to do
Error: /Stage[main]/Multi_package_node/Package[vim-enchanced]/ensure: change from 'purged' to 'present' failed: Execution of '/bin/yum -d 0 -e 0 -y install vim-enchanced' returned 1: Error: Nothing to do
class multi_package_node {
  $multi_package = [ 'vim-enhanced', 'unzip' ]
  package { $multi_package: ensure => 'installed' }
}
node 'nodeName' {
  include multi_package_node
}
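Note the mismatch: the yum error asks for vim-enchanced (with an extra "c"), while the class above lists vim-enhanced, so the catalog the agent actually applied most likely still contains the misspelled package name somewhere. A minimal sketch of the corrected class, assuming the typo is the only problem:

class multi_package_node {
  # 'vim-enhanced' is the name the CentOS/RHEL repos provide; 'vim-enchanced' does not exist.
  $multi_package = [ 'vim-enhanced', 'unzip' ]

  package { $multi_package:
    ensure => 'installed',
  }
}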

Related

execSync throws an error trying to run node

I am running some processes inside an EC2 instance.
To start them I issue an SSM command:
cd / && cd home/ec2-user && . .nvm/nvm.sh && cd ufo && npm run start
Inside the project, app.ts (started with ts-node app.ts) contains this method:
import { execSync } from 'node:child_process';
import { takeNextScheduledAudit } from './sqs-scheduler';
import { uploadResultsToBucket } from './s3-uploader';
import { AuditRunParams } from "./types";
import { sendAuditResults } from "./sendResults";
(async function conductor(): Promise<void> {
  const nextAuditRunParams = await takeNextScheduledAudit();
  if (!nextAuditRunParams) {
    execSync("sudo shutdown -h now");
  }
  const { targetUrl, requesterId, endpoint } = nextAuditRunParams as AuditRunParams;
  try {
    execSync(`npx user-flow --url=${targetUrl} --open=false`);
    const resultsUrl = await uploadResultsToBucket(targetUrl);
    await sendAuditResults(requesterId, endpoint, resultsUrl);
  } catch (error) {
    console.log(error);
  }
  await conductor();
})();
If I log in manually and run npm run start, the script works as intended, but if I run it via the SSM command I get this output:
> start
> ts-node app.ts
Error: Command failed: npx user-flow --url=https://deep-blue.io/ --open=false
at checkExecSyncError (node:child_process:841:11)
at execSync (node:child_process:912:15)
at conductor (/home/ec2-user/ufo/app.ts:15:17)
at processTicksAndRejections (node:internal/process/task_queues:96:5) {
status: 243,
signal: null,
output: [ null, <Buffer >, <Buffer 0a> ],
pid: 2691,
stdout: <Buffer >,
stderr: <Buffer 0a>
}
and this error:
Error: Command failed: sudo shutdown -h now
at checkExecSyncError (node:child_process:841:11)
at execSync (node:child_process:912:15)
at conductor (/home/ec2-user/ufo/app.ts:10:17)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async conductor (/home/ec2-user/ufo/app.ts:21:5) {
status: null,
signal: 'SIGTERM',
output: [ null, Buffer(0) [Uint8Array] [], Buffer(0) [Uint8Array] [] ],
pid: 2705,
stdout: Buffer(0) [Uint8Array] [],
stderr: Buffer(0) [Uint8Array] []
}
failed to run commands: exit status 1
Moreover, if I run execSync("node -v && npx -v") it also throws an error.
Why does the script work when I am logged in, but fail to find node (launched from inside node) when I run it via an SSM command?
--- Edit - Added Info ---
When running execSync("node -v && npx -v", { shell: '/bin/bash' }) I get an error:
Error: Command failed: node -v && npx -v
When running execSync("ps -p $$ && echo $SHELL", { shell: '/bin/bash' }):
PID TTY TIME CMD
7817 ? 00:00:00 bash
/bin/bash
And when I log in and run ps -p $$ && echo $SHELL I get:
PID TTY TIME CMD
6873 pts/0 00:00:00 bash
/bin/bash
By default, all of the child_process functions execute in the same environment as the process that launched them. I don't have an account handy to test with, but it's quite likely that SSM skips over a traditional shell and just executes certain runtimes directly.
You can use the exec options like this to set a particular shell in which to launch the process:
const output = execSync('echo "doing stuff"', {
  shell: '/bin/bash',
})
console.log('***** output:', output.toString())
This is assuming the OS you're using for the EC2 instance has bash available. Most Linux distributions do, but for what you're doing there /bin/sh is sufficient if not. To get a list of the available shells, you can run:
cat /etc/shells
## or possibly
sudo cat /etc/shells
EDIT: Since you say it works fine in a shell already, you have presumably already handled this, but user-flow would also have to be available. It's not a module from npmjs, so it would need to already be present on the box as either a local dependency or a private repo to which the EC2 instance has access.
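Related to the environment point above, a hedged sketch: the SSM command only gets node onto PATH by sourcing .nvm/nvm.sh, so the child processes spawned by execSync need that same PATH. One option is to pass an explicit env (the nvm directory and node version below are assumptions; adjust them to what is actually installed on the box):

import { execSync } from 'node:child_process';

// Assumption: nvm installed node under ~/.nvm/versions/node/<version>/bin for ec2-user.
const nvmBin = '/home/ec2-user/.nvm/versions/node/v16.14.0/bin';

const output = execSync('node -v && npx -v', {
  shell: '/bin/bash',
  // Prepend nvm's bin directory so `node` and `npx` resolve inside the child shell.
  env: { ...process.env, PATH: `${nvmBin}:${process.env.PATH ?? ''}` },
});
console.log(output.toString());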

Docker build fails, getting: failed to connect to proxy No route to host

I am trying to set up Prelude SIEM in a Docker container by following this repository:
https://hub.docker.com/r/2xyo/prelude-siem
When I try to build the container with:
docker build -t prelude .
I get a curl error when it tries to reach the proxy server:
sumaia@main-srv:~/core/docker-prelude-siem$ docker build -t prelude .
Sending build context to Docker daemon 157.2kB
Step 1/37 : FROM centos:centos7
---> eeb6ee3f44bd
Step 2/37 : MAINTAINER 2xyo
---> Using cache
---> 067affd319d1
Step 3/37 : ARG PRELUDE_VERSION
---> Using cache
---> 52efc46cd210
Step 4/37 : ENV PRELUDE_VERSION ${PRELUDE_VERSION:-1.2.6}
---> Using cache
---> 34dfd4af7e35
Step 5/37 : RUN mkdir -p /opt/prelude-src
---> Using cache
---> 8ed92176b862
Step 6/37 : WORKDIR /opt/prelude-src
---> Using cache
---> 817dd04576b6
Step 7/37 : RUN curl https://www.prelude-siem.org/attachments/download/408/libpreludedb-${PRELUDE_VERSION}.tar.gz | tar xz && curl https://www.prelude-siem.org/attachments/download/409/prelude-correlator-${PRELUDE_VERSION}.tar.gz | tar xz && curl https://www.prelude-siem.org/attachments/download/410/libprelude-${PRELUDE_VERSION}.tar.gz | tar xz && curl https://www.prelude-siem.org/attachments/download/411/prelude-lml-${PRELUDE_VERSION}.tar.gz | tar xz && curl https://www.prelude-siem.org/attachments/download/412/prelude-lml-rules-${PRELUDE_VERSION}.tar.gz | tar xz && curl https://www.prelude-siem.org/attachments/download/413/prewikka-${PRELUDE_VERSION}.tar.gz | tar xz && curl https://www.prelude-siem.org/attachments/download/414/prelude-manager-${PRELUDE_VERSION}.tar.gz | tar xz
---> Running in 738d226a8f9f
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:03 --:--:-- 0curl: (7) Failed connect to <Proxy IP>:80; No route to host
gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error is not recoverable: exiting now
The command '/bin/sh -c curl https://www.prelude-siem.org/attachments/download/408/libpreludedb-${PRELUDE_VERSION}.tar.gz | tar xz && curl https://www.prelude-siem.org/attachments/download/409/prelude-correlator-${PRELUDE_VERSION}.tar.gz | tar xz && curl https://www.prelude-siem.org/attachments/download/410/libprelude-${PRELUDE_VERSION}.tar.gz | tar xz && curl https://www.prelude-siem.org/attachments/download/411/prelude-lml-${PRELUDE_VERSION}.tar.gz | tar xz && curl https://www.prelude-siem.org/attachments/download/412/prelude-lml-rules-${PRELUDE_VERSION}.tar.gz | tar xz && curl https://www.prelude-siem.org/attachments/download/413/prewikka-${PRELUDE_VERSION}.tar.gz | tar xz && curl https://www.prelude-siem.org/attachments/download/414/prelude-manager-${PRELUDE_VERSION}.tar.gz | tar xz' returned a non-zero code: 2
sumaia@main-srv:~/core/docker-prelude-siem$
My network is behind a proxy. It has been working on my machine without issues for some time, and Docker is able to pull images without issues. Here is my ~/.docker/config.json:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "c29tYXl5YWhtb2hkOnZDZ3Y1ampBTThuZ1NNOA=="
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/19.03.12 (linux)"
  },
  "proxies": {
    "default": {
      "httpProxy": "http://<proxy IP>:80",
      "httpsProxy": "http://<proxy IP>:80",
      "noProxy": "*.internaldomain.net"
    }
  }
}
Am I missing something?
The issue turned out to be an old network interface that had not been disabled; after disabling it, the build worked.
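For anyone debugging the same symptom: the proxies section of ~/.docker/config.json is what injects the proxy variables into build containers, and the same values can be overridden per build with Docker's predefined proxy build args. A hedged example (the proxy address is a placeholder, as above):

docker build \
  --build-arg http_proxy=http://<proxy IP>:80 \
  --build-arg https_proxy=http://<proxy IP>:80 \
  --build-arg no_proxy=*.internaldomain.net \
  -t prelude .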

AWS lambda function throwing error "constchildProcess is not defined"

I am using an AWS Lambda function with the code below:
'use strict';
constchildProcess= require("child_process");
constpath= require("path");
const backupDatabase = () => {
  const scriptFilePath =path.resolve(__dirname, "./backup.sh");
  return newPromise((resolve, reject) => {
    childProcess.execFile(scriptFilePath, (error) => {
      if (error) {
        console.error(error);
        resolve(false);
      }
      resolve(true);
    });
  });
};
module.exports.handler = async (event) => {
  const isBackupSuccessful = await backupDatabase();
  if (isBackupSuccessful) {
    return {
      status: "success",
      message: "Database backup completed successfully!"
    };
  }
  return {
    status: "failed",
    message: "Failed to backup the database! Check out the logs for more details"
  };
};
The code above runs inside the Docker container and tries to run the backup script below:
#!/bin/bash
#
# Author: Bruno Coimbra <bbcoimbra@gmail.com>
#
# Backups database located in DB_HOST, DB_PORT, DB_NAME
# and can be accessed using DB_USER. Password should be
# located in $HOME/.pgpass and this file should be
# chmod 0600[1].
#
# Target bucket should be set in BACKUP_BUCKET variable.
#
# AWS credentials should be available as needed by aws-cli[2].
#
# Dependencies:
#
# * pg_dump executable (can be found in postgresql-client-<version> package)
# * aws-cli (with python environment configured execute 'pip install awscli')
#
#
# References
# [1] - http://www.postgresql.org/docs/9.3/static/libpq-pgpass.html
# [2] - http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html
#
#
###############
### Variables
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
DB_HOST=
DB_PORT="5432"
DB_USER="postgres"
BACKUP_BUCKET=
###############
#
# **RISK ZONE** DON'T TOUCH below this line unless you know
# exactly what you are doing.
#
###############
set -e
export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
### Variables
S3_BACKUP_BUCKET=${BACKUP_BUCKET:-test-db-backup-bucket}
TEMPFILE_PREFIX="db-$DB_NAME-backup"
TEMPFILE="$(mktemp -t $TEMPFILE_PREFIX-XXXXXXXX)"
DATE="$(date +%Y-%m-%d)"
TIMESTAMP="$(date +%s)"
BACKUPFILE="backup-$DB_NAME-$TIMESTAMP.sql.gz"
LOGTAG="DB $DB_NAME Backup"
### Validations
if [[ ! -r "$HOME/.pgpass" ]]; then
logger -t "$LOGTAG" "$0: Can't find database credentials. $HOME/.pgpass file isn't readable. Aborted."
exit 1
fi
if ! which pg_dump > /dev/null; then
logger -t "$LOGTAG" "$0: Can't find 'pg_dump' executable. Aborted."
exit 1
fi
if ! which aws > /dev/null; then
logger -t "$LOGTAG" "$0: Can't find 'aws cli' executable. Aborted."
exit 1
fi
logger -t "$LOGTAG" "$0: remove any previous dirty backup file"
rm -f /tmp/$TEMPFILE_PREFIX*
### Generate dump and compress it
logger -t "$LOGTAG" "Dumping Database..."
pg_dump -O -x -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -w "$DB_NAME" > "$TEMPFILE"
logger -t "$LOGTAG" "Dumped."
logger -t "$LOGTAG" "Compressing file..."
nice gzip -9 "$TEMPFILE"
logger -t "$LOGTAG" "Compressed."
mv "$TEMPFILE.gz" "$BACKUPFILE"
### Upload it to S3 Bucket and cleanup
logger -t "$LOGTAG" "Uploading '$BACKUPFILE' to S3..."
aws s3 cp "$BACKUPFILE" "s3://$S3_BACKUP_BUCKET/$DATE/$BACKUPFILE"
logger -t "$LOGTAG" "Uploaded."
logger -t "$LOGTAG" "Clean-up..."
rm -f $TEMPFILE
rm -f $BACKUPFILE
rm -f /tmp/$TEMPFILE_PREFIX*
logger -t "$LOGTAG" "Finished."
if [ $? -eq 0 ]; then
echo "script passed"
exit 0
else
echo "script failed"
exit 1
fi
I created a Docker image with the above app.js and backup.sh using the Dockerfile below:
ARG FUNCTION_DIR="/function"
FROM node:14-buster
RUN apt-get update && \
apt install -y \
g++ \
make \
cmake \
autoconf \
libtool \
wget \
openssh-client \
gnupg2
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - && \
echo "deb http://apt.postgresql.org/pub/repos/apt/ buster-pgdg main" | tee /etc/apt/sources.list.d/pgdg.list && \
apt-get update && apt-get -y install postgresql-client-12
ARG FUNCTION_DIR
RUN mkdir -p ${FUNCTION_DIR} && chmod -R 755 ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
COPY package.json .
RUN npm install
COPY backup.sh .
RUN chmod +x backup.sh
COPY app.js .
ENTRYPOINT ["/usr/local/bin/npx", "aws-lambda-ric"]
CMD ["app.handler"]
I am running the container created from the image built with the above Dockerfile:
docker run -v ~/aws:/aws -it --rm -p 9000:8080 --entrypoint /aws/aws-lambda-rie backup-db:v1 /usr/local/bin/npx aws-lambda-ric app.handler
And I am hitting the container with the curl command below:
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
When I run the curl command I see the following error:
29 Nov 2021 10:57:30,838 [INFO] (rapid) extensionsDisabledByLayer(/opt/disable-extensions-jwigqn8j) -> stat /opt/disable-extensions-jwigqn8j: no such file or directory
29 Nov 2021 10:57:30,838 [WARNING] (rapid) Cannot list external agents error=open /opt/extensions: no such file or directory
START RequestId: 053246ef-4687-438d-aade-a6794b917b79 Version: $LATEST
2021-11-29T10:57:30.912Z undefined INFO Executing 'app.handler' in function directory '/function'
2021-11-29T10:57:30.919Z undefined ERROR constchildProcess is not defined
29 Nov 2021 10:57:30,926 [WARNING] (rapid) First fatal error stored in appctx: Runtime.ExitError
29 Nov 2021 10:57:30,927 [WARNING] (rapid) Process 53(npx) exited: Runtime exited with error: exit status 1
29 Nov 2021 10:57:30,927 [ERROR] (rapid) Init failed error=Runtime exited with error: exit status 1 InvokeID=
29 Nov 2021 10:57:30,927 [WARNING] (rapid) Reset initiated: ReserveFail
29 Nov 2021 10:57:30,927 [WARNING] (rapid) Cannot list external agents error=open /opt/extensions: no such file or directory
Could someone help me fix this error? My expected output is the message returned by the handler, but instead I see the errors above.
Thank you
Because they both do not exist. There is a typo in your first two lines:
constchildProcess= require("child_process");
constpath= require("path");
Should be:
const childProcess = require("child_process");
const path = require("path");

AWS lambda function throwing error "newPromise is not defined"

I am using an AWS Lambda function with the code below:
'use strict';
var newPromise = require('es6-promise').Promise;
const childProcess= require("child_process");
const path= require("path");
const backupDatabase = () => {
  const scriptFilePath =path.resolve(__dirname, "./backup.sh");
  return newPromise((resolve, reject) => {
    childProcess.execFile(scriptFilePath, (error) => {
      if (error) {
        console.error(error);
        resolve(false);
      }
      resolve(true);
    });
  });
};
module.exports.handler = async (event) => {
  const isBackupSuccessful = await backupDatabase();
  if (isBackupSuccessful) {
    return {
      status: "success",
      message: "Database backup completed successfully!"
    };
  }
  return {
    status: "failed",
    message: "Failed to backup the database! Check out the logs for more details"
  };
};
The code above runs inside the Docker container and tries to run the backup script below:
#!/bin/bash
#
# Author: Bruno Coimbra <bbcoimbra@gmail.com>
#
# Backups database located in DB_HOST, DB_PORT, DB_NAME
# and can be accessed using DB_USER. Password should be
# located in $HOME/.pgpass and this file should be
# chmod 0600[1].
#
# Target bucket should be set in BACKUP_BUCKET variable.
#
# AWS credentials should be available as needed by aws-cli[2].
#
# Dependencies:
#
# * pg_dump executable (can be found in postgresql-client-<version> package)
# * aws-cli (with python environment configured execute 'pip install awscli')
#
#
# References
# [1] - http://www.postgresql.org/docs/9.3/static/libpq-pgpass.html
# [2] - http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html
#
#
###############
### Variables
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
DB_HOST=
DB_PORT="5432"
DB_USER="postgres"
BACKUP_BUCKET=
###############
#
# **RISK ZONE** DON'T TOUCH below this line unless you know
# exactly what you are doing.
#
###############
set -e
export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
### Variables
S3_BACKUP_BUCKET=${BACKUP_BUCKET:-test-db-backup-bucket}
TEMPFILE_PREFIX="db-$DB_NAME-backup"
TEMPFILE="$(mktemp -t $TEMPFILE_PREFIX-XXXXXXXX)"
DATE="$(date +%Y-%m-%d)"
TIMESTAMP="$(date +%s)"
BACKUPFILE="backup-$DB_NAME-$TIMESTAMP.sql.gz"
LOGTAG="DB $DB_NAME Backup"
### Validations
if [[ ! -r "$HOME/.pgpass" ]]; then
logger -t "$LOGTAG" "$0: Can't find database credentials. $HOME/.pgpass file isn't readable. Aborted."
exit 1
fi
if ! which pg_dump > /dev/null; then
logger -t "$LOGTAG" "$0: Can't find 'pg_dump' executable. Aborted."
exit 1
fi
if ! which aws > /dev/null; then
logger -t "$LOGTAG" "$0: Can't find 'aws cli' executable. Aborted."
exit 1
fi
logger -t "$LOGTAG" "$0: remove any previous dirty backup file"
rm -f /tmp/$TEMPFILE_PREFIX*
### Generate dump and compress it
logger -t "$LOGTAG" "Dumping Database..."
pg_dump -O -x -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -w "$DB_NAME" > "$TEMPFILE"
logger -t "$LOGTAG" "Dumped."
logger -t "$LOGTAG" "Compressing file..."
nice gzip -9 "$TEMPFILE"
logger -t "$LOGTAG" "Compressed."
mv "$TEMPFILE.gz" "$BACKUPFILE"
### Upload it to S3 Bucket and cleanup
logger -t "$LOGTAG" "Uploading '$BACKUPFILE' to S3..."
aws s3 cp "$BACKUPFILE" "s3://$S3_BACKUP_BUCKET/$DATE/$BACKUPFILE"
logger -t "$LOGTAG" "Uploaded."
logger -t "$LOGTAG" "Clean-up..."
rm -f $TEMPFILE
rm -f $BACKUPFILE
rm -f /tmp/$TEMPFILE_PREFIX*
logger -t "$LOGTAG" "Finished."
if [ $? -eq 0 ]; then
echo "script passed"
exit 0
else
echo "script failed"
exit 1
fi
I created a Docker image with the above app.js and backup.sh using the Dockerfile below:
ARG FUNCTION_DIR="/function"
FROM node:14-buster
RUN apt-get update && \
apt install -y \
g++ \
make \
cmake \
autoconf \
libtool \
wget \
openssh-client \
gnupg2
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - && \
echo "deb http://apt.postgresql.org/pub/repos/apt/ buster-pgdg main" | tee /etc/apt/sources.list.d/pgdg.list && \
apt-get update && apt-get -y install postgresql-client-12
ARG FUNCTION_DIR
RUN mkdir -p ${FUNCTION_DIR} && chmod -R 755 ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
COPY package.json .
RUN npm install
COPY backup.sh .
RUN chmod +x backup.sh
COPY app.js .
ENTRYPOINT ["/usr/local/bin/npx", "aws-lambda-ric"]
CMD ["app.handler"]
I am running the container created from the image built with the above Dockerfile:
docker run -v ~/aws:/aws -it --rm -p 9000:8080 --entrypoint /aws/aws-lambda-rie backup-db:v1 /usr/local/bin/npx aws-lambda-ric app.handler
And I am hitting the container with the curl command below:
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
When I run the curl command I see the following error:
The error I see is: "newPromise is not defined","trace":["ReferenceError: newPromise is not defined"," at backupDatabase (/function/app.js:9:3)"
I tried adding var newPromise = require('es6-promise').Promise;, but that gave a new error: "Cannot set property 'scqfkjngu7o' of undefined".
Could someone help me fix this error? My expected output is the message returned by the handler, but instead I see the errors above.
Thank you
Node 14 supports promises natively. You should do:
return new Promise((resolve, reject) => {
  childProcess.execFile(scriptFilePath, (error) => {
    if (error) {
      console.error(error);
      resolve(false);
    }
    resolve(true);
  });
});
Note the space between new and Promise: Promise is the built-in constructor you are invoking with new. There is no need to import any module.
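As a side note, a hedged alternative sketch: instead of wrapping execFile by hand, Node's util.promisify can produce a promise-returning version directly (the catch below resolves to false to mirror the original behaviour):

const util = require("util");
const path = require("path");
const childProcess = require("child_process");

// execFile has a promisified form; it rejects when the script exits with a non-zero code.
const execFile = util.promisify(childProcess.execFile);

const backupDatabase = async () => {
  try {
    await execFile(path.resolve(__dirname, "./backup.sh"));
    return true;
  } catch (error) {
    console.error(error);
    return false;
  }
};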

Puppet agent considers old version of package installed using exec

I am trying to install autoconf version 2.69 by building it from source. After autoconf is installed, my intention is to build another package called crmsh from its source. I want to do this using Puppet.
I have written a few Puppet classes to do this. The class contents are below.
Download autoconf from source
class custom-autoconf {
  require custom-packages-1
  exec { "download_autoconf" :
    command  => "wget http://ftp.gnu.org/gnu/autoconf/autoconf-2.69.tar.gz ; \
                 tar xvfvz autoconf-2.69.tar.gz; ",
    path     => ["/bin","/usr/bin","/sbin","/usr/sbin"],
    cwd      => '/root',
    unless   => "test -e /root/autoconf-2.69.tar.gz",
    provider => shell,
  }
  notify { 'autoconf_download' :
    withpath => true,
    name     => "download_autoconf",
    message  => "Execution of autoconf download completed. "
  }
}
Build autoconf
class custom-autoconf::custom-autoconf-2 {
  require custom-autoconf
  exec { "install_autoconf" :
    command   => "sh configure ; \
                  make && make install ; \
                  sleep 5 ; \
                  autoconf --version",
    path      => ["/bin","/usr/bin","/sbin","/usr/sbin"],
    timeout   => 1800,
    logoutput => true,
    cwd       => '/root/autoconf-2.69',
    onlyif    => "test -d /root/autoconf-2.69",
    provider  => shell,
  }
  notify { 'autoconf_install' :
    withpath => true,
    name     => "install_autoconf",
    message  => "Execution of autoconf install completed. Requires custom-autoconf class completion "
  }
}
Download crmsh source
class custom-autoconf::custom-crmsh {
  require custom-autoconf::custom-autoconf-2
  exec { "clone_crmsh" :
    command  => "git clone https://github.com/crmsh/crmsh.git ; ",
    path     => ["/bin","/usr/bin","/sbin","/usr/sbin"],
    cwd      => '/root',
    unless   => "test -d /root/crmsh",
    provider => shell,
  }
  notify { 'crmsh_clone' :
    withpath => true,
    name     => "clone_crmsh",
    message  => "Execution of git clone https://github.com/crmsh/crmsh.git completed. Requires custom-autoconf-2 "
  }
}
Build crmsh
class custom-autoconf::custom-crmsh-1 {
  require custom-autoconf::custom-crmsh
  exec { "build_crmsh" :
    command  => "pwd ; \
                 autoconf --version ; \
                 sleep 5 ; \
                 autoconf --version ; \
                 sh autogen.sh ; \
                 sh configure ; \
                 make && make install ; ",
    path     => ["/bin","/usr/bin","/sbin","/usr/sbin"],
    require  => Class['custom-autoconf::custom-crmsh'],
    cwd      => '/root/crmsh',
    onlyif   => "test -d /root/crmsh",
    provider => shell,
  }
  notify { 'crmsh_build' :
    withpath => true,
    name     => "build_crmsh",
    message  => "Execution of crmsh build is complete. Depends on custom-crmsh"
  }
}
The problem is that the crmsh build fails, saying the autoconf version is 2.63:
Notice: /Stage[main]/Custom-autoconf::Custom-crmsh-1/Exec[build_crmsh]/returns: configure.ac:11: error: Autoconf version 2.69 or higher is required
When the Puppet run completes with this failure, I see that the autoconf version on the system is 2.69 (meaning the initial autoconf build was successful).
Could someone please tell me why Puppet sees autoconf 2.63 when the system has 2.69? Or am I missing something here?
It was actually my mistake. It turns out that an autoconf binary was present in both /usr/bin and /usr/local/bin. The custom autoconf build installs its binary into /usr/local/bin, and that directory was not listed in the "path =>" attribute, so Puppet was executing the older autoconf from /usr/bin. Adding /usr/local/bin to the path fixed the issue.
Thanks for the help.
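A minimal sketch of that fix, assuming nothing else in the class changes: list /usr/local/bin ahead of /usr/bin in the exec's path so the freshly built autoconf 2.69 is found first.

exec { "build_crmsh" :
  command  => "sh autogen.sh ; \
               sh configure ; \
               make && make install ; ",
  # /usr/local/bin comes first so the autoconf built from source shadows /usr/bin/autoconf (2.63).
  path     => ["/usr/local/bin","/bin","/usr/bin","/sbin","/usr/sbin"],
  require  => Class['custom-autoconf::custom-crmsh'],
  cwd      => '/root/crmsh',
  onlyif   => "test -d /root/crmsh",
  provider => shell,
}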
