Puppet task command does not return output

While executing the puppet task run --nodes ip command I get reports/logs for most of the servers, but for a single server I get "no output for this node". Changes are not reflected on that machine either, while the same command works for the rest of the machines. At the same time the job does not fail; it succeeds.
Note: I have a .sh file at module/tasks/init.sh which contains a puppet agent -t --tags command (I pass parameters) in it. Upon executing the puppet task run command, puppet agent -t is run on those machines.
What could be the cause, and is there anything preventing that .sh file from running?
Update:
tasks module - modulename/tasks/init.sh
#!/bin/bash
puppet agent -t --noop --tags $PT_class --environment=$PT_env
Command to be executed on the master:
puppet task run task_module_name class=modulename env=staging --query 'inventory { trusted.extensions.certname = "fqdn"}'
When I run this command it executes successfully, but the task is not executed on the agent machine.
Note: I face this only on a CIS-hardened AMI on AWS.
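For what it's worth, a task's parameters reach the script as PT_<name> environment variables, so the script can be exercised by hand on the misbehaving node to separate a script problem from a task-transport problem. A minimal sketch, assuming the parameter names and values from the question:
export PT_class=modulename
export PT_env=staging
bash init.sh   # a local copy of modulename/tasks/init.sh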

Related

GitHub CLI command "gh pr create" execution encountered "EOF" in Docker

I tried executing this command in the terminal (my environment is Ubuntu 22.04) and it worked well with some loading effects, but when I configure this command in my Docker container (created by my CI/CD pipeline executor) it does not work properly when the CI/CD pipeline runs. Below is my configuration and execution error:
- nsenter --mount=/host/proc/1/ns/mnt sh -c "cd /home/..USER../Desktop/..PROJECT..&&export https_proxy=..PROXY../&&gh pr create --title "test" --body "test" --base cicd_test"
The error appears in the CI/CD console logs, while the same command runs properly in the local terminal with some loading effects.
I hope to get it running properly in Docker, or to find some alternative command that may work.
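Two things are commonly behind an "EOF" from gh in a non-interactive container, offered here as guesses rather than a confirmed diagnosis: the nested double quotes around "test" terminate the outer sh -c string early, and gh falls back to interactive prompts (which hit EOF when there is no TTY) if it has no stored credentials. A sketch that avoids both, assuming a token can be provided to the pipeline:
export GH_TOKEN=<token available to the CI job>   # gh reads this instead of prompting for a login
cd /home/..USER../Desktop/..PROJECT..
export https_proxy=..PROXY../
gh pr create --title 'test' --body 'test' --base cicd_test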

Gitlab job failure: Cannot overwrite variable Host because it is read-only or constant

Hi people of the internet.
Basically I am unable to run even the simplest job and I keep getting the same error no matter what I put in the .gitlab-ci.yml file. See example below:
Here is the .gitlab-ci.yml file:
stages:
  - test

job1:
  stage: test
  tags:
    - testing
  script:
    - echo "Hello world!"
Here is the output ("?" corresponds to intentionally blacked out information):
Running with gitlab-runner 14.10.0 (c6bb62f6)
on runner_test ????????
Preparing the "shell" executor
00:00
Using Shell executor...
Preparing environment
00:00
Running on LAPTOP-????????...
Getting source from Git repository
00:01
WriteError:
Line |
219 | $HOST="[MASKED]"
| ~~~~~~~~~~~~~~~~~~~~~~
| Cannot overwrite variable Host because it is read-only or constant.
ERROR: Job failed: exit status 1
I know that $HOST is a reserved variable in PowerShell, but I don't see the link between the error and the code. It may have something to do with the configuration of the runner on Windows. Has anyone encountered this error on GitLab before? Or any suggestions on how to debug?
Here are the steps that I took to install the runner on Gitlab for Windows (see https://docs.gitlab.com/runner/install/windows.html):
Create a folder somewhere in the system: C:\GitLab-Runner.
Download the binary for 64-bit and put it into the folder (see https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-windows-amd64.exe).
Run prompt as an administrator
Run the following command:
cd C:\GitLab-Runner
gitlab-runner.exe register
Enter your GitLab instance URL (see Gitlab > Settings > CI/CD > Runners > Specific runners)
Enter the token to register the runner (see Gitlab > Settings > CI/CD > Runners > Specific runners)
Enter a description for the runner: runner_test for instance
Enter the tags associated with the runner, separated by commas: testing, windows for instance
Provide the runner executor: shell
Install GitLab Runner as a service and start it
cd C:\GitLab-Runner
gitlab-runner.exe install
gitlab-runner.exe start
I also had to install the latest version of pwsh in Windows (see gitlab-runner: prepare environment failed to start process pwsh in windows):
Run prompt as an administrator
Install the newer pwsh.exe:
winget install Microsoft.PowerShell
Restart the runner
cd C:\GitLab-Runner
gitlab-runner.exe restart
This issue was due to my choice of shell for some reason. A GitLab runner can use one of the following shells: bash, sh, powershell, pwsh, and cmd (the last one now being deprecated).
As I stated above, I had been using pwsh. So I edited the config.toml file inside the C:\GitLab-Runner directory to manually change the shell from pwsh to powershell.
...
[[runners]]
name = "runner_test"
executor = "shell"
shell = "powershell"
...
I then restarted the runner and got the job to complete properly:
cd C:\GitLab-Runner
gitlab-runner restart
I still get the error (more like a warning now) but it does not prevent the job from finishing anymore. If anyone has a better answer with a proper explanation I would gladly accept it as the answer to this question.
Note that pwsh and powershell are both PowerShell shells (see https://docs.gitlab.com/runner/shells/index.html):
powershell — Fully Supported. All commands are executed in PowerShell Desktop context. In GitLab Runner 12.0-13.12, this is the default when registering a new runner.
pwsh — Fully Supported. All commands are executed in PowerShell Core context. In GitLab Runner 14.0 and later, this is the default when registering a new runner.
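The read-only variable itself is easy to reproduce, which at least confirms the message comes from PowerShell rather than from GitLab; for example:
pwsh -Command '$Host = "x"'   # fails: Cannot overwrite variable Host because it is read-only or constant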

Databricks init scripts not working sometimes

OK, it is very strange. I have some init scripts that I would like to run when a cluster starts. The cluster has the init script, which is in a file in DBFS, basically this:
dbfs:/databricks/init-scripts/custom-cert.sh
Now, when I create the init script like this, it works (no SSL errors for my endpoints). Also, the event log for the cluster shows the duration as 1 second for the init script:
dbutils.fs.put("/databricks/init-scripts/custom-cert.sh", """#!/bin/bash
cp /dbfs/orgcertificates/orgcerts.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt" >> /databricks/spark/conf/spark-env.sh
""")
However, if I just put the init script in a bash script and upload it to DBFS through a pipeline, the init script does not do anything. It executes, as per the event log, but the execution duration is 0 seconds.
I have the sh script in a file named
custom-cert.sh
with the same contents as above, i.e.
#!/bin/bash
cp /dbfs/orgcertificates/orgcerts.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt"
but when I check /usr/local/share/ca-certificates/, it does not contain orgcerts.crt, even though the cluster init script has run.
Also, I have compared the contents of the init script in both cases and, at least to the naked eye, I can't spot any difference,
i.e.
%sh
cat /dbfs/databricks/init-scripts/custom-cert.sh
shows the same contents in both scenarios. What is the problem with the second case?
EDIT: I read a bit more about init scripts and found that the logs of init scripts are written here:
%sh
ls /databricks/init_scripts/
Looking at the err file in that location, it seems there is an error:
sudo: update-ca-certificates
: command not found
Why is update-ca-certificates found in the first case but not when I put the same script in a sh file and upload it to DBFS (instead of executing the dbutils.fs.put within a notebook)?
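One pattern worth checking here, offered as a guess rather than a confirmed diagnosis: when an error prints the command name and then ": command not found" on a separate line, the script often has Windows (CRLF) line endings, so the shell looks for update-ca-certificates followed by a carriage return, and that stray carriage return is what wraps the message. A file written with dbutils.fs.put gets plain LF endings, while a file committed on Windows and shipped through a pipeline may not. A quick check (the sed path is illustrative):
%sh
cat -A /dbfs/databricks/init-scripts/custom-cert.sh | head   # CRLF endings show up as ^M$
sed 's/\r$//' custom-cert.sh > custom-cert-lf.sh   # strip carriage returns before uploading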
EDIT 2: In response to the first answer, after running the command
dbutils.fs.put("/databricks/init-scripts/custom-cert.sh", """#!/bin/bash
cp /dbfs/orgcertificates/orgcerts.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt" >> /databricks/spark/conf/spark-env.sh
""")
the output is the file custom-cert.sh; I then restart the cluster with the init script location set to dbfs:/databricks/init-scripts/custom-cert.sh, and then it works. So it is essentially the same content that the init script is reading (the generated sh script). Why can't it read the file if I do not use dbutils.fs.put but just put the contents in a bash file and upload it during the CI/CD process?
As we are aware, an init script is a shell script that runs during startup of each cluster node before the Apache Spark driver or worker JVMs start. Case 2: when you run a bash command using the %sh magic command, you are executing it on the local driver node only, so the worker nodes are not able to access the result. Case 1: by using dbutils.fs.put you write the file to the DBFS root, so the worker nodes as well as the driver node can access the path.
Ref : https://docs.databricks.com/data/databricks-file-system.html#summary-table-and-diagram
It seems that the observations I made in the comments section of my question are the way to go.
I now create the init script using a databricks job that I run during the CI/CD pipeline from Azure DevOps.
The notebook has the commands
dbutils.fs.rm("/databricks/init-scripts/custom-cert.sh")
dbutils.fs.put("/databricks/init-scripts/custom-cert.sh", """#!/bin/bash
cp /dbfs/internal-certificates/certs.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt" >> /databricks/spark/conf/spark-env.sh
""")
I then create a Databricks job pointing to this notebook; the cluster is a job cluster, which is just temporary. Of course, in my case even this job creation is automated using a PowerShell script.
I then call this Databricks job in the release pipeline, again using a PowerShell script.
This creates the file
/databricks/init-scripts/custom-cert.sh
I then use this file in any other cluster that accesses my org's endpoints (without certificate errors).
I do not know (or still do not understand) why the same script file can't simply be part of a repo and be uploaded during the release process, instead of a Databricks job calling a notebook; I would love to know the reason. The other answer on this question does not hold true, as you can see: the init script is created by a job cluster and then accessed from another cluster as part of its init script.
It simply boils down to how the init script gets created.
But I get my job done. Posting this in case it helps someone get their job done too.
I have raised a support case, though, to understand the reason.
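For completeness, a sketch of what uploading straight from the release pipeline could look like with the Databricks CLI, assuming the line-ending caveat noted above is handled first (file name and target path are taken from the question):
sed 's/\r$//' custom-cert.sh > custom-cert-lf.sh
databricks fs cp --overwrite custom-cert-lf.sh dbfs:/databricks/init-scripts/custom-cert.sh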

Docker - unable to run script

What I'm doing
I am using AWS Batch to run a Docker container for a large compute job. I have configured ECR/ECS successfully, to the best of my knowledge, but am having issues running the required commands for reasons that are beyond my level of understanding with Docker (newbie).
What I need to do is pass the below commands into my application and start my application to perform some heavy computing tasks; all commands listed below must be present.
The Issue(s)
The issue arises when I submit the job to AWS Batch; the service pulls the image from ECR (the Amazon container registry) and spins up a compute environment. The issue comes when I try to run the command I pass in; below I will go through it.
"command": [
"mkdir -p logging",
"chmod 777 logging/",
"docker run -t -i -e my-application", # container name
"-e APIKEY",
"-e BASEURI",
"-e APIUSER",
"-v WORKSPACE /logging:/src/log",
"DOCKERIMAGE",
"python my_app.py",
"-t APP_USER",
"-e APP_ENVIRONMENT",
"-u APP_USERNAME",
"-p APP_PASSWORD",
"-i IN_PATH",
"-o OUT_PATH",
"-b tmp/"
]
The command above generates the following error(s)
container_linux.go:370: starting container process caused: exec: "mkdir -p log": executable file not found in $PATH
I tried to pass in a command to echo the env var $PATH but was unsuccessful in getting a response; it resulted in a similar error.
I successfully ran "ls" and was able to see the directory contents of my application inside.
I am not, however, able to run any of the commands I have included in the command [] section. I have tried just running python and such in hopes of getting a more detailed error, but was unsuccessful.
Logic in plain English
Create a path called logging if it doesn't exist
set the permissions for logging
run the docker container and pass in the environment variables while doing so
Tell docker to run the python file my_app.py and pass in the expected runtime args
Execute and perform the required logic delegated in the python3 application
Questions
Why can I not create a directory called "logging" here? Where am I?
Am I running these commands properly, as defined by AWS Batch or Docker?
What am I missing, or where am I going wrong?
AWS Batch high-level doc
AWS Batch link specific to what I'm doing
Assuming that you're following the syntax described in the Container Properties section of the AWS docs, you have several problems with the syntax of your command directive.
First
The command directive can only run a single command. You can't mash together a bunch of commands as you're trying to do in your example. If you need to run multiple commands you would need to embed them as an argument to a shell. For example, something like:
command: ["/bin/sh", "-c", "mkdir -p logging; chmod 777 logging; ..."]
Second
You must properly tokenize your command lines -- that is, when you type mkdir -p logging at the command prompt, the shell splits this into three parts (or "tokens"): ['mkdir', '-p', 'logging']. You need to do the same thing when building up the list of arguments to command.
This is invalid:
command: ["mkdir -p logging"]
That would look for a command named mkdir -p logging, and of course no such command exists. It would properly be written as:
command: ["mkdir", "-p", "logging"]
Third
I'm not very familiar with the AWS Batch environment, but it's unlikely you can run a docker command inside a Docker container as you're trying to do. It's unclear why you're doing this, though: why not just configure your AWS Batch job with the appropriate image, environment variables, etc.?
Take a look at some of these example job definitions.
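Putting those three points together, the question's example would collapse to a single tokenized command entry, with each -e variable moved into the job definition's environment list instead of the command. A sketch built from the question's placeholder names, not a verified job definition:
command: ["/bin/sh", "-c", "mkdir -p logging && chmod 777 logging && exec python my_app.py -t $APP_USER -e $APP_ENVIRONMENT -u $APP_USERNAME -p $APP_PASSWORD -i $IN_PATH -o $OUT_PATH -b tmp/"]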

Puppet task run throwing invalid json error

My Puppet server is PE 2017.3.1.
My agent node is on version 5.5.
I am facing an error while executing the command
/opt/puppetlabs/puppet/bin/puppet-task run sample --nodes puppet-agent
My sample task is a bash file which contains:
#!/usr/bin/env bash
hostnamectl
I was able to list my task using the CLI.
The above puppet-task command throws an error on the command line:
1. Starting Job
2. Invalid Json
The puppet-access login token had expired. After recreating it, I was able to perform the task.
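In case it saves someone a search, refreshing the token with the PE client tools looks roughly like this (the lifetime value is an arbitrary example):
puppet-access login --lifetime 1h
/opt/puppetlabs/puppet/bin/puppet-task run sample --nodes puppet-agent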
