My Puppet server is PE 2017.3.1.
My agent node is on version 5.5.
I am facing an error while executing the command
/opt/puppetlabs/puppet/bin/puppet-task run sample --nodes puppet-agent
My sample task file is a bash script which contains:
#!/usr/bin/env bash
hostnamectl
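(For context, a sketch of one possible layout on the master, assuming the task ships in a module named sample; the default task name init makes it callable simply as sample:)
# Assumed module layout -- paths are illustrative, not taken from the question:
#   <environment>/modules/sample/tasks/init.sh   <- the bash file above
# Tasks can then be listed from the CLI (subcommand per the PE tasks docs):
/opt/puppetlabs/puppet/bin/puppet-task show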
I was able to list my task using the CLI.
The above puppet-task command throws an error on the command line:
1. Starting Job
2. Invalid Json
The puppet-access login token had expired. After regenerating it, I was able to run the task.
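(For reference, regenerating the token looks roughly like this on the master; the lifetime flag is optional:)
# Generate a fresh RBAC token (prompts for PE console credentials)
/opt/puppetlabs/bin/puppet-access login --lifetime 1d
# Then re-run the task
/opt/puppetlabs/puppet/bin/puppet-task run sample --nodes puppet-agent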
Hi people of the internet.
Basically I am unable to run even the simplest job and I keep getting the same error no matter what I put in the .gitlab-ci.yml file. See example below:
Here is the .gitlab-ci.yml file:
stages:
- test
job1:
stage: test
tags:
- testing
script:
- echo "Hello world!"
Here is the output ("?" corresponds to intentionally blacked out information):
Running with gitlab-runner 14.10.0 (c6bb62f6)
on runner_test ????????
Preparing the "shell" executor
00:00
Using Shell executor...
Preparing environment
00:00
Running on LAPTOP-????????...
Getting source from Git repository
00:01
WriteError:
Line |
219 | $HOST="[MASKED]"
| ~~~~~~~~~~~~~~~~~~~~~~
| Cannot overwrite variable Host because it is read-only or constant.
ERROR: Job failed: exit status 1
I know that $HOST is a reserved variable in PowerShell, but I don't see the link between the error and the code. It may have something to do with the configuration of the runner on Windows. Has anyone encountered this error on GitLab before? Or any suggestions on how to debug?
Here are the steps that I took to install the runner on Gitlab for Windows (see https://docs.gitlab.com/runner/install/windows.html):
Create a folder somewhere in the system: C:\GitLab-Runner.
Download the binary for 64-bit and put it into the folder (see https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-windows-amd64.exe).
Run a command prompt as administrator
Run the following command:
cd C:\GitLab-Runner
gitlab-runner.exe register
Enter your GitLab instance URL (see GitLab > Settings > CI/CD > Runners > Specific runners)
Enter the token to register the runner (see GitLab > Settings > CI/CD > Runners > Specific runners)
Enter a description for the runner: runner_test for instance
Enter the tags associated with the runner, separated by commas: testing, windows for instance
Provide the runner executor: shell
Install GitLab Runner as a service and start it
cd C:\GitLab-Runner
gitlab-runner.exe install
gitlab-runner.exe start
I also had to install the latest version of pwsh in Windows (see gitlab-runner: prepare environment failed to start process pwsh in windows):
Run a command prompt as administrator
Install the newer pwsh.exe:
winget install Microsoft.PowerShell
Restart the runner
cd C:\GitLab-Runner
gitlab-runner.exe restart
This issue was due to my choice of shell, for some reason. A GitLab runner can use one of the following shells: bash, sh, powershell, pwsh, and cmd (the last one now being deprecated).
As I stated above, I had been using pwsh. So I edited the config.toml file inside the C:\GitLab-Runner directory to manually change the shell from pwsh to powershell.
...
[[runners]]
name = "runner_test"
executor = "shell"
shell = "powershell"
...
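(Equivalently, the shell can be chosen at registration time instead of editing config.toml afterwards; a non-interactive sketch with a placeholder URL and token:)
cd C:\GitLab-Runner
gitlab-runner.exe register --non-interactive --url "https://gitlab.example.com/" --registration-token "<token>" --executor "shell" --shell "powershell" --description "runner_test" --tag-list "testing,windows"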
I then restarted the runner and got the job to complete properly:
cd C:\GitLab-Runner
gitlab-runner restart
I still get the error (more like a warning now) but it does not prevent the job from finishing anymore. If anyone has a better answer with a proper explanation I would gladly accept it as the answer to this question.
Note that pwsh and powershell are both PowerShell shells (see https://docs.gitlab.com/runner/shells/index.html):
powershell - Fully supported. PowerShell script. All commands are executed in PowerShell Desktop context. In GitLab Runner 12.0-13.12, this is the default when registering a new runner.
pwsh - Fully supported. PowerShell script. All commands are executed in PowerShell Core context. In GitLab Runner 14.0 and later, this is the default when registering a new runner.
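(A quick sanity check that both shells are actually present on the Windows host, for anyone debugging a similar setup:)
where.exe powershell
where.exe pwsh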
OK, it is very strange. I have some init scripts that I would like to run when a cluster starts.
The cluster has the init script, which is in a file in DBFS, basically this:
dbfs:/databricks/init-scripts/custom-cert.sh
Now, when I create the init script like this, it works (no SSL errors for my endpoints). Also, the event log for the cluster shows a duration of 1 second for the init script:
dbutils.fs.put("/databricks/init-scripts/custom-cert.sh", """#!/bin/bash
cp /dbfs/orgcertificates/orgcerts.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt" >> /databricks/spark/conf/spark-env.sh
""")
However, if I just put the init script in a bash script and upload it to DBFS through a pipeline, the init script does not do anything. It executes, as per the event log, but the execution duration is 0 sec.
I have the shell script in a file named
custom-cert.sh
with the same contents as above, i.e.
#!/bin/bash
cp /dbfs/orgcertificates/orgcerts.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt"
but when I check /usr/local/share/ca-certificates/, it does not contain the orgcerts.crt copied from /dbfs/orgcertificates/, even though the cluster init script has run.
Also, I have compared the contents of the init script in both cases and, at least to the naked eye, I can't see any difference,
i.e.
%sh
cat /dbfs/databricks/init-scripts/custom-cert.sh
shows the same contents in both the scenarios. What is the problem with the 2nd case?
EDIT: I read a bit more about init scripts and found that the logs of init scripts are written here
%sh
ls /databricks/init_scripts/
Looking at the err file in that location, it seems there is an error
sudo: update-ca-certificates
: command not found
Why is update-ca-certificates found in the first case but not when I put the same script in a shell script and upload it to DBFS (instead of executing dbutils.fs.put within a notebook)?
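(For anyone else digging into this: the stderr logs under that directory can be dumped in one go with something like the following; exact file names vary by cluster and timestamp.)
%sh
# Hedged sketch: print every stderr log left behind by init scripts on this node
find /databricks/init_scripts/ -name '*err*' -exec cat {} \;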
EDIT 2 (in response to the first answer): after running the command
dbutils.fs.put("/databricks/init-scripts/custom-cert.sh", """#!/bin/bash
cp /dbfs/orgcertificates/orgcerts.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt" >> /databricks/spark/conf/spark-env.sh
""")
the output is the file custom-cert.sh. I then restart the cluster with the init script location set to dbfs:/databricks/init-scripts/custom-cert.sh, and then it works. So it is essentially the same content that the init script is reading (the generated shell script). Why can't it read it when I do not use dbutils.fs.put but instead put the contents in a bash file and upload it during the CI/CD process?
As we are aware, an init script is a shell script that runs during startup of each cluster node, before the Apache Spark driver or worker JVM starts. In case 2, when you run a bash command via the %sh magic command, you are executing it on the local driver node only, so the worker nodes are not able to access it. But in case 1, by using the %fs magic command / dbutils.fs.put, you run the copy from the root, so the worker nodes as well as the driver node can access that path.
Ref: https://docs.databricks.com/data/databricks-file-system.html#summary-table-and-diagram
It seems that the observations I made in the comments section of my question are the way to go.
I now create the init script using a databricks job that I run during the CI/CD pipeline from Azure DevOps.
The notebook has the commands
dbutils.fs.rm("/databricks/init-scripts/custom-cert.sh")
dbutils.fs.put("/databricks/init-scripts/custom-cert.sh", """#!/bin/bash
cp /dbfs/internal-certificates/certs.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt" >> /databricks/spark/conf/spark-env.sh
""")
I then create a Databricks job (pointing to this notebook); the cluster is a job cluster, which is just temporary. Of course, in my case, even this job creation is automated using a PowerShell script.
I then call this Databricks job in the release pipeline, again using a PowerShell script.
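(For illustration only, since my pipeline actually drives the Jobs API from a PowerShell script: the equivalent trigger with the Databricks CLI would look roughly like this, with a placeholder job ID.)
# Hypothetical: trigger the notebook job from a release pipeline step
databricks jobs run-now --job-id <job-id>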
Running the job creates the file
/databricks/init-scripts/custom-cert.sh
I then use this file in any other cluster that accesses my org's endpoints (without certificate errors).
I do not know (or still do not understand) why the same script file can't simply be part of a repo and uploaded during the release process, instead of this Databricks job calling a notebook. I would love to know the reason. The other answer to this question does not hold true, as you can see: the init script is created by a job cluster and then accessed from another cluster as part of its init script.
It simply boils down to how the init script gets created.
But I get my job done. Posting this in case it helps someone get theirs done too.
I have raised a support case though to understand the reason.
Info + objective:
I'm using MAAS to deploy workstations with Ubuntu.
MAAS just deploys the machine with stock Ubuntu, and I then run a bash script I wrote to set up everything needed.
So far, I've run that bash script manually on the newly deployed machines. Now, I'm trying to have MAAS run that script automatically.
What I did + error:
On the MAAS machine, I created a curtin file called /var/snap/maas/current/preseeds/curtin_userdata_ubuntu, which contains the following:
write_files:
bash_script:
path: /root/script.sh
content: |
#!/bin/bash
echo blabla
... very long bash script
permissions: '0755'
late_commands:
run_script: ["/bin/bash /root/script.sh"]
However, in the log, I see the following:
known-caiman cloud-init[1372]: Command: ['/bin/bash /root/script.sh']
known-caiman cloud-init[1372]: Exit code: -
known-caiman cloud-init[1372]: Reason: [Errno 2] No such file or directory: '/bin/bash /root/script.sh': '/bin/bash /root/script.sh'
Question
I'm not sure putting such a large bash script in the curtin file is a good idea. Is there a way to store the bash script on the MAAS machine, have curtin upload it to the server, and then execute it? If not, is it possible to fix the error I'm having?
Thanks ahead!
This worked when executing the command as:
["curtin", "in-target", "--", "/bin/bash", "/root/script.sh"]
This method still means I have to write to a file and then execute it, though. I'm still hoping there's a way to upload a file and then execute it.
I do not add my script to the curtin file.
I run the command below to deploy servers:
maas admin machine deploy $system_id user_data=$(base64 -w0 /root/script.sh)
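(A hypothetical helper for looking up the system_id by hostname first; the jq filter is an assumption about the JSON shape returned by the CLI:)
# Assumed: find the machine's system_id from its hostname, then deploy with the script as user_data
system_id=$(maas admin machines read hostname=<hostname> | jq -r '.[0].system_id')
maas admin machine deploy $system_id user_data=$(base64 -w0 /root/script.sh)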
I would try
runcmd:
- [/bin/scp, user@host:/somewhere/script.sh, /root/]
late_commands:
run_script: ['/bin/bash', '/root/script.sh']
This obviously implies that you inject the proper credentials on the machine being deployed.
When executing the puppet task run --nodes ip command, I get reports/logs for most of the servers, but for a single server I get "no output for this node". Changes are not even reflected on that machine, while the same command works for the rest of the machines. At the same time the job does not fail; it succeeds.
Note: I have a .sh file at module/tasks/init.sh which contains a puppet agent -t --tags (I pass parameters) command. Upon executing the puppet task run command, puppet agent -t is run on those machines.
What could be the cause, and is there anything preventing that .sh file from running?
Update:
tasks module - modulename/tasks/init.sh
#!/bin/bash
puppet agent -t --noop --tags $PT_class --environment=$PT_env
Command executed on the master:
puppet task run task_module_name class=modulename env=staging --query 'inventory { trusted.extensions.certname = "fqdn"}'
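(Given those parameters, the task script effectively runs the following on each target agent, since task parameters are passed to shell tasks as PT_<name> environment variables:)
# What init.sh expands to with class=modulename and env=staging
puppet agent -t --noop --tags modulename --environment=staging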
When I run this task command it completes successfully, but the task is not actually executed on the agent machine.
Note: I face this only on a CIS-hardened AMI on AWS.
I am trying to use sh or ssh to connect to a Linux box via Jenkins (I am a noob, admittedly). Even trying an ls command I am getting an error. I did have this working before, however. Any help greatly appreciated.
Building in workspace /var/lib/jenkins/jobs/Demo/workspace executing
script:
USER="jenkins" sh '''#!/bin/bash
HOST=10.59.151.121
USER=devuser
PASSWORD=TGMCfpfS
ls
bye
EOF
'''
: No such file or directory [SSH] exit-status: 127 Build step
'Execute shell script on remote host using ssh' marked build as
failure Finished: FAILURE
For some reason I found that adding commands after the ''' allows them to be executed. Even though the same warning appears, it works fine!
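For comparison, a plain ssh invocation from a shell build step would look roughly like this (a sketch assuming key-based auth is already set up for devuser; the original job relies on the "Execute shell script on remote host using ssh" build step instead):
#!/bin/bash
# Minimal sketch: run a command on the remote box over ssh (key-based auth assumed)
ssh devuser@10.59.151.121 'ls'
# Or, for several commands, a quoted heredoc:
ssh devuser@10.59.151.121 <<'EOF'
hostname
ls
EOF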