The command apt-get upgrade fails in Puppet

I'm running a Puppet master and I need to execute these commands on my Puppet agent.
Lock the kernel from updating:
for i in $(dpkg -l "*$(uname -r)*" | grep kernel | awk '{print $2}'); do echo $i hold | dpkg --set-selections; done
Update:
apt-get update -y
Upgrade:
apt-get upgrade -y
apt-get update -y runs smoothly, but the other two don't.
Can you give the correct Puppet syntax for this?

exec { 'lock kernel from updating':
  command => "bash -c 'for i in $(dpkg -l "uname -r" | grep kernel | awk '{print \$2}'); do echo \$i hold | dpkg --set-selections; done'",
}
exec { 'update':
  command => 'apt-get update -y',
}
exec { 'upgrade':
  command => 'apt-get upgrade -y',
}
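A sketch of Puppet syntax that should work, untested against this exact setup. Two likely culprits in the attempt above: inside a double-quoted Puppet string, $(...), $i and $2 are parsed as Puppet interpolation rather than passed to the shell, and exec runs with no PATH and no shell by default. A single-quoted Puppet string (where only \' needs escaping) plus provider => shell sidesteps both; the DEBIAN_FRONTEND line is an extra assumption to keep dpkg prompts from hanging the agent:
exec { 'lock kernel from updating':
  provider => shell,
  command  => 'for i in $(dpkg -l "*$(uname -r)*" | grep kernel | awk \'{print $2}\'); do echo "$i hold" | dpkg --set-selections; done',
  path     => ['/bin', '/usr/bin', '/sbin', '/usr/sbin'],
}
exec { 'update':
  command => 'apt-get update',
  path    => ['/usr/bin', '/usr/sbin'],
}
exec { 'upgrade':
  command     => 'apt-get upgrade -y',
  path        => ['/usr/bin', '/usr/sbin'],
  environment => ['DEBIAN_FRONTEND=noninteractive'],
  require     => Exec['update'],
}
Note that an exec reruns on every catalog application unless guarded with onlyif, unless, or refreshonly; that is usually fine for apt-get update, but worth adding before leaving apt-get upgrade -y unattended.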

Related

How to automate installation of missing GPG keys on Linux

I've been working with Linux containers for several years, and I'm surprised I wasn't able to find a thread about this question. Scenario:
I've just added a new package index (/etc/apt/sources.list.d/example.list) and want to install a package, let's call it snailmail.
I run the commands:
apt-get update && apt-get install -y snailmail
I get the following error:
W: GPG error: https://example.com/snailmail/debian stable InRelease:
The following signatures couldn't be verified because the public key is not available:
NO_PUBKEY 7EF2A9D5F293ECE4
What is the best way to automate the installation of GPG keys?
apt-key now seems to be deprecated, so I have created a script that detects and fetches the missing keys:
#!/bin/sh -e
tmp="$(mktemp)"
sudo apt-get update 2>&1 | sed -En 's/.*NO_PUBKEY ([[:xdigit:]]+).*/\1/p' | sort -u > "${tmp}"
cat "${tmp}" | xargs sudo gpg --keyserver "hkps://keyserver.ubuntu.com:443" --recv-keys # to /usr/share/keyrings/*
cat "${tmp}" | xargs -L 1 sh -c 'sudo gpg --yes --output "/etc/apt/trusted.gpg.d/$1.gpg" --export "$1"' sh # to /etc/apt/trusted.gpg.d/*
rm "${tmp}"
Here's a handy script that can be called during the build process to download and install common GPG keys (from the Ubuntu keyserver):
Prerequisites:
wget
for PUBKEY in $(apt-get update 2>&1 | grep NO_PUBKEY | awk '{print $NF}')
do
  wget -q "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x${PUBKEY}" -O - | sed -n '/BEGIN/,/END/p' | apt-key add - 2>/dev/null
done
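Since apt-key itself is the deprecated piece, the per-repository keyring layout is the currently recommended alternative: store the key under /usr/share/keyrings/ and point the source entry at it with signed-by. A sketch reusing the hypothetical snailmail repository and key ID from the question above (the main component is an assumption):
wget -qO- "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x7EF2A9D5F293ECE4" \
  | gpg --dearmor | sudo tee /usr/share/keyrings/snailmail-archive-keyring.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/snailmail-archive-keyring.gpg] https://example.com/snailmail/debian stable main" \
  | sudo tee /etc/apt/sources.list.d/example.list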

Errors still print to the terminal

I'm writing a bash script here to install Docker and send all output to the logs.txt file, but I still get errors such as the one below on the terminal. What am I doing wrong?
E: Could not get lock /var/lib/dpkg/lock-frontend - open (11: Resource temporarily unavailable)
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?
if [[ `command -v apt-get` ]]; then
  echo -e "\n${GREEN}[${WHITE}+${GREENS}]${GREENS} Getting requirements....."
  sleep 1;
  sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release >> logs.txt
  echo -e "\n${GREEN}[${WHITE}+${GREENS}]${GREENS} Adding Docker’s official GPG key........"
  sleep 1;
  curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
  echo -e "\n${GREEN}[${WHITE}+${GREENS}]${GREENS} Installing Docker......."
  sleep 1;
  sudo apt-get install -y docker-ce docker-ce-cli containerd.io >> logs.txt
  echo -e "\n${GREEN}[${WHITE}+${GREENS}]${GREENS} Docker version........"
  sleep 1;
  docker --version | head -n1
fi
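The >> logs.txt redirections capture only stdout; apt-get writes its E: lines to stderr, which still goes to the terminal. A minimal sketch of the fix, shown for the first install line (the same 2>&1 applies to the others):
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release >> logs.txt 2>&1
Note that the lock-frontend error itself usually means another apt or dpkg process was running at the same time; redirecting it only hides the message, it doesn't resolve the contention.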

Why does this not work to configure node using nvm and yarn on a remote VM?

I am trying to automate VM configuration with a script and am having some trouble getting access to some path variables that get set in either ~/.bashrc, ~/.bash_profile, or ~/.profile.
My remote VM is running Ubuntu 14.04 LTS and I am deploying over ssh.
This is the array that gets joined together and run as a bash command to configure the VM by installing nvm:
return [
`rm -rf ~/.nvm`,
`sudo apt-get update`,
`sudo apt-get install -y build-essential libssl-dev`,
`curl -sL https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh -o install_nvm.sh`,
`bash install_nvm.sh`,
`echo "source ~/.nvm/nvm.sh" >> ~/.bash_profile`
].join('\n');
But when I run the next script that actually installs node and yarn, it cannot find nvm:
return [
`nvm install ${config.node.version}`,
`nvm use ${config.node.version}`,
`echo "using node $(node -v) and npm $(npm -v)"`,
`curl -o- -L https://yarnpkg.com/install.sh | bash`,
'echo "export PATH="$HOME/.yarn/bin:$PATH"" >> ~/.bash_profile',
].join('\n');
This is the error:
bash: nvm: command not found
bash: line 1: nvm: command not found
I don't want to ssh in and manually add anything to any of the various profiles; I'd like it all to be done by the script. I also want to avoid sourcing ~/.nvm/nvm.sh or sourcing any of the profiles when the ssh session begins. I was under the impression that an ssh session automatically sources ~/.bash_profile, which should then pick up those variables, correct? If not, how else can I configure my deployment script to have access to these variables automatically?
Based on the fact that you are using &&, as you said in your comments, I would add a line that actually sources ~/.nvm/nvm.sh before running the nvm commands. The nvm command likely isn't available in the shell until that has been run.
Change this:
return [
`rm -rf ~/.nvm`,
`sudo apt-get update`,
`sudo apt-get install -y build-essential libssl-dev`,
`curl -sL https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh -o install_nvm.sh`,
`bash install_nvm.sh`,
`echo "source ~/.nvm/nvm.sh" >> ~/.bash_profile`
].join('\n');
To this:
return [
`rm -rf ~/.nvm`,
`sudo apt-get update`,
`sudo apt-get install -y build-essential libssl-dev`,
`curl -sL https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh -o install_nvm.sh`,
`bash install_nvm.sh`,
`echo "source ~/.nvm/nvm.sh" >> ~/.bash_profile`,
`source ~/.nvm/nvm.sh`
].join('\n');
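One caveat on the ~/.bash_profile assumption: an interactive ssh login does source it, but a one-shot ssh host 'command' runs a non-interactive shell that skips it, which is why nvm disappears between scripts. Sourcing ~/.nvm/nvm.sh explicitly, as above, is one fix; forcing a login shell with bash -l is another. A hypothetical sketch of the latter, where user@vm and NODE_VERSION are placeholders standing in for the real host and config.node.version:
ssh user@vm "bash -lc 'nvm install ${NODE_VERSION} && nvm use ${NODE_VERSION} && node -v'"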

Why did this command destroy my Ubuntu 14.04 installation?

I successfully used this command to remove all the old kernels from my system:
dpkg --list |
grep linux-image |
awk '{ print $2 }' |
sort -V |
sed -n '/'"linux-image-3.13.0-100-generic"'/q;p' |
xargs sudo apt-get -y purge
But when I used this modified version to uninstall cups, apt-get started to remove packages unrelated to cups:
dpkg --list |
grep cups |
awk '{ print $2 }' |
sort -V |
xargs sudo apt-get -y purge
By the time I realized what was happening, my system had already become unbootable. I don't know if this is supposed to happen with xargs, but I could not stop the execution with a Ctrl+C sequence.
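One defensive habit that would have caught this (a sketch, not an explanation of the original breakage): apt-get's -s/--simulate flag prints what would be removed without changing anything, so the pipeline can be previewed before running the real purge:
dpkg --list | grep cups | awk '{ print $2 }' | sort -V | xargs sudo apt-get -s purge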

Cannot sudo inside a bash if statement

I have a dual-boot Linux setup and I'm a newbie in bash. When running the following script I got a strange error:
if [[ 'grep -i fedora /etc/issue' ]]; then
  echo "the OS is Fedora"
  $(sudo yum update -y && sudo yum upgrade -y)
else
  echo "the OS is Ubuntu"
  $(sudo apt-get update && sudo apt-get upgrade -y && sudo apt-get dist-upgrade -y)
fi
Error: ./server_update.sh: line 9: Loaded: command not found
It's attempting to execute the output of your apt-get/yum commands; lose the $(..).
You also have an issue at the start:
if [[ -n "$(grep -i fedora /etc/issue)" ]]; then
is the correct way to check if a string exists.
Your code should then look like this:
if [[ -n "$(grep -i fedora /etc/issue)" ]]; then
  echo "the OS is Fedora"
  sudo yum update -y && sudo yum upgrade -y
else
  echo "the OS is Ubuntu"
  sudo apt-get update && sudo apt-get upgrade -y && sudo apt-get dist-upgrade -y
fi
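As a side note, the command substitution can be dropped entirely; grep's exit status can drive the if directly, with -q suppressing the match output:
if grep -qi fedora /etc/issue; then
  echo "the OS is Fedora"
  sudo yum update -y && sudo yum upgrade -y
else
  echo "the OS is Ubuntu"
  sudo apt-get update && sudo apt-get upgrade -y && sudo apt-get dist-upgrade -y
fi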
