How do I fix my script's problem issuing a sed command - Linux

I am trying to write my first script and everything is working fine. It automatically installs a new server. The only problem I have is using sed to change the SSL certificate file: I have followed all the answers available in the forums here, but I still can't get it to overwrite. I have used two other sed commands and they work fine.
I am running the script on Ubuntu 16.04 with an Apache 2 and PHP 7.0 LAMP stack.
The script completes, but the conf file is not rewritten.
This is my script, just in case anything is conflicting:
#!/bin/bash
apt-get -y update
apt-get -y upgrade
apt-get -y install apache2
apt-get install -y php7.0 libapache2-mod-php7.0 php7.0-cli php7.0-common php7.0-mbstring php7.0-gd php7.0-intl php7.0-xml php7.0-mysql php7.0-mcrypt php7.0-zip
echo mysql-server-5.1 mysql-server/root_password password PASSWORD | debconf-set-selections
echo mysql-server-5.1 mysql-server/root_password_again password PASSWORD | debconf-set-selections
apt-get install -y mysql-server
/etc/init.d/mysql restart
a2enmod ssl
a2ensite default-ssl.conf
service apache2 restart
APP_PASS="PASSWORD"
ROOT_PASS="PASSWORD"
APP_DB_PASS="PASSWORD"
echo "phpmyadmin phpmyadmin/dbconfig-install boolean true" | debconf-set-selections
echo "phpmyadmin phpmyadmin/app-password-confirm password $APP_PASS" | debconf-set-selections
echo "phpmyadmin phpmyadmin/mysql/admin-pass password $ROOT_PASS" | debconf-set-selections
echo "phpmyadmin phpmyadmin/mysql/app-pass password $APP_DB_PASS" | debconf-set-selections
echo "phpmyadmin phpmyadmin/reconfigure-webserver multiselect apache2" | debconf-set-selections
apt-get install -y phpmyadmin
sed -i 's/Port 22/Port 4747/g' /etc/ssh/sshd_config
sed -i 's/PermitRootLogin yes/PermitRootLogin no/g' /etc/ssh/sshd_config
service sshd restart
apt-get install vsftpd -y
sed -i 's/root/#root/g' /etc/ftpusers
service vsftpd restart
apt-get install software-properties-common -y
add-apt-repository ppa:certbot/certbot -y
apt-get update -y
apt-get install python-certbot-apache -y
service apache2 stop
certbot certonly --standalone --non-interactive --agree-tos -m EMAIL#mymail.com -d domain.com
adduser --quiet --disabled-password --shell /bin/bash --home /home/USERNAME --gecos "User" USERNAME
echo "USERNAME:PASSWORD" | chpasswd
usermod -aG sudo USERNAME
iptables -I INPUT 1 -p udp -m udp --dport 1900 -j DROP
crontab -l > mycron
echo "#daily letsencrypt renew --quiet && systemctl reload apache2" >> mycron
crontab mycron
rm mycron    # works, but gives the error "no crontab for root"
#sed -i "s|SSLCertificateFile=/etc/ssl/certs/ssl-cert-snakeoil.pem|SSLCertificateFile=/letsencrypt/live/domain.com/fullchain.pem|g" /etc/apache2/sites-enabled/default-ssl.conf (NOT WORKING)
#SSL_DEFAULT_CERT_PATH="SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem"
#SSL_CERT_PATH="SSLCertificateFile /letsencrypt/live/domain.com/fullchain.pem"
#sed -i "s|.*\b$SSL_DEFAULT_CERT_PATH\b.*|$SSL_CERT_PATH|" /etc/apache2/sites-enabled/default-ssl.conf (NOT WORKING)
service apache2 restart
These are the two commands I have tried, but with no luck:
sed -i "s|SSLCertificateFile=/etc/ssl/certs/ssl-cert-snakeoil.pem|SSLCertificateFile=/letsencrypt/live/domain.com/fullchain.pem|g" /etc/apache2/sites-enabled/default-ssl.conf
This does not work.
SSL_DEFAULT_CERT_PATH="SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem"
SSL_CERT_PATH="SSLCertificateFile /letsencrypt/live/domain.com/fullchain.pem"
sed -i "s|.*\b$SSL_DEFAULT_CERT_PATH\b.*|$SSL_CERT_PATH|" /etc/apache2/sites-enabled/default-ssl.conf
This does not work either.
The original file contains: SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
I am not sure if the spaces make a difference.
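The spaces do matter: in default-ssl.conf the directive is written as SSLCertificateFile followed by a space, not SSLCertificateFile=, so a pattern containing = never matches and sed silently changes nothing. A minimal sketch of the substitution with the space kept (assuming the certificate actually lives under the usual /etc/letsencrypt/live/ path rather than /letsencrypt/live/; verify that path on the server):
# point Apache at the Let's Encrypt certificate instead of the snakeoil one
sed -i 's|SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem|SSLCertificateFile /etc/letsencrypt/live/domain.com/fullchain.pem|' /etc/apache2/sites-enabled/default-ssl.conf
The SSLCertificateKeyFile line in the same file would need the same treatment. As for the "no crontab for root" note: crontab -l prints that message to stderr when root has no crontab yet, so crontab -l 2>/dev/null > mycron suppresses it without changing anything else.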

Related

Cloud-init File Command line option 'S' [from -fsSL] is not understood in combination with the other options

I want to execute this cloud-init file and Terraform file:
Cloud-init:
#cloud-config
runcmd:
- mkdir react
- cd react
- type -p curl >/dev/null || sudo apt install curl -y
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg \
&& sudo chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
&& sudo apt update \
&& sudo apt install gh -y
- curl -o actions-runner-linux-x64-2.301.1.tar.gz -L https://github.com/actions/runner/releases/download/v2.301.1/actions-runner-linux-x64-2.301.1.tar.gz
- tar xzf ./actions-runner-linux-x64-2.301.1.tar.gz
- yes "" | ./config.sh --url https://github.com/yuuval/react-deploy --token AVYXWHXNRBPIDXJDPUDK6QTD2LIPE
- sudo ./svc.sh install
- sudo ./svc.sh start
- yes "" | sudo apt install nginx
- gh auth login --hostname github.com --with-token <<< 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 --git-protocol https
- gh repo clone yuuval/react-deploy
- cd react-deploy
- gh workflow run node.js.yml
- sleep 70
- cd /etc/nginx/sites-available
- sudo rm default
- echo "server {
listen 80 default_server;
server_name _;
# react app & front-end files
location / {
root /home/ubuntu/react/_work/react-deploy/react-deploy/build;
try_files \$uri /index.html;
}
}" | sudo tee /etc/nginx/sites-available/default
- sudo service nginx restart
- sudo chmod +x /home
- sudo chmod +x /home/ubuntu
- sudo chmod +x /home/ubuntu/react
- sudo chmod +x /home/ubuntu/react/_work
- sudo chmod +x /home/ubuntu/react/_work/react-deploy
- sudo chmod +x /home/ubuntu/react/_work/react-deploy/react-deploy
- sudo chmod +x /home/ubuntu/react/_work/react-deploy/react-deploy/build
The Terraform file isn't relevant, I think. When I run this whole thing with terraform init and terraform apply, it goes through, but nothing happens. In /var/log, in the cloud-init-output.log file, I found this error:
dd: unrecognized operand ‘ ’
Try 'dd --help' for more information.
E: Command line option 'S' [from -fsSL] is not understood in combination with the other options.
I guess it's from this command, which should install the gh CLI (found here: https://github.com/cli/cli/blob/trunk/docs/install_linux.md):
type -p curl >/dev/null || sudo apt install curl -y
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg \
&& sudo chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
&& sudo apt update \
&& sudo apt install gh -y
If I run this whole cloud-init file manually, it works. So I don't know what else to do.
You seem to be missing \ and && after install curl -y; I just tried on two WSL machines (that's all I have with me right now) and it was fine there.
So my suspicion is that your curl command got mangled in the middle, since you're not actually running the smaller command and the bigger one separately even though they should be kept separate, so maybe give that a shot?
On this page (which came up in an exact search), https://ouyen.github.io/github/, I found no install curl -y, only the next command, which is clearly run separately, so I think your issue is just there.
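For illustration, a hedged sketch of the first suggestion: continue after install curl -y with \ and && so the whole gh installation reaches the shell as one command (same URLs and paths as in the question):
(type -p curl >/dev/null || sudo apt install curl -y) \
&& curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg \
&& sudo chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
&& sudo apt update \
&& sudo apt install gh -y
In the runcmd list this would have to live inside a single list item (one - entry), so cloud-init does not split the continuation lines away from the command they belong to.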

Errors still print to the terminal

I'm writing a bash script here to install Docker and send all output to the logs.txt file, but I still get errors such as the one below displayed on the terminal. What am I doing wrong to get these errors?
E: Could not get lock /var/lib/dpkg/lock-frontend - open (11: Resource temporarily unavailable)
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?
if [[ `command -v apt-get` ]]; then
echo -e "\n${GREEN}[${WHITE}+${GREENS}]${GREENS} Getting requirements....."
sleep 1;
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release >> logs.txt
echo -e "\n${GREEN}[${WHITE}+${GREENS}]${GREENS} Adding Docker’s official GPG key........"
sleep 1;
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo -e "\n${GREEN}[${WHITE}+${GREENS}]${GREENS} Installing Docker......."
sleep 1;
sudo apt-get install -y docker-ce docker-ce-cli containerd.io >> logs.txt
echo -e "\n${GREEN}[${WHITE}+${GREENS}]${GREENS} Docker version........"
sleep 1;
docker --version | head -n1
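>> logs.txt only captures standard output; apt-get writes its E: messages (like the dpkg lock errors above) to standard error, which still goes to the terminal. A minimal sketch of the same line with stderr redirected as well (logs.txt as in the question):
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release >> logs.txt 2>&1
The lock errors themselves mean another apt/dpkg process held the lock at that moment; redirection only changes where they are printed, not whether they occur.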

Installing dotnet Core on Ubuntu 16.04

I am trying to host my ASP.NET Core Web API on nginx on my Ubuntu 16.04 system.
According to this and this documentation, I should enter the commands:
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
However, the command curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg gives me the following error message:
(23) Failed writing body
Am I doing something wrong, or is the documentation wrong?
Update: Thank you.
Update:
I can execute the following commands:
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-trusty-prod trusty main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt-get update
But when I enter sudo apt-get install dotnet-SDK-2.0.0, I am getting the following errors:

Why does this not work to configure node using nvm and yarn on remote VM?

I am trying to automate VM configuration with a script and am having some trouble getting access to some path variables that get set in either ~/.bashrc, ~/.bash_profile, or ~/.profile.
My remote VM is running Ubuntu 14.04 LTS and I am deploying over SSH.
This is the array that gets joined together and run as a bash command to configure the VM by installing nvm:
return [
`rm -rf ~/.nvm`,
`sudo apt-get update`,
`sudo apt-get install -y build-essential libssl-dev`,
`curl -sL https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh -o install_nvm.sh`,
`bash install_nvm.sh`,
`echo "source ~/.nvm/nvm.sh" >> ~/.bash_profile`
].join('\n');
But when I run the next script, which actually installs node and yarn, it cannot find nvm:
return [
`nvm install ${config.node.version}`,
`nvm use ${config.node.version}`,
`echo "using node $(node -v) and npm $(npm -v)"`,
`curl -o- -L https://yarnpkg.com/install.sh | bash`,
'echo "export PATH="$HOME/.yarn/bin:$PATH"" >> ~/.bash_profile',
].join('\n');
This is the error:
bash: nvm: command not found
bash: line 1: nvm: command not found
I don't want to SSH in and manually add anything to any of the various profiles; I'd like it all to be done by the script. I also want to avoid sourcing ~/.nvm/nvm.sh or sourcing any of the profiles when the SSH session begins. I was under the impression that an SSH session automatically sources ~/.bash_profile, which should then pick up those variables, correct? If not, how else can I configure my deployment script to automatically have access to these variables?
Based on the fact that you are using &&, as you said in your comments, I would add a line to actually source ~/.nvm/nvm.sh before running the nvm commands. You likely don't have the command available in the shell until that has been run.
Change this:
return [
`rm -rf ~/.nvm`,
`sudo apt-get update`,
`sudo apt-get install -y build-essential libssl-dev`,
`curl -sL https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh -o install_nvm.sh`,
`bash install_nvm.sh`,
`echo "source ~/.nvm/nvm.sh" >> ~/.bash_profile`
].join('\n');
To this:
return [
`rm -rf ~/.nvm`,
`sudo apt-get update`,
`sudo apt-get install -y build-essential libssl-dev`,
`curl -sL https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh -o install_nvm.sh`,
`bash install_nvm.sh`,
`echo "source ~/.nvm/nvm.sh" >> ~/.bash_profile`,
`source ~/.nvm/nvm.sh`
].join('\n');
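Since each array is joined into one bash command and sent over SSH separately, the second command (the one calling nvm install) also needs the nvm function loaded in its own session, as the answer's prose suggests. A hedged sketch of what that second command would then effectively run, with VERSION standing in for the config.node.version placeholder:
# load the nvm shell function in this non-interactive session first
source ~/.nvm/nvm.sh
nvm install VERSION
nvm use VERSION
echo "using node $(node -v) and npm $(npm -v)"
curl -o- -L https://yarnpkg.com/install.sh | bash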

cannot 'sudo' inside of bash if statement

I have a dual Linux boot and I'm a newbie in bash.
When running the following script I got a strange error:
if [[ 'grep -i fedora /etc/issue' ]]; then
echo "the OS is Fedora"
$(sudo yum update -y && sudo yum upgrade -y)
else
echo "the OS is Ubuntu"
$(sudo apt-get update && sudo apt-get upgrade -y && sudo apt-get dist-upgrade -y)
fi
Error: ./server_update.sh: line 9: Loaded: command not found
It's attempting to execute the output of your apt-get/yum commands; lose the $(..).
You also have an issue at the start:
if [[ -n "$(grep -i fedora /etc/issue)" ]]; then
is the correct way to check if a string exists.
Your code should then look like this:
if [[ -n "$(grep -i fedora /etc/issue)" ]]; then
echo "the OS is Fedora"
sudo yum update -y && sudo yum upgrade -y
else
echo "the OS is Ubuntu"
sudo apt-get update && sudo apt-get upgrade -y && sudo apt-get dist-upgrade -y
fi
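As an aside, a hedged alternative sketch: grep's exit status can drive the if directly (grep -q succeeds only when it finds a match), so no command substitution or string test is needed at all:
if grep -qi fedora /etc/issue; then
    echo "the OS is Fedora"
    sudo yum update -y && sudo yum upgrade -y
else
    echo "the OS is Ubuntu"
    sudo apt-get update && sudo apt-get upgrade -y && sudo apt-get dist-upgrade -y
fi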
