systemctl start service does not work in SPEC file - RHEL

I've run into a problem where the command "sudo systemctl start xxx.service" in my SPEC file does not work when upgrading my RPM package. The following is the %post scriptlet in my SPEC file:
%post
echo "---------------------------- post $1 -------------------------------"
# fresh installation
if [ $1 == 1 ]; then
    sudo echo "Installation finished."
# upgrade installation
elif [ $1 -gt 1 ]; then
    sudo echo "Starting service xxx.service..."
    sudo /usr/bin/systemctl enable xxx.service > /dev/null 2>&1
    sudo /usr/bin/systemctl start xxx.service
    sleep 10
    sudo echo "Finished."
fi
exit 0
I'm sure that the service file already exists in the directory /usr/lib/systemd/system, and I can start it manually with the command "sudo systemctl start xxx.service".
I also found that the "sleep 10" command does not work either.
Any suggestions about this issue would be much appreciated, thanks.

A few issues:
You're not supposed to use sudo in scriptlets, because 1) it may not be installed, and 2) the RPM installation runs as the superuser anyway.
You should use the standard RPM macros for systemd rather than reinventing the wheel.
Essentially it boils down to:
%{?systemd_requires}
BuildRequires: systemd
# ...
%post
%systemd_post %{name}.service
%preun
%systemd_preun %{name}.service
%postun
%systemd_postun_with_restart %{name}.service
# ...
Take note that on CentOS/RHEL the systemd macros are provided by the systemd package, while on Fedora they are now in systemd-rpm-macros.
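For reference, on a RHEL/CentOS 7 box those macros expand to scriptlets roughly like the following (the exact expansion varies between releases, so treat this only as an illustration of what the macros do for you):

%post
if [ $1 -eq 1 ]; then
    # Initial installation: apply the distribution's preset (enabled/disabled) policy
    systemctl preset %{name}.service >/dev/null 2>&1 || :
fi

%preun
if [ $1 -eq 0 ]; then
    # Package removal, not upgrade: disable and stop the unit
    systemctl --no-reload disable %{name}.service >/dev/null 2>&1 || :
    systemctl stop %{name}.service >/dev/null 2>&1 || :
fi

%postun
systemctl daemon-reload >/dev/null 2>&1 || :
if [ $1 -ge 1 ]; then
    # Package upgrade, not removal: restart the running service
    systemctl try-restart %{name}.service >/dev/null 2>&1 || :
fi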

Placing the service startup command in the "%posttrans" scriptlet resolved my problem. Thanks for all your suggestions.
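For anyone hitting the same issue: %posttrans runs once, after the whole install/upgrade transaction has completed. A minimal sketch of such a scriptlet (no sudo, absolute paths, errors tolerated so the transaction never fails; xxx.service is just the placeholder unit from the question) could look like this:

%posttrans
# runs after the entire transaction, so the new unit file is guaranteed to be in place
/usr/bin/systemctl daemon-reload >/dev/null 2>&1 || :
/usr/bin/systemctl enable xxx.service >/dev/null 2>&1 || :
/usr/bin/systemctl restart xxx.service >/dev/null 2>&1 || :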

Related

How do you get a shell script to run a function with no input

I have an assignment to write a shell script that (among other things), when executed without any arguments, will perform the following steps:
Update all system packages
Install the Nginx software package
Configure nginx to automatically start at system boot up.
Copy the website documents to the web document root directory.
Start the Nginx service.
I have written the tasks that the script is supposed to do, but I don't know how to structure the script so that it performs these tasks ONLY when it is run without any arguments.
Whenever I run the script, with or without an argument, these tasks are executed. I've wrapped the tasks in a function, but I don't know how to make it run only when the script is executed without an argument:
#!/bin/bash
# assign variables
ACTION=${1}
Version=1.0.0

function default() {
    sudo yum update -y
    sudo yum install httpd -y
    sudo yum install git -y
    sudo amazon-linux-extras install nginx1.12 -y
    sudo systemctl start nginx.service
    sudo systemctl enable nginx.service
    sudo aws s3 cp s3://index.html /usr/share/nginx/html/index.html
}
...
case "$ACTION" in
-h|--help)
display_help
;;
-r|--remove)
script_r_function
;;
-v|--version)
;;
show_version "Version"
;;
default
*)
echo "Usage ${0} {-h|-r|-v}"
exit 1
esac
Wrap your "case" code inside an if/else:
#!/bin/bash
# assign variables
ACTION=${1}

if [ "$#" -eq 0 ]; then
    echo "no arguments"
else
    case "$ACTION" in
        -h|--help)
            echo "help"
            ;;
        -r|--remove)
            echo "remove"
            ;;
        -v|--version)
            echo "version"
            ;;
        *)
            echo "wrong arguments"
            exit 1
    esac
fi
When I give no arguments:
./boo
no arguments
When I give a wrong argument:
./boo -wrong
wrong arguments
When I give a valid argument (e.g. version):
./boo -v
version
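Applied to the original script, the no-argument branch would simply call the default function. A sketch (the echo placeholders stand in for the question's display_help, script_r_function and show_version helpers):

#!/bin/bash
ACTION=${1}
Version=1.0.0

function default() {
    # installation steps from the question
    sudo yum update -y
    sudo amazon-linux-extras install nginx1.12 -y
    sudo systemctl enable nginx.service
    sudo systemctl start nginx.service
    sudo aws s3 cp s3://index.html /usr/share/nginx/html/index.html
}

if [ "$#" -eq 0 ]; then
    default                                  # no arguments: run the install tasks
else
    case "$ACTION" in
        -h|--help)    echo "help" ;;         # replace with display_help
        -r|--remove)  echo "remove" ;;       # replace with script_r_function
        -v|--version) echo "$Version" ;;     # replace with show_version
        *)            echo "Usage ${0} {-h|-r|-v}"; exit 1 ;;
    esac
fi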

Ubuntu: script to run a series of commands one by one automatically

I am a new Ubuntu user.
Recently, I have been trying to set up a server on Ubuntu.
I am wondering how to write a script that runs a series of commands one by one automatically.
For example, I need to install squid first, and after that I need to make a copy of the config file and then modify it. The following are the steps I typed at the command console. I wonder how to make a script that runs them automatically.
sudo apt-get install squid -y;
cd /etc/squid3;
sudo cp squid.conf squid.conf.bak;
sudo rm -rf squid.conf;
sudo nano squid.conf
Just add a shebang, place everything in a ".sh" file, make the file executable, and run it...
Save this as test.sh
#!/bin/bash
sudo apt-get install squid -y;
cd /etc/squid3;
sudo cp squid.conf squid.conf.bak;
sudo rm -rf squid.conf;
sudo nano squid.conf
Make it executable: chmod +x test.sh
Run it: ./test.sh
To edit the file from a terminal
Get a terminal on the box where you want the script to live. Probably you will SSH into it.
Then just cd to the path you want the script to live and do the following...
nano test.sh (this opens the nano terminal text editor).
Copy the above test.sh commands; make sure to include the shebang (#!/bin/bash).
Paste the script into the nano editor; you'll need to use ctrl+v or cmd+v.
Hit the key combination ctrl + o, then hit the enter key to save.
Hit the key combination ctrl + x. This exits nano. Then proceed with the above instructions.
I suggest you read up on nano so you can get more familiar with its abilities as it can save a lot of time!
I have written some scripts for my VPS, and this is an example for Squid3:
#!/bin/bash

function add_user () {
    while true; do
        echo -e "\nInsert a name for the Squid3 user (0=exit): \c"
        read utente
        case "$utente" in
            0)
                echo -e "\nGoodbye $USER!\n"
                exit 0
                ;;
            *\ *)
                echo -e "\nYou can't use spaces in the name!"
                sleep 2
                continue
                ;;
            *)
                break
                ;;
        esac
    done
    if [ ! -e '/etc/squid3/.passwd' ]; then
        sudo htpasswd -c /etc/squid3/.passwd $utente
    else
        sudo htpasswd /etc/squid3/.passwd $utente
    fi
}

function installer () {
    sudo apt-get install squid3 apache2-utils -y
    sudo bash -c "echo 'here
you
must
paste
your
configuration
file' > /etc/squid3/squid.conf"
    sudo service squid3 restart
}

if ! [ "$(sudo which squid3)" ]; then
    installer
    add_user
else
    add_user
fi
On the first run it installs squid3 and apache2-utils (for htpasswd) and then creates a new user.
If you run it again, you can add more users.

Running bash script in another directory

I know this question has been asked many times before but I have not been able to get my code working.
I am using the Raspberry Pi 3, with a CAN-BUS Shield. As this will be going into a production environment I need the Pi setup to be nice and easy. I have started to write a bash script so the production staff can run the script and the Pi will update and install everything it needs from the one script.
I have been following this web site https://harrisonsand.com/can-on-the-raspberry-pi/ and I have run into a problem when it comes to compiling can-utils.
I am able to clone the can-utils.git from here https://github.com/linux-can/can-utils.git
by using sudo git clone https://github.com/linux-can/can-utils.git
but my issues come when I need to run ./autogen.sh and ./configure, as these are located in the can-utils directory.
If I run this from the Pi terminal as described on the website, it works fine: I change directory with cd can-utils and then just run sudo ./autogen.sh, but it isn't working when I run it from the bash script.
Below is the script I have so far. I know that most of it is commented out; this is so that I can test each part as I write it and don't need to constantly download and install stuff I already have.
#!/bin/bash
## Change Password
#printf "***********************************************************************\n"
#printf "Changing Password\n"
#echo "pi:***********" | sudo chpasswd # Password hidden
#sleep 1
#printf "Password Changed\n"
## Update & Upgrade Pi
#printf "***********************************************************************\n"
#printf "Update & Upgrade Pi\n\n"
#sudo apt-get update && sudo apt-get upgrade -y
#sleep 1
## Upgrade dist
#printf "***********************************************************************\n"
#printf "Upgrade Dist\n\n"
#sudo apt-get dist-upgrade -y
#sleep 1
## Install libtools
#printf "***********************************************************************\n"
#printf "Installing libtools\n\n"
#sudo apt-get install git autoconf libtool -y
#sleep 1
## Download required files
#printf "***********************************************************************\n"
#printf "Downloading required files\n\n"
## can-utils
#sudo git clone https://github.com/linux-can/can-utils.git
#sleep 1
## Auto configure can-utils
printf "***********************************************************************\n"
printf "Auto Configure can-utils\n\n"
# Things I have tried and do not work
#(cd /c && exec /can-utils/autogen.sh)
#sudo source /can-utils/autogen.sh
#sudo ./can-utils.autogen.sh
sleep 1
When I try sudo ./can-utils.autogen.sh in the Pi terminal, the script starts to work, so I think this is roughly the right command, but then I get the error autoreconf: 'configure.ac' or 'configure.in' is required. These files are in the can-utils dir, but for some reason it can't find them. Please can someone help me; I have been searching for the answer for the last 2 days.
Thank you for your help. Rightly or wrongly, I have ended up using cd /home/pi/can-utils. I thought I had tried that before, but I think I had used cd ./can-utils, which didn't work.
sudo with scripts is, for me, a nightmare. I just read in the sudo man page on my Fedora 25:
Running shell scripts via sudo can expose the same kernel bugs that make setuid shell scripts unsafe on some operating systems (if your OS has a /dev/fd/ directory, setuid shell scripts are generally safe).
The sudo command is meant to protect the root account and to avoid running user-written scripts with root privileges.
If you keep using sudo, my advice would be to add a cd command at the top of your script:
cd /where_everything_is
to be sure you are in the right place.
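For the can-utils part of this particular script, that boils down to something like the following sketch (assuming the repository was cloned into /home/pi/can-utils, which is what the asker ended up using):

## Auto configure can-utils
printf "***********************************************************************\n"
printf "Auto Configure can-utils\n\n"
cd /home/pi/can-utils || exit 1   # autogen.sh and configure must be run from inside the source tree
sudo ./autogen.sh
sudo ./configure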
But sudo may well fight against you again!

bash script execution stops after 'sudo -i'

#!/bin/bash
sudo -i
cd /etc/apt/sources.list.d
echo "deb http://archive.ubuntu.com/ubuntu/ precise main restricted universe multiverse" >ia32-libs-precise.list
sudo apt-get update
sudo apt-get install ia32-libs
sudo rm /etc/apt/sources.list.d/ia32-libs-raring.list
sudo apt-get update
When I execute this script, it just does 'sudo -i' and then stops. Who can help me?
The sudo manpage says:
-i, --login
Run the shell specified by the target user's password database entry as a login shell.
...
If no command is specified, an interactive shell is executed.
No wonder the execution of your script stops.
The commands
cd /etc/apt/sources.list.d
...
sudo apt-get update
are never reached because you have just spawned a new shell with sudo -i.
As @mona_sax suggested in a comment, running a script with sudo may not be a good idea from a security standpoint. It's not clear what your actual intention is, but if the intention is to run the script in the background, then remove the sudo -i line and do:
./script >/dev/null 2>&1 &
Because you don't specify a command to run as root, sudo invokes an interactive shell. It won't terminate until you exit from it (or it is killed by a signal, etc).
If you need it to return immediately, you could pass true as the command:
sudo true
However, in your case, it's probably better, given what you're doing, to just limit the script to only superusers:
#!/bin/sh
set -e

# check we are running as root
if [ $(id -u) != "0" ]; then
    echo "ERROR: this script must be run as a superuser" >&2
    exit 1
fi
Then it is up to the user to gain appropriate privileges, rather than encoding that into the script.
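With that check in place, the rest of the original script no longer needs sudo (or sudo -i) at all, since the whole script is already running as root. A sketch using the commands from the question:

cd /etc/apt/sources.list.d
echo "deb http://archive.ubuntu.com/ubuntu/ precise main restricted universe multiverse" > ia32-libs-precise.list
apt-get update
apt-get install ia32-libs
rm /etc/apt/sources.list.d/ia32-libs-raring.list
apt-get update

The user then invokes it with sudo ./script (or from a root shell).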

apt-get not working after setting proxy in Ubuntu Server

Hi, I'm using Ubuntu Server.
I have set the proxies using:
echo "export http_proxy='http://username:password@proxyIP:port/'" | tee -a ~/.bashrc
I've set up http, https and ftp proxies. wget works fine but apt-get does not connect.
Please help!
I don't know about apt-get but I've never got yum to work with the environment variables. Instead, I've had to set the proxy via its config.
Here's a related post for the apt-get conf:
https://askubuntu.com/questions/89437/how-to-install-packages-with-apt-get-on-a-system-connected-via-proxy
Also... how awesome are plain-text passwords sitting in a file that's world-readable by default! (or in environment variables and bash history, for that matter)
You need to update .bashrc and apt.conf for this to work.
The following link explains this in detail:
http://codewithgeeks.blogspot.in/2013/11/configure-apt-get-to-work-from-behind.html
apt-get is set to ignore the system default proxy settings.
To set the proxy you have to edit the /etc/apt/apt.conf file and add the following lines:
Acquire::http::proxy "http://username:password@proxyIP:port/";
Acquire::ftp::proxy "ftp://username:password@proxyIP:port/";
Acquire::https::proxy "https://username:password@proxyIP:port/";
You can also create scripts to set and unset this whenever you want.
Create a script aptProxyOn.sh:
if [ $(id -u) -ne 0 ]; then
    echo "This script must be run as root";
    exit 1;
fi
if [ $# -eq 2 ]
then
    printf \
"Acquire::http::proxy \"http://$1:$2/\";\n\
Acquire::ftp::proxy \"ftp://$1:$2/\";\n\
Acquire::https::proxy \"https://$1:$2/\";\n" > /etc/apt/apt.conf.d/01proxies;
    sudo cp /etc/apt/apt.conf.d/01proxies /etc/apt/apt.conf
else
    printf "Usage $0 <proxy_ip> <proxy_port>\n";
fi
To remove the proxy, create a script named aptProxyOff.sh:
if [ $(id -u) -ne 0 ]; then
    echo "This script must be run as root";
    exit 1;
fi
printf "" > /etc/apt/apt.conf.d/01proxies;
sudo cp /etc/apt/apt.conf.d/01proxies /etc/apt/apt.conf
Make both files executable: chmod +x aptProxyOn.sh aptProxyOff.sh
You have to run them in the following way.
Proxy On -
sudo ./aptProxyOn.sh username:password@proxyIP port
Proxy Off -
sudo ./aptProxyOff.sh
Tip:
If you have @ in your username or password, it will not work directly.
You have to use the URL encoding of @, which is %40. But when passing it as a command-line argument you can't use %40 directly; you have to use %%40, because the script feeds the arguments through printf.
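For example, with a hypothetical password pa@ss the call would look like:
sudo ./aptProxyOn.sh username:pa%%40ss@proxyIP port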
This worked out best for me:
Switch into the /etc/apt directory.
Edit the following file (if the file is not present, this command will create it):
gedit apt.conf.d/apt.conf
and add the following lines:
Acquire::http::proxy "http://[proxy-server]:[port]";
Acquire::https::proxy "https://[proxy-server]:[port]";
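For example, with a hypothetical proxy at proxy.example.com listening on port 3128 (no authentication), the lines would read:
Acquire::http::proxy "http://proxy.example.com:3128";
Acquire::https::proxy "https://proxy.example.com:3128";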
