Hi, I'm using Ubuntu Server.
I have set the proxies using:
echo "export http_proxy='http://username:password@proxyIP:port/'" | tee -a ~/.bashrc
I've set up http, https, and ftp proxies. wget works fine, but apt-get does not connect.
Please help!
I don't know about apt-get, but I've never gotten yum to work with the environment variables. Instead, I've had to set the proxy via its config.
Here's a related post for the apt-get conf:
https://askubuntu.com/questions/89437/how-to-install-packages-with-apt-get-on-a-system-connected-via-proxy
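For the yum case mentioned above, the proxy settings live in /etc/yum.conf; a sketch with placeholder values (proxy, proxy_username, and proxy_password are standard yum options, but the host and credentials here are illustrative):

```
# /etc/yum.conf -- under the [main] section
proxy=http://proxyIP:port
proxy_username=username
proxy_password=password
```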
Also... how awesome is it to have plain-text passwords sitting in a file that's world-readable by default! (or in environment variables and bash history, for that matter)
You need to update .bashrc and apt.conf for this to work.
The following link explains this in detail:
http://codewithgeeks.blogspot.in/2013/11/configure-apt-get-to-work-from-behind.html
apt-get is set to ignore the system default proxy settings.
To set the proxy, you have to edit the /etc/apt/apt.conf file and add the following lines:
Acquire::http::proxy "http://username:password@proxyIP:port/";
Acquire::ftp::proxy "ftp://username:password@proxyIP:port/";
Acquire::https::proxy "https://username:password@proxyIP:port/";
You can also create scripts to set and unset this whenever you want.
Create a script aptProxyOn.sh:
#!/bin/bash
if [ "$(id -u)" -ne 0 ]; then
echo "This script must be run as root"
exit 1
fi
if [ $# -eq 2 ]
then
printf \
"Acquire::http::proxy \"http://$1:$2/\";\n\
Acquire::ftp::proxy \"ftp://$1:$2/\";\n\
Acquire::https::proxy \"https://$1:$2/\";\n" > /etc/apt/apt.conf.d/01proxies
cp /etc/apt/apt.conf.d/01proxies /etc/apt/apt.conf
else
printf "Usage: %s <proxy_ip> <proxy_port>\n" "$0"
fi
To remove the proxy, create a script named aptProxyOff.sh:
#!/bin/bash
if [ "$(id -u)" -ne 0 ]; then
echo "This script must be run as root"
exit 1
fi
printf "" > /etc/apt/apt.conf.d/01proxies
cp /etc/apt/apt.conf.d/01proxies /etc/apt/apt.conf
Make both files executable: chmod +x aptProxyOn.sh aptProxyOff.sh
You have to run them in the following way.
Proxy On -
sudo ./aptProxyOn.sh username:password@proxyIP port
Proxy Off -
sudo ./aptProxyOff.sh
Tip:
If you have @ in your username or password, it will not work directly.
You have to use the URL encoding of @, which is %40. But when passing it as a command-line argument to these scripts you can't use %40; you have to use %%40, because the argument ends up inside a printf format string, where % is special.
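A small sketch of that encoding step (the password value is purely illustrative):

```shell
# Percent-encode '@' in a password before embedding it in a proxy URL:
pass='p@ssword'
enc=$(printf '%s' "$pass" | sed 's/@/%40/g')
echo "http://username:${enc}@proxyIP:8080/"
```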
This worked out best for me:
Switch into the /etc/apt directory.
Edit the following file; if it is not present, this command will create it:
gedit apt.conf.d/apt.conf
and add the following lines:
Acquire::http::proxy "http://[proxy-server]:[port]";
Acquire::https::proxy "https://[proxy-server]:[port]";
I have a problem where the command "sudo systemctl start xxx.service" in my SPEC file does not work when upgrading my RPM package. The following is the %post script in my SPEC file:
%post
echo "---------------------------- post $1 -------------------------------"
# refresh installation
if [ $1 == 1 ]; then
sudo echo "Installation finished."
# upgrade installation
elif [ $1 -gt 1 ]; then
sudo echo "Starting service xxx.service..."
sudo /usr/bin/systemctl enable xxx.service > /dev/null 2>&1
sudo /usr/bin/systemctl start xxx.service
sleep 10
sudo echo "Finished."
fi
exit 0
I'm sure that the service file already exists in the directory /usr/lib/systemd/system, and I can start the service manually using the command "sudo systemctl start xxx.service".
I also found that the "sleep 10" command does not work either.
Any suggestions about this issue would be much appreciated, thanks.
A few issues:
You're not supposed to use sudo in scriptlets, because (1) it may not be installed, and (2) the rpm installation runs as superuser anyway.
You should use the standard RPM macros for systemd instead of reinventing the wheel.
Essentially it boils down to:
%{?systemd_requires}
BuildRequires: systemd
# ...
%post
%systemd_post %{name}.service
%preun
%systemd_preun %{name}.service
%postun
%systemd_postun_with_restart %{name}.service
# ...
Note that the systemd macros on CentOS/RHEL are in the systemd package, while on Fedora they are now in systemd-rpm-macros.
Placing the service startup command in the "%posttrans" scriptlet resolved my problem, thanks for all your suggestions.
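For reference, a minimal sketch of such a %posttrans scriptlet; the service name is the asker's placeholder, and the redirections and "|| :" guards are illustrative conventions, not part of the systemd macros:

```
%posttrans
/usr/bin/systemctl daemon-reload >/dev/null 2>&1 || :
/usr/bin/systemctl start xxx.service >/dev/null 2>&1 || :
```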
I am a new Ubuntu user.
Recently, I have been trying to set up a server on Ubuntu.
I am wondering how to write a script that runs a series of commands automatically, one by one.
For example, I need to install squid first; after that I need to make a copy of the config file and then modify it. The following are the steps I type in the command console. I wonder how to make a script that runs them automatically.
sudo apt-get install squid -y;
cd /etc/squid3;
sudo cp squid.conf squid.conf.bak;
sudo rm -rf squid.conf;
sudo nano squid.conf
Just add a shebang, place everything in a ".sh" file, make the file executable, and run it...
Save this as test.sh
#!/bin/bash
sudo apt-get install squid -y;
cd /etc/squid3;
sudo cp squid.conf squid.conf.bak;
sudo rm -rf squid.conf;
sudo nano squid.conf
Make it executable: chmod +x test.sh
Run it: ./test.sh
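One caveat: the final `sudo nano squid.conf` step is interactive, so the script will stop and wait there. For a fully unattended script, the edit can be done non-interactively with a heredoc; this sketch writes to a throwaway path (the path and directives are illustrative, not real squid configuration advice):

```shell
# Append configuration lines without opening an editor:
conf=/tmp/squid.conf.demo
: > "$conf"                  # start with an empty file
cat >> "$conf" <<'EOF'
http_port 3128
cache_mem 256 MB
EOF
lines=$(grep -c '^' "$conf") # the file now has 2 lines
echo "wrote $lines lines to $conf"
```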
To edit the file from a terminal
Get a terminal on the box where you want the script to live. Probably you will SSH into it.
Then just cd to the path you want the script to live and do the following...
nano test.sh This opens the nano terminal text editor.
Copy the above test.sh commands, making sure to include the shebang (#!/bin/bash).
Paste the script into the nano editor; you'll need to use ctrl+v or cmd+v.
Hit the key combination ctrl + o, then hit the enter key. This saves the file.
Hit the key combination ctrl + x. This exits nano. Proceed with the above instructions.
I suggest you read up on nano so you can get more familiar with its abilities as it can save a lot of time!
I have written some scripts for my VPS, and this is an example for Squid3:
#!/bin/bash
function add_user () {
while true; do
echo -e "\nInsert a name for the Squid3 user (0=exit): \c"
read utente
case "$utente" in
0)
echo -e "\nGoodbye $USER!\n"
exit 0
;;
*\ *)
echo -e "\nYou can't use spaces in the name!"
sleep 2
continue
;;
*)
break
;;
esac
done
if [ ! -e '/etc/squid3/.passwd' ]; then
sudo htpasswd -c /etc/squid3/.passwd "$utente"
else
sudo htpasswd /etc/squid3/.passwd "$utente"
fi
}
function installer () {
sudo apt-get install squid3 apache2-utils -y
sudo bash -c "echo 'here
you
must
paste
your
configuration
file' > /etc/squid3/squid.conf"
sudo service squid3 restart
}
if ! [ "$(sudo which squid3)" ]; then
installer
add_user
else
add_user
fi
The first time you run it, it installs squid3 and apache2-utils (for htpasswd) and then creates a new user.
If you run it again, you can add more users.
I have this script:
#!/bin/bash
VAR="eric.sql"
sudo mysqldump -c -u username -p1234 dbname > $VAR
But if I run this script, I get this error:
: Protocol error 3: mysql-export.sh: cannot create eric.sql
But if I don't use the variable and instead write this:
#!/bin/bash
VAR="eric.sql"
sudo mysqldump -c -u username -p1234 dbname > eric.sql
... it works fine. What am I doing wrong?
The problem was that the script had Windows-style line breaks (I used Notepad). After I used nano to write the script, it was solved.
Thanks for the answers!
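For anyone hitting the same thing: the stray carriage returns can be detected and stripped from the shell. The file name here is just an example created for the demonstration:

```shell
# Write a file with Windows (CRLF) line endings to demonstrate detection:
printf 'VAR="eric.sql"\r\n' > /tmp/crlf-demo.sh

# grep for a literal carriage return; bash's $'\r' produces one:
if grep -q $'\r' /tmp/crlf-demo.sh; then
    echo "CRLF line endings found in /tmp/crlf-demo.sh"
fi

# Strip the carriage returns in place:
sed -i 's/\r$//' /tmp/crlf-demo.sh
```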
sudo can change the $PATH variable, depending on your security policy.
-E  The -E (preserve environment) option will override the env_reset
    option in sudoers(5). It is only available when either the matching
    command has the SETENV tag or the setenv option is set in sudoers(5).
You could add the full path of the file, or remove sudo in that script.
This should also work:
sudo PATH="$PATH" mysqldump -c -u username -p1234 dbname > "$VAR"
I made a web server to show my page locally, because it is located in a place with a poor connection. What I want to do is download the page content and replace the old one, so I made this script to run in the background, but I am not very sure it will work 24/7 (the 2m is just to test it; I want it to wait 6-12 hrs). So, what do you think about this script? Is it insecure? Or is it enough for what I am doing? Thanks.
#!/bin/bash
a=1;
while [ $a -eq 1 ]
do
echo "Starting..."
sudo wget http://www.example.com/web.zip --output-document=/var/www/content.zip
sudo unzip -o /var/www/content.zip -d /var/www/
sleep 2m
done
exit
UPDATE: This is the code I use now:
(It's just a prototype, but I intend to stop using sudo.)
#!/bin/bash
a=1;
echo "Start"
while [ $a -eq 1 ]
do
echo "Searching flag.txt"
if [ -e flag.txt ]; then
echo "Flag found, and erasing it"
sudo rm flag.txt
if [ -e /var/www/content.zip ]; then
echo "Erasing old content file"
sudo rm /var/www/content.zip
fi
echo "Downloading new content"
sudo wget ftp://user:password@xx.xx.xx.xx/content/newcontent.zip --output-document=/var/www/content.zip
sudo unzip -o /var/www/content.zip -d /var/www/
echo "Erasing flag.txt from ftp"
sudo ftp -nv < erase.txt
sleep 5s
else
echo "Downloading flag.txt"
sudo wget ftp://user:password@xx.xx.xx.xx/content/flag.txt
sleep 5s
fi
echo "Waiting..."
sleep 20s
done
exit 0
erase.txt
open xx.xx.xx.xx
user user password
cd content
delete flag.txt
bye
I would suggest setting up a cron job, this is much more reliable than a script with huge sleeps.
Brief instructions:
If you have write permissions for /var/www/, simply put the downloading in your personal crontab.
Run crontab -e, paste this content, save and exit from the editor:
17 4,16 * * * wget http://www.example.com/web.zip --output-document=/var/www/content.zip && unzip -o /var/www/content.zip -d /var/www/
Or you can run the downloading from system crontab.
Create the file /etc/cron.d/download-my-site and place this content into it:
17 4,16 * * * <USERNAME> wget http://www.example.com/web.zip --output-document=/var/www/content.zip && unzip -o /var/www/content.zip -d /var/www/
Replace <USERNAME> with a login that has suitable permissions for /var/www.
Or you can put all the necessary commands into single shell script like this:
#!/bin/sh
wget http://www.example.com/web.zip --output-document=/var/www/content.zip
unzip -o /var/www/content.zip -d /var/www/
and invoke it from crontab:
17 4,16 * * * /path/to/my/downloading/script.sh
This task will run twice a day: at 4:17 and 16:17. You can set another schedule if you'd like.
More on cron jobs, crontabs etc:
Add jobs into cron
CronHowto on Ubuntu
Cron(Wikipedia)
Simply unzipping the new version of your content over top of the old may not be the best solution. What if you remove a file from your site? The local copy will still have it. Also, with a zip-based solution, you're copying EVERY file each time, not just the files that have changed.
I recommend you use rsync instead, to synchronize your site content.
If you set your local documentroot to something like /var/www/mysite/, an alternative script might then look something like this:
#!/usr/bin/env bash
logtag="`basename $0`[$$]"
logger -t "$logtag" "start"
# Build an array of options for rsync
#
declare -a ropts
ropts=("-a")
ropts+=(--no-perms --no-owner --no-group)
ropts+=(--omit-dir-times)
ropts+=(--exclude '._*')
ropts+=(--exclude '.DS_Store')
# Determine previous version
#
if [ -L /var/www/mysite ]; then
linkdest="$(stat -c"%N" /var/www/mysite)"
linkdest="${linkdest##*\`}"
ropts+=("--link-dest=${linkdest%\'}")
fi
now="$(date '+%Y%m%d-%H:%M:%S')"
# Only refresh our copy if flag.txt exists
#
statuscode=$(curl --silent --output /dev/stderr --write-out "%{http_code}" http://www.example.com/flag.txt)
if [ ! "$statuscode" = 200 ]; then
logger -t "$logtag" "no update required"
exit 0
fi
if ! rsync "${ropts[@]}" user@remoteserver:/var/www/mysite/ /var/www/"$now"; then
logger -t "$logtag" "rsync failed ($now)"
exit 1
fi
# Everything is fine, so update the symbolic link and remove the flag.
#
ln -sfn /var/www/"$now" /var/www/mysite
ssh user@remoteserver rm -f /var/www/flag.txt
logger -t "$logtag" "done"
This script uses a few external tools that you may need to install if they're not already on your system:
rsync, which you've already read about,
curl, which could be replaced with wget .. but I prefer curl
logger, which is probably installed on your system along with syslog or rsyslog, or may be part of the "util-linux" package depending on your Linux distro.
rsync provides a lot of useful functionality. In particular:
it tries to copy only what has changed, so that you don't waste bandwidth on files that are the same,
the --link-dest option lets you refer to previous directories to create "links" to files that have not changed, so that you can have multiple copies of your directory with only single copies of unchanged files.
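The disk-space trick behind --link-dest is the hard link: two directory entries sharing one inode, so an unchanged file in a new snapshot costs no extra data. A minimal demonstration with plain ln:

```shell
# Hard links are the mechanism rsync's --link-dest uses for unchanged files:
tmp=$(mktemp -d)
echo "content" > "$tmp/v1.txt"
ln "$tmp/v1.txt" "$tmp/v2.txt"     # second name, same inode, no extra data
links=$(stat -c %h "$tmp/v1.txt")  # link count is now 2
echo "link count: $links"
rm -r "$tmp"
```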
In order to make this go, both the rsync part and the ssh part, you will need to set up SSH keys that allow you to connect without requiring a password. That's not hard, but if you don't know about it already, it's the topic of a different question .. or a simple search with your favourite search engine.
You can run this from a crontab every 5 minutes:
*/5 * * * * /path/to/thisscript
If you want to run it more frequently, note that the "traffic" you will be using for every check that does not involve an update is an HTTP GET of the flag.txt file.
Is anybody familiar with the etcd project? (Or we can forget the project itself when talking about this issue.) The issue is:
$ build
ln: `gopath/src/github.com/coreos/etcd': cannot overwrite directory
when executing the build shell script,
and the content is:
#!/bin/sh -e
if [ ! -h gopath/src/github.com/coreos/etcd ]; then
mkdir -p gopath/src/github.com/coreos/
ln -s ../../../.. gopath/src/github.com/coreos/etcd
fi
export GOBIN=${PWD}/bin
export GOPATH=${PWD}/gopath
export GOFMTPATH="./bench ./config ./discovery ./etcd ./error ./http ./log main.go ./metrics ./mod ./server ./store ./tests"
# Don't surprise user by formatting their codes by stealth
if [ "--fmt" = "$1" ]; then
gofmt -s -w -l $GOFMTPATH
fi
go install github.com/coreos/etcd
go install github.com/coreos/etcd/bench
Some additional info:
My system is Windows 7.
I run the script in Git Bash.
To reproduce the issue:
step 1: open Git Bash
step 2: git clone git@github.com:coreos/etcd.git
step 3: cd etcd
step 4: build
As mentioned in "Git Bash Shell fails to create symbolic links" (since you are using the script in a git bash on Windows 7)
the ln that shipped with msysGit simply tries to copy its arguments, rather than fiddle with links. This is because links only work (sort of) on NTFS filesystems, and the MSYS team didn't want to reimplement ln.
A workaround is to run mklink from Bash.
This also allows you to create either a Symlink or a Junction.
So 'ln' wouldn't work as expected by default, in the old shell that ships with Git for Windows.
Here's a solution. To be honest it is a workaround, but since you're on Windows, I don't see another way.
Start a command prompt and change into the directory containing the script. There should be a path gopath/src/github.com/coreos/ (if there is no such path, you must create it). Next, issue the command:
mklink /D "gopath/src/github.com/coreos/etcd" "../../../../"
Next you should edit the build script to delete the lines that create the symlink and the directory. E.g.:
#!/bin/sh -e
export GOBIN=${PWD}/bin
export GOPATH=${PWD}/gopath
export GOFMTPATH="./bench ./config ./discovery ./etcd ./error ./http ./log main.go ./metrics ./mod ./server ./store ./tests"
# Don't surprise user by formatting their codes by stealth
if [ "--fmt" = "$1" ]; then
gofmt -s -w -l $GOFMTPATH
fi
go install github.com/coreos/etcd
go install github.com/coreos/etcd/bench
Note that I just removed four lines of code. Now run the script, and it should work.
You shouldn't be using git clone and the build sh script. Use the go get command instead. For example, on Windows 7:
Microsoft Windows [Version 6.1.7601]
C:\>set gopath
GOPATH=C:\gopath
C:\>go version
go version go1.3 windows/amd64
C:\>go get -v -u github.com/coreos/etcd
github.com/coreos/etcd (download)
github.com/coreos/etcd/third_party/bitbucket.org/kardianos/osext
github.com/coreos/etcd/pkg/strings
github.com/coreos/etcd/error
github.com/coreos/etcd/third_party/github.com/coreos/go-etcd/etcd
github.com/coreos/etcd/http
github.com/coreos/etcd/third_party/github.com/coreos/go-log/log
github.com/coreos/etcd/third_party/github.com/rcrowley/go-metrics
github.com/coreos/etcd/mod/dashboard/resources
github.com/coreos/etcd/log
github.com/coreos/etcd/third_party/github.com/gorilla/context
github.com/coreos/etcd/third_party/github.com/gorilla/mux
github.com/coreos/etcd/mod/dashboard
github.com/coreos/etcd/discovery
github.com/coreos/etcd/pkg/btrfs
github.com/coreos/etcd/pkg/http
github.com/coreos/etcd/third_party/code.google.com/p/gogoprotobuf/proto
github.com/coreos/etcd/mod/leader/v2
github.com/coreos/etcd/mod/lock/v2
github.com/coreos/etcd/metrics
github.com/coreos/etcd/third_party/github.com/mreiferson/go-httpclient
github.com/coreos/etcd/mod
github.com/coreos/etcd/third_party/github.com/BurntSushi/toml
github.com/coreos/etcd/third_party/github.com/goraft/raft/protobuf
github.com/coreos/etcd/third_party/github.com/goraft/raft
github.com/coreos/etcd/store
github.com/coreos/etcd/server/v1
github.com/coreos/etcd/server/v2
github.com/coreos/etcd/store/v2
github.com/coreos/etcd/server
github.com/coreos/etcd/config
github.com/coreos/etcd/etcd
github.com/coreos/etcd
C:\>