I want to take a snapshot of one database running in app www and put it into app staging. When I do that with clone or create/import, none of the data is available.
How am I meant to do it?
matt@server:~$ dokku run www curl http://www:password@dokku-couchdb-www:5555/www
{"db_name":"www","doc_count":4966,"doc_del_count":232,"update_seq":46475,"purge_seq":0,"compact_running":false,"disk_size":3071180923,"data_size":334987077,"instance_start_time":"1500006610823893","disk_format_version":6,"committed_update_seq":46475}
So from that you can see there are 4966 documents.
matt#server:~$ dokku couchdb:clone www staging_www
-----> Starting container
Waiting for container to be ready
=====> CouchDB container created: staging_www
DSN: http://staging_www:password@dokku-couchdb-staging-www:5555/staging_www
-----> Copying data from www to staging_www
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1110M 0 1110M 0 0 30.4M 0 --:--:-- 0:00:36 --:--:-- 31.9M
-----> Done
So there are no errors in the clone. Then I run
dokku couchdb:link staging_www staging
dokku couchdb:promote staging_www staging
And there are no errors, but if I check the DB:
matt@server:~$ dokku run staging curl http://staging_www:password@dokku-couchdb-staging-www:5555/staging_www
{"db_name":"staging_www","doc_count":1,"doc_del_count":0,"update_seq":1,"purge_seq":0,"compact_running":false,"disk_size":4188,"data_size":342,"instance_start_time":"1509536857606369","disk_format_version":6,"committed_update_seq":1}
The doc count is 1 and I can't access any of the data in the staging app.
I have also tried
dokku couchdb:export www > www.couch
dokku couchdb:create staging_www
dokku couchdb:import staging_www < www.couch
dokku couchdb:link staging_www staging
dokku couchdb:promote staging_www staging
There are no errors, but again I end up with 1 doc in the database.
What am I meant to do?
This is with dokku 0.9.4 and the dokku couchdb service plugin 1.0.0.
The solution is quite simple. If the first attempt
root@dokku01:~# dokku couchdb:clone www staging_www
fails to clone the db, you need to destroy staging_www:
root@dokku01:~# dokku couchdb:destroy staging_www
and then run the clone again:
root@dokku01:~# dokku couchdb:clone www staging_www
Now it works as expected. You can check the new db with:
root@dokku01:~# curl -X GET 'http://staging_www:password@dokku-couchdb-staging-www:5555/staging_www/_all_docs?include_docs=true&attachments=true'
Exporting from www and then importing the dump into a newly created staging_www also works.
It's a bug in the CouchDB plugin, and it will be quite fun to find its root cause.
UPDATE
The root cause of this bug is the bash backup-and-restore script 'couchdb-backup'. In some cases the curl calls inside the script fail to reach the db, and consequently the restore operation doesn't work.
Cloning a db means: create a new instance, export (backup) the old data, and then import (restore) that backup into the new db.
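For reference, a rough manual equivalent of those three steps, using the plugin subcommands already shown in the question, is:
dokku couchdb:create staging_www
dokku couchdb:export www > www.couch
dokku couchdb:import staging_www < www.couch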
The script below is an updated 'clone' script for the couchdb service plugin, made more resilient to 'couchdb-backup' import quirks.
#!/usr/bin/env bash
source "$(dirname "$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)")/config"
set -eo pipefail; [[ $DOKKU_TRACE ]] && set -x
source "$PLUGIN_BASE_PATH/common/functions"
source "$(dirname "$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)")/functions"
service-clone-cmd() {
  #E you can clone an existing service to a new one
  #E dokku $PLUGIN_COMMAND_PREFIX:clone lolipop lolipop-2
  #A service, service to run command against
  #A new-service, name of new service
  declare desc="create container <new-name> then copy data from <name> into <new-name>"
  local cmd="$PLUGIN_COMMAND_PREFIX:clone" argv=("$@"); [[ ${argv[0]} == "$cmd" ]] && shift 1
  declare SERVICE="$1" NEW_SERVICE="$2" CLONE_FLAGS_LIST="${@:3}"

  [[ -z "$SERVICE" ]] && dokku_log_fail "Please specify a name for the service"
  [[ -z "$NEW_SERVICE" ]] && dokku_log_fail "Please specify a name for the new service"
  verify_service_name "$SERVICE"

  PLUGIN_IMAGE=$(service_version "$SERVICE" | grep -o "^.*:" | sed -r "s/://g")
  PLUGIN_IMAGE_VERSION=$(service_version "$SERVICE" | grep -o ":.*$" | sed -r "s/://g")
  service_create "$NEW_SERVICE" "${@:3}"

  dokku_log_info1 "Copying data from $SERVICE to $NEW_SERVICE"

  attempts=5
  attemptcount=0
  R=2
  succ_str=' Successfully.'
  until [[ $R = 0 || $R = 1 ]]; do
    attemptcount=$((attemptcount+1))
    STDOUT1=$(service_export "$SERVICE" | service_import "$NEW_SERVICE" 2>&1) || true
    if [[ ! "$STDOUT1" = *"${succ_str}" ]]; then
      if [ $attemptcount = $attempts ]; then
        R=1
        echo -e "\nERROR: CouchDB Import failed - Stopping\n"
      else
        echo -e "\nWARN: CouchDB Import Reported an error - Attempt ${attemptcount}/${attempts} - Retrying...\n"
        sleep 1
      fi
    else
      R=0
    fi
  done

  dokku_log_info1 "Done"
  exit $R
}

service-clone-cmd "$@"
The old version at /var/lib/dokku/plugins/available/couchdb/subcommands/clone should be replaced with this new one.
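For example, one way to swap the file in (a sketch, assuming you saved the script above as ./clone in the current directory):
# Keep a copy of the shipped subcommand, then install the patched one
sudo cp /var/lib/dokku/plugins/available/couchdb/subcommands/clone \
        /var/lib/dokku/plugins/available/couchdb/subcommands/clone.orig
sudo cp ./clone /var/lib/dokku/plugins/available/couchdb/subcommands/clone
sudo chmod +x /var/lib/dokku/plugins/available/couchdb/subcommands/clone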
I usually just use the couchdb dokku plugin found here.
Curl can have a tendency to time out with multiple files, so I use the plugin to stage. Easy peasy.
Related
I have a pipeline which pushes packages to a server. In the logs of that script I can see that even when there are errors in it (and when I check that server, the file doesn't appear there, which means there definitely is an error), the job still succeeds.
Below is part of the logs:
ECHO sudo aptly repo add stable abc.deb
Loading packages...
[!] Unable to process abc.deb: stat abc.deb: no such file or directory
[!] Some files were skipped due to errors:
abc.deb
ECHO sudo aptly snapshot create abc-stable_2023.01.02-09.23.36 from repo stable
Snapshot abc-stable_2023.01.02-09.23.36 successfully created.
You can run 'aptly publish snapshot abc-stable_2023.01.02-09.23.36' to publish snapshot as Debian repository.
ECHO sudo aptly publish -passphrase=12345 switch xenial abc-stable_2023.01.02-09.23.36
ERROR: some files failed to be added
Loading packages...
Generating metadata files and linking package files...
Finalizing metadata files...
gpg: WARNING: unsafe permissions on configuration file `/home/.gnupg/gpg.conf'
gpg: WARNING: unsafe enclosing directory permissions on configuration file `/home/.gnupg/gpg.conf'
Signing file 'Release' with gpg, please enter your passphrase when prompted:
Clearsigning file 'Release' with gpg, please enter your passphrase when prompted:
gpg: WARNING: unsafe permissions on configuration file `/home/.gnupg/gpg.conf'
gpg: WARNING: unsafe enclosing directory permissions on configuration file `/home/.gnupg/gpg.conf'
Cleaning up prefix "." components main...
Publish for snapshot ./xenial [amd64] publishes {main: [abc-stable_2023.01.02-09.23.36]: Snapshot from local repo [stable]: Repository} has been successfully switched to new snapshot.
Cleaning up project directory and file based variables
00:00
Job succeeded
Any idea how to fix this? It should fail if there is an error! Can anyone also explain why it behaves this way, given that by default a GitLab pipeline shows an error whenever a job fails?
Edit 1:
Here is the job which is causing the issue:
deploy-on:
  stage: deploy
  image: ubuntu:20.04
  before_script:
    - apt-get update
    - apt-get install sshpass -y
    - apt-get install aptly -y
    - apt-get install sudo -y
  script:
    - pOSOP=publisher
    - unstableOrStable=stable
    - chmod +x ./pushToServer.sh
    - ./pushToServer.sh
Here is pushToServer.sh:
#!/bin/bash
cat build.env
DebFileNameW=$(cat build.env | grep DebFileNameW | cut -d = -f2)
echo "DebFileNameW=" $DebFileNameW
sshpass -p pass ssh -oStrictHostKeyChecking=no $pOSOP '
echo "ECHO mkdir -p /home/packages/"
mkdir -p /home/packages/
exit
'
sshpass -p pass scp -oStrictHostKeyChecking=no build/$DebFileNameW.deb $pOSOP:/home/packages/
echo "making time"
file_name=$DebFileNameW
current_time=$(date "+%Y.%m.%d-%H.%M.%S")
sshpass -p pass ssh -t -oStrictHostKeyChecking=no $pOSOP '
echo "doing cd"
cd /home/packages
echo "ECHO sudo aptly repo add '$unstableOrStable' '$file_name'.deb"
sudo aptly repo add '$unstableOrStable' '$file_name'.deb
'
Edit 2:
At the second-to-last line in the pushToServer.sh file, i.e. after the line sudo aptly repo add '$unstableOrStable' '$file_name'.deb and before the last line, which is ', I added these two ways to get this done, but it is still not working:
Way 1:
if [[ ! $? -eq 0 ]]; then
  print_error "The last operation failed."
  exit 1
fi
Way 2:
retVal=$?
echo "ECHOO exit status" $retVal
if [ $retVal -ne 0 ]; then
  echo "<meaningful message>"
  exit $retVal
fi
Neither way works, and I still get the same error.
Output:
ECHO sudo aptly repo add stable abc.deb
Loading packages...
[!] Unable to process abc.deb: stat abc.deb: no such file or directory
ERROR: some files failed to be added
[!] Some files were skipped due to errors:
abc.deb
ECHO sudo aptly snapshot create abc-stable_2023.01.05-05.59.44 from repo stable
Snapshot abc-stable_2023.01.05-05.59.44 successfully created.
You can run 'aptly publish snapshot abc-stable_2023.01.05-05.59.44' to publish snapshot as Debian repository.
ECHO sudo aptly publish -passphrase=12345 switch xenial abc-stable_2023.01.05-05.59.44
Loading packages...
Generating metadata files and linking package files...
Finalizing metadata files...
Signing file 'Release' with gpg, please enter your passphrase when prompted:
gpg: WARNING: unsafe permissions on configuration file `/home/publisher/.gnupg/gpg.conf'
gpg: WARNING: unsafe enclosing directory permissions on configuration file `/home/publisher/.gnupg/gpg.conf'
gpg: WARNING: unsafe permissions on configuration file `/home/publisher/.gnupg/gpg.conf'
gpg: WARNING: unsafe enclosing directory permissions on configuration file `/home/publisher/.gnupg/gpg.conf'
Clearsigning file 'Release' with gpg, please enter your passphrase when prompted:
Cleaning up prefix "." components main...
Publish for snapshot ./xenial [amd64] publishes {main: [abc-stable_2023.01.05-05.59.44]: Snapshot from local repo [stable]: Repository} has been successfully switched to new snapshot.
ECHOO exit status 0
Cleaning up project directory and file based variables
00:00
Job succeeded
Please note: I added the echo "ECHOO exit status" $retVal statement, and it shows exit status 0, which means $? doesn't hold the right value itself. I expected $retVal, which is $?, to be 1 or something other than 0 (success) for this to work.
Any Pointers?
So I had been trying multiple different approaches mentioned in the comment replies and in my edits. Nothing worked. I was finally able to solve it the way mentioned here.
I just put these lines at the start of my script, right after #!/bin/bash:
#!/bin/bash
set -e
set -o pipefail
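For context, ssh exits with the exit status of the last command run on the remote side, so once set -e is active locally, a failing remote aptly call aborts the whole script and the CI job. A minimal sketch of how this fits together (host and paths are placeholders from the question; the remote set -e is an extra assumption so that earlier remote failures propagate too):
#!/bin/bash
set -e
set -o pipefail
# If the remote aptly call fails, ssh returns its non-zero status and,
# because of set -e above, the local script (and the job) fails as well.
sshpass -p pass ssh -oStrictHostKeyChecking=no "$pOSOP" '
  set -e
  cd /home/packages
  sudo aptly repo add stable abc.deb
'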
I used the following script (autoupdate.sh) to update my git repository automatically over SSH whenever I make a change to the local repository on a Raspberry Pi 3B.
#!/usr/bin/env bash
DATADIR="/home/pi/data"
cd "$DATADIR"
if [[ -n $(git status -s) ]]; then
  echo "Changes found. Pushing changes..."
  git add -A && git commit -m "$1: Update files" && git push origin main
else
  echo "No changes found. Skip pushing."
fi
Then I call a script measurement.sh that calls the above script whenever the internet is connected (I use a USB 4G dongle). Something like:
...
cd ~/data; bash autoupdate.sh $DATE
...
However, when I run sudo bash measurement.sh it encounters the error below (it makes the commit but does not push). Without sudo it works fine.
Permission denied (publickey)
...
I checked the GitHub document https://docs.github.com/en/github/authenticating-to-github/troubleshooting-ssh/error-permission-denied-publickey by regenerating the SSH key and verifying the public key, but it did not solve the problem. When I push commits in a separate terminal it works fine, so I don't think the issue is with the SSH key itself. I suspect that to run the script successfully with sudo, the SSH key must also have been generated with sudo in the first place.
What could be the reasons for it?
Without sudo it works fine.
So why use sudo in the first place?
As commented, using sudo alone means running the commands as root.
At the least, a sudo -u <auser> would mean the ~ in cd ~/data is resolved to the appropriate /home/auser instead of /root.
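A minimal sketch of both points (the pi user name is an assumption based on the paths in the question):
# Run the script as the original user instead of root, so SSH keys and
# git config come from /home/pi rather than /root:
sudo -u pi -H bash /home/pi/data/autoupdate.sh "$DATE"
# Quick way to compare which identity GitHub sees in each case:
ssh -T git@github.com
sudo ssh -T git@github.com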
With Azure DevOps, when using the hosted agents, is there a way to build a project/solution inside a container (Docker), then extract the build artifacts and publish them (not as a Docker image)?
You could certainly do that; the easiest way would be building with something like:
- script: |
    mkdir -p /docker-volume/npm
    cp -R $(Build.SourcesDirectory)/. /docker-volume/npm
    docker run -v /docker-volume/npm:/npm node:10.15 bash \
      -c "cd /npm && npm ci && npm run web-build"
    exitcode=$?
    if [ $exitcode -ne 0 ]; then
      rm -rf /docker-volume/npm
      exit $exitcode
    fi
    cp -R /docker-volume/npm/build $(Build.SourcesDirectory)
    rm -rf /docker-volume/npm
Basically: launch a container and map a volume into it, build inside the container and write the output to the volume, then grab the results from the volume and do what you need with them.
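An alternative, if you'd rather not manage a host directory for the volume, is to move files in and out of the container with docker cp; a sketch of the same idea (same image and build command, the container name npm-build is made up) that could go in a similar script step:
# Create (but don't start) the build container
docker create --name npm-build node:10.15 bash -c "cd /npm && npm ci && npm run web-build"
# Copy the sources in, then run the build (-a waits and propagates the exit code)
docker cp "$(Build.SourcesDirectory)/." npm-build:/npm
docker start -a npm-build
# Copy the build output back out and clean up
docker cp npm-build:/npm/build "$(Build.SourcesDirectory)/build"
docker rm npm-build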
I have a website running on a cloud server. Can I link the related files to my GitHub repository, so that whenever I make any changes to my website, they get auto-updated in my GitHub repository?
Assuming your cloud server runs an OS that supports bash scripts, add this file to your repository.
Let's say your files are located in /home/username/server and we name the file below /home/username/server/AUTOUPDATE.
#!/usr/bin/env bash
cd "$(dirname "${BASH_SOURCE[0]}")"
if [[ -n $(git status -s) ]]; then
  echo "Changes found. Pushing changes..."
  git add -A && git commit -m 'update' && git push
else
  echo "No changes found. Skip pushing."
fi
Then, add a scheduled task such as a crontab entry to run this script as frequently as you want your GitHub repository to be updated. It will check whether there are any changes first, and only commit and push if there are.
This will run the script once an hour, at minute 0 (cron's finest granularity is one minute, so it cannot run every second):
*/60 * * * * /home/username/server/AUTOUPDATE
Don't forget to give this file execute permission with chmod +x /home/username/server/AUTOUPDATE
This will always push the changes with the commit message of "update".
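If the fixed "update" message is too opaque for your history, one small variation is to embed a timestamp in the commit message, e.g.:
git add -A && git commit -m "update $(date '+%Y-%m-%d %H:%M:%S')" && git push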
I made a web server to show my page locally, because it is located in a place with a poor connection. What I want to do is download the page content and replace the old one, so I made this script that runs in the background, but I am not very sure it will work 24/7 (the 2m is just to test it; I want it to wait 6-12 hours). So, what do you think about this script? Is it insecure, or is it enough for what I am doing? Thanks.
#!/bin/bash
a=1;
while [ $a -eq 1 ]
do
  echo "Starting..."
  sudo wget http://www.example.com/web.zip --output-document=/var/www/content.zip
  sudo unzip -o /var/www/content.zip -d /var/www/
  sleep 2m
done
exit
UPDATE: This is the code I use now:
(It's just a prototype, but I intend to stop using sudo.)
#!/bin/bash
a=1;
echo "Start"
while [ $a -eq 1 ]
do
  echo "Searching flag.txt"
  if [ -e flag.txt ]; then
    echo "Flag found, and erasing it"
    sudo rm flag.txt
    if [ -e /var/www/content.zip ]; then
      echo "Erasing old content file"
      sudo rm /var/www/content.zip
    fi
    echo "Downloading new content"
    sudo wget ftp://user:password@xx.xx.xx.xx/content/newcontent.zip --output-document=/var/www/content.zip
    sudo unzip -o /var/www/content.zip -d /var/www/
    echo "Erasing flag.txt from ftp"
    sudo ftp -nv < erase.txt
    sleep 5s
  else
    echo "Downloading flag.txt"
    sudo wget ftp://user:password@xx.xx.xx.xx/content/flag.txt
    sleep 5s
  fi
  echo "Waiting..."
  sleep 20s
done
exit 0
erase.txt
open xx.xx.xx.xx
user user password
cd content
delete flag.txt
bye
I would suggest setting up a cron job; this is much more reliable than a script with huge sleeps.
Brief instructions:
If you have write permissions for /var/www/, simply put the downloading in your personal crontab.
Run crontab -e, paste this content, save and exit from the editor:
17 4,16 * * * wget http://www.example.com/web.zip --output-document=/var/www/content.zip && unzip -o /var/www/content.zip -d /var/www/
Or you can run the download from the system crontab.
Create the file /etc/cron.d/download-my-site and place this content into it:
17 4,16 * * * <USERNAME> wget http://www.example.com/web.zip --output-document=/var/www/content.zip && unzip -o /var/www/content.zip -d /var/www/
Replace <USERNAME> with a login that has suitable permissions for /var/www.
Or you can put all the necessary commands into a single shell script like this:
#!/bin/sh
wget http://www.example.com/web.zip --output-document=/var/www/content.zip
unzip -o /var/www/content.zip -d /var/www/
and invoke it from crontab:
17 4,16 * * * /path/to/my/downloading/script.sh
This task will run twice a day: at 4:17 and 16:17. You can set another schedule if you'd like.
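Since you mentioned wanting a 6-12 hour interval, an every-six-hours variant of the same job would be:
17 */6 * * * /path/to/my/downloading/script.sh
which runs at 0:17, 6:17, 12:17 and 18:17.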
More on cron jobs, crontabs etc:
Add jobs into cron
CronHowto on Ubuntu
Cron(Wikipedia)
Simply unzipping the new version of your content over the top of the old one may not be the best solution. What if you remove a file from your site? The local copy will still have it. Also, with a zip-based solution, you're copying EVERY file each time, not just the files that have changed.
I recommend you use rsync instead, to synchronize your site content.
If you set your local documentroot to something like /var/www/mysite/, an alternative script might then look something like this:
#!/usr/bin/env bash
logtag="`basename $0`[$$]"
logger -t "$logtag" "start"

# Build an array of options for rsync
#
declare -a ropts
ropts=("-a")
ropts+=(--no-perms --no-owner --no-group)
ropts+=(--omit-dir-times)
ropts+=(--exclude='._*')
ropts+=(--exclude='.DS_Store')

# Determine previous version
#
if [ -L /var/www/mysite ]; then
  linkdest="$(stat -c"%N" /var/www/mysite)"
  linkdest="${linkdest##*\`}"
  ropts+=(--link-dest="${linkdest%\'}")
fi

now="$(date '+%Y%m%d-%H:%M:%S')"

# Only refresh our copy if flag.txt exists
#
statuscode=$(curl --silent --output /dev/stderr --write-out "%{http_code}" http://www.example.com/flag.txt)
if [ ! "$statuscode" = 200 ]; then
  logger -t "$logtag" "no update required"
  exit 0
fi

if ! rsync "${ropts[@]}" user@remoteserver:/var/www/mysite/ /var/www/"$now"; then
  logger -t "$logtag" "rsync failed ($now)"
  exit 1
fi

# Everything is fine, so update the symbolic link and remove the flag.
#
ln -sfn /var/www/"$now" /var/www/mysite
ssh user@remoteserver rm -f /var/www/flag.txt

logger -t "$logtag" "done"
This script uses a few external tools that you may need to install if they're not already on your system:
rsync, which you've already read about,
curl, which could be replaced with wget .. but I prefer curl
logger, which is probably installed in your system along with syslog or rsyslog, or may be part of the "unix-util" package depending on your Linux distro.
rsync provides a lot of useful functionality. In particular:
it tries to copy only what has changed, so that you don't waste bandwidth on files that are the same,
the --link-dest option lets you refer to previous directories to create "links" to files that have not changed, so that you can have multiple copies of your directory with only single copies of unchanged files.
In order to make this work (both the rsync part and the ssh part), you will need to set up SSH keys that allow you to connect without requiring a password. That's not hard, but if you don't know about it already, it's the topic of a different question .. or a simple search with your favourite search engine.
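A minimal sketch of that key setup (the user and host names are the placeholders used above):
# Generate a key without a passphrase for unattended use, then install it remotely
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
ssh-copy-id user@remoteserver
# Verify that passwordless login works
ssh user@remoteserver true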
You can run this from a crontab every 5 minutes:
*/5 * * * * /path/to/thisscript
If you want to run it more frequently, note that the "traffic" you will be using for every check that does not involve an update is an HTTP GET of the flag.txt file.
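If even that small GET bothers you, a HEAD request is a lighter-weight check you could swap in (a sketch; it only fetches the response headers):
statuscode=$(curl --silent --head --output /dev/null --write-out "%{http_code}" http://www.example.com/flag.txt)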