innobackupex skips table when backing up - Percona

I'm trying to backup a single table and restore to another clean server. I do it like this:
innobackupex --tables='db.table1' (or --include='db.table1') --compress --stream=xbstream ./ | ssh user@ip "xbstream -x -C /var/lib/mysql/partial-backup/"
In the output I see:
Skipping ./db/table1.ibd.
No errors occur. What could be the reason for this?
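For reference, a cleaned-up form of the same streaming command, as a sketch only (it assumes the target host is reachable as user@ip and has xbstream installed; note that in Percona XtraBackup the --tables value is treated as a regular expression, so anchoring it avoids matching similarly named tables):
innobackupex --tables='^db[.]table1$' --compress --stream=xbstream ./ \
  | ssh user@ip "xbstream -x -C /var/lib/mysql/partial-backup/"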

Related

Replicate-couchdb-cluster is not skipping the dbs

I'm trying to run replicate-couchdb-cluster -d -v -s http://couch-instance1:5984 -t http://couch-instance2:5984 -i -users,_replicator,_global_changes from PowerShell, but it is not skipping these databases. Any reason why this is happening? I'm able to skip the databases when running from the command prompt.
With the -i option I was expecting the databases to be skipped when executing from PowerShell.

Executing CMake from within a Bash Script

I've built a script to automate a CMake build of OpenCV4. The relevant part of the script is written as:
install.sh
#!/bin/bash
#...
cd /home/pi/opencv
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
-D ENABLE_NEON=ON \
-D ENABLE_VFPV3=ON \
-D BUILD_TESTS=OFF \
-D OPENCV_ENABLE_NONFREE=ON \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D BUILD_EXAMPLES=OFF ..
This part of the code is first executed from the /home/pi/ directory. If I execute these lines in the CLI, they work and the CMake configuration completes without error. If I run the same code from a bash script, the cmake command fails with -- Configuring incomplete, errors occurred!.
I believe this is similar to these two SO threads (here and here) in as much as they both describe situations where calling a secondary script from the first script creates a problem (or at least that's what I think they are saying). If that is the case, how can you start a script from /parent/, change to /child/ within the script, and execute a secondary program (CMake) as though it were executed from the /child/ directory?
If I've missed my actual problem, highlighting that would be even more helpful.
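For context, a minimal sketch of the pattern being asked about: a cd inside a script persists for the rest of the script, so CMake runs as though invoked from the build directory (paths follow the snippet above; set -e is an added safeguard so the script stops at the first failure):
#!/bin/bash
set -e                         # stop at the first failing command
cd /home/pi/opencv             # work from the source tree, regardless of where the script was started
mkdir -p build                 # -p: no error if build/ already exists
cd build                       # cmake below now runs from /home/pi/opencv/build
cmake -D CMAKE_BUILD_TYPE=RELEASE ..   # .. therefore refers to /home/pi/opencv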
Update with Full Logs
Output logs CMakeOutput.log and CMakeError.log from the unsuccessful run via the bash script.
When executed from the CLI, the successful logs are success_CMakeOutput.log and success_CMakeError.log.
Update on StdOut
I looked through the files above and they look the same... Here is the failed screen output (noting the bottom lines) and the successful screen output.
You are running your script as the root user, whose home directory is /root, while the opencv_contrib directory is in /home/pi. /home/pi is most probably the home directory of the user pi.
Update this line:
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
with the proper path to opencv_contrib. Either place opencv_contrib in the root user's home directory, if you intend to run the script as root, or provide a full path to the opencv_contrib directory that does not depend on HOME:
-D OPENCV_EXTRA_MODULES_PATH=/home/pi/opencv_contrib/modules \
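A small sketch of how that could be made robust against being run by a different user (the MODULES variable and the existence check are illustrative additions, not part of the original script):
MODULES=/home/pi/opencv_contrib/modules      # absolute path, independent of $HOME
if [ ! -d "$MODULES" ]; then
  echo "opencv_contrib not found at $MODULES" >&2
  exit 1
fi
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D OPENCV_EXTRA_MODULES_PATH="$MODULES" \
      ..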

crontab bash script not running

I updated the script with the absolute paths. Also here is my current cronjob entry.
I went and fixed the ssh key issue, so I know it works now, but I might still need to tell rsync which key to use.
The script runs fine when called manually by user. It looks like not even the rm commands are being executed by the cron job.
UPDATE
I updated my script, but basically it's the same as the one below. Below I have a new cron time and an added error output.
I get nothing. It looks like the script doesn't even run.
crontab -e
35 0 * * * /bin/bash /x/y/z/s/script.sh 2>1 > /tmp/tc.log
#!/bin/bash
# Clean up
/bin/rm -rf /z/y/z/a/b/current/*
cd /z/y/z/a/to/
/bin/rm -rf ?s??/D????
cd /z/y/z/s/
# Find the latest file
FILE=`/usr/bin/ssh user@server /bin/ls -ht /x/y/z/t/a/ | /usr/bin/head -n 1`
# Copy over the latest archive and place it in the proper directory
/usr/bin/rsync -avz -e /usr/bin/ssh user@server:"/x/y/z/t/a/$FILE" /x/y/z/t/a/
# Unzip the zip file and place it in the proper directory
/usr/bin/unzip -o /x/y/z/t/a/$FILE -d /x/y/z/t/a/current/
# Run Dev's script
cd /x/y/z/t/
./old.py a/current/ t/ 5
Thanks for the help.
I figured it out: I'm used to working in CST and the server was in GMT.
Thanks everybody for the help.
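For anyone debugging something similar, a short sketch of two checks that would have surfaced this (the timezone name is only an example; also note that in the crontab line above, 2>1 redirects stderr to a file literally named 1, whereas putting 2>&1 after the stdout redirection captures errors in the same log):
# Compare the server clock with what you expect locally
date        # server local time and timezone
date -u     # UTC, for comparison
# Crontab entry with stdout and stderr both captured in one log
35 0 * * * /bin/bash /x/y/z/s/script.sh > /tmp/tc.log 2>&1
# Some cron implementations (e.g. cronie) also honour a per-crontab timezone
CRON_TZ=America/Chicago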

sidekiq.yml file is not being considered

I have installed GitLab Community Edition on my Raspberry Pi 3. Everything is working fine, but when the application is up there are 25 Sidekiq threads. It's eating up my memory and I don't want so many threads.
I tried to control this by adding the file /opt/gitlab/embedded/service/gitlab-rails/config/sidekiq.yml.
# Sample configuration file for Sidekiq.
# Options here can still be overridden by cmd line args.
# Place this file at config/sidekiq.yml and Sidekiq will
# pick it up automatically.
---
:verbose: false
:concurrency: 5
# Set timeout to 8 on Heroku, longer if you manage your own systems.
:timeout: 30
# Sidekiq will run this file through ERB when reading it so you can
# even put in dynamic logic, like a host-specific queue.
# http://www.mikeperham.com/2013/11/13/advanced-sidekiq-host-specific-queues/
:queues:
- critical
- default
- <%= `hostname`.strip %>
- low
# you can override concurrency based on environment
production:
:concurrency: 5
staging:
:concurrency: 5
I have restarted the application many times and even ran "reconfigure". It's not helping. It's not considering the sidekiq.yml file at all.
Can anybody please let me know where I am going wrong?
I found your question while searching for a solution to the same problem. Nothing I found worked, so I tried it myself and found the right place to reduce Sidekiq from 25 to 5 threads. I use the GitLab Omnibus version; I think the path is identical to yours:
/opt/gitlab/sv/sidekiq/run
In this file you find the following code:
#!/bin/sh
cd /var/opt/gitlab/gitlab-rails/working
exec 2>&1
exec chpst -e /opt/gitlab/etc/gitlab-rails/env -P \
-U git -u git \
/opt/gitlab/embedded/bin/bundle exec sidekiq \
-C /opt/gitlab/embedded/service/gitlab-rails/config/sidekiq_queues.yml \
-e production \
-r /opt/gitlab/embedded/service/gitlab-rails \
-t 4 \
-c 25
Change the last line to "-c 5". The result should look like this:
#!/bin/sh
cd /var/opt/gitlab/gitlab-rails/working
exec 2>&1
exec chpst -e /opt/gitlab/etc/gitlab-rails/env -P \
-U git -u git \
/opt/gitlab/embedded/bin/bundle exec sidekiq \
-C /opt/gitlab/embedded/service/gitlab-rails/config/sidekiq_queues.yml \
-e production \
-r /opt/gitlab/embedded/service/gitlab-rails \
-t 4 \
-c 5
Last but not least, you have to restart the GitLab service:
sudo gitlab-ctl restart
No idea what happens on a GitLab update; I think I will have to change this value again. It would be nice if the GitLab developers added this option to gitlab.rb in the /etc/gitlab directory.
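Depending on the GitLab Omnibus version, there may already be a supported setting for this in /etc/gitlab/gitlab.rb, which survives reconfigures; a hedged sketch, to be checked against the documentation for your version:
# /etc/gitlab/gitlab.rb
sidekiq['concurrency'] = 5
Then apply it with:
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart sidekiq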

How to use lftp to transfer segmented files?

I want to transfer a file from my server to another. The network between these servers isn't very good, so I want to use lftp to speed things up. My script is like this:
lftp -u user,password -e "set sftp:connect-program 'ssh -a -x -i /key'; mirror --use-pget=5 -i data.tar.gz -r -R /data/ /tmp; quit" sftp://**.**.**.**:22
I found that data.tar.gz wasn't segmented, but when I use the same approach to download a file, it works.
What should I do?
Segmented uploads are not implemented in lftp. If you have SSH access to the server, log in there and use lftp to download the file. If there were many files, you could also upload different files in parallel using mirror's -P option.
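As a rough sketch of that suggestion (host names, credentials, key path and segment counts are illustrative): run lftp on the destination side and pull the file with a segmented pget download, or keep the reverse mirror but parallelise across files with -P:
# On the destination server: segmented download, 5 connections per file
lftp -u user,password -e "set sftp:connect-program 'ssh -a -x -i /key'; mirror --use-pget-n=5 -i data.tar.gz /data/ /tmp; quit" sftp://source-server:22
# On the source server: upload several files at once (no per-file segmentation)
lftp -u user,password -e "mirror -R -P 5 /data/ /tmp; quit" sftp://destination-server:22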
