mysqldump export not complete - cron

My hosting provider had a huge corruption issue last week following a power failure. As a result, I had to import all databases again from a backup.
Our backup runs daily (using cron.daily) and each database is exported using mysqldump. Each database is gzipped and sent to storage.
Unfortunately, one of the databases has been exporting incorrectly for some time, which I've only just found out. It's not the largest export, at around 130MB (uncompressed).
Below is the output from mysqldump; I get the same output from SSH and from cron.daily. Obviously, this is incomplete!
I'm just trying to identify why this may have happened so I can avoid it in the future.
-- MySQL dump 10.13 Distrib 5.5.24, for debian-linux-gnu (x86_64)
--
-- Host: localhost Database: *********************
-- ------------------------------------------------------
-- Server version 5.5.24-0ubuntu0.12.04.1
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
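One way to avoid silently keeping a truncated dump in future runs (a minimal sketch, not from the original post; the database name and paths are placeholders) is to have the cron script check mysqldump's exit status and look for its "-- Dump completed" trailer before compressing:
#!/bin/bash
# Hypothetical wrapper: only gzip the dump if mysqldump exits 0
# and the file ends with the usual completion marker.
db="mydatabase"
out="/backup/${db}.sql"
if mysqldump --single-transaction "$db" > "$out" \
   && tail -n 1 "$out" | grep -q "Dump completed"; then
    gzip -f "$out"
else
    echo "backup of $db failed or is incomplete" >&2
    exit 1
fi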

Related

storing Output of plsql query in shell script

I'm not getting any output in my FILE for the query, though when I run it in Oracle I can see the counts.
Can someone please let me know what I am doing wrong?
#!/bin/bash
ORACLE_HOME=*path*
TNS_ADMIN=*path*
export ORACLE_HOME
export TNS_ADMIN
FILE="/tmp/score_cnt.txt"
sqlplus -S user/pass@service << EOF
spool $FILE
select count(*) from score_tbl
spool off
exit;
EOF
Seems like passing the *nix variable to SQL*Plus is messing things up here.
As far as I understand, you wish to pass the file name into the script; in that case the easiest mechanism is to give it to SPOOL directly (instead of parametrizing it).
Moreover, you can add some really useful SET parameters to beautify the output.
sqlplus -S user/pass@service << EOF
SET LINESIZE 32000
SET PAGESIZE 0
SET TRIMSPOOL ON
SET TRIMOUT ON
SET WRAP OFF
SET TERMOUT OFF
spool /tmp/score_cnt.txt
select count(*) from score_tbl;
spool off
EOF
PS - EOF doesn't require an additional exit in sqlplus, and the select statement must end with a ;

how to select special character in oracle under linux environment

Can you please help me with the following:
how to select the query below in Oracle.
I am trying to spool a special character from SQL*Plus, but it is showing as ????
select '§' from dual;
Unless your database character set is defined as US7ASCII, this should be no problem.
Your local character set has to match the setting of NLS_LANG.
Example:
$ locale charmap
UTF-8
$ echo $LANG
en_US.UTF-8
Then the NLS_LANG environment variable should be set to
NLS_LANG={your_language}_{your_country}.AL32UTF8
Then SQL*Plus should work fine.
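For example, matching the en_US.UTF-8 locale shown above (a sketch; substitute your own language and territory):
$ export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
$ sqlplus -S user/pass@service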

Procmail variable based if condition

I am running a procmail recipe that triggers some of my applications the moment I receive a specific email. I have the whole thing working, but now I need to build conditions into the recipe so that it doesn't run again and again, to avoid multiple instances of the same program, since procmail is triggered every 10 minutes. The problem is I'm not exactly sure how 'if' statements are made in procmail.
Here is the recipe I have so far:
:0
* ^Subject: .*Email Subject!
| export DISPLAY=:0.0; \
xrandr --size 1360x768;\
firefox "link"; \
timeout 10s recordmydesktop --fps 30; \
xrandr --size 1366x768
The easy and idiomatic way to have a critical section in Procmail is to use a lock file.
# Notice the second colon and the name of the lock file to use
:0:firefox.lock
* ^Subject: .*Email Subject!
| export DISPLAY=:0.0; \
xrandr --size 1360x768;\
firefox "link"; \
timeout 10s recordmydesktop --fps 30; \
xrandr --size 1366x768
This will create $MAILDIR/firefox.lock when the recipe is evaluated, and remove it when the recipe finishes. If the file already exists, Procmail will wait until it disappears, or eventually time out (which could cause the incoming message to bounce).
If you need a critical section spanning multiple recipes, you can assign to the "magical" variable LOCKFILE and set it to an empty value when you are done.
LOCKFILE=firefox.lock
# ... Your recipes here ...
LOCKFILE=
(Obscurely, the equals sign on the last line of this example is optional; but I recommend against that usage.)
See man 5 procmailrc for (much) more, including LOCKSLEEP and LOCKTIMEOUT.
The trivial answer to "how to say 'if' in Procmail" is to use a condition. You already have one; the action will only trigger if the message's headers match the regular expression ^Subject:.*Email Subject!. You can nest these conditions, test variables, external commands, etc. Here's a silly made-up example to demonstrate them all.
# If $FOO is set and non-empty
:0
* FOO ?? .
{
# ... then enter this nested block
# Does $HOME/bar exist?
:0
* ? test -e $HOME/bar
barista
# Otherwise, unconditionally deliver to foolish
:0
foolish
}
The block is entered if the variable FOO is set. Procmail uses your environment variables, so you can set it before invoking Procmail (depending on Procmail's options; it will only inherit a sanitized copy of your environment by default) or on its command line as well as in your recipe file.
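For example (a sketch; my.procmailrc and message.txt are placeholder names), a variable can be assigned on Procmail's command line and the message fed on stdin:
procmail FOO=yes ./my.procmailrc < message.txt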

Need help - Getting an error: xrealloc: subst.c:4072: cannot reallocate 1073741824 bytes (0 bytes allocated)

Checking if anybody else has had a similar issue.
Code in the shell script:
## Convert file into Unix format first.
## THIS is IMPORTANT.
#####################
dos2unix "${file}" "${file}";
#####################
## Actual DB Change
db_change_run_op="$(ssh -qn ${db_ssh_user}#${dbserver} "sqlplus $dbuser/${pswd}#${dbname} <<ENDSQL
#${file}
ENDSQL
")";
Summary:
1. From a shell script (on a SunOS source server) I'm running a sqlplus session via ssh on a target machine to run a .sql script.
2. Output of this target ssh session (running sqlplus) is getting stored in a variable within the shell script. Variable name: db_change_run_op (as shown above in the code snapshot).
For most of the .sql scripts (whose names the variable "${file}" holds), the shell script runs fine and returns me the output of the .sql file (run on the target server via ssh from the source server), provided the .sql file contains something which doesn't take much time to complete or generates a reasonable amount of output log/lines.
For example: let's assume the .sql file I want to run does the following; then it runs fine.
select * from database123;
update table....
alter table..
insert ....
...some procedure .... which doesn't take much time to create....
...some more sql commands which complete within a few minutes to an hour....
4. Now, the issue I'm facing is:
Let's assume I have a .sql file where a single select command from a table returns a couple of hundred thousand up to 1-5 million lines, i.e.
select * from database321;
assume the above select generates the situation described in point 4.
In this case, I'm getting the following error message thrown by the shell script (running on the source server).
Error:
./db_change_load.sh: xrealloc: subst.c:4072: cannot reallocate 1073741824 bytes (0 bytes allocated)
My questions:
1. Did the .sql script complete? I assume yes. But how can I get the output LOG file of the .sql run, generated on the target server, directly? If this can be done, then I won't need the variable to hold the output of the whole ssh sqlplus session and then create a log file on the source server by doing [ echo "${db_change_run_op}" > sql.${file}.log ].
2. I assume the error is coming because the output, i.e. the number of lines generated by the ssh session's sqlplus call, is so big that it can't fit within the Unix/Linux BASH variable's size limit, and thus the xrealloc error.
Please advise if you have any experience with the above 2 questions, or how I can solve this.
I assume I'll try using " | tee /path/on.target.ssh.server/sql.${file}.log" right after << ENDSQL or after the final closing ENDSQL (here-doc keyword); wondering if that would work or not.
OK, got it working. No more storing the output in a variable and then echoing $var to a file.
Luckily, I had the same mount point on both the source and target server, i.e. if I go to /scm on the source and on the target, the mount (df -kvh .) shows the same output for the Share/NAS mount value.
Filesystem size used avail capacity Mounted on
ServerNAS02:/vol/vol1/scm 700G 560G 140G 81% /scm
Now, instead of using the variable to store the whole output of the ssh sqlplus session, all I did was create a file on the remote server using the following code.
## Actual DB Change
#db_change_run_op="$(ssh -qn ${pdt_usshu_dbs}#${dbs} "sqlplus $dbu/${pswd}#$dbn <<ENDSQL | tee "${sql_run_output_file}".ssh.log
#set echo off
#set echo on
#set timing on
#set time on
#set serveroutput on size unlimited
#@${file}
#ENDSQL
#")";
ssh -qn ${pdt_usshu_dbs}@${dbs} "sqlplus $dbu/${pswd}@$dbn <<ENDSQL | tee "${sql_run_output_file}".ssh.log
set echo off
set echo on
set timing on
set time on
set serveroutput on size 1000000
@${file}
ENDSQL
"
It seems like unlimited doesn't work in 11g, so I had to use the 1000000 value (these small sql commands help to show each command with its output, show the clock time for each output line, etc.).
But basically, in the above code, I'm calling the ssh command directly, without the variable="$(.....)" approach, and right after the <<ENDSQL here-document I pipe the sqlplus output through tee to a log file.
Even if I didn't have the same mount, I could have tee'd the output to a file on a remote server path (not available from the source server); at least I can see up to what point the .sql commands completed or generated output, as the output now goes directly to a file on the remote server, and Unix/Linux doesn't care much about the file size as long as there's space left.
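That variant would look roughly like this (a sketch with placeholder names; the log path only needs to exist on the remote server):
ssh -qn user@dbserver "sqlplus dbuser/dbpass@dbname <<ENDSQL | tee /remote/only/path/sql.run.log
@/remote/only/path/script.sql
ENDSQL
"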

perforce backup question

For safety purposes, is it enough to back up all the files under the Perforce server directory?
Short answer: No
Long answer: All you need to know about backup and recovery of Perforce data is detailed in the Manual. In a nutshell for the impatient:
p4 verify //...
(Verify the integrity of your server)
p4 admin checkpoint
(Make a checkpoint; make sure that this step is successful)
back up the checkpoint file and the old journal file
(if you run Perforce with Journal files, which you should)
back up your versioned files
(that's the actual data, not to be confused with the db.* files in the Perforce server directory.)
But please do read the manual, especially about the various restore scenarios. Remember:
Backups usually work fine, it's the restore that fails.
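Pulling those steps together, a rough nightly script might look like this (a sketch with placeholder paths; read the manual's restore scenarios before adapting it):
#!/bin/sh
# Hypothetical layout: P4ROOT is assumed to be /p4root.
p4 verify -q //... || exit 1        # -q prints only errors
p4 admin checkpoint || exit 1       # writes checkpoint.N and rotates the journal
cp /p4root/checkpoint.* /p4root/journal.* /backup/
rsync -a /p4root/depot/ /backup/depot/   # the versioned files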
In addition to jhwist's correct answer from the p4 manual (permalink), I would like to add a few things that I've learnt while using Perforce for several years.
...
Depending on the size of your repository, performing a verify on the p4 database can take several hours, during which it will be locked and no one will be able to perform any queries. Locking the P4 database can have several flow-on effects for your users; for example, if someone is using (or attempts to use) a P4SCC plug-in (i.e. the Visual Studio integration) during this time, it will spin and the user will eventually have to force quit to regain control.
Solution
Spawn a second instance of P4D on a different port (p4d_2)
Suspend/terminate the main instance (p4d_1).
Perform the p4 verify //... and checkpoint using p4d_2.
Backup the physical version files on the storage array.
Kill p4d_2.
Restart p4d_1.
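A rough sketch of that sequence (hypothetical port numbers and paths):
p4d -r /p4root -p 1667 -d           # spawn p4d_2 on a spare port
p4 -p server:1666 admin stop        # suspend/terminate the main instance p4d_1
p4 -p server:1667 verify -q //...   # verify via p4d_2
p4 -p server:1667 admin checkpoint  # checkpoint via p4d_2
# ... back up the versioned files on the storage array ...
p4 -p server:1667 admin stop        # kill p4d_2
p4d -r /p4root -p 1666 -d           # restart p4d_1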
Also: as this will more than likely be an automated process run at night or over the weekend, I cannot stress enough that you need to thoroughly read the checkpoint log file to ensure that it was successful; otherwise you will be in a difficult spot when you need to perform a restore (read the next point). Backup should not be a set-and-forget procedure.
Further information about Perforce backup can be found in Perforce whitepaper: High Availability And Disaster Recovery Solutions For Perforce.
HTH,
FWIW I have used an additional backup strategy on my own development workstation. I have a perl script that runs every night and finds all files that I have checked out of Perforce from a given list of workspaces. That list of files is then backed up as part of my normal workstation backup procedure. The Perl script to find the files that are checked out looks pretty tricky to me. I did not write it and am not particularly familiar with Perl.
If anyone is interested, I can post the script here along with how I call it.
Note that this script was developed before Perforce came out with its "shelving" capability. I might be better off now to have a script that "shelves" my work every night (either in addition to my current backup strategy or in place of it).
Here is the script:
# This script copies any files that are opened for any action (other than
# delete) in the specified client workspace to another specified directory.
# The directory structure of the workspace is duplicated in the target
# directory. Furthermore, a file is not copied if it already exists in the
# target directory unless the file in the workspace is newer than the one
# in the target directory.
# Note: This script looks at *all* pending changelists in the specified
# workspace.
# Note: This script uses the client specification Root to get the local
# pathname of the files. So if you are using a substituted drive for the
# client root, it must be properly substituted before running this script.
# Argument 1: Client workspace name
# Argument 2: Target directory (full path)
use File::Path;
# use File::Copy;
use File::Basename;
use Win32;
if ($#ARGV != 1) {
    die("usage: $0 client_name target_directory\n");
}
my $client = shift(@ARGV);
my $target_dir = shift(@ARGV);
my @opened_files = ();
my $client_root = "";
my $files_copied = 0;
# I need to know the root directory of the client, so that I can derive the
# local pathname of the file. Strange that "p4 -ztag opened" doesn't give
# me the local pathname; I would have expected it to.
open(CLIENT_SPEC, "p4 -c $client client -o|")
    || die("Cannot retrieve client specification: $!");
while (<CLIENT_SPEC>) {
    my ($tag, $value) = split(/\s/, $_, 2);
    if ($tag eq "Root:") {
        $value = chop_line($value);
        $client_root = $value;
    }
}
close(CLIENT_SPEC);
if ($client_root eq "") {
    die("Unable to determine root of client $client\n");
} elsif (substr($client_root, -1) ne "\\") {
    $client_root = $client_root . "\\";
}
# Use the -ztag option so that we can get the client file path as well as
# the depot path.
open(OPENED_FILES, "p4 -c $client -ztag opened|")
    || die("Cannot get list of opened files: $!");
while (<OPENED_FILES>) {
    # What we do is to get the client path and append it onto the
    # @opened_files array. Then when we get the action, if it is a delete,
    # we pop the last entry back off the array. This assumes that the tags
    # come out with clientFile before action.
    $_ = chop_line($_);
    my ($prefix, $tag, $value) = split(/\s/, $_, 3);
    if ($tag eq "clientFile") {
        push(@opened_files, $value);
    }
    if ( ($tag eq "action") && ($value eq "delete") ) {
        pop(@opened_files);
    }
}
close(OPENED_FILES);
# Okay, now we have the list of opened files. Process each file to
# copy it to the destination.
foreach $client_path (@opened_files) {
    # Trim off the client name and replace it with the client root
    # directory. Also replace forward slashes with backslashes.
    $client_path = substr($client_path, length($client) + 3);
    $client_path =~ s/\//\\/g;
    my $local_path = $client_root . $client_path;
    # Okay, now $client_path is the partial pathname starting at the
    # client's root. That's the path we also want to use starting at the
    # target path for the destination.
    my $dest_path = $target_dir . "\\" . $client_path;
    my $copy_it = 0;
    if (-e $dest_path) {
        # Target exists. Is the local path newer?
        my @target_stat = stat($dest_path);
        my @local_stat = stat($local_path);
        if ($local_stat[9] > $target_stat[9]) {
            $copy_it = 1;
        }
    } else {
        # Target does not exist, definitely copy it. But we may have to
        # create some directories. Use File::Path to do that.
        my ($basename, $dest_dir) = fileparse($dest_path);
        if (! (-e $dest_dir)) {
            mkpath($dest_dir) || die("Cannot create directory $dest_dir\n");
        }
        $copy_it = 1;
    }
    if ($copy_it) {
        Win32::CopyFile($local_path, $dest_path, 1)
            || warn("Could not copy file $local_path: $!\n");
        $files_copied++;
    }
}
print("$files_copied files copied.\n");
exit(0);
################ Subroutines #########################################
# chop_line removes any trailing carriage-returns or newlines from its
# argument and returns the possibly-modified string.
sub chop_line {
    my $string = shift;
    $string =~ s/[\r\n]*\z//;
    return $string;
}
To run:
REM Make sure that we are pointing to the current Perforce server
P4 set -s P4PORT=MyPerforceServer:ThePortThatPerforceIsOn
p4 set p4client=MyPerforceWorkspace
REM Copy checked out files to a local directory that will be backed up
.\p4backup.pl MyPerforceWorkspace c:\PerforceBackups\MyPerforceWorkspace_backup
