MySQL backup with CronJob - cron

I am trying to export MySQL data with a cron job in cPanel, and I am adding the command below into the command line:
/usr/bin/mysqldump --opt -u krystald_fred -p'tY$645=&nm' max_joomla > /home/max/db-backup.sql
After the cron job runs, when I check the db-backup.sql file, it is blank, with no data inside the .sql file.
What's wrong with this command line? Can anyone guide me to fix it?
Thanks.

There's not enough info there; it could be many things. Just run the command from the command line with the -v (verbose) flag and see what it reports. You might also need to specify the hostname and connect via port 3306. It all depends on your installation.
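As a minimal debugging sketch (same paths and credentials as in the question), add the verbose flag and redirect mysqldump's error output to a file so that a failed cron run leaves a trace:
# Same dump, but verbose, with errors captured in a separate log file
/usr/bin/mysqldump --opt -v -u krystald_fred -p'tY$645=&nm' max_joomla > /home/max/db-backup.sql 2> /home/max/db-backup.err
If db-backup.sql is empty after the next run, db-backup.err should show why (bad credentials, unknown database, mysqldump not found, and so on).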

Nagios - unable to read output

I made a custom bash script to monitor failed SSH logins; it runs fine locally on the Nagios server and on the remote hosts.
root@xxx:/usr/local/nagios/libexec# ./check_bruteforce_ssh.sh -c 20 -w 50
OK - no constant bruteforce attack
But the Nagios page shows "Unable to read output".
I made some changes in the configs, following https://support.nagios.com/kb/article/nrpe-nrpe-unable-to-read-output-620.html, to verify what's going wrong, but I cannot find out where the problem is.
The script runs via NRPE, which runs on all machines.
root@test:/usr/local/nagios/libexec# ./check_nrpe -H test1
NRPE v3.2.1
When I tested the script via NRPE, I got a problem with
NRPE: Command 'check_bruteforce_ssh' not defined
which is defined in nrpe.cfg
command[check_bruteforce_attack]=/usr/local/nagios/libexec/check_bruteforce_attack.sh -w 20 -c 50
All permissions for the nagios user have been added (in sudoers, etc.).
Where can I find the solution, or has somebody had a similar problem?
You have an error in your definition.
Replace check_bruteforce_attack in nrpe.cfg with check_bruteforce_ssh and it will work ;-)
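For example (a sketch assuming the script on disk is the check_bruteforce_ssh.sh shown in the question), the name inside command[...] in nrpe.cfg has to match the command name that check_nrpe is asked to run:
# nrpe.cfg: the key in command[...] is the name the Nagios server calls
command[check_bruteforce_ssh]=/usr/local/nagios/libexec/check_bruteforce_ssh.sh -c 20 -w 50
After editing nrpe.cfg, restart the NRPE service on the remote host and test from the Nagios server with:
./check_nrpe -H test1 -c check_bruteforce_ssh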

Cannot Connect to Linux Oracle Database with Perl Script after connecting with PuTTY

I have the following problem:
I currently connect to one of our Linux servers using PuTTY on my Windows 10 machine. If I use a ‘standard’ PuTTY connection I have no problem: I can log in and run my Perl script to access an Oracle database on the Linux server. However, recently I have set up a new PuTTY connection (I copied the original working copy used above). The only difference from the original is that I have entered the following in the section Connection->SSH->Remote command of the PuTTY configuration window:
cd ../home/code/project1/scripts/perl ; /bin/bash
(I have done this so I arrive directly in the folder containing all my scripts.)
I can still log into the server with no problems and it takes me straight to the folder that contains my Perl scripts. However, when I run the script to access the Oracle database I get the following error:
DBI connect('server1/dbname','username',...) failed: ERROR OCIEnvNlsCreate. Check ORACLE_HOME (Linux) env var or PATH (Windows) and or NLS settings, permissions, etc. at PerlDBFile1.pl line 10.
impossible de se connecter à server1 / dbname at PerlDBFile1.pl line 10, <DATA> line 1.
In addition, if I run the env command on the server the variable $ORACLE_HOME is not listed (If I run the same env command on the server with the standard PuTTY connection the $ORACLE_HOME variable is present.)
Just to note: Running any other Perl script on the server (that does NOT access the Oracle database) through either of the PuTTY sessions I have created works with no problems.
Any help much appreciated.
When you set a remote command in PuTTY, it skips running the .bash_profile in your default $HOME directory. This is why you are getting the error.
To resolve it, either place a copy of .bash_profile in your perl directory, or add a command that sources .bash_profile to the remote command.
OK, I have the solution!...Thanks to everyone who replied.
Basically, I originally had the command:
cd ../home/code/project1/scripts/perl ; /bin/bash (See original post)
To get it to work I replaced the above with
cd ../home/code/project1/scripts/perl; source ~/.bash_profile; /bin/bash
I also tried:
cd ../home/code/project1/scripts/perl; /bin/bash; source ~/.bash_profile
But that did NOT work (with source placed after /bin/bash, it only runs once the interactive shell exits, so the environment is never set for the session).
Hope this helps someone.
Gauss76

Unable to Automate PostgreSQL Login on Linux Server

I've seen a few related questions here and tried pretty much all the solutions, but there's obviously something silly I'm still missing. I'm trying to bypass the need to log in to PostgreSQL manually when running a script. It's a pretty basic script that copies a text file on the server into a table in my database, and it works fine except that it prompts me to log in to PostgreSQL every time I run it. I intend to write a cron job that performs this task daily, so the process needs to be automatic. Here's the script.
export PGPORT=5432
export PGDATABASE=db_name
export PGUSER=user
export PGPASSWORD=my_password
psql -h host_name -p 5432 -U user -W db_name -c "\COPY schema.table_name (col1, col2, col3, col4)
from path_to_txt_file with DELIMITER '^'"
I also went down the ".pgpass file" route to no avail. I saved it in /home/usr/.pgpass, and gave it the following credentials
*:*:*:user:my_password
saved it and then gave it permissions as follows
sudo chmod 600 .pgpass
I'm not sure if this is relevant, but what I have as "usr" in my file path to the .pgpass file is different from my database username (what I have here as "user"). Also, the script I am running is in a completely different directory on the server from the .pgpass file. These are all novice points, I'm sure, but for the sake of being complete I thought I'd add them.
If there was a way to modify the existing script so that it didn't prompt me for a password that would be great, otherwise if anyone has any guidance on what I might be doing wrong with the .pgpass file I'd appreciate it.
Thanks in advance
I think the issue is the "-W" option: in the PostgreSQL documentation, "-W" means "Force psql to prompt for a password before connecting to a database."
I suggest you use this:
export PGPORT=5432
export PGDATABASE=db_name
export PGUSER=user
export PGPASSWORD=my_password
psql -h host_name -p $PGPORT -U $PGUSER -d $PGDATABASE -c "\COPY schema.table_name(col1,
col2, col3, col4) from path_to_txt_file with DELIMITER '^'"
In addition to what Mabu said:
PostgreSQL has all it takes to automate logins and keep the connection parameters out of your code.
In addition to the .pgpass file for the password, you can define all your connection parameters in a service file.
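For example, a minimal sketch of a service file (the service name "mydb" is a placeholder; host, port, database and user are the values from the script above): put the connection parameters in ~/.pg_service.conf and point psql at the service, with the password still coming from .pgpass:
# ~/.pg_service.conf
[mydb]
host=host_name
port=5432
dbname=db_name
user=user
The psql call in the script then shrinks to:
psql "service=mydb" -c "\COPY schema.table_name (col1, col2, col3, col4) from path_to_txt_file with DELIMITER '^'"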
I think the issue here is that my pg_hba.conf file needed to be edited to include the username that I am trying to access the database with. Unfortunately, the database sits on an AWS RDS instance, where the pg_hba.conf file is not editable. As well as manually going into the instance and trying to edit it without success, there is also mention of this here: https://serverfault.com/questions/560596/how-do-a-i-edit-conf-file-for-a-postgres-aws-rds. It will probably come down to building an EC2 instance where these configuration files are accessible.
I stand corrected on my answer above. The .pgpass file was just located in the wrong directory. I log into my Linux server with the username ubuntu, so after moving the .pgpass file into /home/ubuntu and deleting the PGPASSWORD line from the script above (the .pgpass file will be ignored if that line is left in the script), it now works perfectly. Looking at it now, it all seems quite obvious. I hope this might save someone a bit of stress in the future.

Why is my cron job not running?

I am trying to run a cron job at 5:20pm as below but it's not working.
20 17 * * 1 /usr/bin/php /home/myacc/public_html/job/generate.php
I basically did crontab -e and entered the above line there and saved the file.
If I try running the command directly from the command line as below, it works fine:
php /home/myacc/public_html/job/generate.php
What am I doing wrong?
Also, how do I send a message from the cron to either a log file or email so that I know what's going on?
You can record the output of your cron jobs by installing postfix and configuring it for the local domain during installation; you can then read the mail cron sends with vi /var/mail/<user-name>. Try looking at the logs and figuring out the problem. If you don't find a solution, post the log so that it will help others diagnose the problem. I'm adding this as an answer instead of a comment because I think I'm answering your second query.
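As a sketch of the log/email part (the log path and address below are placeholders, not taken from the question): cron mails each job's output to the address in MAILTO if one is set, and you can also append stdout and stderr to a log file directly in the crontab entry:
MAILTO=you@example.com
# 17:20 every Monday; append both stdout and stderr to a log file
20 17 * * 1 /usr/bin/php /home/myacc/public_html/job/generate.php >> /home/myacc/generate.log 2>&1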

how to load schema file into Cassandra with cqlsh

I have a schema file for Cassandra. I'm using a Windows 7 machine (Cassandra is on this machine as well - 1 node). I want to load the schema with cqlsh. So far I have not been able to find how. I was hoping to be able to pass the file to cqlsh: cqlsh mySchemaFile. However, since I run on Windows, to start cqlsh I do the following:
python "C:\Program Files (x86)\DataStax Community\apache-cassandra\bin\cqlsh" localhost 9160
Even though I have cqlsh in my path, when called like this from Python it needs the full path.
I tried to add the file name in there, but no luck so far.
Is this even possible?
cqlsh takes a file to execute via the -f or --file option, not as a positional argument (like the host and port), so the correct form would be:
python "C:\Program Files (x86)\DataStax Community\apache-cassandra\bin\cqlsh" localhost 9160 -f mySchemaFile
Note: I'm not 100% sure about whether you'd use -f or \f in Windows.
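Alternatively (assuming your cqlsh version supports it), you can start cqlsh exactly as before and load the file from inside the shell with the SOURCE command:
cqlsh> SOURCE 'mySchemaFile';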
