mysqldump for AWS RDS `FLUSH TABLES` error on Linux only

I have a process that exports the data from an AWS RDS MariaDB instance using mysqldump; it has been running successfully in a Docker image on Concourse for years.
Since two nights ago the process has been failing with this error:
mysqldump: Couldn't execute 'FLUSH TABLES WITH READ LOCK': Access denied for user 'admin'@'%' (using password: YES) (1045)
The official AWS explanation seems to be that because RDS does not grant the master user SUPER privileges or a GLOBAL READ LOCK, mysqldump fails if the --master-data option is set.
I do not have that option set. I'm running with these flags:
mysqldump -h ${SOURCE_DB_HOST} ${SOURCE_CREDENTIALS} ${SOURCE_DB_NAME} --single-transaction --compress | grep -v '^SET .*;$' > /tmp/dump.sql
mysqldump works fine when executed from my local Mac. It fails with the FLUSH TABLES WITH READ LOCK error only from the Linux environment.
My question is: does anyone know how to disable the FLUSH TABLES WITH READ LOCK command in mysqldump on Linux?
EDIT: Happy to accept @sergey-payu's answer below as having fixed my problem, but here's a link to the MySQL bug report for the benefit of anyone else coming across this issue: https://bugs.mysql.com/bug.php?id=109685

I faced the same issue a couple of days ago. My mysqldump script had been working fine for years until it started giving me the Access denied; you need (at least one of) the RELOAD privilege(s) for this operation error. My first instinct was to grant this privilege, but after that I started to get the Access denied for user 'user'@'%' (using password: YES) (1045) error, which is documented in the AWS docs. After a couple of hours of investigation it turned out to be a bug in the most recent MySQL version, 5.7.41 (it was released on the 17th of January, exactly when we started to get errors). Downgrading to 5.7.40 solved the problem. Interestingly, the 5.7.41 changelog doesn't list anything close to FLUSH TABLES WITH READ LOCK or to default values.
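A quick sanity check is to confirm which client version your environment is actually running, since the downgrade that fixed it applies to the mysqldump client (for example, the one baked into the Docker image):
mysqldump --version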

According to the 5.7.41 changelog,
The data and the GTIDs backed up by mysqldump were inconsistent when the options --single-transaction and --set-gtid-purged=ON were both used. It was because GTID_EXECUTED was fetched at the end of the dump, at which point the GTIDs on the server could have increased already. With this fixed, a FLUSH TABLES WITH READ LOCK is performed at the beginning of the dump and GTID_EXECUTED was fetched right after, to ensure its value is consistent with the snapshot taken by mysqldump.
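Since that new FLUSH TABLES WITH READ LOCK exists only to keep GTIDs consistent, a plausible workaround if you cannot downgrade is to disable GTID output explicitly. This is an assumption based on the changelog above, not something verified here:
# Untested assumption: with GTID output disabled, mysqldump should have no
# reason to take the global read lock that RDS denies.
mysqldump -h ${SOURCE_DB_HOST} ${SOURCE_CREDENTIALS} ${SOURCE_DB_NAME} --single-transaction --set-gtid-purged=OFF --compress | grep -v '^SET .*;$' > /tmp/dump.sql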

Related

Why did the permissions of my mongodb .sock file change automatically?

Today, my MongoDB database went down after weeks of it being up. After some digging around, I realized that the permissions of my mongodb-27017.sock file were incorrect.
Running chown mongod:mongod mongodb-27017.sock resolved the issue.
My MongoDB instance was running perfectly fine for weeks. How did the permissions change all of a sudden? How can I prevent myself from running into this issue again?
For context: I'm running an Amazon Linux 2 instance on AWS.
After almost a year of flawless operation, one of our replica set members hit an error related to this: the mongo.sock owner had changed from mongod:mongod to root:root.
I searched for a way to find out what had changed the file's owner, but unfortunately there is no way to discover that after the fact.
So my search led me to auditctl.
According to the man page, it is used to control the behavior of the audit system, get its status, and add or delete rules.
By setting up a watch like auditctl -w /tmp -p rwxa -k keyname, I started waiting. I wrote a simple shell script to notify me when auditctl caught whatever was changing the ownership of the file (see the sketch below). After a couple of hours, I had my answer.
The auditctl output gives you information like the pid, syscall, uid, etc.
But the most important field is comm, which tells you which program made the change.
In my case, following the audit logs revealed that a co-worker of mine had just created a cron job that affected the mongo.sock file.
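For reference, a minimal sketch of that setup, assuming the socket lives at /tmp/mongodb-27017.sock and using a hypothetical key name (adjust both for your system):
# Watch the socket for writes and attribute changes (chown/chmod):
auditctl -w /tmp/mongodb-27017.sock -p wa -k mongosock
# Poll the audit log; the comm= field names the offending program:
while true; do
  if ausearch -k mongosock -i --start recent | grep -q 'comm='; then
    echo "mongodb-27017.sock was modified" | mail -s "audit alert" root
    break
  fi
  sleep 60
done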

Databases and data not saving ~Linux QubesOS~

When I create a PostgreSQL database, create tables and columns, and even insert data into the columns, I can't restart my machine without losing the created databases and all the data.
I have tried changing a couple of things in the configuration file, but nothing helped.
I also have to reset the password for postgres every time I restart my machine. I mainly use MongoDB; I am just learning PostgreSQL so I can use it if I ever need to in the future. I am running a Linux machine with QubesOS and have a few problems like this using QubesOS. In every tutorial I watch everybody uses Macs, and a Mac seems good, kind of a mix between Windows and Linux with the best of both worlds: easy package installs and terminal control. But I don't want to trade my Linux machine for a Mac; I would much rather just fix these problems I am having with PostgreSQL on my Linux machine.
You ran into an important security feature of QubesOS: all data modifications are discarded on shutdown of a so-called "Qube", which is reset to its original state.
The exception is data kept in a few very special directories (in an AppVM, /home and /usr/local persist).
If you convince your database packages to put their data into those directories, it will be preserved across reboots of your database Qube:
Read this documentation for more information.
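For example, a minimal sketch of relocating the PostgreSQL data directory into the persistent /home, assuming a Debian-based Qube and default package paths (yours may differ):
sudo systemctl stop postgresql
sudo mkdir -p /home/user/pgdata
sudo rsync -a /var/lib/postgresql/ /home/user/pgdata/
sudo chown -R postgres:postgres /home/user/pgdata
# Then point data_directory in postgresql.conf at the new location, e.g.:
#   data_directory = '/home/user/pgdata/<version>/main'
sudo systemctl start postgresql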

Can I change Linux time in a Postgresql production server?

Today I found out that the clock of a Linux server hosting a PostgreSQL cluster in a production environment is running behind, and I need to bring it up to the current time.
I've used these lines on my local machine:
sudo date --set="2017-01-19 12:09:59.990"
sudo hwclock --systohc
I've checked the time in PostgreSQL with
select now()
before and after the change, and everything worked fine.
Is there any impact that I should look for ?
Am I overthinking this ?
I cannot think of any problems that PostgreSQL itself would have with that, and jumping forward in time should not be a problem anyway – it is much like a period with zero activity.
Jumping back in time may not be a problem for PostgreSQL, but it might confuse an application to find future timestamps in the database.
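If you ever do have to jump backwards, a quick sanity check for that kind of confusion might be to look for rows stamped in the future; the database, table, and column names here are purely hypothetical:
psql -d mydb -c "SELECT count(*) FROM orders WHERE created_at > now();"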

CouchDB econnrefused

I am having trouble adding an external process to my CouchDB database. Currently the database contains a few records, all of which have standalone attachments in the form of PNGs or JPGs. I want to add the Couch_Image_Resizer (by KlausTrainer) to the database so that I can use the queries offered by the Image Resizer to dynamically resize the images on request. However, it currently returns only an error when this URL is requested:
http://virtualMachineAddress/_image/archive/test/the_starry_night_painting.jpg?resize=500x500
{"error":"error","reason":"{conn_failed,{error,econnrefused}}"}
I have followed the instructions to the letter, replacing any instance of localhost or 127.0.0.1 with the IP address of my virtual machine (which has been made elastic so should never change) where needed.
I have also altered the local.ini file as was instructed so that it includes the following:
[httpd_global_handlers]
_image = {couch_httpd_proxy, handle_proxy_req, <<"http://127.0.0.1:5985">>}
Finally, I have ensured that the program is running via the ./start.sh command. If it is run more than once it returns the following; I am unsure whether that is relevant:
root@couchdb couchdb/couch_image_resizer# ./start.sh
Crash dump was written to: erl_crash.dump
Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})
Crash dump was written to: erl_crash.dump
Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})
Some info that might be helpful:
erl_crash.dump: pastebin
Server is a virtual AWS machine running Debian 7.9 Wheezy.
The database is hosted externally on this server.
CouchDB version: 1.2.0
The database is not in Admin Party mode, accounts with permissions are in use.
GitHub link: Couch_Image_Resizer
Erlang: erts-5.9.1 64-bit
ImageMagick: 6.8.9-9
I am clearly missing something here, if you need anything else just ask. If anyone can shed any light on what I am missing I would greatly appreciate it!
I have found a solution to this, although there may be others.
Stop the service, set its permissions so that it is owned exclusively by the couchdb user, and add the start.sh file path to the [os_daemons] section of CouchDB's local.ini before restarting the database, also launching the application as the root user. Doing this kick-started the service, and it now functions normally and as intended.
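For reference, the local.ini entry might look like this (the daemon name and the path to start.sh are examples; use wherever you installed the resizer):
[os_daemons]
image_resizer = /opt/couch_image_resizer/start.sh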

linux gedit: I always get "GConf Error: failed to contact configuration server ..."

How come I always get
"GConf Error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://projects.gnome.org/gconf/ for information. (Details - 1: Failed to get connection to session: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.)"
when I start 'gedit' from a shell from my superuser account?
I've been using GUI apps as a logged-in user and as a secondary user for 15+ years on various UNIX machines. There's plenty of good reasons to do so (remote shell, testing of configuration files, running multiple sessions of programs that only allow one instance per user, etc).
There's a bug at launchpad that explains how to eliminate this message by setting the following environment variable.
export DBUS_SESSION_BUS_ADDRESS=""
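Equivalently, you can set it for a single invocation only (the file name here is just an example):
DBUS_SESSION_BUS_ADDRESS="" gedit somefile.txt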
The technical answer is that gedit is a Gtk+/Gnome program and expects to find a current gconf session for its configuration. When you run it as a separate user who isn't logged in on the desktop, there is no such session to find, so it spits out a warning telling you. The failure should be benign, though, and the editor will still run.
The real answer is: don't do that. You don't want to be running GUI apps as anything but the logged-in user, in general. And you never want to be running any GUI app as root, ever.
On some distributions (RHEL, CentOS) you may need to install the dbus-x11 package:
sudo yum install dbus-x11
Additional details here.
Setting and exporting DBUS_SESSION_BUS_ADDRESS to "" fixed the problem for me. I only had to do this once, and the problem was permanently solved. However, if you have a problem with your umask setting, as I did, then the GUI applications you are trying to run may not be able to create the directories and files they need to function correctly.
I suggest creating (or, have created) a new user account solely for test purposes. Then you can see if you still have the problem when logged in to the new user account.
I ran into this issue myself on several different servers. I tried all of the suggestions listed here: made sure ~/.dbus had proper ownership, ran service messagebus restart, etc.
It turns out that my ~/.dbus was mode 755, and the problem went away when I changed the mode to 700. I found this by comparing known-working servers with servers showing this error.
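In other words, assuming the same layout on your machine:
# 700 = owner-only access; the session bus expects this directory to be private.
ls -ld ~/.dbus
chmod 700 ~/.dbus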
I understand there are several different answers to this problem, as I have been trying to solve this for 3 days.
The one that worked for me was to
rm -r .gconf
rm -r .gconfd
in my home directory. Hope this helps somebody.
