After restoring a Percona 5.7 server from another Percona 5.7 server, full backups are failing with the following error:
xtrabackup: cd to /data/mysql
xtrabackup: open files limit requested 0, set to 819200
xtrabackup: using the following InnoDB configuration:
xtrabackup: innodb_data_home_dir = .
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 2
xtrabackup: innodb_log_file_size = 20971520
xtrabackup: using O_DIRECT
InnoDB: Number of pools: 1
InnoDB: Invalid redo log header checksum.
The MySQL service is running in master-slave replication, and I don't see any errors in the logs regarding corruption or anything else.
I still don't get it, since innodb_log_file_size = 20971520 matches the size on disk:
-rw-r----- 1 mysql mysql 20971520 Apr 17 10:33 ib_logfile0
-rw-r----- 1 mysql mysql 20971520 Apr 17 10:33 ib_logfile1
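For reference, a quick way to double-check the configured value against what is actually on disk (paths assume the /data/mysql datadir shown above; this is just a sanity check, not a fix):
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_log_file%'"   # configured redo log size and count
stat -c '%n %s' /data/mysql/ib_logfile*                    # actual file sizes on disk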
I got an issue when running the whole ChIP-seq pipeline with the singularity profile on my local PC (Windows, but using the Windows Subsystem for Linux):
Error executing process > 'output_documentation'
Caused by:
Failed to pull singularity image
command: singularity pull --name nfcore-chipseq-1.2.2.img.pulling.1630098407814 docker://nfcore/chipseq:1.2.2 > /dev/null
status : 255
message:
INFO: Using cached SIF image
FATAL: While making image from oci registry: error copying image out of cache: could not open temporary file for copy: failed to change permission of ./tmp-copy-2575820807: chmod ./tmp-copy-2575820807: operation not permitted
I'm using Singularity 3.8.2.
I have also pointed NXF_SINGULARITY_CACHEDIR at a hard drive instead of /home/.singularity.
I also checked the folder to make sure all the files can be accessed:
total 0
drwxrwxrwx 1 root root 4096 Aug 28 05:06 .
drwxrwxrwx 1 root root 4096 Aug 28 04:47 ..
-rwxrwxrwx 1 root root 0 Aug 28 04:53 tmp-copy-2299332276
-rwxrwxrwx 1 root root 0 Aug 28 05:06 tmp-copy-2575820807
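For context, the cache override I mean is along these lines, and the failing pull can be reproduced outside Nextflow (the cache path below is illustrative, not my exact one):
export NXF_SINGULARITY_CACHEDIR=/mnt/d/singularity_cache   # illustrative path on the hard drive
cd "$NXF_SINGULARITY_CACHEDIR"
singularity pull --name nfcore-chipseq-1.2.2.img docker://nfcore/chipseq:1.2.2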
Need: create a keyspace on an alternate device.
Problem: the service aborts on startup with the directory-creation failure messages below.
INFO [main] 2017-01-06 00:45:03,300 ViewManager.java:137 - Not submitting build tasks for views in keyspace system_schema as storage service is not initialized
ERROR [main] 2017-01-06 00:45:03,393 Directories.java:239 - Failed to create /var/lib/cassandra/data/opus/aa-15be7240d3db11e6ad0eed0a1d791016 directory
ERROR [main] 2017-01-06 00:45:03,397 DefaultFSErrorHandler.java:92 - Exiting forcefully due to file system exception on startup, disk failure policy "stop"
Context: Cassandra 3.9, single node, Ubuntu 16.04; directory permissions are shown below.
01:52 opus/ cd /var/lib/cassandra/data
01:52 opus/ ls -l
total 24
drwxr-xr-x 3 cassandra cassandra 4096 Jan 6 00:41 opus
drwxr-xr-x 24 cassandra cassandra 4096 Jan 5 23:49 system
drwxr-xr-x 6 cassandra cassandra 4096 Jan 5 23:50 system_auth
drwxr-xr-x 5 cassandra cassandra 4096 Jan 5 23:50 system_distributed
drwxr-xr-x 12 cassandra cassandra 4096 Jan 5 23:50 system_schema
drwxr-xr-x 4 cassandra cassandra 4096 Jan 5 23:50 system_traces
01:52 opus/ cd opus
01:52 opus/ ls -l
total 4
drwxr-xr-x 3 cassandra cassandra 4096 Jan 6 00:41 aa-15be7240d3db11e6ad0eed0a1d791016
When the symbolic link is installed:
01:57 data/ ls -l
total 20
lrwxrwxrwx 1 root root 35 Jan 6 01:57 opus -> /media/opus/quantdrive/opus
Steps:
Vanilla install of cassandra 3.9;
Create keyspace in cqlsh create keyspace opus with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
Create table use opus; create table aa(aa int, primary key(aa));
Stop cassandra
Move keyspace dir mv /var/lib/cassandra/data/opus /media/opus/quantdrive
Create symbolic link ln -s /media/opus/quantdrive/opus /var/lib/cassandra/data/opus
Start Cassandra [FAILS AS ABOVE] with a failure to create a directory that is already present
There was no change in permissions on the opus keyspace directory; I just moved it. When I move it back, Cassandra starts fine.
I would be grateful for any help with this, and I apologize in advance if the solution to my problem is described elsewhere or if I'm missing the obvious.
Move the mount point for the target drive from a user-owned directory to a root-owned one. In my case I moved the mount point from /media/opus/quantdrive, which is owned by user opus, to /mnt/quantdrive, which is owned by root, and everything worked fine.
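For anyone following along, a minimal sketch of that change, with /dev/sdX1 standing in for the real device of the external drive:
sudo service cassandra stop
sudo umount /media/opus/quantdrive
sudo mkdir -p /mnt/quantdrive
sudo mount /dev/sdX1 /mnt/quantdrive                            # /dev/sdX1 is a placeholder
sudo ln -sfn /mnt/quantdrive/opus /var/lib/cassandra/data/opus  # repoint the keyspace symlink
sudo service cassandra start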
Well, it seems I've hit my first issue with my BigInsights image; not a massive problem, but something to think about. My Ambari services page was showing that the Kafka service was not running. I tried a restart a number of times, but it kept failing, so I figured I'd best look into it a bit further. In this case the issue was on the Ambari master server, which has the most services running on it.
So the first course of action is to see if maybe Ambari is not making the call correctly:
[root@master ~]# kafka
Usage: /usr/bin/kafka {start|stop|status|clean}
[root@master ~]# kafka status
Kafka is not running.
[root@master ~]# kafka start
Starting Kafka succeeded with PID=15815.
[root@master ~]# kafka status
Kafka is not running.
Next I tried a clean start; not that I figured it would make much difference, but maybe there was an issue with the logs not allowing it to restart:
[root@master ~]# kafka clean
Removed the Kafka PID file: /var/run/kafka/kafka.pid.
Removed the Kafka OUT file: /var/log/kafka/kafka.out.
Removed the Kafka ERR file: /var/log/kafka/kafka.err.
[root@master ~]# kafka status
Kafka is not running. No pid file found.
[root@master ~]# kafka start
Starting Kafka succeeded with PID=15875.
[root@master-01 ~]# kafka status
Kafka is not running.
So let's take a proper look at the logs:
[root@master ~]# ls -ltr /var/log/kafka/
-<cut>-
-rw-r--r-- 1 kafka hadoop 6588 Aug 11 13:55 controller.log.2015-08-11-13
-rw-r--r-- 1 kafka hadoop 6000 Aug 11 13:59 server.log.2015-08-11-13
-rw-r--r-- 1 kafka hadoop 6588 Aug 11 14:55 controller.log
-rw-r--r-- 1 kafka hadoop 5700 Aug 11 14:56 server.log
-rw-r--r-- 1 root root 284 Aug 11 15:09 kafka.err
-rw-r--r-- 1 root root 522 Aug 11 15:09 kafka.out
-rw-r--r-- 1 kafka hadoop 707 Aug 11 15:09 kafkaServer-gc.log
Let's look at the error and out files:
[root@master ~]# cat /var/log/kafka/kafka.err
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c5330000, 986513408, 0) failed; error='Cannot allocate memory' (errno=12)
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c5330000, 986513408, 0) failed; error='Cannot allocate memory' (errno=12)
[root@master ~]# cat /var/log/kafka/kafka.out
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 986513408 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /root/hs_err_pid15875.log
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 986513408 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /root/hs_err_pid16305.log
Ah, that's odd, as I asked for at least 4GB of memory for my VMs. Let's check:
[root@master ~]# cat /proc/meminfo
MemTotal: 1922260 kB
MemFree: 278404 kB
Buffers: 8600 kB
Cached: 43384 kB
Best get some more memory allocated!
Normally the minimum you should install BigInsights with, as recommended by the IBM support pages, is 8GB, and this gives you some insight into why: at least 2GB of it is needed just to run the installed services on the system, even before you start loading the DB and running queries.
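If you want to check the same thing on your own VM, a couple of quick, non-definitive commands (the grep path is a guess at where this stack keeps its Kafka config):
free -m                                # how much memory the VM actually has
grep -ri xmx /etc/kafka/ 2>/dev/null   # where the Kafka heap size is configured (path is a guess)
less /root/hs_err_pid15875.log         # the HotSpot error report referenced in kafka.out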
I have tried to create a ceph filesystem in a single host, for testing purposes, with the following conf file
[global]
log file = /var/log/ceph/$name.log
pid file = /var/run/ceph/$name.pid
[mon]
mon data = /srv/ceph/mon/$name
[mon.mio]
host = penny
mon addr = 127.0.0.1:6789
[mds]
[mds.mio]
host = penny
[osd]
osd data = /srv/ceph/osd/$name
osd journal = /srv/ceph/osd/$name/journal
osd journal size = 1000 ; journal size, in megabytes
[osd.0]
host = penny
devs = /dev/loop1
/dev/loop1 is formatted with XFS and is actually backed by a 500 MB file (although that shouldn't matter much). Everything works pretty much OK, and the health check shows:
sudo ceph -s
2013-12-12 21:14:44.387240 pg v111: 198 pgs: 198 active+clean; 8730 bytes data, 79237 MB used, 20133 MB / 102 GB avail
2013-12-12 21:14:44.388542 mds e6: 1/1/1 up {0=mio=up:active}
2013-12-12 21:14:44.388605 osd e3: 1 osds: 1 up, 1 in
2013-12-12 21:14:44.388738 log 2013-12-12 21:14:32.739326 osd.0 127.0.0.1:6801/8834 181 : [INF] 2.30 scrub ok
2013-12-12 21:14:44.388922 mon e1: 1 mons at {mio=127.0.0.1:6789/0}
but when I try to mount the filesystem
sudo mount -t ceph penny:/ /mnt/ceph
mount error 5 = Input/output error
Usual answers point to ceph-mds not running, but it's actually working:
root 8771 0.0 0.0 574092 4376 ? Ssl 20:43 0:00 /usr/bin/ceph-mds -i mio -c /etc/ceph/ceph.conf
In fact, I previously managed to make it work by following these instructions http://blog.bob.sh/2012/02/basic-ceph-storage-kvm-virtualisation.html verbatim, but when I tried again I ran into the same problem. Any idea what might have failed?
Update: as indicated by the comment, dmesg shows a problem:
[ 6715.712211] libceph: mon0 [::1]:6789 connection failed
[ 6725.728230] libceph: mon1 127.0.1.1:6789 connection failed
Try using 127.0.0.1 instead of the hostname. It looks like the kernel is resolving the hostname to 127.0.1.1, which is not the address the monitor is bound to, and it probably isn't responding on the IPv6 loopback either.
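For example, spelling the monitor address out in the mount command so no name resolution happens at all (same mount point as in the question):
sudo mount -t ceph 127.0.0.1:6789:/ /mnt/ceph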
I'm receiving this error
BLKRASET: Inappropriate ioctl for device
when trying to run
sudo blockdev --setra 256 /data
on my Linux server. The server is being used as a MongoDB server, and /data is where it stores its data.
I initially tried to run this command when I received this warning when starting my MongoDB shell:
Wed Mar 20 22:40:49.850 [initandlisten]
Wed Mar 20 22:40:49.850 [initandlisten] ** WARNING: Readahead for
/data/db is set to 2048KB
Wed Mar 20 22:40:49.850 [initandlisten] ** We suggest setting it to
256KB (512 sectors) or less
Wed Mar 20 22:40:49.850 [initandlisten] **
http://dochub.mongodb.org/core/readahead
The blockdev --setra command is supposed to set the readahead value for that location and resolve the warning, but instead I'm running into the error above.
The blockdev command operates on block devices (disks), not directories. You need to pass it the name of the device in /dev/ on which your data directory is stored. Running df /data will tell you which device is currently mounted there; then you can run blockdev --setra 512 /dev/whatever (512 sectors = 256 KB, the value MongoDB suggests).
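A short sketch of that, with /dev/sdb1 standing in for whatever device df reports on your server:
df /data                              # shows which device is mounted at /data, e.g. /dev/sdb1 (placeholder)
sudo blockdev --setra 512 /dev/sdb1   # 512 sectors = 256 KB
sudo blockdev --getra /dev/sdb1       # verify the new readahead value
Note that a value set this way does not survive a reboot on its own; it needs to be reapplied at boot, for example from a startup script.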