How to recover a dropped database on MySQL

I accidentally dropped a database on MySQL using SQLyog Ultimate. I also found that the IT guy uninstalled SQLyog from the machine.
I am now working on two machines, including the one from which the database was dropped and MySQL was uninstalled.
Is there a way to recover the dropped database?

You said in a comment that you have a backup from a couple of hours prior to the data loss.
If you also have binary logs, you can restore the backup, and then reapply changes from the binary logs.
Here is documentation on this operation: http://dev.mysql.com/doc/refman/5.6/en/point-in-time-recovery.html
You can even filter the binary logs to reapply changes for just one database (mysqlbinlog --database name). For example, you may have other databases on the same instance that were not dropped, and you wouldn't want to reapply changes to those other databases.
Recovering two hours worth of binary logs won't take "a very long amount of time." The trickiest part is figuring out the start point to begin replaying the binary logs. If you were lucky enough to include the binary log position with the backup, this will be simpler and very precise. If you have to go by timestamp, it's less precise and you probably cannot hope to do an exact recovery.
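For illustration, a minimal sketch of the restore-then-replay sequence, assuming a dump file named backup.sql, a binary log named mysql-bin.000012, a database named mydb, and a start position recorded with the backup (all of these names and values are placeholders):
# 1. Restore the last full backup
mysql -u root -p < backup.sql
# 2. Replay only mydb's changes from the binary log, starting at the
#    position noted when the backup was taken (use --start-datetime
#    instead if all you have is a timestamp)
mysqlbinlog --database=mydb --start-position=107 mysql-bin.000012 | mysql -u root -p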
If binary logs weren't enabled on this instance for the period since you backed up the database, recovering the lost data is a lot trickier. You might be able to use a filesystem undelete tool like the EaseUS Data Recovery Wizard (though I can't say I have experience using that tool).
Reconstructing the files you recover is not for the faint of heart, and it's too much to get into here. You might want to get help from a professional MySQL consulting firm. I work for one such firm, Percona, which offers data recovery services.

There's really only one word: Backups.

After MySQL drops a database, the data is still on the media for a while, so you can fetch the records and rebuild the database with DBRECOVER.
mysql> drop database employees;
Query OK, 14 rows affected (0.16 sec)
# sync
# sync
In DBRECOVER:
Select "drop database recovery".
Select the MySQL version you were using; Page Size should be left at 16k.
Click "Select Directory" and enter the datadir directory.
Caution: you must enter the original datadir directory here. Please don't copy the datadir to another filesystem or mount point and point the tool at the copy; the software needs to scan the original filesystem or mount point, otherwise it can't work. It's also best to remount the datadir filesystem read-only to avoid any further disk writes, and don't place the DBRECOVER software package on the same filesystem.
https://youtu.be/ao7OY8IbZQE

Related

cassandra: restoring partially lost data

Theoretical question:
Let's say I have a Cassandra cluster with some data in it.
Backups are created on a daily basis.
Now a subset of the data has been lost, either through an application error or manual deletion.
What is the best way to restore data from existing backup?
I can think of starting a separate node with the backup disk attached, then export data manually through selects and reimport into the prod database.
That would work, but it sounds complicated. Is there a more straightforward solution for such problems?
If it's a single partition, your best bet is probably to use sstabledump or something like sstable-tools to read it and manually reinsert the data. If you're OK with restoring everything deleted since the time of the snapshot: reduce gc_grace_seconds and force a compaction to purge any tombstones (or else they will continue to shadow the restored data), then use sstableloader, or, if the token ranges are the same, copy the backed-up SSTables back into the data directory.
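A rough sketch of both approaches, assuming a keyspace named ks, a table named table1, and backup paths and SSTable file names that are purely placeholders:
# Option 1: dump a backed-up SSTable to JSON and manually re-insert the lost rows
sstabledump /backups/ks/table1/md-1-big-Data.db > rows.json
# Option 2: restore everything deleted since the snapshot.
# First purge tombstones so they don't shadow the restored rows
# (remember to set gc_grace_seconds back to its normal value afterwards):
cqlsh -e "ALTER TABLE ks.table1 WITH gc_grace_seconds = 0;"
nodetool compact ks table1
# Then stream the backed-up SSTables back in (path must end in keyspace/table):
sstableloader -d 10.0.0.1 /backups/ks/table1/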

How to migrate data from Exasol version 5 to Exasol version 6 without using files?

I wish to migrate data from Exasol to Exasol, but I don't want to use files, as it would take a lot of time to move terabytes of data. I am totally new to Exasol and have never worked on a migration. A script is available on GitHub (https://github.com/EXASOL/database-migration/blob/master/exasol_to_exasol.sql), but that again uses file import. Any lead would be appreciated!
Thanks
OK, we did this migration for an ~80 TB compressed (~400 TB raw) database.
First of all, Exasol v6 works with data volumes created in v5 without any problems. There is no need to make this migration ASAP.
The simplest way is:
Upgrade to Exasol v6.
Create an archive volume and make a full backup.
Create a data volume and restore the backup.
Create a new ExaSolution instance pointing to the restored data volume.
If everything is OK, drop the old ExaSolution instance and the old data volume.
This is the fastest and easiest method, but you'll need a lot of disk space. It is a good idea to drop all indexes and truncate all staging tables to reduce the size of the backup.

MemSQL: how to free space from plancachedir

I have a very small MemSQL instance which has about 200 tables and 200 MB of data in total. The plancachedir keeps filling the filesystem (25 GB+). I tried shutting down the databases and deleting the files under plancachedir, but after restarting the database all the files came back. "show plancache" shows 0 entries, so there are no plans to be deleted.
Would anyone let me know the best way to manage the plancachedir space consumption?
Thanks in advance.
So, if you are comfortable turning off your machine and deleting the plancache directory, try just running SNAPSHOT <db_name> on each database before turning off the server and deleting the plancache.
Otherwise the queries will be recompiled for every write query and alter table you ran during recovery.
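A minimal sketch of that sequence, following the SNAPSHOT <db_name> form mentioned above. MemSQL speaks the MySQL wire protocol, so the stock mysql client works; host, credentials, database names, and the plancache path are all placeholders:
# Snapshot each user database so recovery replays from the snapshot
# instead of recompiling every write query and ALTER TABLE in the log
mysql -h 127.0.0.1 -u root -e "SNAPSHOT db1;"
mysql -h 127.0.0.1 -u root -e "SNAPSHOT db2;"
# Then stop the node however you normally do, and clear the plancache
# directory (placeholder path) before starting it again
rm -rf /var/lib/memsql/plancache/*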
25 Gigs is a lot though...
To be honest, MemSQL is not optimized for the case of many 1-meg tables...
Depending on your use case, it might be worth investigating our JSON datatype, or rethinking your schemas.

Cassandra - Delete Old Versions of Tables and Backup Database

Looking in my keyspace directory I see several versions of most of my tables. I am assuming this is because I dropped them at some point and recreated them as I was refining the schema.
table1-b3441432142142sdf02328914104803190
table1-ba234143018dssd810412asdfsf2498041
These created tables names are very cumbersome to work with. Try changing to one of the directories without copy pasting the directory name from the terminal window... Painful. So easy to mistype something.
That side note aside, how do I tell which directory is the most current version of the table? Can I automatically delete the old versions? I am not clear if these are considered snapshots or not since each directory also can contain snapshots. I read in another post you can stop autosnapshot, but I'm not sure I want that. I'd rather just automatically delete any tables not being currently used (i.e.: that are not the latest version).
I stumbled across this while trying to do a backup. I realized I am forced to go to every table directory and copy out the snapshot files (there are like 50 directories, not including all the old table versions), which seems like a terrible design (maybe I'm missing something??).
I assumed I could do a snapshot of the whole keyspace and get one file back or at least output all the files to a single directory that represents the snapshot of the entire keyspace. At the very least it would be nice knowing what the current versions are so I can grab the correct files and offload them to storage somewhere.
DataStax Enterprise has a backup feature but it only supports AWS and I am using Azure.
So to clarify:
How do I automatically delete old table versions and know which is the current version?
How can I backup the most recent versions of the tables and output the files to a single directory that I can offload somewhere? I only have two nodes, so simply relying on the repair is not a good option for me if a node goes down.
You can see the active version of a table by looking in the system keyspace and checking the cf_id field. For example, to see the version for a table in the 'test' keyspace with table name 'temp', you could do this:
cqlsh> SELECT cf_id FROM system.schema_columnfamilies WHERE keyspace_name='test' AND columnfamily_name='temp' allow filtering;
cf_id
--------------------------------------
d8ea9830-20e9-11e5-afc0-c381f961c62a
As far as I know, it is safe to delete (rm -r) outdated table version directories that are no longer active. I imagine they don't delete them automatically so that you can recover the data if you dropped them by mistake. I don't know of a way to have them removed automatically even if auto snapshot is disabled.
I don't think there is a command to write all the snapshot files to a single directory. According to the documentation on snapshot, "After the snapshot is complete, you can move the backup files to another location if needed, or you can leave them in place." So it's left up to the application developer how they want to handle archiving the snapshot files.
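For what it's worth, a rough sketch of both ideas, assuming the default /var/lib/cassandra/data layout, the 'test' keyspace and 'temp' table from above, and a snapshot tag of your choosing (all placeholders). On recent Cassandra versions the table directory is named <table>-<cf_id with the dashes stripped>, so the cf_id tells you which directory is the active one:
# Find the active directory for test.temp
cf_id=$(cqlsh -e "SELECT cf_id FROM system.schema_columnfamilies WHERE keyspace_name='test' AND columnfamily_name='temp';" | grep -oE '[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}')
ls -d /var/lib/cassandra/data/test/temp-$(echo "$cf_id" | tr -d '-')
# Snapshot the whole keyspace under one tag, then gather every table's
# snapshot files into a single staging directory for offloading
# (consider one subdirectory per table to avoid file-name clashes)
nodetool snapshot -t mybackup test
mkdir -p /tmp/mybackup
find /var/lib/cassandra/data/test -path '*/snapshots/mybackup/*' -type f -exec cp {} /tmp/mybackup/ \;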

Split large database using Lotus Domino replica

I have a 34 GB database (on server A), and I want to delete part of its documents to improve performance, after creating a replica of the database itself.
I followed these steps:
created a local replica of the database
deleted several documents from the original database
I want to be sure I can recover the deleted documents into the original database, if needed, using the replica database.
So I tried a pull into the database from the local replica, and a push from the replica to the database.
Nothing happened: 0 documents were added, and I'm not able to "re-import" the documents.
What's wrong?
They're not supposed to come back! Replication goes both ways, and the most recent change to a document overwrites an older version, but deletion always wins.
Well... almost always.
When a document is deleted in one replica, a 'deletion stub' is left in its place. As long as that stub exists in the replica, a version of that document in another replica will not replicate back. The stub blocks it. That's why deletion wins.
But stubs are purged after a period of time called the 'purge interval'. The default purge interval is 30 days. After a stub has been purged from a replica, deletion can't win any more because there is nothing left to block an old revision from replicating back from another replica. The thing is, usually this is a Bad Thing. Usually when documents are deleted, you want them to stay deleted. You don't want them to reappear just because somebody kept a replica off-line for 31 days.
Now, there are some ways that you can try and control this process carefully, purging stubs and using something else (e.g., selective replication settings) to prevent deletions from coming back except when you want them to. There are ways to try, but one slip up with one setting in one replica, and boom! Bad things happen. And that includes any replica, including ones that you are not controlling carefully. It's a bad idea. I agree completely with @Karl-Henry on this.
Also, selective replication is evil and should be avoided at all costs. That's just my opinion, anyhow, but I have a lot of scars left over from the days before I came to that conclusion.
Here are two Lotus tech notes about deletion stubs and the purge interval: Purging documents in Lotus Notes, How to purge document deletion stubs immediately. Please use what you learn from these tech notes wisely. I urge you not to use this knowledge to try to construct a replication-based backup/restore scheme!
I would be very careful using a replica as an archive like that. I could see someone replicating the wrong way, and that would cause some issues...
I have designed archive solutions for several of my big databases here at work. I simply have a separate database (same design) designated as the archive. I then have a manually triggered or scheduled agent (different in different databases) that identifies the documents to be archived and moves them from the production database to the archive. I then have functions to move documents back into production if needed.
