Why is Perforce giving this error when I try to create a checkpoint? Can I restore the entire database from just a checkpoint file and the journal file? What am I doing wrong, and how does this work? Also, why is the Perforce user guide a giant book, and why are there no video tutorials online?
Why is perforce giving this error when I try to create a checkpoint?
You specified an invalid prefix (//. is not a valid filename). If you want to create a checkpoint that doesn't have a particular prefix, just omit that argument:
p4d -jc
This will create a checkpoint called something like checkpoint.8 in the server root directory (P4ROOT), and back up the previous journal file (journal.7) in the same location.
Can I restore the entire database from just a checkpoint file and the journal file?
Yes. The checkpoint is a snapshot of the database at the moment in time when you took the checkpoint. The journal file records all transactions made after that point.
If you restore from just the checkpoint, you will recover the database as of the point in time at which the checkpoint was taken. If you restore from your latest checkpoint plus the currently running journal, you can recover the entire database up to the last recorded transaction.
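As a rough sketch of what that looks like (the /p4/root path is just an assumption for your P4ROOT; checkpoint.8 and journal are the files described above): stop the server, move the existing db.* files out of the way, then replay the checkpoint followed by the current journal into a fresh database:

p4d -r /p4/root -jr checkpoint.8 journal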
The old journal backups that are created as part of the checkpoint process provide a record of everything that happened in between checkpoints. You don't need these to recover the latest state, but they can be useful in extraordinary circumstances (e.g. you discover that important data was permanently deleted by a rogue admin a month ago and you need to recover a copy of the database to the exact moment in time before that happened).
The database (and hence the checkpoint/journal) does not include depot file content! Make sure that your depots are located on reasonably durable storage (e.g. a mirrored RAID) and/or have regular backups (ideally coordinated with your database checkpoints so that you can restore a consistent snapshot in the event of a catastrophe).
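For example (the paths here are hypothetical), a backup script might take the checkpoint first and then copy the depot subdirectory, so the archive files and the metadata snapshot correspond to the same point in time:

p4d -r /p4/root -jc
rsync -a /p4/root/depot/ /backups/p4/depot/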
https://www.perforce.com/manuals/v15.1/p4sag/chapter.backup.html
Related
We are scanning an ADLS Gen 2 data lake successfully with Purview. However, if a folder is deleted in the lake and you re-scan, the scan does not remove the deleted folder. The deleted folder remains in Purview, and the last modified date (from the scan) remains the previous scan date/time from when it was present. How can I purge these now-invalid entries? Removing the previous scan does not work. Removing the entire source from Purview leaves the scan results behind in the register, and a new scan does not clean them up. There is also no manual delete/purge option. The only option seems to be to remove the entire Purview account from Azure, redeploy, and reconfigure everything.
Am I missing a trick?
Reading this, https://learn.microsoft.com/en-us/azure/purview/concept-detect-deleted-assets, this mostly seems like expected behavior. Did you try scanning more than twice, with at least 5-minute intervals between scans?
To keep deleted files out of your catalog, it's important to run regular scans. The scan interval is important, because the catalog can't detect deleted assets until another scan is run. So, if you run scans once a month on a particular store, the catalog can't detect any deleted data assets in that store until you run the next scan a month later. 😕
I'm having trouble figuring out how to take a backup of a JanusGraph database that is backed by Apache Cassandra as its persistent storage.
I'm looking for the correct methodology for performing backup and restore tasks. I'm very new to this concept and have no idea how to do it. It would be highly appreciated if someone could explain the correct approach or point me to the right documentation so I can safely execute these tasks.
Thanks a lot for your time.
Cassandra can be backed up a few ways. One way is called a "snapshot", which you issue via the "nodetool snapshot" command. Cassandra will create a "snapshots" sub-directory, if it doesn't already exist, under each table that's being backed up (each table has its own directory where it stores its data), and then it will create a specific directory for this particular occurrence of the snapshot (you can either name the directory with a "nodetool snapshot" parameter or let it default). Cassandra then creates hard links to all of the sstables that exist for that particular table, looping through each table and keyspace depending on your "nodetool snapshot" parameters. It's very fast, as creating hard links takes almost no time.

You will have to run this command on each node in the Cassandra cluster to back up all of the data; each node's data is backed up to the local host. I know DSE, and possibly Apache Cassandra, are adding functionality to back up to object storage as well (I don't know if this is an OpsCenter-only capability or if it can be done via the snapshot command too). You will have to watch the space consumption, as there is no automatic process to clean these snapshots up.
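For example (the keyspace name and snapshot tag below are placeholders, and clearsnapshot's -t option assumes a reasonably recent Cassandra), on each node you might run:

nodetool snapshot -t nightly_20230101 janusgraph
nodetool clearsnapshot -t nightly_20230101

The first command creates the hard-linked snapshot directories; the second removes them once you've copied the snapshot off the node, since nothing cleans them up automatically.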
Like many database systems, you can also purchase/use 3rd-party software to perform backups (e.g. Cohesity (formerly Talena), Rubrik, etc.). We use one such product in our environments and it works well (graphical interface, easy-to-use point-in-time recovery, etc.). They also offer easy-to-use "refresh" capabilities (e.g. refreshing your PT environment from, say, production backups).
Those are probably the two best options.
Good luck.
In my sandbox S, I created a changelist X, and it was submitted to Perforce as Y. From Y, I want to get the exact creation time of X, that is, the first time this changelist was created.
The unit of versioning in Perforce is the submitted changelist; there is not generally a detailed record of everything that happened in the workspace prior to the submit, including edits made to the changelist while it was in a pending state. (If you want more fine-grained versioning, submit more fine-grained changelists.)
That said, if you're willing to do the work, you can parse this information out of the server journal files (which are primarily used for server recovery rather than end-user consumption, but since they represent a plaintext record of every database transaction you can mine a LOT of data out of them if you've got access and a good understanding of the server database schema). Look for modifications to the db.change table; each one is timestamped. If you need to know when files were opened prior to the creation of the changelist, those updates are in db.working.
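As a very rough sketch (the journal file name, its location, and the exact record layout are assumptions; treat this purely as a starting point), journal records are plain text and name the table they touch, so you can pull out the relevant rows with something like:

grep '@db.change@' /p4/root/journal.7
grep '@db.working@' /p4/root/journal.7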
I have a master couchdb, which is replicated to a local db every time the local application starts.
The user can modify the local docs, but I want these docs to be deleted when the replication starts if they have disappeared from the master db.
How can I achieve that?
This is already how replication works. When a document is modified (including deletion), that change gets replicated.
The only possible problem you may encounter is that if a local change is made at the same time a deletion occurs, then upon sync, there will be a conflict.
So you need your local app to do some sort of conflict resolution, which selects the deleted revision. I suggest reading about the CouchDB Replication and Conflict Model as a starting place.
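As a minimal sketch (the database and document names are hypothetical), the local app can check for this after the sync and make the deletion win by removing the surviving local revision:

curl "http://localhost:5984/local_db/some_doc?deleted_conflicts=true"
curl -X DELETE "http://localhost:5984/local_db/some_doc?rev=2-abc123"

The first request includes a _deleted_conflicts field if a deletion arrived from the master but lost to a local edit; the second deletes the surviving local revision so the document ends up deleted, matching the master.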
I am trying to upload a 2.6 GB iso to Azure China Storage using AZCopy from my machine here in the USA. I shared the file with a colleague in China and they didn't have a problem. Here is the command which appears to work for about 30 minutes and then fails. I know there is a "Great Firewall of China" but I'm not sure how to get around the problem.
C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy> .\AzCopy.exe
/Source:C:\DevTrees\MyProject\Layout-Copy\Binaries\Iso\Full
/Dest:https://xdiso.blob.core.chinacloudapi.cn/iso
/DestKey:<my-key-here>
The network between the Azure server and your local machine is probably very slow, and by default AzCopy uses 8 threads per core for the data transfer, which might be too aggressive for a slow network.
I would suggest reducing the thread count by setting the "/NC:" parameter to a smaller number such as "/NC:2" or "/NC:5", and seeing whether the transfer becomes more stable.
By the way, when the timeout issue occurs again, resume with the same AzCopy command line; that way you always make progress with each resume instead of starting over from the beginning.
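For example, your original command with the concurrency capped would look like this:

C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy> .\AzCopy.exe
/Source:C:\DevTrees\MyProject\Layout-Copy\Binaries\Iso\Full
/Dest:https://xdiso.blob.core.chinacloudapi.cn/iso
/DestKey:<my-key-here>
/NC:2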
Since you're experiencing a timeout, you could try AzCopy in re-startable mode like this:
C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy> .\AzCopy.exe
/Source:<path-to-my-source-data>
/Dest:<path-to-my-storage>
/DestKey:<my-key-here>
/Z:<path-to-my-journal-file>
The path to your journal file is arbitrary. For instance, you could set it to C:\temp\azcopy.log if you'd like.
Assume an interrupt occurs while copying your file and 90% of the file has already been transferred to Azure. Then, upon restarting, AzCopy will only transfer the remaining 10% of the file.
For more information, type .\AzCopy.exe /?:Z to find the following info:
Specifies a journal file folder for resuming an operation. AzCopy
always supports resuming if an operation has been interrupted.
If this option is not specified, or it is specified without a folder path,
then AzCopy will create the journal file in the default location,
which is %LocalAppData%\Microsoft\Azure\AzCopy.
Each time you issue a command to AzCopy, it checks whether a journal
file exists in the default folder, or whether it exists in a folder
that you specified via this option. If the journal file does not exist
in either place, AzCopy treats the operation as new and generates a
new journal file.
If the journal file does exist, AzCopy will check whether the command
line that you input matches the command line in the journal file.
If the two command lines match, AzCopy resumes the incomplete
operation. If they do not match, you will be prompted to either
overwrite the journal file to start a new operation, or to cancel the
current operation.
The journal file is deleted upon successful completion of the
operation.
Note that resuming an operation from a journal file created by a
previous version of AzCopy is not supported.
You can also find out more here: http://blogs.msdn.com/b/windowsazurestorage/archive/2013/09/07/azcopy-transfer-data-with-re-startable-mode-and-sas-token.aspx