Simple example to implement hive-based registry?
I have an application in which I need to persist the volume setting every time the application is started or shut down. For this I came across the hive-based registry, but I don't know how to implement or use it.
Please Reply
Thanks in advance
You can start by reading this post: http://geekswithblogs.net/BruceEitman/archive/2009/08/11/windows-ce-what-is-hive-registry.aspx and then read the MSDN article on setting up the hive-based registry.
Don't change public code, though; just make the changes in your platform.reg and they will take precedence over the settings in common.reg.
Basically, you will need to enclose all the registry settings that must be available at boot time between the ; HIVE BOOT SECTION and ; END HIVE BOOT SECTION markers (don't mistake these for ordinary .reg file comments - they are flags for the build system).
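As a rough sketch only (the boot-time keys and their exact values should be taken from common.reg in your own OS tree, and the MyApp/Volume key is a purely hypothetical example for your setting), a platform.reg fragment might look like this:

    ; HIVE BOOT SECTION
    [HKEY_LOCAL_MACHINE\init\BootVars]
        ; Location of the system hive file (copy the value used in your common.reg)
        "SystemHive"="Documents and Settings\\system.hv"
        "ProfileDir"="\\Documents and Settings"
        ; Start the Device Manager early so the storage driver holding the hive can load
        "Start DevMgr"=dword:1
    ; END HIVE BOOT SECTION

    ; Values that are only needed after boot can live outside the markers; once the
    ; hive-based registry is enabled they are persisted across power cycles.
    ; Hypothetical key for the application's volume setting:
    [HKEY_LOCAL_MACHINE\Software\MyApp]
        "Volume"=dword:5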
Take a look at the MAINSTONEIII BSP; it should have this implemented under WINCE600\PLATFORM\MAINSTONEIII\FILES.
Your other option is to keep a configuration file on persistent storage - that seems like a simpler solution to me, and more adequate if you only need persistence for the volume setting.
We have recently set up Greenplum. Now the major concern is to set up a strategy for PITR. Postgres provides PITR capability, but I am a little confused about how it will work in Greenplum, since each segment has its own log directory and config file.
We recently introduced the concept of a named restore point to serve as a building block for PITR in Greenplum. To use it, you call the catalog function gp_create_restore_point(), which internally creates a cluster-wide consistency point across all the segments. The function returns the restore point location for each segment and the master. Using these restore points, you can configure the recovery.conf in your PITR cluster.
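As a rough sketch (the restore point name and the WAL archive path below are placeholders; adapt them to your archiving setup):

    -- On the Greenplum master: create a cluster-wide named restore point
    SELECT gp_create_restore_point('pitr_2019_06_01');

    -- Then, in recovery.conf on the master and each segment of the PITR cluster
    -- (restore_command must match however you archive WAL):
    restore_command = 'cp /path/to/wal_archive/%f %p'
    recovery_target_name = 'pitr_2019_06_01'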
To demonstrate how Greenplum named restore points work, a new test directory src/test/gpdb_pitr has been added. The test showcases WAL archiving in conjunction with named restore points to do point-in-time recovery.
If you are interested in the details, please refer to the following two commits, which discuss this functionality in depth: https://github.com/greenplum-db/gpdb/commit/47896cc89b4935199aa7d97043f2b7572a71042b
https://github.com/greenplum-db/gpdb/commit/40e0fd9ce6c7da3921f0b12e55118320204f0f6d
I'm having trouble figuring out how to back up a JanusGraph database that uses Apache Cassandra as its persistent storage backend.
I'm looking for the correct methodology for performing backup and restore tasks. I'm very new to this and have no idea how to do it. It would be highly appreciated if someone could explain the correct approach or point me to the right documentation to safely carry out these tasks.
Thanks a lot for your time.
Cassandra can be backed up a few ways. One way is called a "snapshot", which you issue via the "nodetool snapshot" command. Cassandra will create a "snapshots" sub-directory, if it doesn't already exist, under each table that's being backed up (each table has its own directory where it stores its data) and then create a directory for this particular occurrence of the snapshot (you can either name it with a "nodetool snapshot" parameter or let it default). Cassandra then creates hard links to all of the SSTables that exist for that table, looping through each table and keyspace depending on your "nodetool snapshot" parameters. It's very fast, since creating hard links takes almost no time.

You will have to run this command on each node in the Cassandra cluster to back up all of the data, and each node's data is backed up to the local host. I know DSE, and possibly Apache, are adding functionality to back up to object storage as well (I don't know if this is an OpsCenter-only capability or if it can be done via the snapshot command too). You will also have to watch the space consumption, as there is no process that cleans these snapshots up for you.
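For illustration, a minimal sketch (keyspace and tag names are hypothetical, and flag spellings can vary slightly between Cassandra versions):

    # Run on every node in the cluster; tag the snapshot so it is easy to find later
    nodetool snapshot -t nightly_2019_06_01 my_keyspace

    # Snapshots accumulate under <data_dir>/<keyspace>/<table>/snapshots/<tag>/
    # and are never removed automatically, so clear them once copied off-host:
    nodetool clearsnapshot -t nightly_2019_06_01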
Like many database systems, you can also purchase/use 3rd-party software to perform backups (e.g. Cohesity (formerly Talena), Rubrik, etc.). We use one such product in our environments and it works well (graphical interface, easy-to-use point-in-time recovery, etc.). They also offer easy-to-use "refresh" capabilities (e.g. refreshing your PT environment from, say, production backups).
Those are probably the two best options.
Good luck.
I'm trying to access Bigtable from Spark (Dataproc). I tried several different methods, and SHC seems to be the cleanest for what I am trying to do, and it performs well.
https://github.com/GoogleCloudPlatform/cloud-bigtable-examples/tree/master/scala/bigtable-shc
However, this approach requires that I put the Google Cloud project ID in hbase-site.xml, which means I need to build a separate version of the fat jar containing my Spark code for each environment I run on (prod, staging, etc.), which is something I'd like to avoid.
Is there a way for me to pass in the Google Cloud project ID at runtime?
As far as I can tell, the SHC library does not let you pass through HBase configs (looking in here).
The easiest thing would be to run an init action that gets the VM's project ID from the VM metadata server and sets it in hbase-site.xml. We are working on an initialization action that does that and installs the HBase client for Bigtable. Check out the in-progress pull request, which would be a good starting point if you need to write one immediately. Otherwise, I expect the PR to get merged in the next couple of weeks.
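A rough sketch of what such an init action could do (the config path and property name below are assumptions; check what your HBase/Bigtable client image actually uses):

    #!/bin/bash
    # Read the project ID of the VM from the GCE metadata server
    PROJECT_ID=$(curl -s -H "Metadata-Flavor: Google" \
        http://metadata.google.internal/computeMetadata/v1/project/project-id)

    # Inject it into hbase-site.xml so the fat jar can stay environment-agnostic.
    # /etc/hbase/conf/hbase-site.xml and google.bigtable.project.id are assumptions.
    sudo sed -i "s|</configuration>|  <property><name>google.bigtable.project.id</name><value>${PROJECT_ID}</value></property>\n</configuration>|" \
        /etc/hbase/conf/hbase-site.xml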
Alternatively, consider adding an option in SHC for passing through properties to the HBaseConfiguration it creates. That would be a valuable feature for the broader community.
I am looking for the best solution for a problem where, let's say, an application has to access a CSV file (say employee.csv) and perform some operations such as getEmployee or updateEmployee.
Which volume type is best suited for this, and why?
Please note that employee.csv will already contain some pre-loaded data.
Also, to be precise, we are using azure-cli for handling Kubernetes.
Please Help!!
My first question would be: is your application meant to be scalable (i.e. have multiple instances running at the same time)? If that is the case, then you should choose a volume that can be written to by multiple instances at the same time (ReadWriteMany, https://kubernetes.io/docs/concepts/storage/persistent-volumes/). As you are using Azure, the AzureFile volume could fit your case. However, I am concerned that there could be conflicts with multiple writers (and some data may be lost). My advice would be to use a database system instead, so you avoid that kind of situation.
If you only need one writer, then you could use pretty much any of them. However, if you use local volumes you could have problems when a pod gets rescheduled on another host (it would not be able to retrieve the data). Given your requirements (a simple CSV file), the reason to pick one PersistentVolume provider over another is simply whichever is least painful to set up. In this sense, just as before, since you are using Azure you could use an AzureFile volume type, as it should be the most straightforward to configure in that cloud: https://learn.microsoft.com/en-us/azure/aks/azure-files
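For illustration, a minimal AzureFile claim might look like the sketch below (the name and size are placeholders; AKS normally ships an azurefile storage class, but verify the class name in your cluster). You would then mount the claim into your pods and copy employee.csv onto the share before first use.

    # employee-pvc.yaml -- hypothetical names and sizes
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: employee-data
    spec:
      accessModes:
        - ReadWriteMany            # several pods may mount the share read/write
      storageClassName: azurefile
      resources:
        requests:
          storage: 1Gi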
I am thinking of writing a system lifesaver application for Ubuntu that can restore the system to an earlier state. This could be very useful when the system breaks.
Users could create restore points beforehand and then use them to restore their system.
This would be used for packages initially, and later on for restoring previous versions of files, somewhat like the System Restore functionality in Microsoft Windows.
Here is the idea page.
I have gone through some ideas for implementing it the way it is done in Windows, by keeping information about the files in the filesystem; there the filesystem itself is intelligent enough to support this feature. We don't have such a filesystem readily available on Linux - one candidate is Btrfs, but using it would force users to create new partitions, which would be cumbersome.

So I am thinking of a "copy-on-write and save-on-delete" approach. When a restore point is created, I will create a new directory for the backup, like "backup#1", in the restore folder created earlier by the application, and then create hard links to the files that need to be restorable. Now, if any file is deleted from its original location, I still have its hard link, which can be used to restore the file when needed.

But this approach doesn't handle modification. For modification I am thinking of creating hooks in the filesystem (using redirfs) that call my attached callbacks, which will check for modifications in various parts of the files. I will keep all these changes in a database and then reverse them as soon as a restore is needed.
Please suggest some efficient approaches for doing this.
Thanks
As the comments suggested, the LVM snapshot capability provides a good basis for such an undertaking. It works at the per-partition level and saves only the sectors that have changed compared with the current system state. The LVM HOWTO gives a good overview.
You'll have to set up the system from the very start with LVM, though, and leave sufficient space for snapshots.
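As a rough illustration (the volume group and logical volume names are placeholders, and snapshot merging needs a reasonably recent LVM2/kernel), creating and rolling back a restore point could look like this:

    # Create a restore point: snapshot the root LV, reserving 5G for copy-on-write changes
    lvcreate --snapshot --size 5G --name restore_point_1 /dev/vg0/root

    # Roll back by merging the snapshot into its origin; the merge completes
    # the next time the origin volume is activated (e.g. after a reboot)
    lvconvert --merge /dev/vg0/restore_point_1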