How to add entries with slapadd in OpenLDAP replication (syncrepl) - Linux

We have OpenLDAP replication with syncrepl, and I don't know how to add entries to it with slapadd.
On a standalone server it works fine, but when I add entries on one of the machines in the replication setup, slapd fails to start on the second machine.
Thanks

Unfortunately, slapadd doesn't write to the accesslog, so its modifications aren't replicated. This is especially bad because some attributes can't be modified via ldapadd.
If you only need ordinary attributes, use ldapadd instead.
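For instance, a minimal sketch of loading entries with ldapadd so they go through syncrepl (the server URI, bind DN, and LDIF file name are placeholders):
# Hypothetical URI, bind DN, and LDIF file -- adjust to your setup.
ldapadd -H ldap://provider.example.com -D "cn=admin,dc=example,dc=com" -W -f entries.ldif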
UPDATE:
It looks like you can use the -w switch:
Write syncrepl context information. After all entries are added, the
contextCSN will be updated with the greatest CSN in the database.
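For example, a minimal sketch of an offline bulk load on the provider (the suffix and LDIF file name are placeholders; slapd should be stopped while slapadd runs):
# Hypothetical suffix and LDIF file; stop slapd first, restart it afterwards.
slapadd -w -b "dc=example,dc=com" -l entries.ldif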


How can I make usbmon log file (*.mon)?

I'm trying to use vusb-analyzer.
It requires *.mon log file.
How can I make usbmon log file (*.mon)?
https://www.kernel.org/doc/Documentation/usb/usbmon.txt
The document you linked in your question is actually the answer; please see sections 1-3.
In section 3, it says:
# cat /sys/kernel/debug/usb/usbmon/0u > /tmp/1.mon.out
This will create a text file 1.mon.out. Its structure is also described in the same document.
Now, how do I know that this is the file to be opened by vusb-analyzer? From what I see, the website of this project doesn't make it clear what the *.mon file is.
However, you can see it in the source code:
https://github.com/scanlime/vusb-analyzer/blob/master/VUsbTools/Log.py#L498
It clearly states that the program uses the syntax described in the document that you already know:
https://www.kernel.org/doc/Documentation/usb/usbmon.txt
The name of your file doesn't really matter, but if you want it to end with ".mon", you could simply use:
# cat /sys/kernel/debug/usb/usbmon/0u > ~/somefile.mon
Two warnings:
The line with cat I posted here is just an example; in order to use it, you need to follow the steps in the document first (it won't work without enabling usbmon; see the sketch after these warnings).
vusb-analyzer hasn't been updated for years and I wasn't able to run it on my machine. Its website mentions Ubuntu 8.10, so I wouldn't be surprised if others had problems running it too (which is also why I can't reproduce your problem or provide more help).
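For completeness, a sketch of the enabling steps from sections 1-3 of the linked document ("0u" captures all buses; a specific bus number also works):
# Make sure debugfs is mounted and the usbmon module is loaded (sections 1-2 of usbmon.txt).
mount -t debugfs none_debugs /sys/kernel/debug 2>/dev/null
modprobe usbmon
# Capture from all buses ("0u") into a .mon text file; interrupt with Ctrl-C when done.
cat /sys/kernel/debug/usb/usbmon/0u > ~/somefile.mon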

PTC Integrity batch update member revision

Is there a way to update the member revision of a big list of files via command line?
I can't use :working or :head but have to specify a different revision for each file.
As far as I know --selectionFile only takes paths as input, but not the revision numbers.
edit: I wanted to set the member revision for a very big list of files, and I wanted to avoid running si updaterevision ... once per file, as that takes ages for so many files. Instead I wanted to know if there is a more advanced method to specify a list of files together with their revisions, so that updaterevision only has to run once for the whole list (like it does with :working).
But as the comment says, there is no such possibility.
edit2: I have used MKS for a couple of years now, and as I know by now, there is no way (at least up to MKS 11.6) to update many files to different revisions with a single command-line call. Using one call per member, as was proposed, made the whole operation take up to several hours, since I had many thousands of members in the sandbox and MKS needs some time to complete each si command.
Some time has passed since you asked this question; here is my comment in case it is still useful for you in the future.
First, it is not completely clear what you want to achieve. Please be more descriptive and, if possible, provide an example.
What I understand so far is that you need to set the member revision for a list of files through the command line. This is fairly simple; the most complicated part is actually getting the list of files to be updated and the revision you want to set as the member revision for each one.
I recommend creating a batch file with one command per file. You can use a regex to build it very quickly and without much trouble.
Here is an example for updating one file member revision:
si updaterevision --hostname=servername --port=portnumber --user=username --changepackageid=5873763:2 --revision=:working myfile_a1.c
where
servername = the name of the server where your sandbox is located
portnumber = the port that provides access to the server for your sandbox
username = your login user id
changepackageid = here you change the number to your defined TASK:ChangePackage for these changes
revision = if you have a working revision that you want to become the member revision, use ":working" as the revision; otherwise you can give a specific revision number, e.g. --revision=1.2
At the end you put the name of the file you want to update.
Go to your sandbox root folder, open a CMD window, and run the batch file. It will execute each line, applying your changes.
If you have a list of files with the revision you want as member, you can use a regex to convert it into a batch file.
Example list of files in text file:
file1.c 1.10
file3.c 1.19
sec_file1.c 1.1.2.1
support.h 1.7
Use Notepad++ or another text editor with regex support and run this search and replace:
Search = ([\w.]+)\s+([\d.]+).*
Replace = si updaterevision --hostname=servername --port=portnum --user=userid --changepackageid=6123933:4 --revision=\2 \1
\1 => FileName
\2 => File revision
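Applied to the example list above, that search and replace yields a batch file like this (hostname, port, user, and change package ID are the placeholders from the replace pattern):
si updaterevision --hostname=servername --port=portnum --user=userid --changepackageid=6123933:4 --revision=1.10 file1.c
si updaterevision --hostname=servername --port=portnum --user=userid --changepackageid=6123933:4 --revision=1.19 file3.c
si updaterevision --hostname=servername --port=portnum --user=userid --changepackageid=6123933:4 --revision=1.1.2.1 sec_file1.c
si updaterevision --hostname=servername --port=portnum --user=userid --changepackageid=6123933:4 --revision=1.7 support.h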
Finally, just save the document as a batch file and run it.
Just speculating: if you have a large list of members along with the member revisions you want to update to, then you also have a sandbox that served to generate this list.
If so my approach would be
c:\MySandbox> si updaterevision --recurse --revision=:working
If your member/revision list comes from a development path, you could first have a sandbox targeting that devpath, resync, close the sandbox if it is opened in the GUI, retarget the sandbox to the destination devpath (or mainline) you want, and then issue the command above.
For a single-member approach I would use 'si rlog' to generate a list of si commands directly:
si rlog -R --noheaderformat --notrailerformat --revision=:working --format="si updaterevision {membername} --revision={revision}\r\n" > updaterevs.bat.txt
Review updaterevs.bat.txt, rename it to updaterevs.bat, and execute it.
(Be careful when using it on other sandboxes.)
Other interesting reading here might be the "snapshot sandbox" feature, checkpointing in general, and variants/devpaths.
Using only these features might be more in line with the philosophy of Integrity.

Programmatic way of modifying cassandra.yaml

We are creating a Cassandra cluster at runtime, and information like the cluster name and the IP addresses of the seeds is available only at runtime. Is there a Java wrapper for cassandra.yaml that allows setters and getters for cassandra.yaml and saves it to disk? I understand that I can always create a wrapper myself, but wanted to know if there's already one available.
Is there a java wrapper for cassandra.yaml that allows setters and getters for cassandra.yaml and saves it to disk?
Not that I am aware of. Although, that's a good idea for an OpenSource project!
I have done this a few different ways in the past. One is with a combination of Chef and consul-template. Essentially, your cassandra.yaml contains variable place-holders, which are filled by a combination of default attributes (Chef) and cluster-specific settings (consul-template) when your deployment recipe runs.
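As an illustration, a fragment of what such a templated cassandra.yaml could look like with consul-template (the Consul key paths here are hypothetical):
cluster_name: '{{ key "cassandra/cluster_name" }}'
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "{{ key "cassandra/seeds" }}"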
I have also done this with a Bash script using sed (for a couple of our non-Chef environments). This is an excerpt from a script I wrote to migrate a DataStax Enterprise installation to an Apache Cassandra (open source) install:
#!/bin/bash
cp /etc/dse/cassandra/cassandra.yaml /etc/cassandra/conf
#set GossipingPropertyFileSnitch in cassandra.yaml
sed -i 's/endpoint_snitch: com.datastax.bdp.snitch.DseDelegateSnitch/endpoint_snitch: GossipingPropertyFileSnitch/' /etc/cassandra/conf/cassandra.yaml
#set truststore location
sed -i 's/truststore: \/etc\/dse\/cassandra\//truststore: \/etc\/cassandra\/conf\//g' /etc/cassandra/conf/cassandra.yaml
#set keystore location
sed -i 's/keystore: \/etc\/dse\/cassandra\//keystore: \/etc\/cassandra\/conf\//g' /etc/cassandra/conf/cassandra.yaml
Essentially here, you're doing a regex-replace for specific yaml property settings. Specifically, I needed to update the snitch, and the locations of the keystore/truststore. It's not pretty, but it works.
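Following the same sed approach for the runtime values from your question, a minimal sketch (the cluster name and seed IPs are placeholders you'd compute at deploy time):
#!/bin/bash
# Hypothetical runtime values -- substitute whatever your provisioning layer knows.
CLUSTER_NAME="my-cluster"
SEEDS="10.0.0.1,10.0.0.2"
#set the cluster name
sed -i "s/^cluster_name:.*/cluster_name: '${CLUSTER_NAME}'/" /etc/cassandra/conf/cassandra.yaml
#set the seed list (keeps the line's indentation, replaces everything from "- seeds:" on)
sed -i "s/- seeds:.*/- seeds: \"${SEEDS}\"/" /etc/cassandra/conf/cassandra.yaml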

CouchDB Replication overwrites Documents

I know that when you create a document on database A, replicate the database, then change it on both DB A and DB B and THEN replicate again, you get a conflict, but both versions exist in the revision tree.
But when you create a doc with an id XY on DB A, then create a doc with the same id but different content on DB B, and then replicate, only one of the versions exists. The other one gets overwritten.
Is the reason that both documents have no common version they descend from, so the replication algorithm can't know that they both exist?
And if yes, is there a way of saving both versions?
The use case is that there are two databases, one local, one online, which sync bidirectionally. Users create docs on both DBs. I need to make sure that IF the connection fails for a while, both sides CAN still create docs, and I can merge them whenever the connection is back. I guess the hard part here is CREATE instead of UPDATE, right?
Firstly, and for total clarity, CouchDB does not overwrite data. The only way for data you've written to be forgotten is to make a successful update to a document.
CouchDB will introduce new branches (aka conflicts) during replication to preserve all divergences of content. If what you've seen is reproducible, then it's a bug. Below is my transcript, though, which shows that CouchDB indeed preserves both revisions as expected:
curl 127.0.0.1:5984/db1 -XPUT
{"ok":true}
curl 127.0.0.1:5984/db2 -XPUT
{"ok":true}
curl 127.0.0.1:5984/db1/mydoc -XPUT -d '{"foo":true}'
{"ok":true,"id":"mydoc","rev":"1-89248382088d08ccb7183515daf390b8"}
curl 127.0.0.1:5984/db2/mydoc -XPUT -d '{"foo":false}'
{"ok":true,"id":"mydoc","rev":"1-1153b140e4c8674e2e6425c94de860a0"}
curl 127.0.0.1:5984/_replicate -Hcontent-type:application/json -d '{"source":"db1","target":"db2"}'
{"ok":true,...}
curl '127.0.0.1:5984/db2/mydoc?conflicts=true'
{"_id":"mydoc","_rev":"1-89248382088d08ccb7183515daf390b8","foo":true,"_conflicts":["1-1153b140e4c8674e2e6425c94de860a0"]}

Sync files between two vobs (with clearfsimport) without checking in the updated files

I am using the following command to sync vob B files from vob A:
clearfsimport -master -follow -nsetevent -comment $2 /vobs/A/xxx/*.h /vobs/B/xxx/
It works fine, but it checks in all the changes automatically. Is there a way to do the same task but leave the updated files in a checked-out state?
I want to update the files in B from A, build my program, and then recover the branch. If the updated files were in a checked-out state, I could do an unco later. But with my command above everything is checked in, so I can't recover my branch that way.
Thanks.
As VonC said, it's impossible to prevent clearfsimport from doing the check-in, and he suggested using a label to recover afterwards.
For me, the branch where I did the clearfsimport is branched from a label; let's call it LABEL_01. So I guess I can use that label for recovery. Is there an easy way (one command) to recover the files under /vobs/B/xxx/ to label LABEL_01? I want to do it in my bash script, so the shorter and easier the command, the better.
Thanks.
After having a look at the man page for clearfsimport: no, it isn't possible to prevent the checkins.
I would set a label before the clearfsimport, and modify the config spec so that the new versions are created in a branch (similar to this config spec).
That way, recovering the initial branch would be easy: none of the new versions would have been created in it.
