I know this is more of a Server Fault question than a Stack Overflow question, but since Server Fault isn't up yet, here I go:
I'm supposed to move an application from one Red Hat server to another. Without very good knowledge of the internal workings of the application, how would I move the OpenLDAP database from one machine to the other, schemas and all?
What files would I need to copy over? I believe the setup is pretty standard.
The problem with SourceRebels' answer is that slapcat(8) does not guarantee that the data is ordered for ldapadd(1)/ldapmodify(1).
From man slapcat (OpenLDAP 2.3):
The LDIF generated by this tool is suitable for use with slapadd(8).
As the entries are in database order, not superior first order, they
cannot be loaded with ldapadd(1) without first being reordered.
(FYI: In OpenLDAP 2.4 that section was rephrased and expanded.)
Also, dumping the database with a tool that works on the backend files and then loading the LDIF through the LDAP protocol is not very consistent.
I'd suggest using either slapcat(8)/slapadd(8) or ldapsearch(1)/ldapmodify(1). My preference goes to the latter, as it needs neither shell access to the LDAP server nor moving files around.
For example, to dump the database from a master server under dc=master,dc=com and load it into a backup server:
$ ldapsearch -Wx -D "cn=admin_master,dc=master,dc=com" -b "dc=master,dc=com" -H ldap://my.master.host -LLL > ldap_dump-20100525-1.ldif
$ ldapadd -Wx -D "cn=admin_backup,dc=backup,dc=com" -H ldap://my.backup.host -f ldap_dump-20100525-1.ldif
The -W flag above prompts for the admin_master LDAP password, but since we are redirecting output to a file you won't see the prompt, just an empty line. Go ahead, type your admin_master password and press Enter, and it will work. The first line of the output file (Enter LDAP Password:) will need to be removed before running ldapadd.
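For example, one way to strip that line afterwards (a sketch assuming GNU sed and the dump filename from above):
$ sed -i '/^Enter LDAP Password:/d' ldap_dump-20100525-1.ldif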
One last hint: ldapadd(1) is a hard link to ldapmodify(1) with the -a (add) flag turned on.
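You can check this on your own system; if they are hard links, both names report the same inode number (paths vary by distribution):
$ ls -i /usr/bin/ldapadd /usr/bin/ldapmodify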
ldapsearch and ldapadd are not necessarily the best tools to clone your LDAP DB. slapcat and slapadd are much better options.
Export your DB with slapcat:
slapcat > ldif
Import the DB with slapadd (make sure the LDAP server is stopped):
slapadd -l ldif
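One gotcha worth mentioning (a hedged note; paths and the service user vary by distribution): if you run slapadd as root, the new database files end up owned by root, and slapd may then fail to start. On a typical Red Hat layout the fix would be:
# give the freshly imported database files back to the user slapd runs as
chown -R ldap:ldap /var/lib/ldap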
A few notes:
Save your custom schema and objectClass definitions on your new server. You can find the included files in slapd.conf, for example (this is part of my slapd.conf):
include /etc/ldap/schema/core.schema
Include your custom schemas and objectClasses in your new OpenLDAP installation.
Use the slapcat command to export your full LDAP tree to one or more LDIF files (see the sketch after these steps).
Use ldapadd to import the LDIF files into your new LDAP installation.
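For the slapcat export, a sketch (dc=example,dc=com is a placeholder suffix; -l writes to a file instead of stdout):
slapcat -b "dc=example,dc=com" -l full_tree.ldif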
I prefer to copy the database through the protocol:
First of all, be sure you have the same schemas on both servers.
Dump the database with ldapsearch:
ldapsearch -LLL -Wx -D "cn=admin,dc=domain" -b "dc=domain" > domain.ldif
and import it into the new server:
ldapmodify -Wx -D "cn=admin,dc=domain" -a -f domain.ldif
in one line:
ldapsearch -LLL -Wx -D "cn=admin,dc=domain" -b "dc=domain" | ldapmodify -w pass -x -D "cn=admin,dc=domain" -a
By using the bin/ldap* commands you are talking directly to the server, while with the bin/slap* commands you are dealing with the backend files.
(Not enough reputation to write a comment...)
Ldapsearch opens a connection to the LDAP server.
Slapcat instead accesses the database directly, and this means that ACLs, time and size limits, and other byproducts of the LDAP connection are not evaluated, and hence will not alter the data. (Matt Butcher, "Mastering OpenLDAP")
Thanks, Vish. Worked like a charm! I edited the command:
ldapsearch -z max -LLL -Wx -D "cn=Manager,dc=domain,dc=fr" -b "dc=domain,dc=fr" >/tmp/save.ldif
ldapmodify -c -Wx -D "cn=Manager,dc=domain,dc=fr" -a -f /tmp/save.ldif
I just added -z max to avoid the size limitation, and -c to carry on even if the target domain already exists (my case).
Related
I am running a test script where files need to be copied to a target embedded system. When the command that copies the files to the remote target system is run from the script, I am prompted for the administrator password of the target board. How can I automate the script so that it picks up the password by itself (from within the script) and I don't have to enter the password manually every time I run the script?
A snippet from the script is below:
scp test.file1 <Target ip-address>:/home/bot21/test/.
A password prompt appears when the above command is run.
The right way to do this is with key-based authentication.
Read about it here.
If that link ever breaks, just google it: "ssh passwordless" or "ssh key authentication". Despite Toby's comment, I think linking to the topic, or showing how to search for it yourself, is better than repeating what others can say better and in more depth than I can.
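In short, the flow looks something like this (a minimal sketch; the target placeholder is the one from the question):
# generate a key pair; leave the passphrase empty for unattended scripts
ssh-keygen -t rsa
# install the public key on the target (you type the password one last time here)
ssh-copy-id <Target ip-address>
# from now on, scp from the script runs without a password prompt
scp test.file1 <Target ip-address>:/home/bot21/test/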
Use the -i option. The man page says:
-i identity_file
Selects the file from which the identity (private key) for public
key authentication is read. This option is directly passed to
ssh(1).
That is, do:
scp -i /path/to/identity_file test.file1 <Target ip-address>:/home/bot21/test/
The process of creating the identity file is well described here.
Configure the access permissions on the identity file so that the users on your system who need this file can read it. Also make sure those users can traverse the path to the file, i.e. /path/to in our example.
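For example (a sketch; note that ssh itself refuses a private key that others can read):
# restrict the key to its owner
chmod 600 /path/to/identity_file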
You can use sshpass to pass the password to scp. Something like:
sshpass -p passw0rd scp test.file1 <Target ip-address>:/home/bot21/test/
But as already mentioned, using keys is the recommended approach. I added the following notes to SO Documentation before it was retired.
Connecting from a script using a password
When you really need to script an ssh connection, piping the password into the ssh command (echo passw0rd | ssh host) does not work, because the password is read not from standard input but directly from the TTY (teleprinter, teletypewriter, Teletype, for historical reasons).
But there is the sshpass tool, which works around this problem. It can read the password from a parameter, a file, or an environment variable. Note, however, that none of these options satisfies the security requirements for a password!
$ sshpass -p passw0rd ssh host
$ sshpass -f /secret/filename ssh host
$ SSHPASS=passw0rd sshpass -e ssh host
The command line options can be seen by other users in ps (at runtime the password is masked, but not during startup, and you can't rely on that):
... 23624 6216 pts/5 Ss Aug30 0:00 \_ /bin/bash
... 12812 1988 pts/5 S+ 08:50 0:00 | \_ sshpass -p passw0rd ssh host
... 45008 5796 pts/15 Ss+ 08:50 0:00 | \_ ssh host
Note that the environment variables of a process are also accessible to other processes running as the same user (and to root) via the /proc/PID/environ file.
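You can see this for your own shell, for example:
# entries in environ are NUL-separated; translate them to newlines to read them
tr '\0' '\n' < /proc/$$/environ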
Finally, storing the password in a file might look like the best of these options, but keys, as described in the other answers, remain the preferred way to use ssh.
I want to update the passwords of users that already exist in LDAP by importing data from /etc/passwd and /etc/shadow.
How can I achieve this?
I'll give an overview of my setup.
Node user IDs and passwords are managed by the management node (xCAT); LDAP is not used for this purpose.
We imported the users from the management node to the LDAP server with the following steps:
Copied /etc/passwd, /etc/group and /etc/shadow from the management node.
getent passwd > /tmp/passwd.out
getent shadow > /tmp/shadow.out
cd /usr/share/migrationtools/
./migrate_passwd.pl /tmp/passwd.out > /tmp/passwd.ldif
ldapadd -x -W -D "cn=Manager,dc=aadityaldap,dc=com" -f /tmp/passwd.ldif
Now we want to update the passwords frequently and keep the LDAP server in sync with our management node. Please give me an idea how to achieve this.
I tried the same way I imported users into LDAP, but it gives me an error:
[root@iitmserver2 migrationtools]# ldapmodify -x -W -D "cn=Manager,dc=aadityaldap,dc=com" -f /tmp/passwd.ldif
Enter LDAP Password:
ldapmodify: modify operation type is missing at line 2, entry "uid=pharthiphan,ou=People,dc=aadityaldap,dc=com"
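(A note on that error: migrate_passwd.pl emits plain add-style entries, while ldapmodify without -a expects each entry to carry a changetype directive. An LDIF that updates an existing user's password would need to look roughly like this sketch, with the hash value left as a placeholder:)
dn: uid=pharthiphan,ou=People,dc=aadityaldap,dc=com
changetype: modify
replace: userPassword
userPassword: {CRYPT}...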
I have a new goal: to be able to create OpenAM users with ssoadm.
I have read the OpenAM documentation:
https://wikis.forgerock.org/confluence/display/openam/ssoadm-identity#ssoadm-identity-create-identity
However, I don't know how to create a user and then assign it a password. For now I can only create users through the OpenAM web console, but that is not desirable; I want to automate this.
Does somebody know how I can create a normal user with ssoadm?
./ssoadm create-identity ?
./ssoadm create-agent ?
UPDATE: I have continued my investigation :) I think I'm closer than before:
$ ./ssoadm create-identity -u amadmin -f /tmp/pwd.txt -e / -i Test -t User
Minimum password length is 8.
But where is the parameter for password?
Thanks!
To create a new user in the configured data stores you could execute the following ssoadm command:
$ openam/bin/ssoadm create-identity -e / -i helloworld -t User -u amadmin -f .pass -a givenName=Hello sn=World userPassword=changeit
Here you can see that I've defined the password as the userPassword attribute, which is data store dependent really. For my local OpenDJ this is perfectly legal, but if you are using a database or something else, then you'll have to adjust the command accordingly.
If you don't want to provide the attributes on the command line, then you could put all the values into a properties file, for example:
$ echo "givenName=Hello
sn=World
userPassword=changeit" > hello.txt
$ openam/bin/ssoadm create-identity -e / -i helloworld -t User -u amadmin -f .pass -D hello.txt
But I must say that using OpenAM for identity management is not recommended; you should use your data store's own tools to manage identities (e.g. an LDAP client within your app, or simply the ldap* CLI tools). You may find that OpenAM doesn't handle all the identity management related tasks the way people would normally expect, so to prevent surprises, use something else for identity management.
Is there a way to try multiple passwords when using the sshpass command? I have a txt file named hosts.txt listing multiple system IP addresses, and each system uses one of several different passwords (for example 'mypasswd', 'newpasswd', 'nicepasswd'). The script reads hosts.txt and executes a set of commands on each system. Since I don't know which system uses which of these passwords, I wanted to try the whole set with sshpass and run the script with whichever password works. Is that possible?
#!/bin/bash
while read host; do
sshpass -p 'mypasswd' ssh -o StrictHostKeyChecking=no -n root@$host 'ls;pwd;useradd test'
done < hosts.txt
Instead of trying to get password-based authentication working, isn't it an option to set up key-based auth? You can then either add one public key to all systems, or generate different ones and use the -i keyfile option, or create an entry in the ssh configuration file as below.
Host a
IdentityFile /home/user/.ssh/host-a-key
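If you do have to stay with passwords, a loop over the candidates is possible; here is a minimal sketch relying on sshpass's documented exit status of 5 for an incorrect password:
#!/bin/bash
while read host; do
    for pw in 'mypasswd' 'newpasswd' 'nicepasswd'; do
        sshpass -p "$pw" ssh -o StrictHostKeyChecking=no -n "root@$host" 'ls;pwd;useradd test'
        # sshpass(1) exits with 5 on a wrong password; any other status means we
        # are done with this host (the commands ran, or a non-password error occurred)
        [ $? -ne 5 ] && break
    done
done < hosts.txt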
What would be the most efficient way to export 200 databases, 40 GB of data in total, and import them into another server? I was originally planning to run a script that exports each DB to its own SQL file and then imports them into the new server. If this is the best way, are there additional flags I can pass to mysqldump to speed it up?
The other option I saw was to pipe the mysqldump directly into an import over SSH. Would this be a better option? If so, could you provide some info on what the script might look like?
If the servers can reach each other, you could use pipes to do it (passwords are inline placeholders here, since with a pipe both ends can't prompt on the same terminal):
mysqldump -h HOST_FROM -u USER -pPASSWORD db_name | mysql -h HOST_TO -u USER -pPASSWORD db_name
Straightforward!
[EDIT]
Answer to your question:
mysqldump -h HOST_FROM -u USER -pPASSWORD --all-databases | mysql -h HOST_TO -u USER -pPASSWORD
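If only SSH is open between the two servers, the same pipe can run over it, compressing in transit to speed things up; a sketch with placeholder names:
# dump everything, compress it over the wire, and load it on the far side
mysqldump -u USER -pPASSWORD --all-databases | gzip | ssh user@HOST_TO "gunzip | mysql -u USER -pPASSWORD"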
The quickest way is to use Percona XtraBackup to take hot backups. It is very fast and can be used on a live system, whereas mysqldump can cause locking. Avoid copying the /var/lib/mysql directory to the other server in the case of InnoDB; that can have very bad effects.
Try Percona XtraBackup; its documentation has more information on installation and configuration.
If both MySQL servers will have the same databases and config, I think the best method is to copy the /var/lib/mysql directory using rsync, as sketched below. Stop the servers before doing the copy to avoid table corruption.
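A sketch of that copy (placeholder hostname; run it with mysqld stopped on both sides):
# -a preserves ownership and permissions of the table files
rsync -av /var/lib/mysql/ root@NEW_HOST:/var/lib/mysql/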
Export the MySQL database over SSH with the command:
mysqldump -p -u username database_name > dbname.sql
Move the dump by running wget from an SSH session on the new server (this assumes the dump sits in a web-accessible directory):
wget http://www.domainname.com/dbname.sql
Import the MySQL database over SSH with the command:
mysql -p -u username database_name < dbname.sql
Done!!