Get a persistent string in and out of the TPM2 module - tpm

I'm trying to save a small amount of data in the TPM2 across power cycles, so that this small string is tied to one specific machine. Here is what I have working:
# put data in file that is to be sealed
echo "my sealed data" > seal.dat
# create a primary key
tpm2_createprimary -c primary.ctx
# create a child key in public and private parts
tpm2_create -C primary.ctx -u obj.pub -r obj.priv
# create a sealed object
tpm2_create -C primary.ctx -i seal.dat -u obj.pub -r obj.priv
# load the private and public portions into the TPM
tpm2_load -C primary.ctx -u obj.pub -r obj.priv -c key.ctx
# unseal the data
tpm2_unseal -c key.ctx
But after a power cycle if I enter:
'tpm2_unseal -c key.ctx'
I get the following error:
WARNING:esys:src/tss2-esys/api/Esys_ContextLoad.c:279:Esys_ContextLoad_Finish() Received TPM Error
ERROR:esys:src/tss2-esys/api/Esys_ContextLoad.c:93:Esys_ContextLoad() Esys Finish ErrorCode (0x000001df)
ERROR: Esys_ContextLoad(0x1DF) - tpm:parameter(1):integrity check failed
ERROR: Invalid item handle authorization
ERROR: Unable to run tpm2_unseal
I am using the tpm_server (emulator) if that makes any difference.
So what is the best way to load a small string into the tpm2 and have power loss persistence?

Sealing an object does not store anything in the TPM's NV memory. It encrypts the data with a key that's only accessible to the TPM, but the result is saved in two files on your file system -- nothing is stored inside the TPM itself. (That is also why unsealing fails after a power cycle: a saved context such as key.ctx is only valid until the TPM is reset, so its integrity check fails afterwards.)
To store some data in the TPM's NV memory, you would define an NV index and then write to it, for example (the index value here is arbitrary, chosen from the user NV range):
nv_test_index=0x1500016
tpm2_nvdefine -Q $nv_test_index -C o -s 32 -a "ownerread|policywrite|ownerwrite"
echo "please123abc" > nv.test_w
tpm2_nvwrite -Q $nv_test_index -C o -i nv.test_w
And then to read the data back:
tpm2_nvread -Q $nv_test_index -C o -s 32 -o 0
(sample code from tpm2-tools test script)
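Sketched end to end, assuming tpm2-tools 4+ with owner authorization available (the index value and file names are arbitrary), the NV round trip looks like this; unlike a saved context file, the NV contents survive a power cycle:

```shell
# Bail out quietly when tpm2-tools or a reachable TPM/simulator is missing.
command -v tpm2_nvdefine >/dev/null 2>&1 || exit 0
tpm2_getcap properties-fixed >/dev/null 2>&1 || exit 0

# Define a 32-byte NV index (0x1500016 is an arbitrary user-range index).
nv_index=0x1500016
tpm2_nvdefine -C o -s 32 -a "ownerread|ownerwrite" "$nv_index"

# Write the string, then read it back; no context files are involved,
# so this still works after the machine (and TPM) restarts.
printf 'my sealed data' > nv.dat
tpm2_nvwrite -C o -i nv.dat "$nv_index"
tpm2_nvread -C o -s 14 "$nv_index"
```

Alternatively, if you want to keep the sealing approach from the question, the loaded key can be made reboot-proof with tpm2_evictcontrol (which assigns it a persistent handle) and then unsealed via that handle instead of key.ctx.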

Failed to Find Keys at Zone Apex

I'm trying to create a signed zone file, but previous examples are not working for me. See below
[root@dnsserv1 named]# dnssec-keygen -r /dev/random -a RSASHA1 -b 1024 -n ZONE example.edu
Generating key pair.........................+++++ ..............................................+++++
example.edu.+005+56778
# Create KSK
[root@dnsserv1 named]# dnssec-keygen -r /dev/random -a RSASHA1 -b 1024 -n ZONE -f KSK example.edu
Generating key pair........................................................................+++++ ..........+++++
example.edu.+005+27182
[root@dnsserv1 named]# dnssec-signzone -o example.edu -k 'example.edu.+005+56778' example.edu.zone 'example.edu.+005+27182.key'
dnssec-signzone: warning: example.edu.zone:1: no TTL specified; using SOA MINTTL instead
dnssec-signzone: fatal: failed to find keys at the zone apex: not found
This "failed to find keys at the zone apex: not found" error doesn't seem like it's common. If I search for it on Google, almost nothing comes up. Did I forget to do something? I've tried many different variants on what's shown above.

Tmux link-pane with format variables

I am trying to link a window from another session by specifying the target session using format variables. That way I hope to have it always linked next to the currently active window.
The hard coded version of the working command:
:link-window -a -s 1:remote -t 0:2
in which case I specify the target literally. When I try any of:
:link-window -a -s 1:remote -F -t "#{session_name}":"#{window_index}"
:link-window -a -s 1:remote -F "#{session_name}":"#{window_index}"
:link-window -a -s 1:remote -t "#{session_name}":"#{window_index}"
I get an error. The notable part is that when I do use the -F flag, the usage text for the link-window command is displayed. And when I omit it and use only -t, the error is can't find window #{session_name}.
Does it mean that link-window command simply doesn't support format variables?
-t does not support format variables, and link-window does not support -F. run-shell will expand formats, so you can do it by running, for example:
run "tmux linkw -t '#{session_name}'"

Multiple downloads with wget at the same time

I have a links.txt with multiple links to download; all are protected by the same username and password.
My intention is to download multiple files at the same time: if the file contains 5 links, download all 5 files simultaneously.
I've tried this, but without success.
cat links.txt | xargs -n 1 -P 5 wget --user user007 --password pass147
and
cat links.txt | xargs -n 1 -P 5 wget --user=user007 --password=pass147
give me this error:
Reusing existing connection to www.site.com HTTP request sent,
awaiting response... 404 Not Found
This message appears for all the links I try to download, except for the last link in the file, which starts to download.
This is what I currently use, but it downloads just one file at a time:
wget --user=admin --password=145788s -i links.txt
Use wget's -i and -b flags.
-b
--background
Go to background immediately after startup. If no output file is specified via the -o, output is redirected to wget-log.
-i file
--input-file=file
Read URLs from a local or external file. If - is specified as file, URLs are read from the standard input. (Use ./- to read from a file literally named -.)
Your command will look like:
wget --user user007 --password "pass147*" -b -i links.txt
Note: you should always quote strings containing special characters (e.g. *).

Trouble understanding ssh-keygen man page - Specify location and password

This is my code:
ssh-keygen -t rsa -C "$APP"
This works perfectly, but it then asks me to specify a location and password interactively. I was hoping to automate it all in one go; however, this command fails:
ssh-keygen -t rsa -C "$APP" -P "$SSHKEYPASS" -T ~/.ssh/id_rsa.pub
It fails when I specify the password I want for the key and the location on the same line. I don't really understand the man page:
http://linux.die.net/man/1/ssh-keygen
Can anyone tell me where I have gone wrong?
-P is for the old passphrase; to create a key I assume you want -N for the new passphrase.
-T is for DH group test output, it appears (not that I know exactly what that is).
You want -f to specify the key filename, and you specify the private key file, not the public key file.
So try:
ssh-keygen -t rsa -C "$APP" -N "$SSHKEYPASS" -f ~/.ssh/id_rsa
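Putting that together, a minimal non-interactive sketch (the directory, comment, and passphrase are placeholders, not from the question); ssh-keygen -y re-derives the public key and only succeeds when given the correct passphrase, so it doubles as a check:

```shell
# Generate a key non-interactively into a scratch directory, then verify
# the passphrase took effect by re-deriving the public key with -y.
dir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -C "myapp" -N "s3cret" -f "$dir/id_rsa"
ssh-keygen -y -P "s3cret" -f "$dir/id_rsa" > "$dir/derived.pub"
```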

How do I clone an OpenLDAP database

I know this is more of a Server Fault question than a Stack Overflow question, but since Server Fault isn't up yet, here I go:
I'm supposed to move an application from one Red Hat server to another, and without very good knowledge of the internal workings of the application, how would I move the OpenLDAP database from the one machine to the other, with schemas and all?
What files would I need to copy over? I believe the setup is pretty standard.
The problem with SourceRebels' answer is that slapcat(8) does not guarantee that the data is ordered for ldapadd(1)/ldapmodify(1).
From man slapcat (from OpenLDAP 2.3) :
The LDIF generated by this tool is suitable for use with slapadd(8).
As the entries are in database order, not superior first order, they
cannot be loaded with ldapadd(1) without first being reordered.
(FYI: In OpenLDAP 2.4 that section was rephrased and expanded.)
Besides, dumping the database with a tool that reads the backend files and then loading the LDIF with a tool that goes through the LDAP protocol is not very consistent.
I'd suggest to use a combination of slapcat(8)/slapadd(8) OR ldapsearch(1)/ldapmodify(1). My preference would go to the latter as it does not need shell access to the ldap server or moving files around.
For example, to dump the database from a master server under dc=master,dc=com and load it into a backup server:
$ ldapsearch -Wx -D "cn=admin_master,dc=master,dc=com" -b "dc=master,dc=com" -H ldap://my.master.host -LLL > ldap_dump-20100525-1.ldif
$ ldapadd -Wx -D "cn=admin_backup,dc=backup,dc=com" -H ldap://my.backup.host -f ldap_dump-20100525-1.ldif
The -W flag above prompts for the ldap admin_master password; since we are redirecting output to a file you won't see the prompt, just an empty line. Go ahead, type your ldap admin_master password, press Enter, and it will work. The first line of your output file (Enter LDAP Password:) will need to be removed before running ldapadd.
Last hint, ldapadd(1) is a hard link to ldapmodify(1) with the -a (add) flag turned on.
ldapsearch and ldapadd are not necessarily the best tools to clone your LDAP DB. slapcat and slapadd are much better options.
Export your DB with slapcat:
slapcat > ldif
Import the DB with slapadd (make sure the LDAP server is stopped):
slapadd -l ldif
Some notes:
Save your personalized schemas and objectclass definitions on your new server. You can find the included files in your slapd.conf, for example (this is a part of my slapd.conf):
include /etc/ldap/schema/core.schema
Include your personalized schemas and objectclasses in your new OpenLDAP installation.
Use the slapcat command to export your full LDAP tree to one or more ldif files.
Use ldapadd to import the ldif files on to your new LDAP installation.
I prefer to copy the database through the protocol:
First of all, make sure you have the same schemas on both servers.
Then dump the database with ldapsearch:
ldapsearch -LLL -Wx -D "cn=admin,dc=domain" -b "dc=domain" > domain.ldif
and import it in the new server:
ldapmodify -Wx -D "cn=admin,dc=domain" -a -f domain.ldif
in one line:
ldapsearch -LLL -Wx -D "cn=admin,dc=domain" -b "dc=domain" | ldapmodify -w pass -x -D "cn=admin,dc=domain" -a
By using the bin/ldap* commands you are talking directly to the server, while with the bin/slap* commands you are dealing with the backend files.
(Not enough reputation to write a comment...)
Ldapsearch opens a connection to the LDAP server.
Slapcat instead accesses the database directly, and this means that ACLs, time and size limits, and other byproducts of the LDAP connection are not evaluated, and hence will not alter the data. (Matt Butcher, "Mastering OpenLDAP")
Thanks, Vish. Worked like a charm! I edited the command:
ldapsearch -z max -LLL -Wx -D "cn=Manager,dc=domain,dc=fr" -b "dc=domain,dc=fr" >/tmp/save.ldif
ldapmodify -c -Wx -D "cn=Manager,dc=domain,dc=fr" -a -f /tmp/save.ldif
I just added the -z max to avoid the size limitation and the -c to continue even if the target domain already exists (my case).
