gSOAP: generate xsi:type without the -t option (soapcpp2 -t)

I have a WSDL and an XSD file and want to generate XML with xsi:type only for particular tags.
I used soapcpp2 -t, but it creates xsi:type for all types,
i.e. for namespace types (xsi:type="ns:anything") as well as primitive data types (xsi:type="xsd:string").
I don't want those primitive data types in the XML.
Commands I used:
wsdl2h -s -x -y -t typemap.dat -o abc_wsdl.h
soapcpp2 -C -L abc_wsdl.h


Tmux link-pane with format variables

I am trying to link a window from another session by specifying the target session using a format variable. That way I hope to get it always linked next to the currently active window.
The hard-coded version of the working command:
:link-window -a -s 1:remote -t 0:2
in which case I specify the target window literally. When I try any of:
:link-window -a -s 1:remote -F -t "#{session_name}":"#{window_index}"
:link-window -a -s 1:remote -F "#{session_name}":"#{window_index}"
:link-window -a -s 1:remote -t "#{session_name}":"#{window_index}"
I get an error. The notable part here is that when I do use the -F flag, the usage for the link-window command is displayed. And when I omit it and use only -t, the error is can't find window #{session_name}.
Does it mean that link-window command simply doesn't support format variables?
-t does not support format variables and link-window does not support -F. run-shell will expand formats, so you can do it by running, for example:
run "tmux linkw -t '#{session_name}'"

Get a persistent string in and out of the TPM2 module

I'm trying to save a small amount of data in the TPM2 across power cycles, so that this small string is tied to one specific machine. Here is what I have working.
# put data in file that is to be sealed
echo "my sealed data" > seal.dat
# create a primary key
tpm2_createprimary -c primary.ctx
# create a child key in public and private parts
tpm2_create -C primary.ctx -u obj.pub -r obj.priv
# create a sealed object
tpm2_create -C primary.ctx -i seal.dat -u obj.pub -r obj.priv
# load the private and public portions into the TPM
tpm2_load -C primary.ctx -u obj.pub -r obj.priv -c key.ctx
# unseal the data
tpm2_unseal -c key.ctx
But after a power cycle, if I enter:
tpm2_unseal -c key.ctx
I get the following error:
WARNING:esys:src/tss2-esys/api/Esys_ContextLoad.c:279:Esys_ContextLoad_Finish() Received TPM Error
ERROR:esys:src/tss2-esys/api/Esys_ContextLoad.c:93:Esys_ContextLoad() Esys Finish ErrorCode (0x000001df)
ERROR: Esys_ContextLoad(0x1DF) - tpm:parameter(1):integrity check failed
ERROR: Invalid item handle authorization
ERROR: Unable to run tpm2_unseal
I am using the tpm_server (emulator) if that makes any difference.
So what is the best way to load a small string into the TPM2 and have it persist across power loss?
Sealing an object does not store anything in the TPM's NV memory. It encrypts the data with a key that's only accessible to the TPM, but the sealed blob is saved in two files on your file system -- nothing is stored in the TPM itself.
To store some data in the TPM's NV memory, you would define an NV index and then write to it, for example:
tpm2_nvdefine -Q $nv_test_index -C o -s 32 -a "ownerread|policywrite|ownerwrite"
echo "please123abc" > nv.test_w
tpm2_nvwrite -Q $nv_test_index -C o -i nv.test_w
And then to read the data back:
tpm2_nvread -Q $nv_test_index -C o -s 32 -o 0
(sample code from tpm2-tools test script)
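As a self-contained sketch of the same sequence, with the test script's variable replaced by a concrete handle (0x1500016 and the nv.dat file name are just illustrative choices; the attributes mirror the snippet above):
# define a 32-byte NV index under the owner hierarchy (0x1500016 is an arbitrary example handle)
tpm2_nvdefine -Q 0x1500016 -C o -s 32 -a "ownerread|policywrite|ownerwrite"
# write the string, then read it back; NV contents survive power cycles,
# unlike the key.ctx/obj.priv context files from the sealing approach
echo "my sealed data" > nv.dat
tpm2_nvwrite -Q 0x1500016 -C o -i nv.dat
tpm2_nvread -Q 0x1500016 -C o -s 32 -o 0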

Multiple downloads with wget at the same time

I have a links.txt file with multiple links to download; all are protected by the same username and password.
My intention is to download multiple files at the same time: if the file contains 5 links, download all 5 files at the same time.
I've tried this, but without success.
cat links.txt | xargs -n 1 -P 5 wget --user user007 --password pass147
and
cat links.txt | xargs -n 1 -P 5 wget --user=user007 --password=pass147
Both give me this error:
Reusing existing connection to www.site.com HTTP request sent,
awaiting response... 404 Not Found
This message appears for all the links I try to download, except for the last link in the file, which starts to download.
This is what I currently use, but it downloads just one file at a time:
wget --user=admin --password=145788s -i links.txt
Use wget's -i and -b flags.
-b
--background
Go to background immediately after startup. If no output file is specified via the -o, output is redirected to wget-log.
-i file
--input-file=file
Read URLs from a local or external file. If - is specified as file, URLs are read from the standard input. (Use ./- to read from a file literally named -.)
Your command will look like:
wget --user user007 --password "pass147*" -b -i links.txt
Note: You should always quote strings with special characters (e.g. *).
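If you still want several simultaneous transfers rather than a single background job, one rough sketch is to combine -b and -i with split (the part_ prefix is arbitrary):
# one URL per file, then one backgrounded wget per file
split -l 1 links.txt part_
for f in part_*; do wget --user user007 --password "pass147*" -b -i "$f"; done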

FIPS Capable OpenSSL cross-compiled: incore fingerprint issue

I'm having an issue trying to use the OpenSSL shared library (libcrypto) compiled to be FIPS capable on a MIPS device.
I cross-compiled the FIPS Object Module and then the OpenSSL library in the following way (summarizing):
export FIPS_SIG=<my_path>/incore
./config fips --with-fipsdir=<my_path>/fips-2.0
make depend
make
make install
I did all the needed steps, so I'm able to compile and install the library.
The issue appears when I try to call the FIPS_mode_set(1) API from an application linking the OpenSSL library.
The FIPS mode initialization fails with this error:
2010346568:error:2D06B06F:lib(45):func(107):reason(111):NA:0:
Debugging the FIPS code, I found that the issue is inside the FIPS_check_incore_fingerprint(void) function:
the check memcmp(FIPS_signature,sig,sizeof(FIPS_signature)) fails.
Going deeper in the debugging, I discovered that the FIPS_signature value remains the default one, so I suspect that the incore script, called by the fipsld utility, is not properly embedding the fingerprint inside the OpenSSL shared object.
How can I check if the incore script embedded the fingerprint inside the shared object?
How can I print the expected fingerprint?
Do I need to adapt the incore script? (I suppose that's not allowed.)
Do you have any suggestions?
Thanks a lot!
P.S.: I'm cross-compiling using an x86 Linux machine.
I found the issue! I'll try to explain the whole debugging process and the solution.
INTRODUCTION:
When OpenSSL is configured to be FIPS capable, during compilation the Makefile calls a utility, fipsld, which both performs the check of the FIPS Object Module and generates the new HMAC-SHA-1 digest for the application executable (as explained in the official OpenSSL user guide: https://www.openssl.org/docs/fips/UserGuide-2.0.pdf).
The fipsld command requires that the CC and FIPSLD_CC environment variables be set,
with the latter taking precedence.
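For a build driven from outside the OpenSSL tree, this typically looks something like the following sketch (the MIPS cross-compiler name is an assumption; the paths reuse the <my_path> placeholder from above):
export FIPS_SIG=<my_path>/incore
# fipsld wraps the real compiler and embeds the digest; FIPSLD_CC names the real cross-compiler (assumed name)
FIPSLD_CC=mips-linux-gnu-gcc CC=<my_path>/fips-2.0/bin/fipsld make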
In the Makefile you will find something like this:
libcrypto$(SHLIB_EXT): libcrypto.a fips_premain_dso$(EXE_EXT)
	@if [ "$(SHLIB_TARGET)" != "" ]; then \
		if [ "$(FIPSCANLIB)" = "libcrypto" ]; then \
			FIPSLD_LIBCRYPTO=libcrypto.a ; \
			FIPSLD_CC="$(CC)"; CC=$(FIPSDIR)/bin/fipsld; \
			export CC FIPSLD_CC FIPSLD_LIBCRYPTO; \
		fi; \
		$(MAKE) -e SHLIBDIRS=crypto CC="$${CC:-$(CC)}" build-shared && \
		(touch -c fips_premain_dso$(EXE_EXT) || :); \
	else \
		echo "There's no support for shared libraries on this platform" >&2; \
		exit 1; \
	fi
Then, the fipsld utility invokes a shell script, incore, used to embed the FIPS Object Module's expected fingerprint in the OpenSSL shared object. It's important to specify the incore path via the FIPS_SIG environment variable, e.g.:
export FIPS_SIG=$PWD/openssl-fips-2.0/util/incore
DEBUGGING:
Debugging the incore script, I could see that it tried to embed the signature into the shared object at offset 0x001EE6B0, while the FIPS_signature symbol inside the shared object was located at a different offset, specifically 0x001F0630:
objdump -t libcrypto.so.1.0.0 | grep FIPS_signature
001f0630 g O .data 00000014 FIPS_signature
readelf -a libcrypto.so.1.0.0 | grep FIPS_signature
870: 001f0630 20 OBJECT GLOBAL DEFAULT 18 FIPS_signature
3925: 001f0630 20 OBJECT GLOBAL DEFAULT 18 FIPS_signature
Furthermore, when dumping the shared object I wasn't able to find the generated signature at offset 0x001EE6B0, so I concluded that the shared object was modified by some other step after the signature-embedding procedure.
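For reference, this is roughly how such a check can be done by hand; the commands below are a sketch, the offsets are the ones from this particular build, and the placeholder has to be filled in from the readelf output:
# the .data section header gives its load Address and file Offset
readelf -S libcrypto.so.1.0.0 | grep ' \.data '
# bytes at the offset where incore reported embedding the signature
xxd -s 0x001ee6b0 -l 20 libcrypto.so.1.0.0
# file offset of FIPS_signature = 0x001f0630 - <.data Address> + <.data Offset>;
# dump 20 bytes there to see what is actually stored at the symbol
xxd -s <file_offset_of_FIPS_signature> -l 20 libcrypto.so.1.0.0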
SOLUTION:
I was using a package Makefile for the OpenSSL package, structured in the following way:
$(MAKE) $(PKG_JOBS) -C $(PKG_BUILD_DIR) \
	<options> \
	all
$(MAKE) $(PKG_JOBS) -C $(PKG_BUILD_DIR) \
	<options> \
	build-shared
rm $(PKG_BUILD_DIR)/libssl.so.*.*.*
$(MAKE) $(PKG_JOBS) -C $(PKG_BUILD_DIR) \
	<options> \
	do_linux-shared
$(MAKE) -C $(PKG_BUILD_DIR) \
	<options> \
	install
As suspected, the make build-shared and make do_linux-shared invocations were responsible for changing the shared object in the wrong way.
Notice that make build-shared was called without the proper environment variables being set.
I changed the package Makefile to:
$(MAKE) $(PKG_JOBS) -C $(PKG_BUILD_DIR) \
	<options> \
	all
$(MAKE) -C $(PKG_BUILD_DIR) \
	<options> \
	install
Now the FIPS_check_incore_fingerprint(void) function returns successfully and everything is working fine!
NOTE:
The following guide for Android devices was very useful in finding the proper solution:
https://wiki.openssl.org/index.php/FIPS_Library_and_Android

How do I clone an OpenLDAP database

I know this is more like a Server Fault question than a Stack Overflow question, but since Server Fault isn't up yet, here I go:
I'm supposed to move an application from one Red Hat server to another, and without very good knowledge of the internal workings of the application, how would I move the OpenLDAP database from one machine to the other, with schemas and all?
What files would I need to copy over? I believe the setup is pretty standard.
The problem with SourceRebels' answer is that slapcat(8) does not guarantee that the data is ordered for ldapadd(1)/ldapmodify(1).
From man slapcat (from OpenLDAP 2.3):
The LDIF generated by this tool is suitable for use with slapadd(8).
As the entries are in database order, not superior first order, they
cannot be loaded with ldapadd(1) without first being reordered.
(FYI: In OpenLDAP 2.4 that section was rephrased and expanded.)
Plus, using a tool that reads the backend files to dump the database and then a tool that loads the LDIF through the LDAP protocol is not very consistent.
I'd suggest using a combination of slapcat(8)/slapadd(8) OR ldapsearch(1)/ldapmodify(1). My preference goes to the latter, as it needs neither shell access to the LDAP server nor moving files around.
For example, to dump the database from a master server under dc=master,dc=com and load it into a backup server:
$ ldapsearch -Wx -D "cn=admin_master,dc=master,dc=com" -b "dc=master,dc=com" -H ldap://my.master.host -LLL > ldap_dump-20100525-1.ldif
$ ldapadd -Wx -D "cn=admin_backup,dc=backup,dc=com" -H ldap://my.backup.host -f ldap_dump-20100525-1.ldif
The -W flag above prompts for the ldap admin_master password; however, since we are redirecting output to a file, you won't see the prompt, just an empty line. Go ahead and type your ldap admin_master password, press Enter, and it will work. The first line of your output file (Enter LDAP Password:) will need to be removed before running ldapadd.
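If that prompt line does end up at the top of the dump as described, one quick way to strip it (GNU sed shown; the file name matches the example above):
sed -i '1{/^Enter LDAP Password:/d}' ldap_dump-20100525-1.ldif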
One last hint: ldapadd(1) is a hard link to ldapmodify(1) with the -a (add) flag turned on.
ldapsearch and ldapadd are not necessarily the best tools to clone your LDAP DB. slapcat and slapadd are much better options.
Export your DB with slapcat:
slapcat > ldif
Import the DB with slapadd (make sure the LDAP server is stopped):
slapadd -l ldif
A few notes:
Save your customized schema and objectclass definitions on your new server. You can find the included files in slapd.conf; for example (this is a part of my slapd.conf):
include /etc/ldap/schema/core.schema
Include your customized schemas and objectclasses in your new OpenLDAP installation.
Use the slapcat command to export your full LDAP tree to one or more LDIF files.
Use ldapadd to import the LDIF files into your new LDAP installation.
I prefer copying the database through the protocol:
First of all, make sure you have the same schemas on both servers.
Dump the database with ldapsearch:
ldapsearch -LLL -Wx -D "cn=admin,dc=domain" -b "dc=domain" > domain.ldif
and import it into the new server:
ldapmodify -Wx -D "cn=admin,dc=domain" -a -f domain.ldif
Or in one line:
ldapsearch -LLL -Wx -D "cn=admin,dc=domain" -b "dc=domain" | ldapmodify -w pass -x -D "cn=admin,dc=domain" -a
By using the bin/ldap* commands you are talking directly to the server, while with the bin/slap* commands you are dealing with the backend files.
(Not enough reputation to write a comment...)
Ldapsearch opens a connection to the LDAP server.
Slapcat instead accesses the database directly, and this means that ACLs, time and size limits, and other byproducts of the LDAP connection are not evaluated, and hence will not alter the data. (Matt Butcher, "Mastering OpenLDAP")
Thanks, Vish. Worked like a charm! I edited the command:
ldapsearch -z max -LLL -Wx -D "cn=Manager,dc=domain,dc=fr" -b "dc=domain,dc=fr" >/tmp/save.ldif
ldapmodify -c -Wx -D "cn=Manager,dc=domain,dc=fr" -a -f /tmp/save.ldif
I just added -z max to avoid the size limitation and -c to keep going even if the target domain already exists (my case).
