How to get the SSL fingerprint for a website using the Linux CLI? - linux

I'm trying to update a DNS record hosted on DigitalOcean through the API, from an ESP8266 microcontroller.
How do I get an HTTPS certificate fingerprint using the Linux CLI, for use in my code?

If you are using the WiFiClientSecure.h library on the ESP (Arduino IDE), you need to declare a const variable for the fingerprint, like below:
const char* fingerprint = "9A 8E 2F 33 95 CA 72 4F 47 26 54 52 A4 E5 00 70 F0 BD ED 85";
These values can be obtained by running the command:
$ echo | openssl s_client -connect host.example.com:443 |& openssl x509 -fingerprint -noout
Result:
SHA1 Fingerprint=9A:8E:2F:33:95:CA:72:4F:47:26:54:52:A4:E5:00:70:F0:BD:ED:85
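To turn that colon-separated output into the space-separated form the sketch expects, you can pipe it through cut and tr. A small sketch using the fingerprint above (pipe the openssl command from above into it instead of the hard-coded string):

```shell
# Convert "SHA1 Fingerprint=9A:8E:..." into the space-separated form
# the Arduino const char* expects:
FP='SHA1 Fingerprint=9A:8E:2F:33:95:CA:72:4F:47:26:54:52:A4:E5:00:70:F0:BD:ED:85'
echo "$FP" | cut -d= -f2 | tr ':' ' '
# -> 9A 8E 2F 33 95 CA 72 4F 47 26 54 52 A4 E5 00 70 F0 BD ED 85
```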

Decrypting an AES256 hex ciphertext with a hex key [closed]

I have generated an AES-256 ciphertext in hex (cipher.hex) that I am trying to decrypt:
53 9B 33 3B 39 70 6D 14 90 28 CF E1 D9 D4 A4 07
with a corresponding 256-bit key in hex (key.hex):
80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 01
I started by attempting to encode both in base64 using the following commands:
base64 cipher.hex > input;
base64 key.hex > key;
and lastly, passed them into openssl as seen below:
openssl aes-256-cbc -d -a -pass file:key -in input -out out
at which point I get back "bad magic number" from openssl with no guidance on what went wrong.
Is there something wrong with the procedure I'm following? I have also tried converting the hex values into binary... I'm really not sure what input to give openssl from hex, let alone whether base64-encoding newline-wrapped hexadecimal is valid.
You're trying to use CBC and password-based encryption on something that is encrypted with ECB and a raw key. You're also base64 encoding the hex characters, not the actual bytes. You're encoding <ASCII for 5><ASCII for 3><ASCII for space>... You want to encode 0x53, 0x9B, 0x33, etc. Base64 isn't needed here in any case.
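To see the difference concretely, compare base64 of the hex text with base64 of the bytes that text represents (a small sketch using just the first two hex bytes of the ciphertext):

```shell
# Encoding the ASCII characters '5','3','9','B' (the hex *text*):
printf '539B' | base64              # -> NTM5Qg==
# Encoding the two *bytes* 0x53 0x9B that the text represents:
printf '539B' | xxd -r -p | base64  # -> U5s=
```

The two results share nothing, which is why the rest of the pipeline fell apart.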
First, convert your hex input to raw data (not base64). I like xxd for this:
echo "53 9B 33 3B 39 70 6D 14 90 28 CF E1 D9 D4 A4 07" | xxd -r -p > input
Next, you need your key in the hex format openssl likes, no spaces or newlines:
KEY=$(echo "80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 01" | tr -d ' \n')
echo $KEY
# prints 8000000000000000000000000000000000000000000000000000000000000001
And now, you can decrypt the data (aes-256-ecb with a hex key):
openssl aes-256-ecb -d -in input -K $KEY | xxd
This prints:
00000000: 8070 6050 4030 2010 0807 0605 0403 02    .p`P@0 ........
Which is too orderly to be wrong: 0x80, 0x70, 0x60, 0x50, ... (only 15 bytes are shown because the trailing 0x01 of the 16-byte block was stripped by openssl as padding).
If this were CBC, there would be an IV, and in most cases a padding block. That said, for a single block with no padding and a NULL IV (which you should never use), CBC is equivalent to ECB, so it is possible to decrypt this as CBC and get the same result:
openssl aes-256-cbc -iv 00000000000000000000000000000000 -d -in input -K $KEY | xxd
The "bad magic number" error is because you used the -pass option, which employs OpenSSL's password key derivation algorithm. Files that have been encrypted that way start with the ASCII sequence Salted__. That's missing, so the input is rejected. (It wouldn't work anyway, since this was encrypted with a raw key.)
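You can see that magic directly by encrypting anything with a password. A sketch (the password and plaintext are arbitrary; -pbkdf2 silences a KDF warning on OpenSSL 1.1.1+ and can be omitted on older versions):

```shell
# Password-based openssl encryption prefixes its output with "Salted__":
printf 'hello' | openssl enc -aes-256-cbc -pbkdf2 -pass pass:secret | head -c 8
# -> Salted__
```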

dm-crypt: What happens when the userspace crypto wrapper is not present?

I am trying to set up an encrypted volume to store files securely. This is done on a NextThingCo PocketCHIP, but the OS is based on Debian, so I figured I would give it a try here first, as my question is more related to dm-crypt than to the platform itself (or so I think).
The recipe that I have built so far is the following (it may be incorrect or overly complicated):
Create a file
Set it up as a loop device.
Run cryptsetup to format and open it. "abc" is the password, fed through stdin (is this assumption correct?).
Make a filesystem
Mount
So it looks like this:
sudo dd if=/dev/urandom of=./encrypted.volume bs=512K count=200
sudo losetup /dev/loop0 ./encrypted.volume
echo "abc" | sudo cryptsetup luksFormat /dev/loop0
echo "abc" | sudo cryptsetup open /dev/loop0 vault
sudo mkfs /dev/mapper/vault
sudo mount /dev/mapper/vault /mnt/vault
Now, all this seemed to work fine and dandy, until I used the --debug parameter (I wanted to try other parameters as well, e.g. --key-size) and noticed the following messages:
# cryptsetup 1.7.0 processing "cryptsetup -v --debug --cipher aes-xts-plain64 --key-size
512 --hash sha512 --iter-time 5000 --timeout 10 --use-random luksFormat /dev/loop0"
# Running command luksFormat.
...
# Userspace crypto wrapper cannot use aes-xts-plain64 (-95).
...
device-mapper: remove ioctl on temporary-cryptsetup-6661 failed: Device or resource busy <------ appears when I change --key-size to 512 instead of the default 256
...
device-mapper: remove ioctl on temporary-cryptsetup-6698 failed: Device or resource busy
I tried running the benchmark too:
chip#chip:~/data/run$ sudo cryptsetup --debug benchmark
[sudo] password for chip:
# cryptsetup 1.7.0 processing "cryptsetup --debug benchmark"
# Running command benchmark.
# Installing SIGINT/SIGTERM handler.
# Unblocking interruption on signal.
# Tests are approximate using memory only (no storage IO).
# Crypto backend (gcrypt 1.6.4) initialized in cryptsetup library version 1.7.0.
# Detected kernel Linux 4.4.13-ntc-mlc armv7l.
# KDF pbkdf2, hash sha1: 59041 iterations per second (256-bits key).
PBKDF2-sha1 59041 iterations per second for 256-bit key
# KDF pbkdf2, hash sha256: 79437 iterations per second (256-bits key).
PBKDF2-sha256 79437 iterations per second for 256-bit key
# KDF pbkdf2, hash sha512: 40705 iterations per second (256-bits key).
PBKDF2-sha512 40705 iterations per second for 256-bit key
# KDF pbkdf2, hash ripemd160: 50412 iterations per second (256-bits key).
PBKDF2-ripemd160 50412 iterations per second for 256-bit key
# KDF pbkdf2, hash whirlpool: 7481 iterations per second (256-bits key).
PBKDF2-whirlpool 7481 iterations per second for 256-bit key
# Cannot initialise cipher aes, mode cbc.
Required kernel crypto interface not available.
Command failed with code 95: Operation not supported
Here is some additional info about the platform and OS:
chip#chip:~/data/run$ uname -r
4.4.13-ntc-mlc
chip#chip:~/data/run$ cat /boot/config-4.4.13-ntc-mlc | grep CRYPTO_USER_API_SKCIPHER
# CONFIG_CRYPTO_USER_API_SKCIPHER is not set
I understand that I would need to recompile the kernel after I set CONFIG_CRYPTO_USER_API_SKCIPHER so the userspace crypto API becomes available. I don't think there is a way around that, is there?
I used luksDump to show the header information of the storage file:
chip#chip:~/data/run$ sudo cryptsetup luksDump ./encrypted.volume
LUKS header information for ./encrypted.volume
Version: 1
Cipher name: aes <------- ???
Cipher mode: xts-plain64 <------- ???
Hash spec: sha256
Payload offset: 4096
MK bits: 256
MK digest: ee f8 8d ad 9b 67 d9 7d cb 20 fe a9 25 a3 8b a5 c2 65 56 dd
MK salt: 38 74 e8 9d 77 6a 93 b5 03 41 cb 3e ce 79 b4 00
55 f3 98 8f c5 a7 14 05 25 9c 4e 91 68 1a 53 37
MK iterations: 18500
UUID: 36912ea4-9adb-4d1f-b9f2-f6a09a258833
Key Slot 0: ENABLED
Iterations: 150587
Salt: e8 4f f3 c1 07 1a 2b 2d d2 d9 f4 55 0f b3 13 28
2a 69 06 aa a0 94 4a 05 5d 5f e9 28 9b 91 39 94
Key material offset: 8
AF stripes: 4000
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED
However, I have a few questions about the current situation:
Is the partition actually encrypted? If so, with which scheme?
How can I check this on the command line? Dumping information about the partition tells me that "there is a LUKS header", but that does not tell me whether the data is actually encrypted.
How can I solve the "resource busy" situation, so that I can use a key size of 512?
Thank you for reading all the way here. Any pointers will be greatly appreciated.
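As for checking encryption from the command line: one common sanity check is to write a known marker string through the encrypted mapping, sync/unmount, and then verify the marker cannot be found in the raw backing file. The same principle can be demonstrated with any cipher; a minimal sketch using openssl in place of the dm-crypt stack (the file names and password are placeholders):

```shell
# Write a recognizable marker, encrypt it, then look for the marker
# in the "backing" ciphertext file:
printf 'FINDME-MARKER\n' > /tmp/plain.txt
openssl enc -aes-256-cbc -pbkdf2 -pass pass:abc -in /tmp/plain.txt -out /tmp/backing.bin
if grep -q 'FINDME-MARKER' /tmp/backing.bin
then echo 'marker found - NOT encrypted'
else echo 'marker not found - payload is encrypted'
fi
# -> marker not found - payload is encrypted
```

With the LUKS setup above, the analogous check is to grep the marker in ./encrypted.volume after unmounting /mnt/vault.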

Transferring a TLS/SSL certificate via serial

I need to send a PEM-formatted certificate for storage on a module that is driven with the AT command set over a serial interface, via one of the Linux device nodes in /dev.
So far, I've mostly been using
echo 'AT' > /dev/ttyX
to issue the necessary commands and it has done the trick just fine.
Any output from the device is monitored with cat /dev/ttyX on another terminal window.
I now have a certificate file encoded with ANSI. The documentation tells me to input it to the module using only LF line breaks and to terminate the input with Ctrl+Z, which I believe is hex 0x1A. The document also specifies that the certificate file may not end with an EOF character. I have used a hex editor to verify that the file is formatted as it should be.
I've tried to use both echo and printf to send the certificate chars / string to the module.
I have tried including the 0x1A character in the file itself, and also sending it separately after the certificate characters, like so:
printf '\x1a' > /dev/ttyX
or alternatively
echo -n -e '\x1a' > /dev/ttyX
The module seems to acknowledge the 0x1A as it stops the >-prompt for certificate and gives me the most verbose reply ever: ERROR
Generally, I'm sending the certificate file contents as follows:
echo -e "$(cat certfile)" > /dev/ttyX
or
printf '%b' "$(cat certfile)" > /dev/ttyX
Please assume that I have access to basic Linux shell tools (such as echo, printf, nano, stty and so on) with no option to trivially install new ones. I use SSH to access the target device and pscp to transfer the file to the target device. I also have a Windows rig on the side.
Any suggestions on what else I should take into consideration? Maybe an stty option that I've missed? Does cat do something nasty in the input phase? A revealing trick to inspect the actual character data about to be sent to the module? Some weird kink of serial comms I've missed?
If I
printf '%b' "$(cat cert)" > ./testoutput
and
od -x testoutput
the file looks alright in hex (I reordered the output from od -x manually; od -x prints 16-bit words, so the bytes of each pair appear swapped). For example the end is:
2d 2d 2d 2d 2d 45 4e 44 20 43 45 52 54 49 46 49 43 41 54 45 2d 2d 2d 2d 2d 0a 1a 00
There must be something in stty or the receiving end that's causing trouble. Right?
For example the end is:
2d 2d 2d 2d 2d 45 4e 44 20 43 45 52 54 49 46 49 43 41 54 45 2d 2d 2d 2d 2d 0a 00 1a
Wait a sec. What's that 00 doing there, right before the 1a?
That doesn't belong. Try removing it.
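A quick way to strip the stray NUL before sending (a sketch; PEM is plain text, so deleting every 0x00 byte from the stream is safe, and the file names are placeholders):

```shell
# Simulate a cert ending with a stray NUL before the Ctrl-Z terminator:
printf 'CERT\n\000\032' > /tmp/cert.raw
# PEM certificates are plain text, so any 0x00 byte is spurious - delete them all:
tr -d '\000' < /tmp/cert.raw > /tmp/cert.clean
od -An -tx1 /tmp/cert.clean   # the 00 is gone; the 0a and the 1a terminator remain
```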

How to check which symbols in my shared library have non-position-independent code (PIC)?

I'm trying to build a .deb package with debuild -i -us -uc -b and in the end I see:
Now running lintian...
warning: the authors of lintian do not recommend running it with root privileges!
W: libluajit-5.1-2: hardening-no-relro usr/lib/powerpc64le-linux-gnu/libluajit-5.1.so.2.1.0
E: libluajit-5.1-2: shlib-with-non-pic-code usr/lib/powerpc64le-linux-gnu/libluajit-5.1.so.2.1.0
W: luajit: hardening-no-relro usr/bin/luajit-2.1.0-alpha
W: luajit: binary-without-manpage usr/bin/luajit-2.1.0-alpha
Finished running lintian.
I have a hunch that I failed to define a "PIC code setup", which must be at the beginning of each external function:
The following code might appear in a PIC code setup sequence to compute
the distance from a function entry point to the TOC base:
addis 2,12,.TOC.-func#ha
addi 2,2,.TOC.-func#l
as specified by the ABI, page 99.
However I couldn't find the symbols which were non-PIC. Or maybe some relevant file that was not compiled with -fPIC?
Info:
system architecture: ppc64le
compiling .so library with: gcc -shared -fPIC
To find which symbols made your ELF non-PIC/PIE (Position-Independent Code/Executable), use scanelf from the pax-utils package (on Ubuntu, install it with sudo apt-get install pax-utils):
$ scanelf -qT /usr/local/lib/libluajit-5.1.so.2.1.0 | head -n 3
libluajit-5.1.so.2.1.0: buf_grow [0x7694] in (optimized out: previous lj_BC_MODVN) [0x7600]
libluajit-5.1.so.2.1.0: buf_grow [0x769C] in (optimized out: previous lj_BC_MODVN) [0x7600]
libluajit-5.1.so.2.1.0: buf_grow [0x76A0] in (optimized out: previous lj_BC_MODVN) [0x7600]
$ objdump -Sa /usr/local/lib/libluajit-5.1.so.2.1.0 | grep -A5 \ 7694:
7694: 00 00 80 39 li r12,0
7698: c6 07 8c 79 rldicr r12,r12,32,31
769c: 00 00 8c 65 oris r12,r12,0
76a0: 00 00 8c 61 ori r12,r12,0
76a4: a6 03 89 7d mtctr r12
76a8: 21 04 80 4e bctrl
In my case an absolute address was meant to be loaded into r12, but that's not possible for a dynamic library, so the linker used 0 for that operand (I had to use the #GOT operator, but that's the solution particular to my case).
In the luajit program itself, the address can be fixed at link time, and it looks like this:
1003d0d4: 00 00 80 39 li r12,0
1003d0d8: c6 07 8c 79 rldicr r12,r12,32,31
1003d0dc: 07 10 8c 65 oris r12,r12,4103
1003d0e0: 30 ca 8c 61 ori r12,r12,51760
1003d0e4: a6 03 89 7d mtctr r12
Quite different, right?
A much more detailed explanation can be found on this wonderful Gentoo wiki page.
The failing lintian check is this:
# Now that we're sure this is really a shared library, report on
# non-PIC problems.
if ($objdump->{$cur_file}->{TEXTREL}) {
tag 'shlib-with-non-pic-code', $cur_file;
}
So you can probably find the offending file by looking for a .o that contains a TEXTREL dynamic section (which is making its way into your final link).
To do this, you can use readelf --dynamic, in something like the following:
find . -name '*.o' |
while read obj
do
if readelf --dynamic "$obj" | grep -q TEXTREL
then
echo "$obj contains a TEXTREL section"
fi
done
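A minimal end-to-end check of the TEXTREL approach (a sketch; assumes gcc and readelf are available, and /tmp paths are placeholders):

```shell
# Build a trivially PIC shared object and confirm it has no TEXTREL entry:
cat > /tmp/demo.c <<'EOF'
int answer(void) { return 42; }
EOF
gcc -shared -fPIC -o /tmp/libdemo.so /tmp/demo.c
if readelf --dynamic /tmp/libdemo.so | grep -q TEXTREL
then echo 'TEXTREL present (non-PIC code)'
else echo 'no TEXTREL: library is PIC'
fi
# -> no TEXTREL: library is PIC
```

Running the same readelf check on an object built without -fPIC (or containing hand-written non-PIC assembly, as in LuaJIT's case) is what flags the offending file.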

Differences in the .zip format created with WinRAR and with 7-Zip

I'm using a 3rd-party lib to archive/decompress some files.
The input/output format is .zip.
But I've discovered that the lib reads archives created with WinRAR just fine, yet cannot read archives created with 7-Zip.
Could anyone tell me what the problem is and how it could be fixed?
Hex view of the start of the archive created with WinRAR:
50 4b 03 04 14 00 00 00 08 00 EC 88 25 45 6e 1a ....
Hex view of the start of the archive created with 7-Zip:
50 4b 03 04 14 00 00 00 08 00 ED 88 25 45 6e 1a ....
The difference is in a single byte: EC vs. ED.
This is the header structure of a ZIP file:
+---------------+-------+-------+-----------+--------+--------+----------+
|                                  BYTE                                  |
|0x0 0x1 0x2 0x3|0x4 0x5|0x6 0x7|  0x8 0x9  |0xA 0xB |0xC 0xD |0xE..0x11 |
+---------------+-------+-------+-----------+--------+--------+----------+
|   signature   |version| flags |compression|mod time|mod date|  CRC32   |
+---------------+-------+-------+-----------+--------+--------+----------+
|  50 4B 03 04  | 14 00 | 00 00 |   08 00   |EC/ED 88| 25 45  | 6e 1a .. |
+---------------+-------+-------+-----------+--------+--------+----------+
signature is always 0x50 0x4B 0x03 0x04 (as in your case). 0x50 = 'P', 0x4B = 'K' (Phillip Katz)
version needed to extract (0x14 0x00 = 20 -> ZIP 2.0)
general purpose bit flags (no flags set; your file is not encrypted / default language encoding)
compression method is deflate (08)
So the only thing that changes is the modification time (by just two seconds).
Even the first two bytes of the CRC32 are the same. If the only difference is the EC / ED value, the problem shouldn't be the archive content.
Further references: https://users.cs.jmu.edu/buchhofp/forensics/formats/pkzip.html
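You can verify the two-second difference with shell arithmetic: the little-endian bytes EC 88 form the DOS time value 0x88EC, which packs hours in bits 11-15, minutes in bits 5-10, and seconds/2 in bits 0-4. A sketch decoding both variants:

```shell
# Decode the DOS mod-time field: EC 88 -> 0x88EC, ED 88 -> 0x88ED
for t in 0x88EC 0x88ED
do
  printf '%02d:%02d:%02d\n' $((t >> 11)) $(( (t >> 5) & 0x3F )) $(( (t & 0x1F) * 2 ))
done
# -> 17:07:24
#    17:07:26
```

Since the seconds field only stores even values, a two-second granularity difference like this is common between tools.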
7-Zip archives can be created using several different compression formats which are not compatible with standard ZIP compression/decompression libraries based, for example, on the Info-ZIP source code.
Applications like WinRAR and Total Commander, which support creation/extraction of ZIP files natively, therefore additionally use a DLL for extracting files from non-standard 7-Zip archives.
See question Can I use the EXE or DLL files from 7-Zip in a Commercial Application? on FAQ page of 7-Zip.
