How to convert a (binary) .key PGP file to an ASCII-armored file? - pgp

Is there a way to convert a (binary) .key file to an ASCII-armored .asc file?
There is a previous post which seems to suggest the file extension doesn't matter; the file content is the same: What is the difference b/w .pkr and .key file with respect to PGP?

You can use GnuPG for this. From man gpg:
--enarmor
--dearmor
Pack or unpack an arbitrary input into/from an OpenPGP ASCII armor.
This is a GnuPG extension to OpenPGP and in general not very useful.
--enarmor reads from stdin and outputs the armored version to stdout; --dearmor works the other way round. For ASCII-armoring a binary keyring, use gpg --enarmor <file.key >file.asc.
Although the two files are different, they share the same OpenPGP packets and can be converted in both directions. The ASCII-armored version was created for use in e-mail and other plain-ASCII protocols.
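A round trip might look like this (a minimal sketch; file.key and file.asc are placeholder names):
gpg --enarmor <file.key >file.asc
gpg --dearmor <file.asc >file-restored.key
Note that --enarmor wraps the data in generic "PGP ARMORED FILE" headers rather than a "PUBLIC KEY BLOCK"; if you want the usual armored key output, exporting with gpg --export --armor is the more common route.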

Related

How does the Linux command `file` recognize the encoding of my files?
zell@ubuntu:~$ file examples.desktop
examples.desktop: UTF-8 Unicode text
zell@ubuntu:~$ file /etc/services
/etc/services: ASCII text
The man page is pretty clear:
The filesystem tests are based on examining the return from a stat(2)
system call...
The magic tests are used to check for files with data in particular
fixed formats. The canonical example of this is a binary executable
(compiled program) a.out file, whose format is defined in #include <a.out.h>
and possibly #include <exec.h> in the standard include
directory. These files have a 'magic number' stored in a particular
place near the beginning of the file that tells the UNIX operating
system that the file is a binary executable, and which of several
types thereof. The concept of a 'magic' has been applied by extension
to data files. Any file with some invariant identifier at a small
fixed offset into the file can usually be described in this way. The
information identifying these files is read from the compiled magic
file /usr/share/misc/magic.mgc, or the files in the directory
/usr/share/misc/magic if the compiled file does not exist. In
addition, if $HOME/.magic.mgc or $HOME/.magic exists, it will be used
in preference to the system magic files. If /etc/magic exists, it will
be used together with other magic files.
If a file does not match any of the entries in the magic file, it is
examined to see if it seems to be a text file. ASCII, ISO-8859-x,
non-ISO 8-bit extended-ASCII character sets (such as those used on
Macintosh and IBM PC systems), UTF-8-encoded Unicode, UTF-16-encoded
Unicode, and EBCDIC character sets can be distinguished by the
different ranges and sequences of bytes that constitute printable text
in each set. If a file passes any of these tests, its character set is
reported.
In short, for regular files, their magic values are tested. If there's no match, then file checks whether it's a text file, making an educated guess about the specific encoding by looking at the actual values of bytes in the file.
Oh, and you can also download the source code and look at the implementation for yourself.
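If you only want the charset guess, you can also ask file for it directly (assuming a reasonably recent file(1), which supports --mime-encoding):
file --mime-encoding examples.desktop
examples.desktop: utf-8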
TLDR: Magic File Doesn't Support UTF-8 BOM Markers
(and that's the main charset you need to care about)
The source code is on GitHub so anyone can search it. After doing a quick search, things like BOM, ef bb bf, and feff do not appear at all. That means UTF-8 byte-order-mark detection is not supported. Files made in other applications that use or preserve the BOM marker will all be reported as "charset=unknown" by file.
In addition, none of the config files mentioned in the magic file manpage are part of magic file v. 4.17. In fact, /etc/magicfile/ doesn't exist at all, so I don't see any way to configure it.
If you're stuck trying to get the ACTUAL charset encoding and magic file is all you have, you can determine if you have a UTF-8 file at the Linux CLI with:
hexdump -n 3 -C $path_to_filename
If the above returns the byte sequence ef bb bf, then you are 99% likely in possession of a BOM-marked UTF-8 file. This is not a 100% certainty, but it is far more useful than magic file, which has no handling whatsoever for byte order marks.
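For a BOM-marked UTF-8 file, the output would look roughly like this (a sketch; bom-example.txt is a hypothetical file, and the dots in the right-hand column are just the non-printable BOM bytes):
hexdump -n 3 -C bom-example.txt
00000000  ef bb bf                                          |...|
00000003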

text encoding & spam

Hi everybody.
Please help me fight my little personal war against spam/malware. I'm receiving spam with obviously fake attachments in the form of .doc bills or invoices.
These fake docs contain macro code that I'm able to decrypt using various tools. Generally such code tries to download an encrypted text file which is actually further VBA code which, if executed, tries to download the real malware in the form of an EXE file.
The encrypted text file was usually a simple base64-encoded string.
Using normal base64 encoding/decoding tools was more than enough to decode the content and identify the IP address from which the malware tries to download the EXE virus.
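For instance, a plain decode along these lines was all it took (a sketch; stage2.txt is a placeholder name for the downloaded file):
base64 -d stage2.txt > decoded.txt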
Recently things have changed. Now the encrypted text file contains what is apparently base64 encoding, but it is not.
The content is something like:
PAB0AGUAeAB0ADEAMAA+ACQAaAB5AGcAcQB1AGQAZwBhAGgAcwA9ACcAbgB1AGQAcQBoAHcA
aQB1AGQAaABxAHcAZABxAHcAJwA7AA0ACgAkAGcAZgB5AHcAdQBnAGgAYQBtAHMAPQAnADEA
MgBqAGgAMwBnADEAMgBoACAAMQAyAGcAMwBqAGgAMQAyADMAMQAyADMAJwA7AA0ACgAkAGQA
...
aQBoAHcAZAB1AGkAcQB3AHUAZABxAHcAaQAgAGgAZABxAHcAaABkACIADQAKAFMAZQB0ACAA
bwBiAGoAUwBoAGUAbABsACAAPQAgAEMAcgBlAGEAdABlAE8AYgBqAGUAYwB0ACgAIgBXAFMA
YwByAGkAcAB0AC4AUwBoAGUAbABsACIAKQA8AC8AcwB0AGUAeAB0ADMAPgA=
which resembles base64 encoding. But if you try and decode it using base64, you get something like:
<^@t^@e^@x^@t^@1^@0^@>^@$^@h^@y^@g^@q^@u^@d^@g^@a^@h^@s^@=^@'^@n^@u^@d^@q^@h^@w^@i^@u^@d^@h^@q^@w^@d^@q^@w^@'^@;^@
^@
^@$^@g^@f^@y^@w^@u^@g^@h^@a^@m^@s^@=^@'^@1^@2^@j^@h^@3^@g^@1^@2^@h^@ ^@1^@2^@g^@3^@j^@h^@1^@2^@3^@1^@2^@3^@'^@;^@
^@
^@$^@d^@o^@w^@n^@ ^@=^@ ^@N^@e^@w^@-^@O^@b^@j^@e^@c^@t^@ ^@S^@y^@s^@t^@e^@m^@.^@N^@e^@t^@.^@W^@e^@b^@C^@l^@i^@e^@n^@t^@;^@
instead of plain text (the VBS code that downloads the exe file).
Can anyone help me find the way to decode such dirt?
Thanks in advance!

Display binary vtk file in ascii format

I have a program which writes the output as binary vtk file. Being a binary file, I am not able to understand it. However, I would like to read the contents of the file for my own understanding. Is there a way to view it in ascii format (so that I could understand the data better)?
A hex-editor or dumper will let you view the contents of the file, but it will likely still be unreadable. As a binary file, its contents are meant to be interpreted by your machine / vtk, and will be nonsensical without knowing the vtk format specifications.
Common hex dumpers include xxd, od, or hexdump.
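For example (a sketch; output.vtk is a placeholder filename), you can page through a hex dump with:
xxd output.vtk | less
The left columns show byte offsets and raw bytes, and the right-hand column shows any printable ASCII, which helps you spot the human-readable header lines at the start of a legacy VTK file.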

codepage conversion support on linux

I have two questions regarding codepages on Linux.
Is there any way to list all the combinations of codepage conversions possible on Linux?
If I have a file with data encoded in some format (say encode-1), I can use
iconv -f encode-1 -t encode-2 file > file1.txt
to convert it into the encode-2 format.
This way I can check that a conversion from encode-1 to encode-2 is possible. But to run this test I need to have some file already encoded in the encode-1 format. Is there any way to test whether a particular conversion is possible without having any file already encoded in the encode-1 format?
You seem to be using iconv. To get the list of all possible encodings, just run
iconv -l
If you do not have any file in a given encoding, you can create one: take any file in a known encoding and use iconv to convert it into the given encoding. If you are worried that the conversion might fail partway through, use
iconv -c
It omits invalid characters from the output but converts everything it can.
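For example, to check whether a conversion pair is supported without already having a file in encode-1, you can make one yourself (a sketch assuming a UTF-8 terminal; the file names and ISO-8859-1 are just illustrative stand-ins):
echo 'héllo wörld' > sample-utf8.txt
iconv -f UTF-8 -t ISO-8859-1 sample-utf8.txt > sample-latin1.txt
iconv -f ISO-8859-1 -t UTF-8 sample-latin1.txt
If both conversions succeed, the pair is supported and the round trip should print the original text.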

I exported via mysqldump to a file. How do I find out the file encoding of the file?

Given a text file in Ubuntu (or Debian/Unix in general), how do I find out the encoding of the file? Can I run od or hexdump on it to fingerprint its encoding? What should I be looking out for?
There are many tools to do this. Try a web search for "detect encoding". Here are some of the tools I found:
The International Components for Unicode (ICU) libraries are a great place to start. See especially their page on Character Set Detection.
Chardet is a Python module to guess the encoding of a file. See chardet.feedparser.org
The *nix command-line tool file detects file types, but might also detect encodings if mentioned in the file (e.g. if there's a mime-type notation in the file). See man file
Perl modules Encode::Detect and Encode::Guess.
Someone asked a similar question on Stack Overflow. Search for the question, PHP: Detect encoding and make everything UTF-8. That's in the context of fetching files from the net and using PHP, but you could write a command-line PHP script.
Note well what the ICU page says about character set detection: "Character set detection is ..., at best, an imprecise operation using statistics and heuristics...." In my experience the problem domain makes a big difference in how easy or difficult the job is. Don't forget that it's possible that the octets in a file can be of ambiguous encoding, i.e. sensibly interpreted using multiple different encodings. They can also be of mixed encoding, i.e. different subsets of the octets make sense interpreted in different encodings. This is why there's not a single command-line tool I can recommend which always does the job.
If you have a single file and you just want to get it into a known encoding, my trick is to open the file with a text editor which can import using a bunch of different encodings, such as TextWrangler or OpenOffice.org. First, open the file and let the editor guess the encoding. Take a look at the result. If you aren't satisfied with it, guess an encoding, open the file with the editor specifying that encoding, and take a look at the result. Then save as a known encoding, e.g. UTF-16.
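If you would rather stay on the command line, a low-tech variant of the same guess-and-check approach is to test a candidate encoding with iconv, which (without -c) exits with an error on input that is not valid in the source encoding. A sketch, with dump.sql as a placeholder file name:
iconv -f UTF-8 -t UTF-8 dump.sql > /dev/null && echo "looks like valid UTF-8"
If the file is not valid UTF-8, iconv reports an error instead.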
You can use enca. Enca is a small command-line tool for encoding detection and conversion.
You can install it on Debian / Ubuntu with:
apt-get install enca
In order to use it, just call
enca FILENAME
Also see the manpage for more information.
