Why do some files appear as partial gibberish when opened in a text editor?

I often come across the situation where I would like to read a file's original content in a human-readable way. When I open this kind of file in a text editor, why is it usually gibberish mixed with some complete and comprehensible text? I would think that if the file were converted to something other than its original written format, there would be no comprehensible text remaining, yet I often find it is somewhere in between.
For example, I know that if I open a binary file in a text editor, there will be nothing comprehensible left that isn't there purely by accident.
[Example screen capture of partial gibberish text]
Why is there complete text in here mixed with gibberish? Does that mean that if I open the file with some other encoding (I don't know what's possible), it will come through as fully readable text? I would understand if it were all or nothing (either unreadable gibberish or human language), but I don't understand the in-between.
Please provide educational responses, rather than "because that's the way it is" type answers.

Those are formatting characters; they have no standard use and vary by the format of the file in question. You can still extract the text with a fair knowledge of grep and regular expressions, but it won't be fun. The best bet is to open the file with software that can read it properly; a text editor like gedit or Notepad++ will read the raw data and display that. Adobe's PDF format has text embedded, for instance, and all that gibberish is instructions that tell the Reader software how to display it correctly on screen, while still allowing relatively straightforward text extraction when required.
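For what it's worth, the Unix strings utility does more or less this kind of extraction already; a minimal Python sketch of the same idea (with a hypothetical document.pdf as input) could be:

    import re

    # Print every run of at least four printable ASCII characters found in
    # the raw bytes, similar to what the `strings` utility does.
    # "document.pdf" is only a placeholder file name.
    with open("document.pdf", "rb") as f:
        data = f.read()

    for run in re.findall(rb"[ -~]{4,}", data):
        print(run.decode("ascii"))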
Editors have no real way to interpret the special formatting characters, and would need to be loaded with APIs for every conceivable program. They would also need to be updated constantly, since the formatting changes regularly for a variety of reasons. Many times it is just to keep the files from being backward compatible with the vendor's own or other products, forcing an upgrade path. Microsoft is rather famous for that, but it is by far not the only company to do so.

Related

How to output IBM-1027-codepage-binary-file?

The output (CSV/JSON) from my newly created program (using .NET Framework 4.6) needs to be converted to an IBM-1027 codepage binary file (to be imported into a Japanese client's IBM mainframe).
I've searched the internet and know that Microsoft doesn't have an equivalent to the IBM-1027 code page.
So how can I output an IBM-1027 codepage binary file if I have a UTF-8 CSV/JSON file in my hand?
I'm asking around for other solutions, but for now I think I'm going to have to suggest you do the conversion manually; I assume whichever language you're using lets you do a hex conversion, at worst. For mainframes, the codepage is usually implicit in the dataset; it isn't something that is included in a file header.
So, what you can do is build a conversion table from https://www.ibm.com/support/knowledgecenter/en/SSEQ5Y_5.9.0/com.ibm.pcomm.doc/reference/html/hcp_reference26.htm. Grab a character from your JSON/CSV file, convert it to the appropriate hex value, and write that value to a file. Repeat until EOF. (Note: actually write the hex data, not the ASCII representation of the hex data.) Make sure that when the client transfers the file to their system, they perform a binary transfer.
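The question mentions .NET, but the table-driven approach is language-agnostic; as a rough Python sketch (the table entries below are illustrative and must be verified against IBM's chart, and the file names are placeholders):

    # Partial conversion table; fill in one entry per character that can
    # appear in the CSV/JSON, using the values from IBM's reference page.
    # The three entries below are illustrative only - verify them.
    CP1027_TABLE = {
        " ": 0x40,
        "0": 0xF0,
        "A": 0xC1,
    }

    def convert(text: str) -> bytes:
        out = bytearray()
        for ch in text:
            if ch not in CP1027_TABLE:
                raise ValueError(f"no CP1027 mapping defined for {ch!r}")
            out.append(CP1027_TABLE[ch])
        return bytes(out)

    # Read the UTF-8 source and write the raw converted bytes (not an
    # ASCII/hex text representation of them).
    with open("input.csv", encoding="utf-8") as src:
        converted = convert(src.read())
    with open("output.bin", "wb") as dst:
        dst.write(converted)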
If you wanted to get more sophisticated than that, you could look at enhancing/overriding part of the converter for CP500, which does exist on Microsoft Windows. One of the design points for EBCDIC was to make character conversions as simple as possible, so many of the CP500 characters' hex representations are the same as in CP1027, with the exception of the Kanji characters.
This is a separate answer, from a colleague; I don't have the ability to validate it, I'm afraid.
Transfer the file to the host in raw mode; just tag it as CCSID 1208.
For USS: export _BPXK_AUTOCVT=ALL
oedit/obrowse then handle it automatically.

How can I open a .cat file?

I'm looking for a way to open a .cat file. I don't have a single clue how to do it (I've tried with Notepad and Sublime Text, without results). The only thing I know is that it's not corrupted (it's read by another program), but I need to see it with my own eyes to understand the structure of the content and create a similar one for my purposes.
Any hint is welcome.
If you can't make sense of it in a standard text editor, it's probably a binary format.
If so, you need to get yourself a program capable of doing hex dumps (such as od) and prepare for some detailed analysis.
A good start would be trying to find information about Advanced Disk Catalog somewhere on the web, assuming that's what it is.
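If od isn't available, a rough Python equivalent of a small hex dump over the start of the file (with a hypothetical archive.cat as the file name) might look like this:

    # Hex + ASCII dump of the first 256 bytes, 16 bytes per line.
    # "archive.cat" is only a placeholder file name.
    with open("archive.cat", "rb") as f:
        data = f.read(256)

    for offset in range(0, len(data), 16):
        chunk = data[offset:offset + 16]
        hex_part = " ".join(f"{b:02x}" for b in chunk)
        text_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        print(f"{offset:08x}  {hex_part:<47}  {text_part}")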

LiveCode File Creation

I'm not sure I'm asking this in the right place, but I've been working with LiveCode and I'm curious how the actual .livecode or .rev files get created. They look like some sort of mixed binary and LiveCode format. I've glanced through the source code, but it's not clear to me how the files are constructed.
Note that I'm talking about the project containers, not the standalones.
I'm also not sure that this is the right place to ask. It isn't really a programming question, even though it is related. I think that the stackfile format is binary, but parts appear in clear text because that's what they are. Everything that is unrecognisable can be one of two things: a definition of a byte range, or the description of the stack, card or control itself. This description can contain user data, including clear text, but also movie data, picture data, a Unicode stream, etc. Encrypted stacks appear as binary data.
I would ask this question directly to RunRev...
To find out what happens when the file is saved, you have to look at the C++ functions inside the LiveCode engine that run when the savestack message is sent and handled.
There is no other way to tell, so you have to ask those familiar with the innards of the engine.

How to handle images in .dat format in J2ME

I am developing a game in J2ME. How do I handle images in .dat format?
I downloaded some games and extracted the JAR files, and found some images in .dat format. I am not able to open those images, and their file sizes are also very small. What tools do I need to use?
I have not been able to find a solution.
A .dat file could be anything; it depends on what the developer felt like doing.
Some developers chose to strip PNG files of their header and add the header back in the code. This was done partly to save a few bytes (because they mattered back then), partly for the challenge of doing it that way, and partly because it ensured all images used the exact same palette.
So that's one possibility, but it really could be anything.
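If all that was stripped is the 8-byte PNG signature, putting it back may be enough for a viewer to open the file again; this is only a guess to try first, and the file names below are hypothetical:

    # Prepend the standard 8-byte PNG signature to a stripped .dat file.
    # What was actually removed varies by game, so this may not be enough.
    PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

    with open("sprite.dat", "rb") as src:
        body = src.read()

    if not body.startswith(PNG_SIGNATURE):
        with open("sprite.png", "wb") as dst:
            dst.write(PNG_SIGNATURE + body)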
As stated by mr_lou, there really isn't anything special about a .dat extension.
The steps to reverse-engineer a file usually start with opening it in a hex editor and looking at the first bits of information in the file. You then basically work from there to reconstruct the data a 'normal' program needs to interpret the file. In particular, the first 8-16 bytes are often very helpful for determining what type of file it is "supposed" to be.
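As a rough illustration of checking those first bytes, a small Python sketch (with a hypothetical unknown.dat and only a handful of well-known signatures) could be:

    # Compare the first bytes of the file against a few known signatures.
    SIGNATURES = {
        b"\x89PNG\r\n\x1a\n": "PNG image",
        b"\xff\xd8\xff": "JPEG image",
        b"GIF87a": "GIF image",
        b"GIF89a": "GIF image",
        b"PK\x03\x04": "ZIP archive (JAR files start like this too)",
    }

    with open("unknown.dat", "rb") as f:
        head = f.read(16)

    matches = [name for magic, name in SIGNATURES.items() if head.startswith(magic)]
    print(matches or f"no known signature; first bytes: {head.hex(' ')}")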
If you are looking at a PNG file (that's what I usually prefer to use for art assets), then you can reference http://en.wikipedia.org/wiki/Portable_Network_Graphics to see how a 'normal' PNG should look. When you're tweaking to save bytes, you often strip unnecessary fields from PNG headers (things like the ancillary chunks) and use a common palette.
However, remember that it's not necessarily image data. It could be level data, sound, default stats or any number of other things.

How to determine codepage of a file (that had some codepage transformation applied to it)

For example, if I know that Ä‡ should be ć, how can I find out the codepage transformation that occurred there?
It would be nice if there were an online site for this, but any tool will do the job. The final goal is to reverse the codepage transformation (with iconv or recode, but the tools are not important; I'll take anything that works, including Python scripts).
EDIT:
Could you please be a little more verbose? Do you know for certain what some substring should be, exactly? Or do you know just the language? Or are you just guessing? And the transformation that was applied, was it correct (i.e. valid in the other charset)? Or was it a single transformation from charset X to Y while the text was actually in Z, so it's now wrong? Or was it a series of such transformations?
Actually, ideally I am looking for a tool that will tell me what happened (or what possibly happened) so I can try to transform it back to the proper encoding.
What (I presume) happened in the problem I am trying to fix now is what is described in this answer: a UTF-8 text file got opened as an ASCII text file and then exported as CSV.
It's extremely hard to do this in general. The main problem is that all the ASCII-based encodings (ISO-8859-*, DOS and Windows codepages) use the same range of byte values, so no particular byte value or set of byte values will tell you what codepage the text is in.
There is one encoding that is easy to tell: if the text is valid UTF-8, then it is almost certainly not ISO-8859-* or any Windows codepage, because while all byte values are valid in those encodings, the chance of a valid UTF-8 multi-byte sequence appearing in such text is almost zero.
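A minimal Python check of that point:

    def looks_like_utf8(data: bytes) -> bool:
        """Return True if the raw bytes decode cleanly as UTF-8."""
        try:
            data.decode("utf-8")
            return True
        except UnicodeDecodeError:
            return False

    # "mystery.txt" is only a placeholder file name.
    with open("mystery.txt", "rb") as f:
        print(looks_like_utf8(f.read()))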
Then it depends on which further encodings may be involved. A valid sequence in Shift-JIS or Big5 is also unlikely to be valid in any other encoding, while telling apart similar encodings like cp1250 and ISO-8859-2 requires spell-checking the words that contain the three or so characters that differ and seeing which way you get fewer errors.
If you can limit the number of transformations that may have happened, it shouldn't be too hard to put together a Python script that tries them out, eliminates the obviously wrong ones and uses a spell-checker to pick the most likely. I don't know of any tool that would do it.
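A brute-force sketch of such a script; the candidate encoding lists and the Ä‡/ć example are assumptions to adapt to the actual file:

    # Try "decoded as X, but really Y" combinations and keep the ones that
    # produce a substring we know must appear in the correct text.
    CANDIDATES = ["cp1250", "cp1252", "iso-8859-1", "iso-8859-2"]

    def possible_fixes(garbled: str, must_contain: str):
        for wrong in CANDIDATES:
            for right in ("utf-8", "iso-8859-2", "cp1250"):
                try:
                    fixed = garbled.encode(wrong).decode(right)
                except (UnicodeEncodeError, UnicodeDecodeError):
                    continue
                if must_contain in fixed:
                    yield wrong, right, fixed

    for wrong, right, fixed in possible_fixes("Ä‡", "ć"):
        print(f"decoded as {wrong}, actually {right}: {fixed}")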
Tools like that were quite popular a decade ago, but now it's quite rare to see damaged text.
As far as I know, it can be done effectively at least for a particular language. So, if you assume the text language is Russian, you could collect some statistical information about characters or small groups of characters from a lot of sample texts. E.g. in English the combination "th" appears more often than "ht".
Then you could permute different encoding combinations and choose the one that gives the most probable text statistics.
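A minimal Python sketch of that statistical idea, assuming a Russian reference text and a few candidate encodings (the file names and the encoding list are placeholders):

    from collections import Counter

    def bigrams(text: str) -> Counter:
        # Keep only letters so punctuation doesn't skew the statistics.
        letters = "".join(ch.lower() for ch in text if ch.isalpha())
        return Counter(letters[i:i + 2] for i in range(len(letters) - 1))

    def score(candidate: str, reference: Counter) -> float:
        # Fraction of the candidate's bigrams that also occur in the
        # reference text; higher means more plausible.
        counts = bigrams(candidate)
        total = sum(counts.values()) or 1
        return sum(n for bg, n in counts.items() if bg in reference) / total

    with open("sample_russian.txt", encoding="utf-8") as f:
        reference = bigrams(f.read())
    with open("damaged.txt", "rb") as f:
        raw = f.read()

    best = max(["cp1251", "koi8-r", "iso-8859-5", "cp866"],
               key=lambda enc: score(raw.decode(enc, errors="replace"), reference))
    print("most plausible encoding:", best)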

Resources