I have a huge file dump to handle (7000+ files), all encoded in OEM-US (and I need them to remain OEM-US, or be converted back to OEM-US, when I'm done).
The search-in-files feature of Notepad++ would actually solve all my problems. (It's a single-use job - I don't want to bore you with the details, but it's about sanitizing old code which has partially been written in foreign languages like German or French, including their notorious characters like äöüèéàç.)
The thing is: most of the time, Notepad++ detects the wrong encoding, and different encodings for different files. Usually it detects ANSI or UTF-8, but sometimes it gets exotic and all of a sudden my files are supposedly encoded in Shift-JIS or Big5, which messes up my search terms, as different special characters sometimes get turned into the same set of replacement characters.
So I'm looking for a way to either
a) tell Notepad++ which encoding to select for the "search in files" job I want to run,
b) convert all files to UTF-8, run the search-and-replace job there, and restore the encoding to OEM-US afterwards,
or
c) find different software to handle this for me.
Can someone help me?
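For what it's worth, option b) can also be scripted outside Notepad++. Below is a minimal Python sketch of that round trip; OEM-US is code page 437 (cp437 in Python), and the src folder and *.txt pattern are placeholders to be adjusted to the real file dump:

from pathlib import Path

# Pass 1: decode OEM-US (code page 437) and rewrite the files as UTF-8.
for path in Path("src").rglob("*.txt"):           # "src" and "*.txt" are placeholders
    text = path.read_text(encoding="cp437")
    path.write_text(text, encoding="utf-8")

# ... run the search-and-replace job on the UTF-8 files here ...

# Pass 2: convert everything back to OEM-US.
for path in Path("src").rglob("*.txt"):
    text = path.read_text(encoding="utf-8")
    path.write_text(text, encoding="cp437")       # raises an error if a replacement introduced characters cp437 cannot hold

Writing back to cp437 will fail loudly if the replacements introduced characters that OEM-US cannot represent, which is arguably what you want for a sanitizing job.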
I have a text file full of non-ASCII characters.
I cannot detect the encoding with either file or enca.
file non_ascii.txt
non_ascii.txt: Non-ISO extended-ASCII text
enca non_ascii.txt
Unrecognized encoding
But I can open it normally in Notepad++ on Windows.
Edit: the statement above is misleading; sorry for that.
In fact, I picked some parts out of the original file, put them into a new text file, and opened it in Notepad++.
The two parts are shown below. They are decoded in two different ways by Notepad++.
Question:
How can I detect the file's encoding under Linux?
How do I recover the characters represented by <F1><EE><E9><E4><FF>?
I couldn't get a result from grep 'сойдя' win.txt, even though "сойдя" is encoded as <F1><EE><E9><E4><FF>.
A slice of the file content follows:
less non_ascii.txt
"non_ascii.txt" may be a binary file. See it anyway?
<F1><EE><E9><E4><FF>
<F2><F0><E0><EA><F2><EE><E2><E0><F2><FC><F1><FF>
<D0><F2><E9><E4><D7><E9><E7><E1><EC><E1><F3><F8>
<D1><E5><EA><F3><ED><E4>
<F0><E0><E7><E3><F0><F3><E7><EA><E8>
<EF><EE><E4><F1><F2><E0><E2><EB><FF><F2><FC>
<F0><E0><E7><E3><F0><F3><E7><EA><E5>
<F1><EE><E9><E4><F3>
<F0><E0><E7><E3><F0><F3><E7><EA><E0>
<F1><EE><E2><EB><E0><E4><E0><EB><E8>
<C1><D7><E9><E1><F0><EF><FE><F4><E1>
<CB><C1><D3><D3><C9><D4><C5><D2><C9><D4>
<F1><EE><E2><EB><E0><E4><E0><EB><EE>
<F1><EE><E9><E4><E8>
<F1><EE><E2><EB><E0><E4><E0><EB><E0>
Your question really has two parts: (1) how do I identify an unknown encoding and (2) how do I convert that to something useful?
The first part is the real challenge, and really cannot be answered in universal terms -- in the general case, there is no reliable way to identify an unknown 8-bit encoding. Some encodings give you good hints (UTF-8 is an excellent example) and in many cases, if you have a good idea what the text is supposed to represent, the problem can be solved.
A mapping of 8-bit character meanings can be helpful (cough, the link is to mine) and in this case quickly hints at Windows code page 1251. Kudos for the hex dumps and the picture with the representation you expect!
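To double-check that guess before converting anything, decoding the dumped bytes as CP1251 in Python (just an illustration, not something you strictly need) gives back exactly the word you were grepping for:

print(bytes([0xF1, 0xEE, 0xE9, 0xE4, 0xFF]).decode("cp1251"))   # prints: сойдя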
With that out of the way, converting is easy.
iconv -f cp1251 -t utf-8 non_ascii.txt >utf8.txt
Provided your Linux system is set up to use UTF-8 at the terminal, your grep command should now work on utf8.txt.
The indication that some of the text is "ANSI" (which is a bogus term anyway) is probably just a red herring -- as far as I can tell, everything in your excerpt looks like well-formed CP1251.
Some tools like chardet do a reasonable job of at least steering you in the right direction, though you have to understand that, like a human expert, they have to guess what the text is supposed to represent. There are corner cases where they just don't have enough information to guess correctly, either because there are several candidate encodings with very few differences (for example, Latin-1 vs Latin-9 vs Windows-1252, all of which also overlap with plain 7-bit US-ASCII in the first 128 positions) or because the input doesn't contain enough information to establish any common patterns.
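A minimal sketch of that kind of guessing, using the third-party Python package chardet (the file name is taken from the question), might look like this:

import chardet   # third-party: pip install chardet

with open("non_ascii.txt", "rb") as f:
    raw = f.read()

guess = chardet.detect(raw)    # e.g. {'encoding': 'windows-1251', 'confidence': 0.9, ...}
print(guess)

if guess["encoding"]:
    # Eyeball a decoded excerpt to judge whether the guess makes sense.
    print(raw.decode(guess["encoding"], errors="replace")[:200])

Keep in mind that the reported confidence is exactly that - a confidence, not a guarantee.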
I have about 3,500 text files, of mixed character sets: ISO-8859, UTF-8, ASCII, UTF-16, and maybe others.
I want to merge them all into one Unicode text file, so I can run a Python script on it that expects Unicode input.
If I use cat, it doesn't exactly work.
What is the best way to solve this?
You could convert them up front with a tool like iconv, or load them into Python with the correct encoding (by passing the correct encoding to open).
If you don't know what the encoding of each file is, it's more complicated, because you'll need to detect the encoding of each file. There are many heuristics, but no absolutely standard way to do this. Again, iconv can help a lot here.
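If you go the Python route, a rough sketch combining detection and merging could look like the following; the chardet package, the input_dir folder, and the merged.txt output name are assumptions for the example, not part of the question:

import chardet                  # third-party detector: pip install chardet
from pathlib import Path

with open("merged.txt", "w", encoding="utf-8") as out:
    for path in sorted(Path("input_dir").glob("*.txt")):        # adjust the pattern to your files
        raw = path.read_bytes()
        encoding = chardet.detect(raw)["encoding"] or "utf-8"   # fall back to UTF-8 if detection gives up
        out.write(raw.decode(encoding, errors="replace"))
        out.write("\n")

The per-file detection is still a guess, so it is worth spot-checking the merged output before feeding it to your script.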
I have a CSV file with accented characters and saved it in Notepad, selecting UTF-8 encoding. When I read the file using Java, it reads the BOM characters too.
So I want to save this file in UTF-8 format without Notepad prepending a BOM in the first place.
Otherwise, is there a built-in class in Java that eliminates the BOM characters present at the beginning when reading the contents of a file?
Use Notepad++ - it is free and much better than Notepad. It will let you save text without a BOM from the Encoding menu.
In Notepad++ v6 and older, choose Encoding → Encode in UTF-8 without BOM; in v7 and later, the entry is simply Encoding → UTF-8 (the BOM variant is listed separately as UTF-8-BOM).
When I encountered this problem in Java, I didn't find any library to parse these first three bytes (BOM). So my advice:
Use PushbackInputStream(in, 3).
Read the first three bytes.
If they are not a BOM (EF BB BF), push them back.
Process the stream as UTF-8.
I just learned from this Stack Overflow post, as #martin-geisler points out, that you can save files without the BOM in Windows Notepad, by selecting ANSI as the encoding.
I'm assuming that for more advanced uses this won't work, because the resulting file is probably not in the encoding you ultimately want but actually in ANSI; still, I tested and confirmed that this works to save a very small .php script without a BOM, using only Notepad.
I learned the long, hard way that Windows' Notepad is not a true editor, although I'd like to point out for others that, despite this, it is misleadingly what comes up when you type "editor" on newer Windows machines, at least on one of mine.
I am currently using Emacs and other editors to solve this problem.
Use Notepad++ instead. See my personal blog post on it. From within Notepad++, choose the "Encoding" menu, then "Encode in UTF-8 without BOM".
Notepad on Windows 10 version 1903 (May 2019 update) and later versions supports saving to UTF-8 without a BOM. In fact, UTF-8 is the default file format now.
Reference: Windows 10 Notepad is Getting Better UTF-8 Encoding Support
The answer is: Not at all. Notepad can't do that.
In Java you can just skip the first three bytes (the UTF-8 BOM, EF BB BF) in your InputStream and be done.
You might want to try out Notepad2 or Notepad++. Those Notepad replacements give you the option to choose whether to output a BOM.
As for a Java solution, as far as I know, Java does not strip the BOM from standard UTF-8 for you. I googled and found Java's UTF-8 and Unicode writing is broken - Use this fix, which might be the solution.
We're using the utility BOMStripperInputStream.java to strip the BOM from our input if present.
I'm having some trouble with text encoding. Parsing a website gives me a Data.Text string
"Project - Fran\195\167ois Dubois",
which I need to write to a file. So I'm using Data.Text.Lazy.Encoding.encodeUtf8 to convert it into a ByteString. The problem is that this yields garbled output:
"Project - François Dubois".
What am I missing here?
If you have gotten Fran\195\167ois inside your Data.Text, you already have a UTF-8-encoded François.
That's inconvenient because Data.Text[.Lazy] is supposed to be UTF-16-encoded text, and the two code units 195 and 167 are interpreted as the Unicode code points 195 and 167, which are 'Ã' and '§' respectively. If you UTF-8-encode the text, these are converted to the byte sequences c383 ([195,131]) and c2a7 ([194,167]) respectively.
The most likely way for getting into this situation is that the data you got from the website was UTF-8 encoded, but was interpreted as ISO-8859-1 (Latin 1) encoded (or another 8-bit encoding; 8859-15 is widespread too).
The proper way of handling it is avoiding the situation altogether [that may not be possible, unfortunately].
If the source of your data states its encoding correctly - as a website should - find out the encoding and interpret the data accordingly. If an incorrect encoding is stated, you are of course out of luck, and if no encoding is specified, you have to guess right (the natural guess nowadays is UTF-8, at least for languages using a variant of the Latin alphabet).
If avoiding the situation is not possible, the easiest ways of fixing it are
replacing the occurrences of the offending sequence with the desired one before encoding:
encodeUtf8 $ replace (pack "Fran\195\167ois") (pack "Fran\231ois") contents
assuming everything else is ASCII or inadvertent UTF-8 too, interpret the Text code units as bytes:
Data.ByteString.Lazy.Char8.pack $ Data.Text.Lazy.unpack contents
The former is more efficient, but becomes inconvenient if there are many different misencodings (caused by different accented letters, for example). The latter works only in the assumed situation (no code units above 255 in the Text) and is rather inefficient for long texts.
I am not completely sure if less can show UTF-8 encoded characters properly. GVim can. You can check this link on SO to find out how you can view UTF-8 data in gVim.
And regarding the other issue of being able to pass this to graphviz, I think you need to set the encoding on the command-line as explained in the Graph NonAscii FAQ.
From what you are explaining, I think there are no issues with how the data is being persisted. If you pass the encoding properly to graphviz, I think your problem will be resolved.
P.S: Creating an answer since it is easier to create descriptive links
From the answers to this question it appears there's a file somewhere on our server that's been saved with the wrong encoding.
I've seen this happen before - most often when pasting from Word into Visual Studio, when "smart quotes" can interfere with Visual Studio's encoding settings when saving the file.
Thing is - the problem I'm having involves 20-30 different script files, include files and so on (hey, that was how we kept it modular back in the day...) and I really don't want to open every one of them in Visual Studio and check the file encodings individually.
Is there any way I can analyze a folder tree full of files and spit out a list of each filename along with the text encoding used to save the file? (Or - if encodings aren't clearly specified - work out what encoding Microsoft IIS thinks was used to save the file?)
A text file's encoding is just how it was intended to be interpreted, so you cannot detect this in a reliable way. You can probably detect UTF-8 and 16-bit Unicode, but there's no way of distinguishing between ISO-8859-1/2/3/4 etc. (or Windows-1250/1251/1252 etc.).
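If a rough inventory is good enough, a small Python sketch along those lines (the scripts folder name is a placeholder) can at least separate BOM-marked files and valid UTF-8 from "some 8-bit encoding", which is about as much as can be said reliably:

import os

def classify(data):
    # BOMs are the only unambiguous markers; everything else is a guess.
    if data.startswith(b"\xef\xbb\xbf"):
        return "UTF-8 with BOM"
    if data.startswith(b"\xff\xfe") or data.startswith(b"\xfe\xff"):
        return "UTF-16 (BOM)"
    try:
        data.decode("utf-8")
        return "ASCII or valid UTF-8"
    except UnicodeDecodeError:
        return "some 8-bit encoding (cannot tell which)"

for root, _, files in os.walk("scripts"):         # placeholder folder name
    for name in files:
        path = os.path.join(root, name)
        with open(path, "rb") as f:
            print(path, "->", classify(f.read()))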
If your document contains "weird" quotes, other than "" or '', you can simply find these, and replace them manually.