Command or option for the xgettext, msginit, msgfmt sequence for setting the MIME type?

An msgfmt “invalid multibyte sequence” error on Polish text can be corrected by manually editing the MIME Content-Type charset in the template file. Is there some command or option in the xgettext, msginit, msgfmt sequence for setting the MIME type?
cat >plt.cxx <<EOF
// plt.cxx
#include <libintl.h>
#include <locale.h>
#include <iostream>
int main() {
    setlocale(LC_ALL, "");
    bindtextdomain("plt", ".");
    textdomain("plt");
    std::cout << gettext("Invalid input. Enter a string at least 20 characters long.") << std::endl;
}
EOF
g++ -o plt plt.cxx
xgettext --package-name plt --package-version 1.2 --default-domain plt --output plt.pot plt.cxx
sed --in-place plt.pot --expression='s/CHARSET/UTF-8/'
msginit --no-translator --locale pl_PL --output-file plt_polish.po --input plt.pot
sed --in-place plt_polish.po --expression='/#: /,$ s/""/"Nieprawidłowo wprowadzone dane. Wprowadź ciąg przynajmniej 20 znaków."/'
mkdir --parents ./pl_PL.utf8/LC_MESSAGES
msgfmt --check --verbose --output-file ./pl_PL.utf8/LC_MESSAGES/plt.mo plt_polish.po
LANGUAGE=pl_PL.utf8 ./plt

Just give the full locale name and msginit will set the charset correctly:
msginit --no-translator --input=xx.pot --locale=ru_RU.UTF-8
results in
"Language: ru\n"
"Content-Type: text/plain; charset=UTF-8\n"

There is no argument for setting the output character encoding directly, but in practice this should not be a problem: your PO editor will automatically use an appropriate character encoding when saving the PO file (one that supports all the characters used in the translation) and replace CHARSET in the file with the name of that encoding. If it doesn’t, file a bug.
The only problem would be if the POT file contained non-ASCII characters, but xgettext does have a --from-code argument for this, which specifies the encoding of the input files. If the input contains non-ASCII characters and --from-code is set to the correct encoding, the output POT file will have the character encoding set to UTF-8 (this need not be equal to the input character encoding). However, if the input files only contain ASCII characters, --from-code=UTF-8 will unfortunately have no effect.
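For example, if plt.cxx contained non-ASCII string literals saved as UTF-8, the xgettext call from the question could be extended like this (hypothetical, since the actual source here is pure ASCII):
xgettext --from-code=UTF-8 --package-name plt --package-version 1.2 --default-domain plt --output plt.pot plt.cxx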
msginit does in fact automatically set the character encoding to something ‘appropriate’ for the chosen target locale. However, the list of locale to character encoding pairs seems outdated; UTF-8 is now really the best choice for all languages.
An alternative would be to use pot2po instead of msginit. This always uses UTF-8 automatically, AFAICS. However, unlike msginit, it does not automatically fill out the plural forms of the PO file, which may or may not be a problem (some think it is the job of the PO editor to do this).
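As a sketch, the pot2po equivalent of the msginit step might look like this (pot2po ships with the Translate Toolkit; the flags follow its converter conventions, so double-check them):
pot2po --input plt.pot --output plt_polish.po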

Related

iconv command is not changing the encoding of a plain text file to another encoding

On Linux I created a plain text file. Using "file -i" I see the file encoding is "us-ascii". After trying the commands below, it still shows the output file encoding as "us-ascii". Could you please tell me how to change the encoding? Or is there a way to download some encoded file which I can't read?
iconv -f US-ASCII -t ISO88592//TRANSLIT -o o.txt ip.txt
iconv -f UTF-8 -t ISO-8859-1//TRANSLIT -o op.txt ip.txt
I am expecting either that iconv changes the encoding or that I can download some encoded file.
If your file contains only ASCII characters, then there is no difference between the ASCII, UTF-8, and the various ISO-8859-x encodings. So after conversion, you will end up with exactly the same file.
A text file does not store any information about which encoding was used. Therefore, the file command applies a few heuristics, but at the end of the day it is just a guess. And as the files are identical, the result will always be the same.
To see a difference, you must use characters that are encoded differently in the different encodings, or that are not available at all in one of them, e.g. ă, € or 😊.
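As a quick demonstration (a sketch with arbitrary file names), a single non-ASCII character is enough to make the conversion visible:
printf 'caf\xc3\xa9\n' > utf8.txt                    # "café" as UTF-8
file -i utf8.txt                                     # charset=utf-8
iconv -f UTF-8 -t ISO-8859-1 -o latin1.txt utf8.txt  # é becomes the single byte 0xE9
file -i latin1.txt                                   # charset=iso-8859-1
cmp utf8.txt latin1.txt                              # the files now differ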

How to avoid an iconv error halting my program

I have a UTF-8 document to be converted to Big5 encoding using iconv with the command below:
iconv -f utf-8 -t big5 $inputFile -o $outputFile
However, some of the UTF-8 character encodings are incomplete, because I imposed a byte-size limit on each line of the document (e.g. 40 bytes per line), so some UTF-8 characters get cut in the middle.
Since iconv cannot find a corresponding Big5 encoding for these incomplete UTF-8 sequences, it raises an error and stops.
Is there any way to keep iconv from halting, skip the incomplete UTF-8 sequences, and continue converting the rest of the document to Big5?
I'm not sure that is what you are looking for, but, to quote man iconv:
DESCRIPTION
The iconv program converts the encoding of characters in inputfile, or
from the standard input if no filename is specified, from one coded
character set to another.
OPTIONS
-c Omit invalid characters from output.
[...]
The man page is not really clear, but when you use that option, characters in the source file that are invalid in the source encoding are discarded.
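Applied to the command in the question, that would be (reusing the question's variables):
iconv -c -f utf-8 -t big5 $inputFile -o $outputFile
Be aware that -c silently drops the broken sequences, so the output will simply be missing them.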

How to remove non-UTF-8 characters from a text file

I have a bunch of Arabic, English, and Russian files which are encoded in UTF-8. Trying to process these files using a Perl script, I get this error:
Malformed UTF-8 character (fatal)
Manually checking the content of these files, I found some strange characters in them.
Now I'm looking for a way to automatically remove these characters from the files.
Is there any way to do it?
This command:
iconv -f utf-8 -t utf-8 -c file.txt
will clean up your UTF-8 file, skipping all the invalid characters.
-f is the source format
-t the target format
-c skips any invalid sequence
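Note that iconv writes to standard output, so to keep the cleaned text redirect it to a new file and, if you want, replace the original afterwards; do not redirect onto the input file itself. A sketch:
iconv -f utf-8 -t utf-8 -c file.txt > file_clean.txt
mv file_clean.txt file.txt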
Any such method must read the file byte by byte and fully understand how characters are built up from bytes. The simplest method is to use an editor which will read anything but only output UTF-8 characters; Textpad is one choice.
iconv can do it:
iconv -f cp1252 foo.txt
None of the methods here or on any other similar questions worked for me.
In the end what worked was simply opening the file in Sublime Text 2. Go to File > Reopen with Encoding > UTF-8. Copy the entire content of the file into a new file and save it.
May not be the expected solution but putting this out here in case it helps anyone, since I've been struggling for hours with this.

Encoding problem?

I work with txt files, and I recently found e.g. these characters in a few of them:
http://pastebin.com/raw.php?i=Bdj6J3f4
What could these characters be? Wrong character encoding? I just want to use normal UTF-8 TXT files, but when I use:
iconv -t UTF-8 input.txt > output.txt
it's still the same.
When I open the files in gedit and copy+paste them into other txt files, there are no characters like the ones in the pastebin. So gedit can solve this problem; it encodes the TXT files well. But there are too many txt files.
Why are there http://pastebin.com/raw.php?i=Bdj6J3f4 -like chars in the text files? Can they be converted to "normal chars"? I can't see e.g. the "Ì" char when I open the files with vim, only after I "work with them" (e.g. with awk, etc.).
It would help if you posted the actual binary content of your file (perhaps by using the output of od -t x1). The pastebin returns this as HTML:
"Ì"
"Ã"
"é"
The first line corresponds to U+00C3 U+0152. The last line corresponds to U+00C3 U+00A9, which is the string "é" (U+00E9) encoded in UTF-8 ("\xc3\xa9") with the UTF-8 bytes reinterpreted as Latin-1.
From man iconv:
The iconv program converts text from
one encoding to another encoding. More
precisely, it converts from the
encoding given for the -f option to
the encoding given for the -t option.
Either of these encodings defaults to
the encoding of the current locale
Because you didn't specify the -f option, it assumes the file is encoded in your current locale's encoding (probably UTF-8), which apparently is not true. Your text editors (gedit, vim) do some encoding detection; you can check which encoding they detect (I don't know how, as I don't use either of them) and use that as the -f option to iconv (or save the open file with your desired encoding using one of those text editors).
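For instance, if your editor detects the file as Windows-1252, you could convert it explicitly instead of relying on the locale default (a hypothetical example; input.txt stands in for your file):
iconv -f WINDOWS-1252 -t UTF-8 input.txt > output.txt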
You can also use a tool for encoding detection like Python's chardet module:
$ python3 -c "import chardet as c; print(c.detect(open('file.txt', 'rb').read(4096)))"
{'confidence': 0.7331842298102511, 'encoding': 'ISO-8859-2'}
..solved!
How: I just right-clicked on the folders containing the TXT files and pasted them into another folder.. :O and presto, there are no more ugly chars..

How to determine encoding table of a text file

I have .txt and .java files and I don't know how to determine the encoding table of the files (Unicode, UTF-8, ISO-8859, …). Does there exist any program to determine the file encoding or to see the encoding?
If you're on Linux, try file -i filename.txt.
$ file -i vol34.tex
vol34.tex: text/x-tex; charset=us-ascii
For reference, here is my environment:
$ which file
/usr/bin/file
$ file --version
file-5.09
magic file from /etc/magic:/usr/share/misc/magic
Some file versions (e.g. file-5.04 on OS X/macOS) have slightly different command-line switches:
$ file -I vol34.tex
vol34.tex: text/x-tex; charset=us-ascii
$ file --mime vol34.tex
vol34.tex: text/x-tex; charset=us-ascii
Also, have a look here.
Open the file with Notepad++ and you will see the name of the encoding in the lower right corner. In the Encoding menu you can change the encoding and save the file.
You can't reliably detect the encoding of a text file. What you can do is make an educated guess by searching for a non-ASCII char and trying to determine whether it is a Unicode combination that makes sense in the languages you are parsing.
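One way to locate such characters from the shell is GNU grep in Perl-regex mode, matching any byte with the high bit set (a sketch; requires a grep built with -P support):
grep -n -P '[\x80-\xFF]' file.txt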
See this question and the selected answer. There’s no sure-fire way of doing it. At most, you can rule things out. The UTF encodings you’re unlikely to get false positives on, but the 8-bit encodings are tough, especially if you don’t know the starting language. No tool out there currently handles all the common 8-bit encodings from Macs, Windows, Unix, but the selected answer provides an algorithmic approach that should work adequately for a certain subset of encodings.
In a text file there is no header that stores the encoding. You can try the Linux/Unix command file, which tries to guess the encoding:
file -i unreadablefile.txt
or on some systems
file -I unreadablefile.txt
But that often gives you text/plain; charset=iso-8859-1 although the file is unreadable (cryptic glyphs).
This is what I did to find the correct encoding of an unreadable file and then transcode it to UTF-8 (after installing iconv). First I tried all encodings, displaying (with grep) a line that contained the word www (a website address):
for ENCODING in $(iconv -l); do echo -n "$ENCODING "; iconv -f $ENCODING -t utf-8 unreadablefile.txt 2>/dev/null| grep 'www'; done | less
This command line shows the tested file encoding followed by the translated/transcoded line.
There were some lines which showed readable and consistent (one language at a time) results. I manually tried some of them, for example:
ENCODING=WINDOWS-936; iconv -f $ENCODING -t utf-8 unreadablefile.txt -o test_with_${ENCODING}.txt
In my case it was a Chinese Windows encoding, which is now readable (if you know Chinese).
Does there exist any program to determine the file encoding or to see the encoding?
This question is 10 years old as I write this, and the answer is still, "No" - at least not reliably. There's not been much improvement unfortunately. My recent experience suggests the file -I command is very much "hit-or-miss". For example, when checking a text file on macOS 10.15.6:
% file -I somefile.asc
somefile.asc: application/octet-stream; charset=binary
somefile.asc was a text file; all the characters in it were encoded as UTF-16 Little Endian. How did I know this? I used BBEdit - a competent text editor. Determining the encoding used in a file is certainly a tough problem, but...?
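If the file starts with a byte-order mark, a hex dump makes UTF-16 LE easy to spot from the command line (a sketch; the BOM is optional, so its absence proves nothing):
head -c 2 somefile.asc | xxd
The bytes FF FE at the start are the UTF-16 little-endian byte-order mark.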
If you are using Python, the chardet package is a good option. For example:
from chardet.universaldetector import UniversalDetector

files = ['a-1.txt', 'a-2.txt']
detector = UniversalDetector()
for filename in files:
    print(filename.ljust(20), end='')
    detector.reset()
    for line in open(filename, 'rb'):
        detector.feed(line)
        if detector.done:
            break
    detector.close()
    print(detector.result)
gives me as a result:
a-1.txt {'encoding': 'Windows-1252', 'confidence': 0.7255358182877111, 'language': ''}
a-2.txt {'encoding': 'utf-8', 'confidence': 0.99, 'language': ''}
