Section sign (§) when displaying .txt in browser?

I have the following issue: We have a tool that saves some data to a .txt and this data is delimited by §. This data can be accessed through the webservers directory index and viewed in the browser.
But neither Firefox nor Chrome displays the § character correctly, and copy/pasting the data does not yield the correct character either.
The whole thing works fine, though, when saving with Ctrl + S: the file is saved as UTF-8 and the character is displayed properly in any text editor.
The file is correctly encoded as UTF-8, and the Content-Type header I get is text/plain; charset=UTF-8.
Should I be mistaken and § not be representable in UTF-8 (though why would the text editor show it correctly then?), please suggest a proper charset to use, keeping in mind that we may also have Chinese / Japanese characters etc. in our data.
Thanks for your help.
P.S. By "not displayed correctly" I mean the black diamond with the ? in it.
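That black diamond with the "?" is U+FFFD, the Unicode replacement character: browsers emit it when the byte stream is not valid UTF-8. A common cause (an assumption here, worth checking) is that the file is actually saved in a legacy encoding such as Latin-1, where § is the single byte 0xA7, while being served with a UTF-8 charset header. A quick Python sketch of the difference:

```python
# § exists in UTF-8 -- it is the two-byte sequence C2 A7.
assert "§".encode("utf-8") == b"\xc2\xa7"

# In Latin-1 (ISO-8859-1), § is the single byte A7.
assert "§".encode("latin-1") == b"\xa7"

# Decoding a Latin-1 file as UTF-8 hits an invalid byte sequence, and
# the lone A7 becomes U+FFFD: the black diamond with the "?" in it.
print(b"\xa7".decode("utf-8", errors="replace"))  # '\ufffd'
```

So § is perfectly printable in UTF-8; if the replacement character appears, the bytes on the wire are most likely not UTF-8 despite the charset header.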

Related

What MIME type for text with ANSI escape sequences (color codes)?

The log files on our build server come with ANSI Escape Sequences for colored text. (As you can tell from the square brackets, semicolons, trailing m's...)
So the raw view in its own browser tab looks ugly - just as if a webserver served you a JPEG with MIME type text/plain, or you opened one in notepad.exe... So my question is:
Is there a mimetype for properly displaying ANSI-color-codes enriched text?
(Comments: same grief – one correctly points out that strictly speaking it's just an encoding; another offers a bookmarklet to help survive.)
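There is no MIME type that makes a browser render ANSI colour codes; text/plain simply shows the raw escape bytes. A common workaround (my suggestion, not from the thread) is to strip or translate the sequences before serving the log. A minimal Python sketch that strips CSI colour sequences:

```python
import re

# CSI sequences: ESC, '[', parameter bytes (digits and semicolons),
# and a final letter -- which covers the colour codes described above
# (square brackets, semicolons, trailing m's).
ANSI_CSI = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")

def strip_ansi(text: str) -> str:
    """Remove ANSI CSI escape sequences from a log line."""
    return ANSI_CSI.sub("", text)

line = "\x1b[31mERROR\x1b[0m build failed"
print(strip_ansi(line))  # ERROR build failed
```

A fuller solution would translate each SGR code (e.g. 31 = red) into HTML with a colour style, which is what the various ANSI-to-HTML converter tools do.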

How to enable my python code to read from Arabic content in Excel?

I have two related problems. I'm working on an Arabic dataset in Excel. I think that Excel somehow reads the contents as ؟؟؟؟؟, because when I tried to replace the character '؟' with '?', it replaced the whole text in the sheet. But when I replace or search for another letter, it works.
Second, I'm trying to edit the sheet using Python, but I'm unable to write Arabic letters (I'm using jGRASP). For example, when I type the letter 'ل' it appears as 0644, and when I run the code this message appears: "Error encoding text. Unable to encode text using charset windows-1252".
0644 is the character code of the character in hex. jGRASP displays that when the font does not contain the character. You can use "Settings" > "Font" in jGRASP to choose a CSD font that contains the characters you need. Finding one that has those characters and also works well as a coding font might not be possible, so you may need to switch between two fonts.
jGRASP uses the system character encoding for loading and saving files by default. Windows-1252 is an 8-bit encoding used on English language Windows systems. You can use "File" > "Save As" to save the file with the same name but a different encoding (charset). Once you do that, jGRASP will remember it (per file) and you can load and save normally. Alternately, you can use "Settings" > "CSD Windows Settings" > "Workspace" and change the "Default Charset" setting to make the default something other than the system default.
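The same failure can also be avoided inside the Python code itself: always pass an explicit encoding when writing text files, so the platform default (windows-1252, which contains no Arabic letters) never gets used. A small sketch - the file name is just a placeholder:

```python
text = "\u0644"  # the Arabic letter 'ل' (LAM)

# This reproduces the error jGRASP reports: windows-1252 is an 8-bit
# Western European encoding and cannot represent Arabic letters.
try:
    text.encode("windows-1252")
except UnicodeEncodeError as e:
    print("windows-1252 fails:", e)

# Writing with an explicit UTF-8 encoding sidesteps the system default.
with open("arabic.txt", "w", encoding="utf-8") as f:  # placeholder filename
    f.write(text)

with open("arabic.txt", encoding="utf-8") as f:
    assert f.read() == text  # round-trips cleanly in UTF-8
```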

recognising encodings in Excel

It is my understanding that txt files do not have encoding information stored so text editors simply make educated guesses about encoding of a given text file and then display the file on screen using that guessed encoding. If the editor guessed right you get your text on the screen, if the editor guessed wrong, then you (sometimes) get gibberish. Am I getting this right so far?
Now on to my problem. I have my bank statements in a CSV file. When I open it in MS Excel 14 (MS Office 2010), it recognises the encoding and displays the problematic word as "obračun". Great. When I open the file in Emacs 24.3.1, it fails to recognise the correct encoding and displays the problematic word as "obra鑾n". Not so great.
My question is: how do I ask Excel which encoding the file is in? So I can tell that to Emacs since Excel obviously guessed correctly.
Thanks.
This could be a possible answer: http://metty-mathews.blogspot.si/2013/08/excel2013-character-encoding.html
After I opened ‘Advanced’ – ‘Web Options’ – ‘Encoding’, it said "Central European (Windows)" in the "Save this document as:" field. That turns out to be Microsoft's name for the Windows-1250 encoding, and my file was indeed encoded in it.
Whether this is pure luck or this field really shows which encoding Excel is using to display the text, I do not know.
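The guessing game described above can be reproduced directly. The sketch below (Python, using "obračun" from the question) writes the word in Windows-1250 and shows how different decoders see the same bytes - only the right one recovers the text, while the exact mojibake depends on which encoding is wrongly guessed:

```python
# Windows-1250, aka Microsoft's "Central European (Windows)".
data = "obračun".encode("cp1250")

# A text editor's encoding guesser tries candidates against raw bytes.
for enc in ("cp1250", "utf-8", "gbk"):
    try:
        print(enc, "->", data.decode(enc))
    except UnicodeDecodeError:
        print(enc, "-> not even decodable")
```

This is why a plain .txt or .csv can look fine in one program and garbled in another: the bytes carry no encoding label, so each program guesses on its own.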

How to manually specify a Byte Order Mark in CSV

I have a CSV that is encoded in Unicode but lacks a byte order mark at the start. As such, Excel (2013) opens it without detecting the encoding correctly (I think it assumes ASCII if no BOM is specified...), meaning that certain characters are displayed incorrectly.
From reading around, I gather that a BOM of "\uFEFF" should be entered at the start of the CSV file. I have tried opening the file in a text editor and adding the characters, e.g.
\uFEFF
r1test 1, r1text2, r1text3
r2test 1, r2text2, r2text3
However, this does not solve the problem - the characters "\uFEFF" show up in the first row when I open the file in Excel, rather than being interpreted as a BOM. I am not sure what I am doing wrong, or in what form the text should be specified so that it is interpreted as a BOM rather than as text in the first row of the data.
I have only very limited experience using CSV, and have only just heard of a BOM... so I could be implementing this completely wrong!
(For reference, I know that I could specify the encoding if I used the import-data option within Excel... however, I really want to work out how to get it correctly specified in advance, so that I can just open the CSV. I have several thousand of these files that I am creating and exporting - once I know how to do this 'manually' [i.e. by adding some text at the start of a file], I can configure Python to do it automatically.)
Thanks in advance
For someone else wanting to tell Excel to add a BOM: See if you can "Save as Unicode Text".
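The reason the attempt above fails is that "\uFEFF" is an escape sequence, not literal text: typing those six characters into an editor puts the characters backslash-u-F-E-F-F into the file, whereas a BOM is the actual bytes EF BB BF (for UTF-8). Since the plan is to automate this in Python anyway, the "utf-8-sig" codec writes the BOM for you - a sketch using the sample rows from the question:

```python
import csv

rows = [["r1test 1", "r1text2", "r1text3"],
        ["r2test 1", "r2text2", "r2text3"]]

# "utf-8-sig" writes the real BOM bytes (EF BB BF) at the start of the
# file -- not the six literal characters \ u F E F F.
with open("out.csv", "w", newline="", encoding="utf-8-sig") as f:
    csv.writer(f).writerows(rows)

# Confirm the BOM really is the first three bytes of the file.
with open("out.csv", "rb") as f:
    assert f.read(3) == b"\xef\xbb\xbf"
```

Excel 2013 recognises a UTF-8 BOM on open, so files written this way display non-ASCII characters correctly without going through the import wizard.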

Questions on Chinese Encoding

I'm trying to create a webpage in Chinese and I realized that while the text looks fine when I run it on browsers, once I change the Character Encoding, the text becomes gibberish. Here's what's happening:
I create my html file in Emacs, encoded in UTF-8.
I upload it to the server, and view it on my browsers (FF, IE, Chrome, Opera) - no problem.
I try to view the page in other encodings via FF > View > Character Encoding > All those different Chinese encoding systems, e.g. Chinese Simplified (HZ)
Apart from UTF-8, on every other encoding the text becomes gibberish.
I'm assuming this isn't a problem - i.e. browsers are smart enough to know which encoding the page is in, and parse the content accurately. What I'm wondering is why I can't read the Chinese text anymore once I change encoding - is it because I don't have Chinese fonts installed on my OS? Should I stick to UTF-8 if my audience are Chinese or should I choose among one of their many encoding systems?
Thanks in advance for your help/opinions.
UTF-8 isn't a 'catch-all' encoding. It's designed to cover international character sets for ease of use, but it is still an encoding, just like the other encodings you've selected. You would have to re-encode (convert) the text into each encoding for it to appear correctly when viewed with that encoding.
The viewer's encoding MUST match the file being read. Viewing UTF-8 as something else makes about as much sense as renaming .txt to .exe and trying to run it.
You should specify the correct encoding in the HTML. The option you're using in the web browser exists only for those rare occasions when a web developer screwed up and declared a different encoding than the one actually used, OR mixed up two different encodings on one page.
Of course changing the encoding in your browser will "break" the text! The browser takes the stream of UTF-8 bytes and tries to force another encoding onto the raw data. Needless to say, the result ain't pretty. Changing the encoding in the browser is NOT the equivalent of converting.
As you surmised correctly, modern browsers usually guess correctly -- but not always. As Agent_L says, make sure to declare the encoding in the headers.
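The point about forcing an encoding can be seen in a few lines of Python: the bytes stay the same, only the interpretation changes, so a wrong charset produces gibberish rather than a conversion (the sample string here is arbitrary):

```python
text = "中文网页"            # arbitrary Chinese sample text
raw = text.encode("utf-8")   # the bytes the server actually sends

# What the browser's "Character Encoding" menu does: same bytes,
# different decoder -- gibberish, not a translation.
garbled = raw.decode("gbk", errors="replace")
print(garbled)

# Only the encoding the file was written in recovers the text.
assert raw.decode("utf-8") == text
```

So sticking with UTF-8 (and declaring it in both the HTTP header and a meta charset tag) is the right call, even for a purely Chinese audience.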
