Text Files Are Downloading In ANSI Instead Of UTF-8 BOM

When I try to download any text, SRT, or EXE program file, it downloads in ANSI encoding instead of UTF-8 BOM encoding. I'm a Turkish user. Some Turkish characters like ı, ş, İ, and ğ are shown as ý, þ, ð.
For example, Internet Download Manager uses language files stored as plain text, and some Turkish characters in them are broken.
The Windows language is entirely Turkish. I bought the laptop in the USA.
How can I fix this problem? Please help me.

I fixed this issue by editing the administrative language settings (Control Panel → Region → Administrative → Change system locale).

Related

JavaScript export CSV encoding UTF-8 and using Excel to open issue

I have been reading quite a few posts, including this one:
Javascript export CSV encoding utf-8 issue
I know lots of them mention that it's because of Microsoft Excel, and that using something like this should work:
https://superuser.com/questions/280603/how-to-set-character-encoding-when-opening-excel
I have tried on Ubuntu (which didn't even have any issue) and on Windows 10, where I had to use the second post's approach to import. The Mac has the biggest problem, because Excel there does not import the file and does not read the Unicode at all.
Is there any way I can enforce, in the export code, that Excel opens the file as UTF-8? Or is there some other workaround I might be able to try?
Thanks in advance for any help and suggestions.
Many Windows applications, including Excel, assume the localized ANSI encoding (Windows-1252 on US Windows) when opening a file, unless the file starts with a byte order mark (BOM). While UTF-8 doesn't need a BOM, a UTF-8-encoded BOM at the start of a file clues Excel in that the file is UTF-8. The byte sequence is EF BB BF, and the equivalent Unicode code point is U+FEFF.
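As a minimal sketch of that idea (shown here in Java; in JavaScript the equivalent trick is to prepend the character "\uFEFF" to the CSV string before building the download Blob), with a hypothetical output file name:

    import java.io.FileOutputStream;
    import java.io.OutputStream;
    import java.nio.charset.StandardCharsets;

    public class BomCsvExport {
        public static void main(String[] args) throws Exception {
            String csv = "name,city\nGünther,Málaga\n";
            try (OutputStream out = new FileOutputStream("export.csv")) { // hypothetical file name
                // UTF-8 BOM (EF BB BF) signals to Excel that the file is UTF-8
                out.write(new byte[] { (byte) 0xEF, (byte) 0xBB, (byte) 0xBF });
                out.write(csv.getBytes(StandardCharsets.UTF_8));
            }
        }
    }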

Azure Cognitive Services speech-to-text accentuation in Spanish

I'm having issues with the C++ SDK of Azure Cognitive Services speech-to-text for the Spanish language, related to accentuation.
I'm seeing the following error:
'sÃ' instead of 'Si' or 'Sí', which would be the correct transcription.
I'm guessing this is due to the API response encoding. Is there any way to set headers to get the response in UTF-8, or in any encoding with full Spanish support?
The return is UTF-8 encoded; if you redirect the output to a file and load it into a UTF-8-capable editor, you will see the text is actually correct. The problem is UTF-8 output in the Windows cmd console.
There are several Stack Overflow discussions about this. Perhaps something like this helps: how to convert utf-8 to ASCII in c++?
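One common workaround on the console side (an assumption about your setup, not something the SDK requires): switch the console to the UTF-8 code page by running chcp 65001 before starting the program, or call the Win32 API SetConsoleOutputCP(CP_UTF8) at startup, and use a console font that can display the accented characters.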

delete special characters preceding shebang (M-oM-;M-?#!/bin/bash) [duplicate]

I have a CSV file with special accents and saved it in Notepad by selecting UTF-8 encoding. When I read the file using Java, it reads the BOM characters too.
So I want to save this file in UTF-8 format without Notepad prepending a BOM.
Otherwise, is there a built-in class in Java that eliminates the BOM characters present at the beginning when reading the contents of a file?
Use Notepad++ - it is free and much better than Notepad. It will help you save text without a BOM via Encoding → Encode in UTF-8 without BOM (the menu wording differs slightly between Notepad++ v6 and older and v7+).
When I encountered this problem in Java, I didn't find any library to parse these first three bytes (the BOM). So my advice (see the sketch after these steps):
Use PushbackInputStream(in, 3).
Read the first three bytes
If it's not a BOM (EF BB BF), push them back
Process the stream as UTF-8
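A minimal sketch of those steps (the class and method names here are mine, not from any library):

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.PushbackInputStream;

    public class BomSkipper {
        // Wraps a stream and consumes a leading UTF-8 BOM (EF BB BF) if present.
        public static InputStream skipBom(InputStream in) throws IOException {
            PushbackInputStream pb = new PushbackInputStream(in, 3);
            byte[] head = new byte[3];
            int n = pb.read(head, 0, 3);
            boolean bom = n == 3
                    && head[0] == (byte) 0xEF
                    && head[1] == (byte) 0xBB
                    && head[2] == (byte) 0xBF;
            if (!bom && n > 0) {
                pb.unread(head, 0, n); // not a BOM: push the bytes back
            }
            return pb; // now safe to process as UTF-8
        }
    }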
I just learned from this Stack Overflow post, as @martin-geisler points out, that you can save files without the BOM in Windows Notepad by selecting ANSI as the encoding.
I'm assuming this won't work for more advanced uses, because the resulting file is probably not in the desired final encoding but actually ANSI; but I tested and confirmed that it works to save a very small .php script without a BOM using only Notepad.
I learned the long, hard way that Windows' Notepad is not a true editor, although I'd like to point out for others that, despite this, it is misleadingly suggested when you type "editor" on newer Windows machines, at least on one of mine.
I am currently using Emacs and other editors to solve this problem.
Use Notepad++ instead. See my personal blog post on it. From within Notepad++, choose the "Encoding" menu, then "Encode in UTF-8 without BOM".
Notepad on Windows 10 version 1903 (May 2019 update) and later supports saving to UTF-8 without a BOM. In fact, UTF-8 without a BOM is now the default file format.
Reference: Windows 10 Notepad is Getting Better UTF-8 Encoding Support
The answer is: Not at all. Notepad can't do that.
In Java you can just skip the BOM yourself and be done: it is the first three bytes (EF BB BF) of the InputStream, or the single character U+FEFF if you read the file through a UTF-8 Reader.
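A minimal sketch of the Reader-level variant (the class name and path argument are hypothetical):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PushbackReader;
    import java.io.Reader;
    import java.nio.charset.StandardCharsets;

    public class SkipBomReader {
        public static Reader open(String path) throws IOException {
            PushbackReader r = new PushbackReader(
                    new InputStreamReader(new FileInputStream(path), StandardCharsets.UTF_8));
            int first = r.read();
            if (first != -1 && first != 0xFEFF) {
                r.unread(first); // no BOM: put the first character back
            }
            return r;
        }
    }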
You might want to try out Notepad2 or Notepad++. Those Notepad replacements let you choose whether to output a BOM.
As for a Java solution: as far as I know, Java does not strip the BOM automatically when reading UTF-8. I googled and found Java's UTF-8 and Unicode writing is broken - Use this fix, which might be the solution.
We're using the utility BOMStripperInputStream.java to strip the BOM from our input if present.

String unknown Eclipse

I've changed the encoding in Eclipse, but all my strings with special characters now look like this: "�". The old encoding was the default (Cp1252); now it is UTF-8. How can I fix the strings with special characters?
Thanks.
Well, imagine you switch your brain to only understand Chinese. Could you read an English text anymore?
You changed the way Eclipse interprets the bits of your source code, so you need to translate the source code from Cp1252 to UTF-8.
I don't know if Eclipse is able to do this, but Notepad++ is:
In Notepad++, open the Java file whose encoding you want to change.
Click on Encoding
Select Convert to UTF-8
Save the file
When you now click on Encoding again, there should be a dot in front of Encode in UTF-8
Edit: Notepad++ recognizes the encoding in use, so you can read it off there. Copy and paste from Notepad++ to Eclipse won't work, because you would be copying the same string Eclipse couldn't read. You have to change the encoding of the file itself (a code sketch follows).
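If you prefer to do the conversion in Java itself, here is a minimal sketch (the file name is hypothetical):

    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class Recode {
        public static void main(String[] args) throws Exception {
            Path file = Paths.get("MyClass.java"); // hypothetical file name
            byte[] raw = Files.readAllBytes(file);
            String text = new String(raw, Charset.forName("Cp1252")); // decode with the old encoding
            Files.write(file, text.getBytes(StandardCharsets.UTF_8)); // re-encode as UTF-8
        }
    }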

How to convert old Japanese text encodings?

I run a MacBook Pro under 10.6.7, and I am competent in Unix. I have old Japanese text files in various encodings (EUC, SJS, New-JIS) that I can no longer read or display. The old program jconv.c does not help, since it only converts among these encodings. Is there a way to convert them (or any one of them) to the current "normal" Japanese text that can be seen in TextEdit, etc.? I have set the Terminal preferences to SJS and EUC (I can't find New-JIS), among others, including UTF-8. Eleanor
I recommend you look into iconv for doing such conversions.
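For example (a sketch, assuming the input is EUC-JP; run iconv -l to list the encoding names your system knows):

    iconv -f EUC-JP -t UTF-8 old.txt > new.txt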
nkf is a Linux command-line program which can meet your requirement.
I am sure it is available on Debian, and you can download the source code from the net.
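A sketch of typical nkf usage (assuming it is installed; -g guesses the input encoding, -w converts the input to UTF-8):

    nkf -g old.txt
    nkf -w old.txt > new.txt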
