UTF-8 Character encoding of Web-Essentials-generated (e.g. minified) files - visual-studio-2012

I have a JavaScript file encoded as UTF-8 (with BOM). I'd like its minified counterpart to also be UTF-8 with BOM. But whenever I update the original file, the generated one reverts to ANSI.
I've tried using Visual Studio to overwrite the generated file with my preferred encoding (as suggested in "TypeScript error Web essential"), but without any luck.

I was able to force the minified files to save as UTF-8 with BOM by enabling the "Save UTF-8 files with BOM" option:
Tools > Options > Web Essentials > Misc > Save UTF-8 files with BOM
I'm not sure how one would save as UTF-8 without a BOM, but thankfully I don't need to.

Related

Possible to force CMake/MSVC to use UTF-8 encoding for source files without a BOM? C4819

All our source code is valid UTF-8; however, some users on Windows cannot build it because their systems are configured for a different encoding.
Without adding a BOM to the source files, is it possible to tell MSVC to treat all source as UTF-8, irrespective of the user's system encoding?
See MSDN's page on this topic (its approach requires adding a BOM).
You can try:
add_compile_options("$<$<C_COMPILER_ID:MSVC>:/utf-8>")
add_compile_options("$<$<CXX_COMPILER_ID:MSVC>:/utf-8>")
By default, Visual Studio detects a byte-order mark to determine if the source file is in an encoded Unicode format, for example, UTF-16 or UTF-8. If no byte-order mark is found, it assumes the source file is encoded using the current user code page, unless you have specified a code page by using /utf-8 or the /source-charset option.
References
Visual C++ Documentation, Build Reference: /utf-8 (Set source and executable character sets to UTF-8)
If you happen to write cross-platform code, note that solving the problem with a command-line switch (the add_compile_options lines above, or adding something like /utf-8 or /source-charset to your CFLAGS) may mean you have to do something similar for the other platforms as well.
If possible, it might therefore be better to avoid the problem instead of solving it, by using \uxxxx escape sequences instead of literal Unicode characters in strings: this way the source specifies which Unicode characters to use, but doesn't actually contain them.
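A minimal illustration of that idea (this Java sketch is mine; the same \uxxxx escape syntax also exists in C, C++, and JavaScript):

public final class EscapeDemo {
    public static void main(String[] args) {
        // Both strings print "café", but only the first keeps the source
        // file pure ASCII and therefore immune to source-encoding mix-ups.
        String escaped = "caf\u00e9"; // é written as an escape sequence
        String literal = "café";      // é stored as raw bytes in the file
        System.out.println(escaped);
        System.out.println(literal);
    }
}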

Node writes files with an encoding unrecognised by Sublime Text

When writing files with Node's fs.writeFileSync, Sublime Text is unable to determine the correct character encoding.
Even when I explicitly define the encoding in the options:
fs.writeFileSync( '/path/to/file', 'some string', {encoding:'utf-8'});
To get Sublime to recognise the file as UTF-8, I have to use File > Save with Encoding.
My hunch is that the problem is with Node, not Sublime, as I have encoding issues reading the file back into Node when special characters are present.
I'm using Sublime Text Build 3065.
Any ideas as to what's going on?
EDIT
Apologies, I forgot to mention that I use this command in the Sublime Text console in order to determine the encoding of the file:
view.encoding() // 'Undefined'

delete special characters preceding shebang (M-oM-;M-?#!/bin/bash) [duplicate]

I have a CSV file with accented characters, which I save in Notepad with UTF-8 encoding. When I read the file using Java, it reads the BOM characters too.
So I want to save this file as UTF-8 without Notepad prepending a BOM in the first place.
Alternatively, is there a built-in class in Java that eliminates the BOM characters present at the beginning when reading a file's contents?
Use Notepad++ - it is free and much better than Notepad. It will save text without a BOM: in Notepad++ v6 and older, use Encoding → Encode in UTF-8 without BOM; in v7 and later the same option is labelled Encoding → Encode in UTF-8 (the BOM-carrying variant became UTF-8-BOM).
When I encountered this problem in Java, I didn't find any library to parse these first three bytes (the BOM). So my advice (a code sketch follows the list):
Use PushbackInputStream(in, 3).
Read the first three bytes.
If they are not the BOM (EF BB BF), push them back.
Process the stream as UTF-8.
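A minimal sketch of that recipe (the class and method names are mine):

import java.io.IOException;
import java.io.InputStream;
import java.io.PushbackInputStream;

public final class BomSkipper {
    // Wraps a stream so a leading UTF-8 BOM (EF BB BF) is consumed if
    // present and pushed back otherwise; read the result as UTF-8.
    public static InputStream skipUtf8Bom(InputStream in) throws IOException {
        PushbackInputStream pushback = new PushbackInputStream(in, 3);
        byte[] head = new byte[3];
        int n = pushback.read(head, 0, 3);
        boolean isBom = n == 3
                && head[0] == (byte) 0xEF
                && head[1] == (byte) 0xBB
                && head[2] == (byte) 0xBF;
        if (!isBom && n > 0) {
            pushback.unread(head, 0, n); // not a BOM: put the bytes back
        }
        return pushback;
    }
}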
I just learned from this Stack Overflow post, as @martin-geisler points out, that you can save files without the BOM in Windows Notepad by selecting ANSI as the encoding.
I assume this won't work for more advanced uses, because the resulting file is probably not in the encoding you actually wanted but in ANSI; however, I tested and confirmed that it works to save a very small .php script without a BOM using only Notepad.
I learned the long, hard way that Windows Notepad is not a true editor, although I'd point out for others that, despite this, it is misleadingly invoked when you type "editor" on newer Windows machines, at least on one of mine.
I am currently using Emacs and other editors to solve this problem.
Use Notepad++ instead. See my personal blog post on it. From within Notepad++, choose the "Encoding" menu, then "Encode in UTF-8 without BOM".
Notepad on Windows 10 version 1903 (May 2019 update) and later versions supports saving to UTF-8 without a BOM. In fact, UTF-8 is the default file format now.
Reference: Windows 10 Notepad is Getting Better UTF-8 Encoding Support
The answer is: Not at all. Notepad can't do that.
In Java, once the stream is decoded as UTF-8, the BOM arrives as a single character (U+FEFF), so you can just skip that first character and be done (see the sketch below).
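A sketch of that character-level skip (the helper name is mine); it relies on BufferedReader's mark/reset:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;

public final class SkipBomChar {
    // When a UTF-8 stream is decoded, a BOM shows up as the single
    // character U+FEFF; skip it if it is there, rewind if it is not.
    public static BufferedReader withoutBom(Reader reader) throws IOException {
        BufferedReader br = new BufferedReader(reader);
        br.mark(1);
        if (br.read() != '\uFEFF') {
            br.reset(); // first character was not a BOM
        }
        return br;
    }
}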
You might want to try out Notepad2 or Notepad++. Those Notepad replacements let you choose whether to output a BOM.
As for a Java solution: as far as I know, Java's UTF-8 decoder does not skip a leading BOM automatically. I googled and found Java's UTF-8 and Unicode writing is broken - Use this fix, which might be the solution.
We're using the utility BOMStripperInputStream.java to strip the BOM from our input if present.

String unknown Eclipse

I've changed the encoding in Eclipse, but all my strings with special characters now look like "�". The old encoding was the default (Cp1252); now it is UTF-8. How can I fix the strings with special characters?
Thanks.
Well, imagine you switched your brain to only understand Chinese. Could you still read an English text?
You changed the way Eclipse interprets the bits of your source code, so you need to translate the source code from Cp1252 to UTF-8.
I don't know if Eclipse can do this, but Notepad++ can:
Open the Java file whose encoding you want to change in Notepad++.
Click on Encoding.
Select Convert to UTF-8.
Save the file.
When you now click on Encoding again, there should be a dot in front of Encode in UTF-8.
Edit: Notepad++ recognizes the encoding in use, so you can read it off there. Copying and pasting from Notepad++ into Eclipse won't work, because you'd be copying the very string Eclipse couldn't read; you have to convert the encoding of the file itself.
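If you'd rather convert the file programmatically than through Notepad++, here is a minimal Java sketch (the file name is a placeholder, and it assumes the file really is Cp1252):

import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public final class Recode {
    public static void main(String[] args) throws IOException {
        Path source = Paths.get("Main.java"); // placeholder path
        // Decode the bytes as Cp1252, then write them back out as UTF-8.
        String text = new String(Files.readAllBytes(source),
                Charset.forName("windows-1252"));
        Files.write(source, text.getBytes(StandardCharsets.UTF_8));
    }
}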

How can I determine file encodings on Windows / IIS?

From the answers to this question it appears there's a file somewhere on our server that's been saved with the wrong encoding.
I've seen this happen before - most often when pasting from Word into Visual Studio, where "smart quotes" can interfere with Visual Studio's encoding settings when saving the file.
Thing is - the problem I'm having involves 20-30 different script files, include files and so on (hey, that was how we kept it modular back in the day...) and I really don't want to open every one of them in Visual Studio and check the file encodings individually.
Is there any way I can analyze a folder tree full of files and spit out a list of each filename along with the text encoding used to save the file? (Or - if encodings aren't clearly specified - work out what encoding Microsoft IIS thinks was used to save the file?)
A text file's encoding is just a statement of how the file is intended to be interpreted, so you cannot detect it in a fully reliable way. You can probably detect UTF-8 and 16-bit Unicode (for example via a BOM), but there's no way to distinguish between ISO-8859-1/2/3/4 etc. (or Windows-1250/1251/1252 etc.).
If your document contains "weird" quotes, other than "" or '', you can simply find these, and replace them manually.
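Since a BOM is the one marker you can detect reliably, a small scanner that reports it per file might be enough. This is a sketch of mine, not an IIS feature; the class name and output format are placeholders:

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public final class BomReport {
    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        try (Stream<Path> paths = Files.walk(root)) {
            paths.filter(Files::isRegularFile).forEach(BomReport::report);
        }
    }

    // Sniff the first bytes of a file and name the BOM, if any.
    private static void report(Path file) {
        try (InputStream in = Files.newInputStream(file)) {
            byte[] b = new byte[3];
            int n = in.read(b, 0, 3);
            String guess;
            if (n == 3 && b[0] == (byte) 0xEF && b[1] == (byte) 0xBB && b[2] == (byte) 0xBF) {
                guess = "UTF-8 with BOM";
            } else if (n >= 2 && b[0] == (byte) 0xFF && b[1] == (byte) 0xFE) {
                guess = "UTF-16 LE";
            } else if (n >= 2 && b[0] == (byte) 0xFE && b[1] == (byte) 0xFF) {
                guess = "UTF-16 BE";
            } else {
                guess = "no BOM (ANSI, or BOM-less UTF-8)";
            }
            System.out.println(file + ": " + guess);
        } catch (IOException e) {
            System.out.println(file + ": unreadable (" + e.getMessage() + ")");
        }
    }
}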
