On my system I am using the lrelease from Qt 4.7. The .qm file I have generated works fine in the production environment.
Unfortunately, our server uses the lrelease from Qt 3.3, and the .qm file generated from the same .ts file on the server produces junk characters in the production environment.
I was given the suggestion of converting the special characters of the other languages (French and Italian) to hex codes, and that works fine. But since I have a large .ts file, it is hard to change each special character to a hex code by hand.
Could you please suggest a fix (besides changing the lrelease version on the server), or any tool which converts special characters to hex codes in .ts files?
This works as designed. If your application runs on Qt4, use the Qt4 tools to generate the .qm files. Using Qt3's tools will not work.
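If changing the tools on the server really is off the table, a one-off converter can do the bulk replacement for you. Below is a minimal sketch (my own, not an official Qt tool) that rewrites every non-ASCII character of a UTF-8 .ts file as an XML numeric character reference (&#xNNNN;), assuming that is what the "hex code" trick refers to; since .ts files are XML, the parser resolves the references back to the original characters, and the file itself becomes pure ASCII:

// Sketch only: convert non-ASCII characters of a UTF-8 .ts file to XML
// numeric character references (&#xNNNN;). Assumes the input is valid
// UTF-8; no bounds or error checking.
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>

int main(int argc, char* argv[]) {
    if (argc != 3) {
        std::cerr << "usage: ts2ascii in.ts out.ts\n";
        return 1;
    }
    std::ifstream in(argv[1], std::ios::binary);
    std::ofstream out(argv[2], std::ios::binary);
    std::string data((std::istreambuf_iterator<char>(in)),
                     std::istreambuf_iterator<char>());

    for (std::size_t i = 0; i < data.size(); ) {
        unsigned char c = data[i];
        if (c < 0x80) {                        // ASCII: copy through
            out.put(data[i++]);
            continue;
        }
        int len = ((c & 0xE0) == 0xC0) ? 2     // decode one UTF-8 sequence
                : ((c & 0xF0) == 0xE0) ? 3
                : 4;
        unsigned int cp = c & (0xFF >> (len + 1));
        for (int k = 1; k < len; ++k)
            cp = (cp << 6) | (static_cast<unsigned char>(data[i + k]) & 0x3F);
        out << "&#x" << std::hex << std::uppercase << cp << std::dec << ";";
        i += len;
    }
}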
All our source code is valid UTF-8; however, some users on Windows cannot build it because their system is configured for a different encoding.
Without adding a BOM to the source files, is it possible to tell MSVC to treat all source as UTF-8, irrespective of the user's system encoding?
See MSDN's documentation regarding this topic (the approach described there requires adding a BOM header).
You can try:
add_compile_options("$<$<C_COMPILER_ID:MSVC>:/utf-8>")
add_compile_options("$<$<CXX_COMPILER_ID:MSVC>:/utf-8>")
By default, Visual Studio detects a byte-order mark to determine if the source file is in an encoded Unicode format, for example, UTF-16 or UTF-8. If no byte-order mark is found, it assumes the source file is encoded using the current user code page, unless you have specified a code page by using /utf-8 or the /source-charset option.
References
Docs - Visual C++ - Documentation - IDE and Tools - Building - Build Reference: /utf-8 (Set Source and Executable character sets to UTF-8)
If you happen to be writing cross-platform code, solving the problem with a command-line switch, i.e.
add_compile_options("$<$<C_COMPILER_ID:MSVC>:/utf-8>")
add_compile_options("$<$<CXX_COMPILER_ID:MSVC>:/utf-8>")
or adding something like /utf-8 or /source-charset to the CFLAGS, might mean you'll have to do a similar thing for the other platforms as well.
If possible, it might therefore be better to avoid the problem instead of solving it, by using \uxxxx escapes instead of literal Unicode characters in strings: this way the source specifies which Unicode characters to use, but doesn't actually contain them.
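A minimal C++ illustration of that approach (the names and strings are mine, just for demonstration):

#include <iostream>

int main() {
    // The source file stays pure ASCII: the \u escapes name the code
    // points, so it does not matter what encoding the compiler assumes
    // for the source. With /utf-8 on MSVC (GCC and Clang default to
    // UTF-8) the execution character set turns them into UTF-8 bytes.
    const char* s = "Gr\u00FC\u00DFe, caf\u00E9";   // "Grüße, café"
    std::cout << s << '\n';
}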
I want to view some ANSI art on the Linux local console (my setup: Raspberry Pi 3, latest Raspbian, no X11).
I've tried many different settings in raspi-config, dpkg-reconfigure console-setup, /etc files, and environment variables, but I've had no luck yet. Do I need a special PCF font to get it working?
A reliable way to enable it for remote terminals would also be great.
Thanks in advance.
It depends on what your data uses (see chart). Codes 0..31 are a problem unless you have a program that can map those codes to a printable value (as noted in Why does showconsolefont have different output in tmux?, the showconsolefont program does this mapping of 0..31).
Most of the usable fonts for the Linux console are "psf" fonts, which carry a header telling which Unicode values each glyph corresponds to. Using that, along with a known character set (cp437), you could convert the data or "play" it using an application that knows how to do this:
You could convert it using iconv or recode (a sketch follows this list), or
The line-drawing (128..255) could be done using luit in a UTF-8 console.
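As a concrete illustration of the first option, here is a sketch (my own, using the POSIX iconv API and assuming the iconv implementation knows the "CP437" charset name, as glibc on Raspbian does) that re-encodes an art file to UTF-8 so it can simply be cat'ed on a UTF-8 console; the ANSI escape sequences are plain ASCII and pass through untouched, though the codes 0..31 caveat above still applies:

// Sketch: re-encode a CP437 ANSI-art file to UTF-8 on stdout.
// CP437 is a single-byte encoding, so buffer boundaries are safe.
#include <iconv.h>
#include <cstdio>

int main(int argc, char* argv[]) {
    if (argc != 2) {
        std::fprintf(stderr, "usage: %s art.ans > art.utf8\n", argv[0]);
        return 1;
    }
    iconv_t cd = iconv_open("UTF-8", "CP437");
    if (cd == (iconv_t)-1) { std::perror("iconv_open"); return 1; }
    std::FILE* f = std::fopen(argv[1], "rb");
    if (!f) { std::perror("fopen"); return 1; }

    char inbuf[4096];
    char outbuf[4 * sizeof inbuf];  // UTF-8 needs at most 3 bytes per CP437 byte
    size_t n;
    while ((n = std::fread(inbuf, 1, sizeof inbuf, f)) > 0) {
        char* in = inbuf;   size_t inleft = n;
        char* out = outbuf; size_t outleft = sizeof outbuf;
        if (iconv(cd, &in, &inleft, &out, &outleft) == (size_t)-1) {
            std::perror("iconv");
            break;
        }
        std::fwrite(outbuf, 1, sizeof outbuf - outleft, stdout);
    }
    std::fclose(f);
    iconv_close(cd);
}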
I am working on Windows 7 and have a node.js project which is under git. I configured TortoiseGit with autocrlf: false and safecrlf: false, then changed all project files' line endings to LF. The project starts and operates normally and I see no reason to go back to CRLF.
Should I expect any side effects after doing this?
No, there is no problem at all using *nix end-of-line sequences on Windows (LF instead of CRLF). In fact, my personal recommendation would be to configure your Windows editor (if you're developing node.js on Windows) to use LF.
Just as an example, I use Visual Studio Code as my editor for developing node.js, and I have specified the following in my user settings to use LF instead of CRLF: "files.eol": "\n". Now I no longer need to worry about it.
CRLF line endings cause breaking issues in a node.js application that is run on Linux, and they aren't the most straightforward thing to troubleshoot if you don't know what to look for. For example, a script whose shebang line ends in CRLF makes Linux look for an interpreter literally named "node\r", which typically fails with an error like "/usr/bin/env: 'node\r': No such file or directory".
TL;DR: Use LF while developing node.js applications on Windows if you really care about cross-platform (which you should).
Note: just because git changes your line endings doesn't mean that is the solution. Even if you are OK with your version control changing your source code (which I don't recommend), if you do an npm publish it'll use your local source files, so you could sneak CRLFs into the npm registry.
If all tools/editors/IDEs are LF compatible, then there is no problem.
Otherwise you might get errors or mixed line endings on saving.
To make sure no conversion happens for other users who clone your repository, you can put a .gitattributes file in the root folder containing * -crlf, which disables all CRLF conversions (in current Git versions the equivalent is * -text).
As you can see, D fails to output German umlauts, at least on Windows. On Linux or BSD the same program outputs the string just as I saved it.
I have already tried wstring and dstring, but the output is the same.
What am I doing wrong?
D will output UTF-8 regardless of the operating system. How the output will be interpreted depends on how it is displayed. In this particular case, it looks like your IDE is interpreting the output as if it was encoded in the Windows-1252 encoding.
For the standard Windows console, you could change the output encoding by calling SetConsoleOutputCP(65001), but note that this may have some undesired side effects (you should restore the codepage before your program exits, and batch files may not run while the console output codepage is set to 65001).
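For reference, the call sequence could look like this (sketched here in C++ against the raw Win32 API; in D the same functions should be reachable via the core.sys.windows bindings):

// Sketch (Win32): switch the console to UTF-8 for the lifetime of the
// program and restore the previous codepage on exit, as recommended
// above. Compile with /utf-8 so the literal is stored as UTF-8 bytes.
#include <windows.h>
#include <cstdio>

int main() {
    UINT previous = GetConsoleOutputCP();  // remember the current codepage
    SetConsoleOutputCP(65001);             // 65001 = CP_UTF8
    std::puts("Gr\u00FC\u00DFe");          // prints "Grüße"
    SetConsoleOutputCP(previous);          // restore before exiting
}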
CyberShadow's post guided me to an acceptable answer. :-)
In Eclipse it is possible to change the output encoding without touching the global settings of the OS.
Go to Run --> Run Configurations...
There, select the Common tab and change the encoding to UTF-8. Now German umlauts are displayed correctly. At least in Eclipse. :-)
Another possibility is to use https://babun.github.io/ , a Cygwin-based shell that outputs UTF-8.
Please help me solve a problem that appeared recently.
When the release project is built on the build machine (using MSBuild), all string literals in the code are escaped as \x00NN, where NN are two hex digits. The problem is that when such values are displayed in a form (WinForms), they appear with broken encoding (like a wrong codepage on the web).
In the source code it looks like
str = " Без ПДВ"
but Reflector shows
str = " \x00c1\x00e5\x00e7 \x00cf\x00c4\x00c2";
And this appears in the form as a string with broken encoding, like
â ò.÷. ÏÄÂ
What causes MSBuild to convert non-ASCII string literals to escaped symbols? There is no such problem with dev builds on the developers' machines.
The regional settings for the user that runs MSBuild were checked and changed from German to Ukrainian; the same was done for the language for non-Unicode programs. It did not help, even after a reboot.
MSBuild had worked without this problem on the same machine for a year, but the latest build breaks string literals in the code.
The command line looks like
MSBuild {LocalPath}{Solution} /property:DefineConstants="{Defines}{DefinesExtra}" /t:{Target} /property:Configuration={Configuration} {Platform} /clp:NoItemAndPropertyList
Target is Build (or Rebuild, it does not matter), Configuration is Release, Platform is x86.
PS: I know it is bad to store localized strings in code (but shit happens).
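For what it's worth, the escapes themselves suggest an encoding mix-up rather than anything MSBuild does to literals: 0xC1 0xE5 0xE7 / 0xCF 0xC4 0xC2 are exactly the CP1251 bytes of "Без ПДВ", and read as CP1252 those bytes render as "Áåç ÏÄÂ", which matches the broken output. That is the classic signature of a BOM-less source file being decoded with the build machine's ANSI codepage. A small sketch (mine, using the byte values from the question) to verify the mapping:

// Sketch (Win32): decode the literal bytes from the question once as
// CP1251 and once as CP1252 and print both as UTF-8, to show where
// "Áåç ÏÄÂ" comes from. Compile with /utf-8; run in a UTF-8 console.
#include <windows.h>
#include <cstdio>

static void show(const char* label, UINT codepage, const char* bytes, int len) {
    wchar_t wide[64];
    int n = MultiByteToWideChar(codepage, 0, bytes, len, wide, 64);
    char utf8[128];
    int m = WideCharToMultiByte(CP_UTF8, 0, wide, n, utf8, 128, nullptr, nullptr);
    std::printf("%s: %.*s\n", label, m, utf8);
}

int main() {
    // The raw bytes behind str = " \x00c1\x00e5\x00e7 \x00cf\x00c4\x00c2"
    const char bytes[] = "\xC1\xE5\xE7 \xCF\xC4\xC2";
    SetConsoleOutputCP(CP_UTF8);
    show("CP1251 (intended) ", 1251, bytes, 7);   // Без ПДВ
    show("CP1252 (build box)", 1252, bytes, 7);   // Áåç ÏÄÂ
}

If that is indeed the cause, re-saving the affected source files as UTF-8 with a BOM should make the compile independent of the build machine's codepage.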