Need help with character encoding for a WebSphere server. I am trying to insert Arabic characters into the DB, but they are stored as ???????.
When I changed the server to Tomcat, the Arabic characters were inserted properly.
What changes need to be made to make this work with the WebSphere server?
I tried adding the following value in server.xml:
-Ddefault.client.encoding=UTF-8
The issue still continues.
You can try to enforce an encoding by using the following JVM argument:
-Dclient.encoding.override=UTF-8
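If the argument alone does not help, it is also worth checking that the JDBC connection itself requests UTF-8. A minimal sketch, assuming a MySQL backend purely as an illustration (the original post does not name the database; the host, schema, and credentials below are hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;

public class EncodingCheck {
    public static void main(String[] args) throws Exception {
        // The two URL parameters are the point; everything else is a placeholder.
        String url = "jdbc:mysql://dbhost:3306/mydb"
                   + "?useUnicode=true&characterEncoding=UTF-8";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("Connected with UTF-8 character encoding");
        }
    }
}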
I'm having an issue in a .NET application where pages served by local IIS display random characters (mostly black diamonds with white question marks in them). This happens in Chrome, Firefox, and Edge. IE displays the pages correctly for some reason.
The same pages in production and in lower pre-prod environments work in all my browsers. This is strictly a local issue.
Here's what I've tried:
Deleted code and re-cloned (also tried switching branches)
Disabled all browser extensions
Ran in incognito mode
Rebooted (you never know)
Deleted temporary ASP.NET files
Looked for corrupt fonts on the machine but didn't find any
Other Information:
Running IIS 10.0.17134.1
.NET MVC application with Knockout
I realize there are several other posts regarding black diamonds with question marks, but none of them seem to address my issue.
Please let me know if you need more information.
Thanks for your help!
You are in luck. The explicit purpose of � is to indicate that character encodings are being misused. When users see that, they'll know that we've messed up and lost some of their text data, and we'll know that, at one or more points, our processing and/or configuration is wrong.
(Fonts are not at issue [unless there is no font available to render �]. When there is no font available for a character, it's usually rendered as a white-filled rectangle.)
Character encoding fundamentals are simple: use a sufficient character set (say Unicode), pick an appropriate encoding (say UTF-8), encode text with it to obtain bytes, tell every program and person that gets the bytes that they represent text and which encoding is used. The encoding might be understood from a standard, convention, or specification.
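A minimal sketch of that round trip in Java (standard library only):

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class RoundTrip {
    public static void main(String[] args) {
        String text = "مرحبا"; // Arabic for "hello"
        byte[] bytes = text.getBytes(StandardCharsets.UTF_8);    // encode: text -> bytes
        System.out.println(Arrays.toString(bytes));
        String back = new String(bytes, StandardCharsets.UTF_8); // decode with the SAME encoding
        System.out.println(text.equals(back)); // true; a mismatched charset here loses data
    }
}

Encoding and decoding must agree; every mojibake story in this thread is some pair of steps disagreeing about the charset.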
Your editor does the actual encoding.
If the file is part of a project or similar system, a project file might store the intended encoding for all or each text file in the project. If your editor is an IDE, it should understand how the project does that.
Your compiler needs to know the encoding of each text file you give it. A project system would communicate what it knows.
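For example, javac can be told the source-file encoding on the command line (the file name here is hypothetical):

javac -encoding UTF-8 Main.java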
HTML provides an optional way to communicate the encoding. Example: <meta charset="utf-8">. An HTML-aware editor should not allow this indicator to differ from the encoding it uses when saving the file. An HTML-aware editor might discover this indicator when opening the file and use the specified encoding to read the file.
HTTP uses another optional way: Content-Type response header. The web server emits this either statically or in conjunction with code that it runs, such as ASP.NET.
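For example:

Content-Type: text/html; charset=utf-8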
Web browsers use the HTTP way if given.
XHR (AJAX, etc.) uses HTTP along with JavaScript processing. If needed, the JavaScript processing should apply the HTTP and HTML rules, as appropriate. Note: if the content is JSON, the current RFC (RFC 8259) requires the encoding to be UTF-8.
No one or thing should have to guess.
Diagnostics
Which character encoding did you intend to use? This century, UTF-8 is so much the norm that if you choose to use a different one, you should have a good reason and document it (for others and your future self).
Compare the bytes in the file with the text you expect it to represent. Does it use the intended encoding? Use an editor or tool that shows bytes in hex (see the sketch after this list).
As suggested by #snakecharmerb, what does the server send? Use a web browser's F12 network tab.
What does the HTTP response header say, if anything?
What does the HTML meta tag say, if anything?
What is the HTML doctype, if any?
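A minimal hex-dump sketch for the byte comparison above, in Java (standard library only; the file path is taken from the command line):

import java.nio.file.Files;
import java.nio.file.Paths;

public class HexDump {
    public static void main(String[] args) throws Exception {
        // Print every byte of the file in hex; e.g. "é" is C3 A9 in UTF-8 but E9 in Windows-1252.
        byte[] bytes = Files.readAllBytes(Paths.get(args[0]));
        for (byte b : bytes) {
            System.out.printf("%02X ", b);
        }
        System.out.println();
    }
}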
This is such a basic question I am surprised I could not easily find an answer to it:
I use Notepad++ to write my scripts in. Someone sent me some code for a shell script (.sh) that I could modify to suit my needs. I simply changed a small bit of text using Notepad++ (on Windows) and used FileZilla (SFTP) to upload it to my server (Debian Linux).
There were a few problems with this that took my server admin an hour to find, namely:
FileZilla, for whatever reason, defaults to ASCII rather than binary! (changed it to binary and removed the .sh association with ASCII)
The permissions were wrong, chmod took care of this
The problem is it STILL did not work. To fix it, my server admin simply copied the text into a new shell script file directly on the server (using vim or nano) and saved that. Before that he kept saying the problem was Windows (which he loves to hate on), but it seems it is the encoding that text editors are using that is corrupting the files.
He said my text editor's encoding needs to be set to "None". However, that is not an option; only ANSI, UTF and UCS variants are options!
How can I create a shell script on Windows with no encoding whatsoever so that it doesn't get corrupted?
I need to be able to simply transfer the file to the server; messing around with modifying it once it is on the server is wholly impractical.
To fix the end-of-line characters and the encoding in Notepad++:
At the bottom right of Notepad++, right-click just to the left of the encoding indicator ("UTF-8") and click Convert to UNIX (LF) format. Be sure to change the encoding to UTF-8 if that is not already the case. (Windows editors end lines with CRLF, while a Linux shell expects LF alone; the stray carriage returns are what make the script fail.)
In FileZilla:
Transfer mode: Auto
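For reference, a minimal sketch of the same CRLF-to-LF conversion done programmatically, in Java (the script name is hypothetical, and the file is assumed to be UTF-8):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CrlfToLf {
    public static void main(String[] args) throws Exception {
        Path script = Paths.get("myscript.sh"); // hypothetical path
        String text = new String(Files.readAllBytes(script), StandardCharsets.UTF_8);
        // Replace Windows line endings (CRLF) with Unix line endings (LF).
        Files.write(script, text.replace("\r\n", "\n").getBytes(StandardCharsets.UTF_8));
    }
}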
We have installed a new Linux server for our Pentaho installation, but I am having a problem with diacritics in the generated PDF files.
For the web (HTML), I have set the encoding to UTF-8, which works perfectly.
BUT for PDF, UTF-8 encoding is not working. I had "fixed" it on the old server by setting up CP-1250 encoding, but I don't want to use that old standard anymore, so I have been trying to fix it properly.
I have set the option in pentaho-server/tomcat/webapps/pentaho/WEB-INF/classes/classic-engine.properties to
org.pentaho.reporting.engine.classic.core.modules.output.pageable.pdf.Encoding=UTF-8
but the PDF versions of the report are still ignoring letters with diacritics.
So my thought is that there must be some PDF encoding setting above this, perhaps some global PDF generator setting, or perhaps Java or Linux itself?
Is anyone able to give me a hint about where I should look and what to check?
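For what it is worth, the classic reporting engine has a font-embedding switch in the same properties file; whether it resolves this particular case is an assumption, since PDF output can only show diacritics when the selected font is embedded and actually contains those glyphs. A sketch of the two settings together:

org.pentaho.reporting.engine.classic.core.modules.output.pageable.pdf.Encoding=UTF-8
org.pentaho.reporting.engine.classic.core.modules.output.pageable.pdf.EmbedFonts=true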
A simple query about expected behaviour when compiling Windows-1252 characters under UTF-8. When building Java source code with an Ant task, some weird character encoding seems to occur.
For certain fields, characters that are normally encoded as \u2013 on the Windows machine, for example, turn into \226 on Linux. What is the explanation for the \226? Will it still be rendered correctly in a browser, for example?
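For context: \226 is the octal escape for byte 0x96, which is the Windows-1252 encoding of the en dash (U+2013), so the file's bytes are Windows-1252 while the build is reading them under another charset. The Ant <javac> task accepts an explicit source encoding (directory names hypothetical):

<javac srcdir="src" destdir="build" encoding="windows-1252"/>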
When I create a document with MS Word and upload it to an HTML server, it is correctly displayed when it is a Windows server, but not when it is a Linux server.
I tried this with both IE and Firefox.
The meta tag in the source says charset=windows-1252
Displaying the source code in the browser shows exactly the same source as I uploaded, so the server is not changing it. Nevertheless, characters like an accented e are displayed as silly characters when obtained from the Linux server.
So something, somewhere in the TCP/HTTP/??? traffic that the server sends to the browser, makes the browser interpret the characters differently from what is meant.
What could that be?
When you create a document in MS Word, there are a lot of characters you can't see that are actually in the file, such as end-of-line markers, page breaks, etc., which you will not notice until after you upload the file to the server.
You should always use a plain text editor such as Notepad++, or even Bluefish, to create these files. Sometimes you can get MS Word to do the trick if you make sure to save the file as a web document (.htm or .html), but the special characters will usually begin to cause problems depending on your goal.