I'm using MySQL 5.1 and SubSonic 3.0.0.3. The database and all tables are in cp1251. I have problems saving Russian characters: after saving, the text looks like "?????". How can I set up SubSonic to save characters in cp1251?
P.S. Reading works fine.
I've solved the problem. I set the server parameter init-connect:
init-connect=SET NAMES cp1251
But this is not a good solution to the problem!
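A client-side alternative that avoids the server-wide setting, assuming SubSonic is sitting on the MySQL Connector/NET provider (which accepts a CharSet option in the connection string), would be to declare the charset on the connection itself; the connection name and credentials below are placeholders:

<connectionStrings>
  <!-- CharSet=cp1251 tells Connector/NET which character set to use for this connection -->
  <add name="MyDb"
       connectionString="Server=localhost;Database=mydb;Uid=user;Pwd=pass;CharSet=cp1251;"
       providerName="MySql.Data.MySqlClient" />
</connectionStrings>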
I am using Spyder v.3.2.8 and I'm trying to modify the Real-time code style analysis. For example, I'd like to set the max-line-length to 99.
I followed exactly what was suggested here, i.e. I created a file .pycodestyle in the directory returned by import os; os.path.expanduser('~'). The file looks as follows:
[pycodestyle]
ignore = E226,E302,E41,E501,W503
max-line-length = 99
I am aware that ignoring E501 renders max-line-length virtually ineffective. However, I still get warnings if the code exceeds the default 79 columns. Am I missing something?
EDIT:
The issue is solved now. All of a sudden, the settings are recognized. A reboot might have helped (OS: Windows 10).
I had the same problem and resolved it by creating the file ~/.config/pycodestyle (no leading dot, and inside ~/.config) instead of ~/.pycodestyle, as described on the official website:
https://pep8.readthedocs.io/en/latest/intro.html#configuration
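For reference, that file simply takes the same settings shown in the question, only at the other location (the values here are the ones from the question):

# ~/.config/pycodestyle
[pycodestyle]
ignore = E226,E302,E41,E501,W503
max-line-length = 99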
I'm trying to export data from my Oracle view to an Excel sheet using Oracle Data Integrator.
English text is exported correctly, but Russian (Cyrillic) text comes out wrong!
Please help me: how can I configure the data sources and the encoding?
After the export, the Excel data is in cp1252 encoding, but there is no place where such an encoding is configured!
Information:
The Oracle data server uses jdbc:oracle:thin and
NLS_LANG=AMERICAN_AMERICA.CL8MSWIN1251
(in the Windows registry and in environment variables, same as the database).
ODI is started with these options in product.conf:
AddVMOption -Dfile.encoding=Cp1251
AddVMOption -Dsun.jnu.encoding=Cp1251
AddVMOption -Duser.language=ru
AddVMOption -Duser.country=RU
(and I see these values in Help > About > Properties).
The Excel data server uses jdbc:odbc with the property
charSet=cp1251
Oracle 12c, ODI 12c.
If I execute simple Java code with the
-Dfile.encoding=Cp1251
option, Russian text displays correctly, but not through ODI...
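The simple test I mean is roughly the following (the class name is just illustrative); with -Dfile.encoding=Cp1251 it reports windows-1251 as the default charset and prints the Cyrillic line correctly:

import java.nio.charset.Charset;

public class EncodingCheck {
    public static void main(String[] args) {
        // Shows which charset the JVM actually picked up from -Dfile.encoding
        System.out.println("file.encoding  = " + System.getProperty("file.encoding"));
        System.out.println("defaultCharset = " + Charset.defaultCharset());
        System.out.println("Привет, мир!"); // Cyrillic test output
    }
}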
I would be glad of any advice!
I want to answer my own question.
Maybe it is obvious to everyone else, but it took me a while to find the solution...
I needed to change the Windows setting "Current language for non-Unicode programs" from English to Russian.
After that I don't even need the "AddVMOption -Dfile.encoding=Cp1251" options in ODI - they are set correctly once that setting is changed.
I hope this information is helpful to someone.
I'm a newbie at node.js!
I'm having trouble with Korean encoding.
My Oracle database has Korean string data. Logging a Korean literal with console.log works, but when I run a SELECT query through the execute function and console.log the returned rows, it doesn't work: the text is broken.
I set every encoding option in Eclipse to UTF-8. I think I have to set something that controls the encoding in the oracledb module, but I can't find how to set it.
Please help me. I have to resolve this problem at any cost!
I already set NLS_LANG.
I didn't set it myself. I just checked it and took the picture; it was already set when I looked in the registry.
Windows 7 64bit
node 0.10.38
Eclipse / NodeClipse
It's resolved.
I set my NLS_LANG variable to 'original value'.UTF8 and it works.
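In other words, keep the language_territory part you already have and change only the character-set suffix to UTF8; for example (the KOREAN_KOREA part is hypothetical, use your own value):

NLS_LANG=KOREAN_KOREA.UTF8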
The following is from the node-oracledb master branch changelog. Thanks.
From version 1.0, node-oracledb always sets the "client" character encoding to Oracle's AL32UTF8 value. See https://github.com/oracle/node-oracledb/blob/master/CHANGELOG.md
OK, so I have a SmartGWT web application where I have some reports running in list grids.
I export to Excel using the SmartGWT built-in export, as per the examples, using listGrid.exportClientData();
Here's the issue:
- When running locally, everything works fine and I can open the XLSX file and see all the data.
- When running from the server, the columns containing data are hidden! If I manually "unhide" them from within Excel, I can see the columns...
My local environment is Mac OS X Mountain Lion (I also tried earlier versions), running Sun's Java.
The prod server is Debian running OpenJDK.
Not sure if that has something to do with it, and I am at a loss as to how to go about solving it... I'm running my webapp on Tomcat 6 in both environments.
For others, searching the stackoverflow void for answers to riddles in the dark:
This was caused by certain fonts not being available in OpenJDK, without any errors being logged anywhere...
I switched to Sun's JDK on my Debian as well, and the Excel files now open and display properly.
This can also happen when hitting this POI bug, which seems to be related to a Java bug.
If that's the case, changing the font to something other than Calibri, or using a JRE above 7u21, should fix the issue.
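If you want to force a different font from code, a minimal POI sketch looks like this (the class name, sheet name and cell value are just placeholders):

import java.io.FileOutputStream;
import org.apache.poi.ss.usermodel.*;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class FontWorkaround {
    public static void main(String[] args) throws Exception {
        try (Workbook wb = new XSSFWorkbook();
             FileOutputStream out = new FileOutputStream("report.xlsx")) {
            // Use a font other than Calibri so column widths can be measured on OpenJDK
            Font font = wb.createFont();
            font.setFontName("Arial");
            CellStyle style = wb.createCellStyle();
            style.setFont(font);

            Sheet sheet = wb.createSheet("report");
            Cell cell = sheet.createRow(0).createCell(0);
            cell.setCellValue("example value");
            cell.setCellStyle(style);  // apply the non-Calibri font to the exported cells
            sheet.autoSizeColumn(0);   // auto-sizing now has a measurable font to work with

            wb.write(out);
        }
    }
}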
You can also mitigate the issue and keep the columns from being totally collapsed by using code like this:
sheet.autoSizeColumn(x);
if (sheet.getColumnWidth(x) == 0) {
    // auto-sizing failed (e.g. the font could not be measured); fall back to a minimum width,
    // where MIN_WIDTH is a constant you define in POI's units of 1/256 of a character width
    sheet.setColumnWidth(x, MIN_WIDTH);
}
I am curling a website and writing the response to a .json file; this file is the input to my Java code, which parses it with a JSON library, and the necessary data is written back to a CSV file that I later load into a database.
As you know, data coming from a website can be in different encodings, so I make sure that I read and write in UTF-8; still, I get wrong output.
For example, Østerriksk becomes �sterriksk.
I am doing all this on Linux. I think there is some encoding problem, because the same code runs fine on Windows but not on Unix/Linux.
I am quite sure my Java code is correct, but I am not able to find out what I'm doing wrong.
You're reading the data as ISO 8859-1 but the file is actually UTF-8. I think there's an argument (or setting) to the file reader that should solve that.
Also: curl isn't going to care about the encodings. It's really something in your Java code that's wrong.
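A minimal sketch of that fix (the file name is a placeholder): pass the charset explicitly to an InputStreamReader instead of using FileReader, which falls back to the platform default encoding:

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ReadJsonUtf8 {
    public static void main(String[] args) throws Exception {
        // Declare UTF-8 explicitly instead of relying on the platform default,
        // which is what turns Østerriksk into �sterriksk
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new FileInputStream("data.json"), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}

Do the same on the output side (an OutputStreamWriter constructed with StandardCharsets.UTF_8) when writing the CSV.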
What IDE are you using? For example, this can happen if you are using the Eclipse IDE and have not set your default encoding to UTF-8 in the properties.