How to set the charset to utf8 when creating a schema in Scala Slick 3.2? - slick

I'm using db.run(DBIOAction.seq(chartRef.schema.create)) to create a table in MySQL, but its charset shows up as latin1. How can I make the table's charset utf8 when creating the schema?
I have already appended useUnicode=true&characterEncoding=UTF-8 to the connection URL.

There are two ways:
Either set it manually, by changing the database's default character set and collation to UTF-8 on the MySQL side, or configure it through the connection URL.
Try this
db.default.url="jdbc:mysql://localhost/db_name?characterEncoding=UTF-8"
Configure the URL as described in the linked documentation.
Please let me know if it works.
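If neither of those takes effect, you can also issue the charset DDL yourself from Slick, right after schema creation. A minimal sketch, assuming chartRef and db are as in the question, and that the underlying table is named chart_ref (a placeholder; use your actual table name):

import slick.jdbc.MySQLProfile.api._

// schema.create emits no charset clause, so MySQL falls back to the database
// default (latin1 here). Converting the table immediately after creating it
// forces UTF-8. "chart_ref" is an assumed table name.
val createUtf8 = DBIO.seq(
  chartRef.schema.create,
  sqlu"ALTER TABLE chart_ref CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci"
)

db.run(createUtf8)

Note that utf8mb4 is generally preferable to utf8 on MySQL, since MySQL's utf8 is limited to three bytes per character; setting the default once with ALTER DATABASE db_name CHARACTER SET utf8mb4 also makes every subsequently created table inherit it.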

Related

How do I handle an encoding error in Excel Power Query?

I receive the following error when connecting to a Postgres database.
DataSource.Error: ODBC: ERROR [22P05] ERROR: character with byte sequence 0xc2 0x96 in encoding
"UTF8" has no equivalent in encoding "WIN1252"; Error while executing the query
Details:
DataSourceKind=Odbc
DataSourcePath=database=XXXXXXXX;dsn=PostgreSQL30;encoding='utf-8';port=XXXX;server=XXXXXXXXX
OdbcErrors=Table
It only happens with this table, so the connection works in general. I would prefer to deal with this at the Excel level rather than make changes to the database. I tried including encoding='utf-8' in the connection string, but the problem isn't that Excel fails to recognize the encoding scheme; it's that it has no way to handle the byte sequence 0xc2 0x96 when converting to WIN1252.
Is there a way to change Excel's default encoding, or a way to specify it in the connection string or query settings, to handle this?
Solved by installing the Unicode Postgres ODBC driver and pointing the DSN to it.

SQL Error (1366): Incorrect string value: '\xE3\x82\xA8\xE3\x83\xBC...'

Hi, I am trying to upload data to a table in HeidiSQL, but it returned "SQL Error (1366): Incorrect string value: '\xE3\x82\xA8\xE3\x83\xBC...'".
The issue is triggered by this string, "エーペックスレジェンズ", and the source data file contains a number of special characters. I want to know if there's a way to get around this, so that all forms of characters can be uploaded.
My default setting is utf8, and I have also tried utf8mb4, but neither of them works.
That happens when you select the wrong file encoding in HeidiSQL's open-file dialog:
Never select "Auto-detect". I wrote that auto-detection, and I can tell you it often detects the wrong encoding. Select the correct encoding explicitly instead, which nowadays is mostly utf-8.

Vertex error and encoding error in Azure Data Lake Analytics while loading data. What could the reasons be?

USE DATABASE retail;

@log =
    EXTRACT id int,
            item string
    FROM "/Retailstock/stock.txt"
    USING Extractors.Tsv();

INSERT INTO sales.stock
SELECT id, item FROM @log;
This is a question from an Azure Data Lake Analytics course. I need to load the table sales.stock in the sales schema.
It gives a vertex error and an encoding error.
I haven't been able to understand the problem after banging my head on it for two days. Thanks.
It might be due to an encoding mismatch. Extractors have a default encoding of UTF-8, and if the encoding of your source file differs, a runtime error will occur during extraction.
You can change the encoding by providing the "encoding" parameter, e.g.:
USING Extractors.Text(encoding : Encoding.[ASCII]);
See more about supported encodings here: Extractor parameters - encoding
In my case, setting the encoding to ASCII in Extractors.Text() / Extractors.Tsv() did not work. I'm not sure why, since the file is clearly ASCII-encoded. I had to convert the file to UTF-8 manually.

Node.js - Oracle DB and fetchAsString format

I am stuck on a problem and I am not sure what the best way to solve it is. I have a date column that I want to select, and I want to fetch it as a string. Which is great: the node-oracledb module supports this with the fetchAsString option. But it fetches the date like 10-JAN-16, for example, and I want to fetch it like 10-01-2016. Is there a way to do that from the node-oracledb module, or should I modify the date after I get the result from the query?
UPDATE: I mean a solution without to_char in the query and without query modifications.
Check out this section of my series on Working with Dates in JavaScript, JSON, and Oracle Database:
https://dzone.com/articles/working-with-dates-using-the-nodejs-driver
The logon trigger shows an example of using ALTER SESSION to set the default date format. Keep in mind that there are NLS_DATE_FORMAT, NLS_TIMESTAMP_FORMAT, and NLS_TIMESTAMP_TZ_FORMAT.
I only show NLS_TIMESTAMP_TZ_FORMAT because I convert to that type in the examples that follow, as I need to do some time zone conversion for the date format I'm using.
Another way to set the NLS parameters is to use environment variables of the same names. Note that this method will not work unless you set the NLS_LANG environment variable as well.
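The same session-level mechanism can be observed from any client. Here is a minimal sketch in plain JDBC (written in Scala here rather than node-oracledb; the connection URL and credentials are placeholders) showing that NLS_DATE_FORMAT drives the server's implicit date-to-string conversion:

import java.sql.DriverManager

object NlsDateFormatDemo extends App {
  val conn = DriverManager.getConnection(
    "jdbc:oracle:thin:@//localhost:1521/XEPDB1", "user", "password")
  try {
    val stmt = conn.createStatement()
    // Change the default date format for this session only.
    stmt.execute("ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MM-YYYY'")
    // Concatenating with '' forces a server-side implicit TO_CHAR, which now
    // uses the session format, with no explicit TO_CHAR in the query text.
    val rs = stmt.executeQuery("SELECT SYSDATE || '' FROM dual")
    while (rs.next()) println(rs.getString(1)) // e.g. 10-01-2016
  } finally conn.close()
}

A logon trigger, as in the linked article, applies the same ALTER SESSION automatically, so individual queries stay untouched.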

Charset of Lotus Domino Server

I have a Java agent (running on a Linux server) that manages document attachments, but something goes wrong with accented chars in their names (ò, è, ù, etc.).
I wrote this code to display the charset used:
OutputStreamWriter writer = new OutputStreamWriter(new ByteArrayOutputStream());
String enc = writer.getEncoding();
System.out.println("CHARSET: " + enc);
This displays
CHARSET: ASCII
On a server where everything works fine, the same lines print:
CHARSET: UTF8
The servers have the same configuration (both work with "Internet sites", where "Use UTF-8 for output" is set to "Yes").
Any idea what parameter to set (Domino/Linux)?
UPDATE
I'll try to explain better...
I call an agent through an Ajax call.
As a parameter, I pass the string "ààà". When I try to decode it as UTF-8 inside the agent, the string is resolved as
"???"
instead of
"ààà"
This is what System.out.println() shows in the console.
On another Domino server everything works. I can't tell whether it is a matter of server settings or OS settings.
Just a suggestion, but you could change the first line in your example to be:
OutputStreamWriter writer = new OutputStreamWriter(new ByteArrayOutputStream(),
Charset.forName("UTF-8"));
That will force the OutputStreamWriter to UTF-8, and your sample code will show consistent output on both servers. Without knowing more details, I can't say for sure whether that's relevant to the real problem.
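The root of the difference between the two servers is usually the JVM default charset, which is what an OutputStreamWriter built without an explicit charset falls back to. It comes from the file.encoding system property, which on Linux is normally derived from the OS locale (e.g. the LANG environment variable) when the JVM starts. A quick check, sketched in Scala here although the same two calls work from the Java agent:

import java.nio.charset.Charset

object DefaultCharsetCheck extends App {
  // A JVM started under LANG=C (or no locale) typically reports ASCII here;
  // one started under a UTF-8 locale reports UTF-8, matching the two servers.
  println(s"file.encoding   = ${System.getProperty("file.encoding")}")
  println(s"default charset = ${Charset.defaultCharset()}")
}

If the misbehaving server reports ASCII, the locale the Domino process starts under (rather than the Domino configuration) is the place to look.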
Although this might not directly answer your question, you might be interested in this article about encoding.
