Teradatasql teradata_write_csv file encoding? - python-3.x

I am using the Teradatasql library to download data into a CSV file, using:
sql="{fn teradata_write_csv("+destination_path+")}SELECT DISTINCT..."
How do I determine (or specify) the file's encoding? Is there a default encoding, or does it use the encoding of the table? (There are multiple tables, which may have different encodings.)
According to the filesystem on Ubuntu the file is ASCII, and according to Windows it is ANSI, but in both cases the file contains incorrect characters. When parsing the file in Python I get:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 5061: ordinal not in range(128)
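Not from the original question, but byte 0xc2 is a UTF-8 lead byte (it begins the two-byte sequences for U+0080 through U+00BF), which suggests the exported file is actually UTF-8 rather than ASCII. A minimal sketch of reading it with an explicit encoding, assuming that is the case:
import csv

# destination_path as in the question; assuming the driver wrote UTF-8
with open(destination_path, encoding='utf-8', newline='') as f:
    for row in csv.reader(f):
        print(row)
If that parses cleanly, the 'ascii' error came from relying on the platform default encoding, not from the file itself.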

Related

Why is my Jupyter Notebook using ascii codec even on Python3?

I'm analyzing a dataframe that contains French characters.
I set up an IPython kernel based on Python3 within my jupyter notebook, so the default encoding should be utf-8.
However, I can no longer save my notebook as soon as an accented character appears in my .ipynb (like é, è...), even though those are handled in utf-8.
The error message that I'm getting when trying to save is this:
Unexpected error while saving file: Videos.ipynb 'ascii' codec can't encode characters in position 347-348: ordinal not in range(128)
Here is some minimal code that gives me the same error message in a basic Python3 kernel
import pandas as pd
d = {'EN': ['Hey', 'Wassup'], 'FR': ['Hé', 'ça va']}
df = pd.DataFrame(data=d)
(Actually, a simple cell containing just "é" as text is already enough to prevent saving.)
I've seen similar questions, but all of them were based on Python 2.7, so nothing relevant. I also tried several things in order to fix this:
Including # coding: utf-8 at the top of my notebook
Specifying the utf-8 encoding when reading the csv file
Trying to read the file with latin-1 encoding then saving (still not supported by the ascii codec)
Also checked my default encoding in python3, just to make sure
sys.getdefaultencoding()
'utf-8'
Opened the .ipynb in Notepad++: the encoding is set to utf-8 in there. I can add accented characters and save there, but then I can no longer open the notebook in Jupyter (I get an "unknown error" message).
The problem comes from saving the notebook and not reading the file, so basically I want to switch to utf-8 encoding for saving my .ipynb files but don't know where.
I suspect the issue might come from the fact that I'm using WSL on Windows 10 - just a gut feeling though.
Thanks a lot for your help!
Well, it turns out uninstalling and then reinstalling Jupyter Notebook did the trick. Not sure what happened, but it's now solved.
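For anyone hitting the same error, one diagnostic worth running first (not from the original thread): under WSL, an unset or POSIX locale makes Python's preferred encoding fall back to ASCII, which produces exactly this 'ascii' codec error. A quick check:
import locale
import sys

# 'ANSI_X3.4-1968' here means the locale fell back to plain ASCII;
# fix with e.g. `export LANG=en_US.UTF-8` before starting Jupyter
print(locale.getpreferredencoding(False))
print(sys.getdefaultencoding())  # always 'utf-8' on Python 3, regardless of locale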

ascii codec error is showing while running python3 code via Apache WSGI

Objective: Insert a Japanese text to a .ini file
Steps:
Python version used 3.6 and used Flask framework
The library used for writing the config file is configparser
Issue:
When I run the code via the "flask run" command, there are no issues: the Japanese text is inserted into the ini file correctly.
But when I run the same code via Apache (WSGI), I get the following error:
'ascii' codec can't encode characters in position 17-23: ordinal not in range(128)
Never interact with text files without explicitly specifying the encoding.
Sadly, even Python's official documentation neglects to obey this simple rule.
import configparser

config_path = 'your_file.ini'
config = configparser.ConfigParser()

# Read with an explicit encoding instead of the locale default
with open(config_path, encoding='utf8') as fp:
    config.read_file(fp)

# Write with the same explicit encoding
with open(config_path, 'w', encoding='utf8') as fp:
    config.write(fp)
utf8 is a reasonable choice for storing Unicode characters; pick a different encoding if you have a preference.
Japanese characters take three bytes each in UTF-8 (four for the rare ones outside the Basic Multilingual Plane), while UTF-16 encodes most of them in two bytes, so picking utf16 can result in smaller ini files, but there is no functional difference.
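As a quick round-trip check (not from the original answer; the section and key names are hypothetical):
import configparser

config = configparser.ConfigParser()
config['app'] = {'greeting': 'こんにちは'}  # hypothetical section and key

with open('your_file.ini', 'w', encoding='utf8') as fp:
    config.write(fp)

config2 = configparser.ConfigParser()
with open('your_file.ini', encoding='utf8') as fp:
    config2.read_file(fp)

assert config2['app']['greeting'] == 'こんにちは'
This behaves the same under "flask run" and under Apache, because the encoding no longer depends on the process locale.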

Utf-8 codec can't decode byte 0xff in position 185: invalid start byte

Is there anyone else getting this error when running pyinstaller?
Utf-8 codec can't decode byte 0xff in position 185: invalid start byte
I saved my Python file with Notepad++ in UTF-8 without BOM, but that hasn't helped. PyInstaller was working fine earlier, and just suddenly I began getting this error. Is anyone experiencing the same issue?
Regards,
A bit late to the party, but I was having this exact issue. You can open the file as 'rb' so that Python won't try to decode the bytes with the default codec. I did mine like this:
with open(path_to_file, 'rb') as f:
    contents = f.read()

# Decode the raw bytes first (bytes.rstrip needs a bytes argument,
# so strip after decoding), then split into lines
contents = contents.decode("utf-16").rstrip("\n")
contents = contents.split("\r\n")
The contents.split is just for formatting: when you decode the data in the file, it keeps all the \r\n line endings (on Windows) or \n (on Linux).
hope this helps!
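Not from the original thread, but a 0xff byte at the start of a read usually means a UTF-16 byte order mark (FF FE), e.g. from a file saved by Windows Notepad as "Unicode". A quick way to inspect a suspect file:
# path_to_file as above; peek at the first bytes to identify a BOM
with open(path_to_file, 'rb') as f:
    head = f.read(4)
print(head)  # b'\xff\xfe' starts UTF-16-LE, b'\xef\xbb\xbf' starts UTF-8 with BOM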

How to convert Linux Python 3.4 code with national characters into executable code on Windows

My national language is Polish.
I've got a program in Python 3.4 which I wrote on Linux. This program mostly works on text, Polish text. So of course variable names don't have any special characters, but I sometimes assign them strings with Polish characters, the user inputs strings with Polish characters from the keyboard, and my program reads strings with Polish characters from files.
Everything works well on Linux. I didn't think about encoding; it just worked. But now I want to make it work on Windows. Can you help me understand what I should actually do to make this transition?
Or maybe some workaround: I just need a Windows executable file. The perfect way to do this would be PyInstaller, but it works only for Python 2.7, not 3.4. That's why I want to make the program work on Windows and compile it into executable form with py2exe in VirtualBox. But if someone knows a way to do this from Linux, without these encoding problems, that would be great.
If not, back to my question. I tried converting my Python scripts in gedit to ISO or CP1250 or CP1252, and I declared the coding I'm using in the file header; it actually worked a little. Now the Windows error points me to the text files from which I read some data, so I converted them too... but it didn't work.
So I decided it's time to stop the blind trials and ask for help. I need to understand what encoding is used on Windows and which on Linux, what the best way is to convert one into the other, and how to make the program read characters the right way.
The best way would be, I guess, not changing any encodings, but just making Python on Windows understand what encoding I'm using. Is that possible?
A complete answer to my question would be great, but anything that points me in the right direction will also help me a lot.
OK. I'm not sure if I understood your answer in the comments, but I tried sending the text to myself via mail, copying it into Notepad in VirtualBox and saving as UTF-8. I still get this message:
C:\Users\python\Documents>py pytania.py
Traceback (most recent call last):
File "pytania.py", line 864, in <module>
start_probny()
File "pytania.py", line 850, in start_probny
utworzenie_danych()
File "pytania.py", line 740, in utworzenie_danych
utworzenie_pytania_piwo('a')
File "pytania.py", line 367, in utworzenie_pytania_piwo
for line in f: # Czytam po jednej linii
File "C:\Python34\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 1134: character maps to <undefined>
As mentioned by Zero Piraeus in a comment: The default source encoding for Python 3.x is UTF-8, regardless of what platform it's running on...
If you have problems, that is probably because your source code is in the wrong encoding. You should stick to UTF-8 only (even though PEP 263 -- Defining Python Source Code Encodings allows changing it).
The error message you provided is clear:
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 1134
Python is decoding the file with cp1252 here (note the cp1252.py frame in the traceback: that is Windows' default when open() is called without an explicit encoding), and byte 0x9d maps to no character in cp1252. To check whether a file is valid UTF-8, use iconv(1) on a Linux machine and detect errors by doing a dummy conversion:
iconv -f utf8 -t iso8859-2 -o /dev/null < test.py
You can try to reproduce the problem by creating a very simple Python file, typically: print("test €uro encoding")
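Not from the original answer, but since the traceback comes from reading a data file, the direct fix is to pass the encoding to open() instead of relying on the platform default (cp1252 on Windows, usually UTF-8 on Linux). A sketch, with a hypothetical filename:
# 'dane.txt' is a hypothetical data file; assuming it is saved as UTF-8
with open('dane.txt', encoding='utf-8') as f:
    for line in f:  # reading one line at a time, as in the traceback
        print(line)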

Open a file with a Unicode name on Linux from Python

I was trying to open a file on Ubuntu in Python using:
open('<unicode_string>', "wb")
unicode_string is '\u9879\u76ee\u7ba1\u7406'. It is Chinese text.
But I get the following error:
UnicodeEncodeError: 'latin-1' codec can't encode characters in position 0-3: ordinal not in range(256)
I am trying to understand what limits this behavior. I went through some links, and I understand that the OS's filesystem is responsible for encoding the string. On Windows the 'mbcs' encoding handles possibly every character. What could be the problem on Linux?
It does not fail on all Linux setups. What should I be checking?
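Not an answer from the original thread, but one thing to check: on Linux, Python derives the filesystem encoding from the locale (LANG, LC_ALL, LC_CTYPE), so a C/POSIX or latin-1 locale cannot represent Chinese characters in file names, which matches the 'latin-1' codec in the error. A quick check:
import locale
import sys

# Expect 'utf-8' on a correctly configured setup; a value like
# 'ascii' or 'iso8859-1' explains the UnicodeEncodeError above
print(sys.getfilesystemencoding())
print(locale.getpreferredencoding(False))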
