How to print a UTF-8 unicode string as EBCDIC in PL/I - mainframe

How can I instruct PL/I to print a UTF-8 value as EBCDIC? Is there a "trick" in PL/I, or do I have to call the z/OS Unicode services to convert the UTF-8 value?
PUT SKIP EDIT('VAR: ',VAR) (A,A);
Using the above instruction gives unreadable output.
VAR: & (!¢

Try UTF8TOCHAR, which appears to be new with IBM Enterprise PL/I version 5.
PUT SKIP EDIT('VAR: ',UTF8TOCHAR(VAR)) (A,A);
That's just freehand, but I think you get the idea.
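For a quick feel for what the conversion involves, here is a sketch in Python rather than PL/I: decode the UTF-8 bytes to characters, then re-encode them to an EBCDIC code page. cp037 is one EBCDIC code page Python ships with; your site may well use another, such as IBM-1047.
# A sketch in Python, not PL/I: UTF-8 bytes in, EBCDIC bytes out.
# cp037 is one EBCDIC code page; substitute the one your z/OS site uses.
utf8_bytes = 'VAR: hello'.encode('utf-8')
text = utf8_bytes.decode('utf-8')      # back to abstract characters
ebcdic_bytes = text.encode('cp037')    # re-encode as EBCDIC
print(ebcdic_bytes)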

Related

Beautiful Soup - meaning of letter 'u' in documentation [duplicate]

Like in:
u'Hello'
My guess is that it indicates "Unicode", is that correct?
If so, since when has it been available?
You're right - see 3.1.3. Unicode Strings.
It's been the syntax since Python 2.0.
Python 3 made the prefix redundant, as the default string type is Unicode. Versions 3.0 through 3.2 removed it, but it was re-added in 3.3+ for compatibility with Python 2, to aid the 2-to-3 transition.
The u in u'Some String' means that your string is a Unicode string.
Q: I'm in a terrible, awful hurry and I landed here from Google Search. I'm trying to write this data to a file, I'm getting an error, and I need the dead simplest, probably flawed, solution this second.
A: You should really read Joel's Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) essay on character sets.
Q: sry no time code pls
A: Fine. Try str('Some String') or 'Some String'.encode('ascii', 'ignore'). But you should really read some of the answers and discussion on Converting a Unicode string, and this excellent, excellent primer on character encoding.
My guess is that it indicates "Unicode", is it correct?
Yes.
If so, since when is it available?
Python 2.x.
In Python 3.x, strings are Unicode by default, so there's no need for the u prefix. Note: in Python 3.0-3.2 the u prefix is a syntax error; in Python 3.3+ it's legal again, to make it easier to write 2/3-compatible code.
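A quick interpreter session makes the version story concrete (Python 3.3 or later):
# Python 3.3+: the u prefix parses but is redundant; str is already Unicode.
s = u'Hello'
print(type(s))                  # <class 'str'>
print(s == 'Hello')             # True

# Byte strings need an explicit b prefix and an explicit decode.
b = b'Hello'
print(b.decode('ascii') == s)   # True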
I came here because I had funny-char syndrome in my requests output. I thought response.text would give me a properly decoded string, but in the output I found funny double characters where German umlauts should have been.
Turns out response.encoding was empty somehow, so response did not know how to properly decode the content and just treated it as a wrong single-byte encoding (I guess).
My solution was to get the raw bytes with response.content and manually apply decode('utf-8') to them. The result was schöne Umlaute.
The correctly decoded für vs. the improperly decoded fĂźr.
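In code, the workaround looks like this (a minimal sketch; the URL is a placeholder):
import requests

response = requests.get('https://example.com/german-page')  # placeholder URL

# response.text trusts response.encoding, which may be unset or wrong.
# Decoding the raw bytes yourself sidesteps the bad guess:
text = response.content.decode('utf-8')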
All strings meant for humans should use u"".
I found that the following mindset helps a lot when dealing with Python strings: all Python string literals should use the u"" syntax. The "" syntax is for byte arrays only.
Before the bashing begins, let me explain. Most Python programs start out using "" for strings. But then they need to handle text from the Internet, so they start using "".decode, and all of a sudden they are getting exceptions everywhere about decoding this and that - all because of the use of "" for strings. In this case, Unicode acts like a virus and wreaks havoc.
But if you follow my rule, you won't have this infection (because you will already be infected).
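In Python 2 terms, the rule looks like this (a sketch; in Python 3 the bare "" literal is already Unicode and raw bytes take the b prefix instead):
# -*- coding: utf-8 -*-
# Python 2 sketch of the rule: u"" for text, "" only for byte arrays.
title = u"Überschrift"                # text meant for humans: Unicode
raw = "\xc3\x9c"                      # bytes off the wire: a byte string
assert raw.decode("utf-8") == u"Ü"    # decode at the boundary, once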

How do I write the £ (GBP) sign in a CSV file from Ruby and read it back correctly in Excel?

When I write a CSV file from Ruby containing the £ sign and open it in Excel, I see ¬£ instead.
My understanding is that Ruby uses UTF-8, but Excel interprets this file using a different encoding (ASCII).
I tried to write a US-ASCII encoded CSV file and guessed the £ encoding in ASCII like this:
csv = CSV.open(filename, 'w:US-ASCII')
csv << "\xA3"
csv.close
but it fails with invalid byte sequence in UTF-8 somewhere deep inside the CSV library.
What am I doing wrong?
Thank you
For sure, Excel is not bound to use ASCII. For instance, I can easily input Japanese characters into an Excel cell, and those are certainly not representable in ASCII.
While Ruby uses UTF-8 as the default encoding for string literals, every String object carries its own encoding, so you could in theory mix strings with different encodings if you wanted to. In your case, you want to force a certain encoding when writing a file. This can be done either by using the w: output option, as you did, or by using external_encoding: Encoding::US_ASCII. See the Encoding documentation for the names of the constants.
I don't think US-ASCII is a good choice for the encoding, simply because there is no pound symbol in the ASCII chart. I would have expected a warning on stderr when trying to write a pound symbol. If you need an 8-bit encoding, ISO-8859-1 should do the job, but my recommendation would be to write UTF-8 and tell Excel to use this encoding when reading the CSV file. Excel has been able to import UTF-8 at least since Excel 2007.
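For comparison, here is the recommended approach sketched in Python rather than the asker's Ruby (the filename is a placeholder): the utf-8-sig codec prepends a UTF-8 BOM, which Excel recognizes, so the import step can often be skipped entirely.
import csv

# Write UTF-8 with a BOM so Excel detects the encoding on its own.
with open('prices.csv', 'w', newline='', encoding='utf-8-sig') as f:
    writer = csv.writer(f)
    writer.writerow(['item', 'price'])
    writer.writerow(['tea', '£3.50'])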

Reading txt file & opening with excel

Using Delphi 2007, I'm trying to read a txt file with Greek characters and open it with Excel, but I get symbols instead of Greek characters... Any help?
The text file is created with this code
CSVSTRList.SaveToFile('c:\test\xxx2.txt');
where CSVSTRList is a TStringList.
Looking at your code in your previous question it seems you are taking a stock TStringList and calling SaveToFile. That encodes the text as ANSI. Your Greek characters cannot be encoded as ANSI.
You want to export text using a Unicode encoding. For instance, in a modern Delphi you would use:
StringList.SaveToFile(FileName, TEncoding.Unicode);
for UTF-16, or
StringList.SaveToFile(FileName, TEncoding.UTF8);
for UTF-8.
I would expect that Excel will understand either of these encodings.
Since you are using a non-Unicode Delphi, things are somewhat more difficult. You'll need to change all your code, every single piece of string handling, to be Unicode-aware. So you cannot use the stock string list any more, for example, because it contains 8-bit ANSI strings. The simplest way to do this with legacy Delphi is with the TNT Unicode library.
Or you could take the step of moving to a Unicode Delphi. If you care about international text it is the most sensible option.
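For readers who want to sanity-check the exported file outside Delphi, the same idea sketched in Python (the filename and sample text are placeholders): passing an explicit encoding when opening the file plays the role of the TEncoding argument, and the utf-16 codec writes the BOM automatically.
# A Python sketch of the same export: choose a Unicode encoding explicitly.
# 'utf-16' writes a BOM automatically; 'utf-8-sig' does the same for UTF-8.
lines = ['αβγ', 'δεζ']   # sample Greek text
with open('greek.txt', 'w', encoding='utf-16') as f:
    f.write('\n'.join(lines))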

delete special characters preceding shebang (M-oM-;M-?#!/bin/bash) [duplicate]

I have a CSV file with accented characters and save it in Notepad, selecting UTF-8 encoding. When I read the file using Java, it reads the BOM characters too.
So I want to save this file in UTF-8 format without Notepad prepending a BOM.
Otherwise, is there a built-in class in Java that eliminates the BOM characters present at the beginning when reading the contents of a file?
Use Notepad++ - it is free and much better than Notepad. It can save text without a BOM: in Notepad++ v6 and older, use Encoding → Encode in UTF-8 without BOM; in v7 and later the same choice appears as Encoding → UTF-8 (the BOM-writing variant is labelled UTF-8-BOM).
When I encountered this problem in Java, I didn't find any library to handle these first three bytes (the BOM). So my advice (see the sketch after this list):
Use new PushbackInputStream(in, 3).
Read the first three bytes.
If they are not the BOM (EF BB BF), push them back.
Process the stream as UTF-8.
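The same steps sketched in Python, for the record (Python can also do this in one step, as the last comment notes):
import io

def open_utf8_skip_bom(path):
    # Peek at the first three bytes; consume them only if they are
    # the UTF-8 BOM (EF BB BF), mirroring the PushbackInputStream idea.
    f = open(path, 'rb')            # a BufferedReader, so peek() works
    if f.peek(3)[:3] == b'\xef\xbb\xbf':
        f.read(3)                   # swallow the BOM
    return io.TextIOWrapper(f, encoding='utf-8')

# Or simply: open(path, encoding='utf-8-sig') strips a leading BOM for you.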
I just learned from this Stack Overflow post, as martin-geisler points out, that you can save files without the BOM in Windows Notepad by selecting ANSI as the encoding.
I'm assuming this won't work for more advanced uses, because the resulting file is actually ANSI rather than the encoding you wanted; but I tested and confirmed that it works to save a very small .php script without a BOM using only Notepad.
I learned the long, hard way that Windows Notepad is not a true editor, although I'd like to point out for others that, despite this, it is still what comes up when you type "editor" on newer Windows machines, at least on one of mine.
I am currently using Emacs and other editors to work around this.
Use Notepad++ instead. See my personal blog post on it. From within Notepad++, choose the "Encoding" menu, then "Encode in UTF-8 without BOM".
Notepad on Windows 10 version 1903 (May 2019 update) and later versions supports saving to UTF-8 without a BOM. In fact, UTF-8 is the default file format now.
Reference: Windows 10 Notepad is Getting Better UTF-8 Encoding Support
The answer is: Not at all. Notepad can't do that.
In Java you can just skip the first three bytes of your InputStream and be done.
You might want to try out Notepad2 or Notepad++. Those Notepad replacements let you choose whether to output a BOM.
As for a Java solution, as far as I know, Java's decoders do not strip the UTF-8 BOM automatically. I googled and found Java's UTF-8 and Unicode writing is broken - Use this fix, which might be the solution.
We're using the utility BOMStripperInputStream.java to strip the BOM from our input if present.

Reading text from file and converting it to UTF32

I'm using the CSFML 1.6 library (a multimedia library based on OpenGL), and I live in Poland, where we have special characters like:
ąęźćół
Now I have a text file containing these characters, and CSFML offers a function to set the Unicode text of a displayed string; its argument is an array of ints.
How can I properly read the characters from the file and then pass them to this function?
Any help is really appreciated.
Judging by sfml-dev, the library accepts either a char string in ISO-8859-1, or a wchar_t string in UTF-16, or you can provide a completely custom charset.
I suppose the simplest is to stick with UTF-16: save your text as UTF-16 and use the "wstring family" of functions (those beginning with 'w', like wcscmp()) to handle it.
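To see what the "array of ints" amounts to, here is a Python sketch (the filename is a placeholder): read the file as UTF-8 and take the code point of each character, which is exactly its UTF-32 value.
# UTF-8 text in, array of 32-bit code points (UTF-32 values) out.
with open('napis.txt', encoding='utf-8') as f:
    text = f.read()

code_points = [ord(ch) for ch in text]
print(code_points[:6])   # e.g. 'ą' -> 261 (U+0105)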