Excel VBA on Mac: German special characters not encoded correctly (ÄÜÖ) - excel

I have an Excel VBA script that I originally wrote for Windows (where it works fine) and now have to port to macOS. I don't think it matters, but the script calls cURL to get a JSON response from a web API, which is then parsed, edited and inserted into the spreadsheet.
Some of the fields in the parsed JSON contain special characters like Ä, Ü, Ö (German umlauts). The script handles these just fine on Windows, but on Mac I get other symbols instead of ÖÜÄ. This breaks the tool, as it depends on some VLOOKUP functions where the values are written by hand (with the correct symbols).
I did a lot of googling but was not able to find anything.
One thing that might be interesting is that the code itself changes on Mac as well! I print some statements to the console, and even the hardcoded strings that contain a special character are broken as soon as I open the script on a Mac.

The question is for Mac VBA. This is a pain. The only solution I have is to send the curl output to a file, then open that file with Workbooks.OpenText and Origin:=65001; the whole response ends up in cell A1, correctly encoded. A sketch of that file-based approach is below.
I have asked my own question on that, to see if anyone has a more recent answer:
How to read UTF8 data output from cURL in popen/fread in VBA on Mac?
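Here is a minimal VBA sketch of that file-based workaround, assuming the cURL call has already written its JSON response to a temporary file; the file path, procedure name and sheet references below are made up for illustration:

Sub ImportCurlOutputAsUtf8()
    ' Hypothetical location where the curl call wrote its output
    Dim tmpPath As String
    tmpPath = "/tmp/api_response.json"

    ' Origin:=65001 (code page 65001 = UTF-8) makes Excel decode the file as
    ' UTF-8, so Ä/Ö/Ü survive. The whole response lands in cell A1 of the
    ' workbook that OpenText creates.
    Workbooks.OpenText Filename:=tmpPath, Origin:=65001

    ' Pull the raw JSON back into the original workbook, then discard the
    ' temporary workbook without saving.
    Dim raw As String
    raw = ActiveWorkbook.Worksheets(1).Range("A1").Value
    ActiveWorkbook.Close SaveChanges:=False

    ThisWorkbook.Worksheets(1).Range("A1").Value = raw   ' or hand raw to the JSON parser
End Sub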

Related

Is there a Linux command line utility for getting random data to work with from the web?

I am a Linux newbie and I often find myself working with a bunch of random data.
For example, I might want a sample text file to try out some regular expressions on, or some sample data in a CSV file to read into gnuplot.
I normally do this by copying and pasting passages from the internet, but I was wondering if there is some combination of commands that would let me do this without having to leave the terminal. I was thinking about using something like the curl command, but I don't know exactly how it works...
To my knowledge there are websites that host content. I would simply like to access them and store it on my computer.
As a concrete example, how would I grab a random passage off a website and store it in a file on my system using only the command line? Maybe you can point me in the right direction. Thanks.
You could redirect the output of a curl command into a file, e.g.
curl https://run.mocky.io/v3/5f03b1ef-783f-439d-b8c5-bc5ad906cb14 > data-output
Note that I've mocked the data in Mocky, which is a nice website for quickly mocking an API.
I normally use "Project Gutenberg" which has 60,000+ books freely downloadable online.
So, if I want the full text of "Peter Pan and Wendy" by J.M. Barrie, I'd do:
curl "http://www.gutenberg.org/files/16/16-0.txt" > PeterPan.txt
If you look at the page for that book, you can see how to get it as HTML, plain text, ePUB or UTF-8.

Javascript export CSV encoding utf-8 and using excel to open issue

I have been reading quite a few posts, including this one:
Javascript export CSV encoding utf-8 issue
I know a lot of them mention that it's because of Microsoft Excel, and that something like this should work:
https://superuser.com/questions/280603/how-to-set-character-encoding-when-opening-excel
I have tried it on Ubuntu (which didn't even have an issue), on Windows 10, where I had to use the approach from the second post to import the file, and on Mac, which has the biggest problem because Excel there does not import or read the Unicode at all.
Is there any way I can handle this in code while exporting, to force Excel to open the file as UTF-8? Or is there some other workaround I might be able to try?
Thanks in advance for any help and suggestions.
Many Windows applications, including Excel, assume the localized ANSI encoding (Windows-1252 on US Windows) when opening a file, unless the file starts with a byte-order mark (BOM). While UTF-8 doesn't need a BOM, a UTF-8-encoded BOM at the start of a file signals to Excel that the file is UTF-8. The byte sequence is EF BB BF and the equivalent Unicode code point is U+FEFF.
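To make the BOM trick concrete in VBA (matching the Excel/VBA theme of this page rather than the JavaScript of the question), here is a hedged, Windows-only sketch: ADODB.Stream with Charset "utf-8" prepends the EF BB BF BOM by default, which is exactly the hint Excel needs. The file path and CSV content are made up.

Sub WriteUtf8CsvWithBom()
    ' Made-up CSV content containing non-ASCII characters
    Dim csvText As String
    csvText = "Name;City" & vbCrLf & "Müller;Köln" & vbCrLf & "Ärzte;Görlitz"

    Dim stm As Object
    Set stm = CreateObject("ADODB.Stream")
    stm.Type = 2              ' adTypeText
    stm.Charset = "utf-8"     ' UTF-8; the stream writes the BOM (EF BB BF) first
    stm.Open
    stm.WriteText csvText
    stm.SaveToFile "C:\temp\export.csv", 2   ' adSaveCreateOverWrite
    stm.Close
End Sub

Double-clicking the resulting .csv should then show the accented characters correctly, because the BOM overrides Excel's ANSI default.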

How to disable encoding in a text-editor?

This is such a basic question I am surprised I could not easily find an answer to it:
I use Notepad++ to write my scripts in. Someone sent me some code for a shell script (.sh) that I could modify to suit my needs. I simply changed a small bit of text using Notepad++ (on Windows) and used FileZilla (SFTP) to upload it to my server (Debian Linux).
There were a few problems with this that it took my server admin an hour to find, namely:
FileZilla, for whatever reason, defaults to ASCII rather than binary! (changed it to binary and removed the .sh association with ASCII)
The permissions were wrong, chmod took care of this
The problem is it STILL did not work. To fix it, my server admin simply copied the text into a new shell script file directly on the server (using vim or nano) and saved that. Before that he kept saying the problem was Windows (which he loves to hate on), but it seems it is the encoding that text editors use that is corrupting the files.
He said my text editor's encoding needs to be set to "None". However, that is not an option - only ANSI, UTF and UCS variants are available!
How can I create a shell script on Windows with no encoding whatsoever so that it doesn't get corrupted?
I need to be able to simply transfer the file to the server; I can't mess around with modifying it once it's on the server, which is wholly impractical.
To fix it, set the end-of-line format and encoding in Notepad++:
At the bottom right of Notepad++ you can right-click just to the left of the encoding indicator ("UTF-8") and choose to convert to UNIX (LF) format. Be sure to change the encoding to UTF-8 as well if that is not already the case.
In FileZilla:
Transfer mode: Auto

"put binary" command is just outputting text instead of a binary file on LiveCode Server

I'm using LiveCode Community Server 8.1.2 on Windows Server 2016 Datacenter (running Apache 2.4)
I use the following code
put header "content-disposition: attachment; filename=" & tFileName
put header "content-type: application/pdf"
put header "content-transfer-encoding: binary"
put url ("binfile:" & "../resources/documents/" & tActualFileName) into tBinaryData
put binary tBinaryData
When included in a script called by a browser this code returns the data as text in the browser window rather than a pdf file that can be downloaded.
A few months ago I wrote this code and it worked; I returned to it today and it doesn't.
I've double checked and I'm sure it's correct but I have no idea what else could have broken it.
I've tested on
Chrome 59 on Windows 8.1 Pro
Chrome 59 on MacOS Sierra 10.12.5
Safari on iOS 10.3.2
Any help or guidance would be most welcome.
Edit:
Network headers from Chrome shown below
Edit:
Amended the code to remove the word "binary" from line 4 - this was generating an error which was producing text output at the end of the returned result. It hasn't resolved the problem - I'm still getting text returned, with "Content-Type: text/html" in the response header.
Edit:
There are 2 blank lines at the beginning of the source (after using View Source, Ctrl-U on Chrome)
After a couple of days of working on this I now have a solution. It's a work around and I don't know what the actual problem is but for completeness it's worth posting.
The 2 blank lines before the "%PDF" code were the problem, and after adding various debugging 'put' statements I tracked the offending code down to the following 2 statements:
require ("codeSnippets.lc")
require ("databaseOperations.lc")
Each of these statements added an extra line to the output. I presumed the referenced files must somehow be outputting a blank line, which goes unnoticed on a normal HTML page.
There's no code in them to do this, so as a test I added the following line at the very end of 'codeSnippets.lc':
put "test1"
And I added a similar line between the require statements
require ("codeSnippets.lc")
put "test2"
require ("databaseOperations.lc")
The output showed "test1" and "test2" on different lines. I can't explain what's happening between the last line of the required script and the next line of the main script. If someone can shed light on this I'd be most grateful.
It's worth noting that I have other require statements that didn't show this issue. The following worked without producing an extra line
require ("../config.lc")
require("../resources/library/templateFunctions.lc")
As a workaround I simply moved copies of the various bits of code I need for document download into a new file called "documentHandler.lc" and called the following:
require("../resources/library/documentHandler.lc")
It all works perfectly now.

Display Spanish characters with accents properly

The file I am working on has everything written in Spanish. The format is .sav. What I want to do is open it in JMP and export it to Excel in .csv format. I am using a Mac running macOS Sierra.
Here is the problem: I opened the file as UTF-8 in JMP, but some characters are corrupted. So I changed the default language of the Mac to Spanish, and it did not work. I also exported the file from JMP to a text editor with the corruption still there, duplicated the file as UTF-8, and imported it into Excel. That did not work either. Changing to UTF-16 was one of my attempts and did not work at all. I used Numbers instead of Excel, but this also failed.
What else can I do to display the characters properly?
FYI, the file is taken from http://evaluacion.oportunidades.gob.mx:8010/EVALUACION/en/eval_cuant/p_bases_cuanti.php
Any suggestion is much appreciated!
Thank you in advance!!
