Pentaho: Binary file (standard input) matches - linux

When I try to use MySQL in a Table input step with an ORDER BY in the query, I get the error below and the ETL stops abruptly.
Binary file (standard input) matches
If I remove the ORDER BY from the query, it works. Is this a bug in Pentaho? It occurs only in a Linux environment.
I'm using Pentaho 8.1.0.0 CE
OS: Ubuntu 16.04.4 LTS
MySQL Driver version: mysql-connector-java-5.1.46.jar

Check whether the select is returning a zero date (0000-00-00 00:00:00).
If so, add the property zeroDateTimeBehavior=convertToNull to the JDBC parameters.
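For example, the parameter can be appended to the JDBC connection URL (the host and database names here are placeholders):
jdbc:mysql://localhost:3306/mydb?zeroDateTimeBehavior=convertToNull
In PDI, the same parameter can also be entered on the database connection's Options tab.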

This happened to me as well on Ubuntu 16.04 with PDI 8.1 (although it worked fine on Windows 10 with PDI 8.1).
In my case it was caused by a step that had an accent in its name.
Try to keep step names simple, without accents or other special characters.

Related

AWS RDS Postgres PostGIS upgrade problems

I have an RDS instance running Postgres 11.16. I'm trying to upgrade it to 12.11 but it's giving me errors on PostGIS. If I try a "modify" I get the following error in the precheck log file:
Upgrade could not be run on Sun Sep 18 06:05:13 2022
The instance could not be upgraded from 11.16.R1 to 12.11.R1 because of following reasons. Please take appropriate action on databases that have usages incompatible with requested major engine version upgrade and try again.
Following usages in database 'XXXXX' need to be corrected before upgrade:
-- The instance could not be upgraded because there are one or more databases with an older version of PostGIS extension or its dependent extensions (address_standardizer, address_standardizer_data_us, postgis_tiger_geocoder, postgis_topology, postgis_raster) installed. Please upgrade all installations of PostGIS and drop its dependent extensions and try again.
----------------------- END OF LOG ----------------------
First, I tried just removing PostGIS so I could upgrade and then add it back afterwards, using drop extension postgis cascade;. However, this generated the same error.
Second, I tried running SELECT postgis_extensions_upgrade();. However, it gives me the following error:
NOTICE: Updating extension postgis_raster from unpackaged to 3.1.5
ERROR: function st_convexhull(raster) does not exist
CONTEXT: SQL statement "ALTER EXTENSION postgis_raster UPDATE TO "3.1.5";"
PL/pgSQL function postgis_extensions_upgrade() line 82 at EXECUTE
SQL state: 42883
Third, I tried to do a manual snapshot and upgrade the snapshot. Same results.
One additional piece of information: I ran SELECT PostGIS_Full_Version(); and this is what it returns:
POSTGIS="3.1.5 c60e4e3" [EXTENSION] PGSQL="110" GEOS="3.7.3-CAPI-1.11.3 b50468f" PROJ="Rel. 5.2.0, September 15th, 2018" GDAL="GDAL 2.3.1, released 2018/06/22" LIBXML="2.9.1" LIBJSON="0.12.1" LIBPROTOBUF="1.3.0" WAGYU="0.5.0 (Internal)" TOPOLOGY RASTER (raster lib from "2.4.5 r16765" need upgrade) (raster procs from "2.4.4 r16526" need upgrade)
As you'll notice, the raster lib is old but I can't really figure out how to upgrade it. I think this is what is causing me problems but I don't know how to overcome it.
I appreciate any thoughts.
After many failed attempts I finally gave up on the in-place upgrade. I ended up solving this with the steps below (example commands follow the list):
Spinning up a new instance on the desired postgres version
Using pg_dump on the old version (schema and data)
Using pg_restore on the new version
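For reference, the dump and restore looked roughly like this (host, user, and database names are placeholders; -Fc writes a custom-format archive that pg_restore can read):
pg_dump -Fc -h old-instance.example.com -U myuser -d mydb -f mydb.dump
pg_restore -h new-instance.example.com -U myuser -d mydb --no-owner mydb.dump
--no-owner is handy on RDS, where you don't have a true superuser to reassign object ownership.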
I'm not sure if I did something wrong with the above, but I found my sequences were out of sync on a number of tables afterwards, so I wrote some scripts to reset the sequence values. I had to use something like this to re-sync each sequence:
SELECT setval('the_sequence', (SELECT MAX(the_primary_key) FROM the_table)+1);
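Rather than writing one statement per table by hand, a query like this sketch can generate them for every column-owned sequence; it follows the same MAX(...)+1 pattern and reads only the standard catalogs, but review the generated statements before running them:
-- Generate one setval() statement per sequence owned by a table column
SELECT 'SELECT setval(' ||
       quote_literal(quote_ident(n.nspname) || '.' || quote_ident(s.relname)) ||
       ', COALESCE(MAX(' || quote_ident(a.attname) || '), 0) + 1) FROM ' ||
       quote_ident(n.nspname) || '.' || quote_ident(t.relname) || ';'
FROM pg_class s
JOIN pg_namespace n ON n.oid = s.relnamespace
JOIN pg_depend d ON d.objid = s.oid AND d.deptype = 'a'  -- 'a' = auto dependency (owned sequence)
JOIN pg_class t ON t.oid = d.refobjid
JOIN pg_attribute a ON a.attrelid = t.oid AND a.attnum = d.refobjsubid
WHERE s.relkind = 'S';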
I wasted enough time and this got me past the issue. Hopefully the next upgrade doesn't give me this much trouble.

How to read Arabic text from DB2 on z/OS using Node.JS

I am trying to get data from a DB2 table on z/OS in my Node application, where some of the values are stored in Arabic (code page IBM-420). The result I am getting in the application is the following:
������� ���� ���������
I am using ibm_db version 2.7.4 to fetch data from DB2 and:
Windows 10 - 64 bit
Node version: 14.17.3
NPM version: 7.19.1
I have tried displaying the result on the console and writing it to a txt file using fs, as follows:
const fs = require('fs');
// 'content' holds the text fetched from DB2 via ibm_db
fs.writeFile('data.txt', content, err => {
  if (err) {
    console.error(err);
    return;
  }
});
Any suggestions for converting the text into proper Arabic?
Resolved by creating the system environment variable DB2CODEPAGE and setting its value to 1208, then restarting all components to pick up the new variable.
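For example, from a Windows command prompt (only newly started processes see the variable, hence the restart):
setx DB2CODEPAGE 1208
With a full Db2 client installed, the profile-registry equivalent would be db2set DB2CODEPAGE=1208.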
When converting/translating between code pages, the Db2 CLI component needs to know the application code page as well as the Db2 server database code page, and converts/translates between them as necessary. This requires the relevant conversion tables to be present in the Db2 client (in this case the CLIDRIVER).
The DB2CODEPAGE environment variable value 1208 on Microsoft Windows forces the CLI components of Db2 to use Unicode as the application code page. When the DB2CODEPAGE variable is not present, the Db2 CLI component will try to derive the code page from the Microsoft Windows Control Panel regional options, which may not be what you need. Other values of this variable are possible; refer to the documentation for details.
When you set DB2CODEPAGE=1208 you must ensure that all the Microsoft Windows applications really do use Unicode or UTF-8 when inserting/updating data in the Db2 tables.

SSIS problem deploying package on SQL Server 2012

I want to ask what the problem may be. I run the SSIS package on my computer, where I export data from Oracle to Excel, and everything works. But when I deploy the package to my SQL Server 2012 machine, it reports an error:
Column "xx" cannot convert between unicode and non-unicode string data types.
My package looks like this:
[screenshot of the data flow: an Oracle source feeding an Excel destination]
It's very simple, and on my PC everything works.
I understand that adding a Data Conversion step to the package would probably fix the error. But I'm more interested in why this happens on the server and not on my PC.
The error occurred during the last stage, but I didn't understand why: I selected 4 columns from Oracle and inserted them into Excel as DT_WSTR, so I didn't see why it reported an error about converting between non-Unicode and Unicode. I resolved the error by setting this property on the destination:
ValidateExternalMetaData: False

.net core cli - build on Linux and Windows - Issues with string comparison and accents

I switched my CI tool to call dotnet build on Linux, whereas I was previously doing it on Windows. Then I suddenly had a bug in my app.
I am comparing a variable read from the database to a string literal (directly in the code) returned from a method. The string literal has French accents. So does the value stored in the db (SQL Server).
However, the line that does the string comparison returns true when built on Windows but false when built on Linux.
How come?
UPDATE
Here is the exception that showed me the issue:
throw new ArgumentOutOfRangeException(nameof(modeDePaiement), modeDePaiement, "Mode de paiement non supporté");
When displaying the error, modeDePaiement was equal to "prélèvement" whether built on Windows or on Linux.
However, the value it is compared against, as well as the word "supporté" in the error message, weren't displayed identically: when built on Linux the "é" is replaced with some symbol, whereas it displays correctly when built on Windows.
It seems that string literals in the code are not encoded the same way when built on Linux and on Windows. A plausible cause is that the source files are saved in a non-UTF-8 code page without a BOM; the compiler falls back to the system code page on Windows but assumes UTF-8 on Linux, so accented literals get corrupted in the Linux build.

(Unexpected SQL_NO_TOTAL) error on text fields larger than 4096 bytes

I filed a bug report: https://github.com/hdbc/hdbc-odbc/issues/4
But maybe this is not an hdbc-odbc issue, so I'll ask here as well.
OS: linux 64 bit (archlinux), ghc-7.4.1, hdbc-odbc-2.3.1.0
Connecting to MS SQL Server 2005.
Retrieving a text field larger than 4096 bytes.
With unixODBC 2.3.0 and FreeTDS 0.82 it works fine.
With unixODBC 2.3.1 and FreeTDS 0.91 it gives the error "Unexpected SQL_NO_TOTAL".
The tsql utility retrieves and shows the large text field fine on FreeTDS 0.91.
Has anyone had problems with the latest FreeTDS, large text fields, and MS SQL Server?
EDIT: I added correct handling of large text fields into hdbc-odbc. The patch is here:
https://github.com/vagifverdi/hdbc-odbc/commit/8134f715c18a0d60cc7b0329c7c2dbfee3e3e932
It is included in the latest hdbc-odbc on hackage.
