Azure Cognitive Search index Int64 does not accept string value - azure

I'm using a Python Azure Function to copy data from a SQL Server database to an Azure Cognitive Search index. The problem I'm seeing is that some nvarchar fields contain numeric data that I'm trying to put into an Edm.Int64 field in the index. The documentation states that this should work:
https://learn.microsoft.com/en-us/rest/api/searchservice/data-type-map-for-indexers-in-azure-search#bkmk_sql_search
However, I get an error: "Cannot convert a value to target type 'Edm.Int64' because of conflict between input format string/number and parameter 'IEEE754Compatible' false/true".
It works when copying strings containing numbers into an Edm.Int32 index field.
Has anyone else encountered/solved this issue?
Thanks!

You're getting the error because you're trying to convert an nvarchar value into an Edm.Int64 index field, and that conversion is not supported.
As per https://learn.microsoft.com/rest/api/searchservice/data-type-map-for-indexers-in-azure-search#bkmk_sql_search, only the int, smallint and tinyint types can be converted into Edm.Int32, and only bigint into Edm.Int64.
In the conversion table you'll find that char, nchar, varchar and nvarchar can only be converted to Edm.String or Collection(Edm.String).
You can make your index field an Edm.String type and then, once the content has been indexed, translate the string to an int in your client application code.
I hope this helps.
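A minimal sketch of that client-side conversion (the field name accountNumber and the result shape are hypothetical, not from the original question):

```python
def parse_int_field(doc, field):
    # Convert an Edm.String field that holds numeric text back
    # to an int on the client side; return None for missing values.
    value = doc.get(field)
    return int(value) if value is not None else None

# e.g. documents returned by a search query
results = [{"id": "1", "accountNumber": "1234567890123"}]
numbers = [parse_int_field(d, "accountNumber") for d in results]
print(numbers)  # [1234567890123]
```

This keeps the index schema within the supported nvarchar-to-Edm.String mapping while still letting the application work with integers.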

Related

How to copy Numeric Array from parquet to postgres using Azure data factory

We are trying to copy a parquet file from blob storage to a Postgres table. The problem is that my source parquet has some columns with number arrays, which ADF complains are not supported; if I change them to a string datatype, Postgres says it is expecting a number array.
Is there some solution or workaround to tackle this?
The workaround for the problem would be to change the type of those columns from an array type to string in your Postgres table. This can be done using the following code:
ALTER TABLE <table_name> ALTER COLUMN <column_name> TYPE text;
I have taken a sample table player1 consisting of 2 array columns: position (integer array) and role (text array). I changed the type of these columns as follows:
ALTER TABLE player1 ALTER COLUMN position TYPE varchar(40);
ALTER TABLE player1 ALTER COLUMN role TYPE varchar(40);
You can now complete the copy activity in ADF without getting any errors.
If there are any existing records, the array type values will be converted to string type, which also lets the copy activity complete without errors. The following screenshots show an example of this case.
Initial table data (array type columns): https://i.stack.imgur.com/O6ErV.png
Convert to String type: https://i.stack.imgur.com/Xy69B.png
After using ADF copy activity: https://i.stack.imgur.com/U8pFg.png
NOTE:
Considering you have changed the array column to string type in the source file: if you can make changes such that the list of values is enclosed within {} rather than [], then you can convert the column type back to an array type using an ALTER query.
If the list of elements is enclosed within [] and you try to convert the columns back to an array type in your table, it throws the following error:
ERROR: malformed array literal: "[1,1,0]"
DETAIL: Missing "]" after array dimensions.
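If rewriting the source file isn't convenient, the bracket fix can be done in a small preprocessing step. A minimal sketch, assuming the values are flat lists like "[1,1,0]" (not nested arrays):

```python
def to_pg_array_literal(value: str) -> str:
    # Rewrite a JSON-style list string "[1,1,0]" as a Postgres
    # array literal "{1,1,0}" so the column can be ALTERed back
    # to an array type without a "malformed array literal" error.
    value = value.strip()
    if value.startswith("[") and value.endswith("]"):
        return "{" + value[1:-1] + "}"
    return value

print(to_pg_array_literal("[1,1,0]"))  # {1,1,0}
```

Values already in {} form pass through unchanged, so the step is safe to run on mixed data.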

Not able to change datatype of Additional Column in Copy Activity - Azure Data Factory

I am facing a very simple problem: I am not able to change the datatype of an additional column in a Copy activity in an ADF pipeline from String to Datetime.
I am trying to change the source datatype for the additional column in the mapping using JSON, but it still doesn't work with the PolyBase command.
When I run my pipeline, it gives the same error.
Is it not possible to change the datatype of an additional column? By default it takes string only.
Dynamic columns return strings.
Try putting the value (e.g. utcnow()) in the dynamic content of the query and casting it there to the required target datatype.
Otherwise you can use a derived column in a data flow:
https://learn.microsoft.com/en-us/azure/data-factory/data-flow-derived-column
Since your source is a query, you can bring the current date in the source SQL query itself in the desired format, rather than adding it as an additional column.
Thanks
Try using formatDateTime and define the desired date format, e.g. @formatDateTime(utcnow(), 'yyyy-dd-MM').
Since the format given here is 'yyyy-dd-MM', the result will be a string in year-day-month order.
Note: the output will still be of string format, as in the Copy activity we cannot cast the additional column to a date type.
We could either create the current date in the source SQL query or use the expression above, so that the data loads into the sink in the expected format.
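For reference, a rough Python equivalent of the ADF expression formatDateTime(utcnow(), 'yyyy-dd-MM') — the .NET-style format specifiers map to strftime codes (this mapping is an illustration, not part of the original answer):

```python
from datetime import datetime, timezone

# 'yyyy-dd-MM' in ADF/.NET corresponds to '%Y-%d-%m' in strftime:
# four-digit year, then day, then month.
now = datetime.now(timezone.utc)
formatted = now.strftime("%Y-%d-%m")
print(formatted)  # e.g. a 10-character year-day-month string
```

Either way the value is a string; any cast to a real date type has to happen in the sink or in the source query.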

How to compare money type data in Azure Search service?

I have an Azure Search service with some money type data named 'price', and I hit a problem when I try to sort or filter.
The test query expression is "&search=*&$orderby=price desc".
In the returned results, the money type has been automatically converted to string type, so the sort compares strings.
I am afraid that changing the money type to a double type will introduce calculation mistakes. Does anyone know how to fix this without changing the data type?
It looks like the data type of the price field in your index schema is Edm.String; that's why the values are sorted as strings. You need to store the price as a number. Currently Azure Search doesn't support a money or decimal data type, so you need to use one of the numeric types such as Edm.Double, Edm.Int32 or Edm.Int64.
You can add a suggestion for money / decimal data type on Azure Search UserVoice.
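If you're worried about floating-point error with Edm.Double, one common workaround (a sketch, not part of the original answer) is to store prices as integer cents in an Edm.Int64 field and convert at the edges:

```python
from decimal import Decimal

def money_to_cents(price: str) -> int:
    # Exact conversion via Decimal avoids binary floating-point
    # rounding; "12.34" -> 1234.
    return int(Decimal(price) * 100)

def cents_to_money(cents: int) -> str:
    # 1234 -> "12.34"
    return format(Decimal(cents) / 100, ".2f")

print(money_to_cents("19.99"))  # 1999
print(cents_to_money(1999))     # 19.99
```

Integer cents sort and filter correctly in the index, and no precision is lost in either direction.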

Stream Analytics complains about type of data sent to Power BI

I have a query where I explicitly convert every one of the columns to either float, datetime or bigint in Stream Analytics. When I send the output of this query to Power BI, I get errors in the operations log saying:
Data type [System.Object] is not supported by Power BI, convert to string type.
What can the problem be?
We improved our diagnostic message recently. If you restart your job and the problem reproduces, it will tell you which column has a null value; that's why it is treated as the System.Object type. ASA will convert it into a null string and send it to Power BI. Please also make sure that you don't have an input field that mismatches the one you specify in the query, and that there are no null values in your input data. What query are you using?
There are 2 options:
In your SELECT clause you can handle the null with a CASE expression, e.g. SELECT CASE WHEN Name IS NULL THEN 'Unknown' ELSE Name END AS Name.
You can filter out such rows in the WHERE clause, e.g. WHERE Name IS NOT NULL.
Hope this helps!
Ziv.

Dynamic Query Item Used for Sorting

I'm using Cognos Framework Manager and I'm creating a Data Item for a dynamic sort. I'm creating the Data Item using a CASE WHEN, here's my sample code:
CASE #prompt('SortOrder', 'string')#
WHEN 'Date' THEN <Date Column>
WHEN 'ID' THEN <String Column>
END
I'm getting the error QE-DEF-0405 Incompatible data types in case statement. Although I can cast the date column into a string, wouldn't that make the sort go wrong for the 'Date' option? Should I cast the date column in a different way, cast the whole CASE, or am I barking up the wrong tree? In line with my question, should there be a general rule when creating dynamic columns via CASE with multiple column data types?
A column in Framework Manager must have a datatype. Only one datatype.
So you need to cast your date column to a correctly sortable string,
e.g. the 'yyyy-mm-dd' format.
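The reason 'yyyy-mm-dd' works is that ISO-formatted date strings sort lexicographically in the same order as the dates themselves. A quick Python illustration of that property:

```python
from datetime import date

dates = [date(2021, 3, 1), date(2020, 12, 31), date(2021, 1, 15)]
iso = [d.isoformat() for d in dates]  # 'yyyy-mm-dd' strings

# Lexicographic sort of the strings matches chronological sort
# of the dates, so the dynamic sort stays correct.
assert sorted(iso) == [d.isoformat() for d in sorted(dates)]
print(sorted(iso))  # ['2020-12-31', '2021-01-15', '2021-03-01']
```

A format like 'dd-mm-yyyy' would not have this property, which is why the exact cast format matters.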
You are using two different data types in the CASE branches, so in the prompt function use token instead of string: #prompt('SortOrder', 'token')#
