Dynamic Query Item Used for Sorting - cognos

I'm using Cognos Framework Manager and I'm creating a Data Item for a dynamic sort. I'm creating the Data Item using a CASE WHEN; here's my sample code:
CASE #prompt('SortOrder', 'string')#
WHEN 'Date' THEN <Date Column>
WHEN 'ID' THEN <String Column>
END
I'm getting the error QE-DEF-0405 Incompatible data types in case statement. Although I can cast the date column into a string, wouldn't that make the sort go wrong for the 'Date' option? Should I cast the date column in a different way, cast the whole CASE, or am I barking up the wrong tree? In line with my question, is there a general rule for creating dynamic columns via CASE across multiple column data types?

A column in Framework Manager must have a data type, and only one data type.
So you need to cast your date column to a string that sorts correctly,
e.g. in 'yyyy-mm-dd' format.
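For example, a minimal sketch of the data item expression, assuming your data source renders cast(..., varchar(10)) in ISO 'yyyy-mm-dd' form (a fixed-width format whose lexical order matches date order); <Date Column> and <String Column> stand for your actual columns:
CASE #prompt('SortOrder', 'string')#
WHEN 'Date' THEN cast(<Date Column>, varchar(10))
WHEN 'ID' THEN <String Column>
END
If the cast does not come out in ISO form on your data source, build the string explicitly from extract(year, ...), extract(month, ...) and extract(day, ...) with zero-padding, so the lexical sort still matches the date order.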

You are using two different data formats, so in the prompt function use 'token' instead of 'string': #prompt('SortOrder', 'token')#

Related

Not able to change datatype of Additional Column in Copy Activity - Azure Data Factory

I am facing a very simple problem: I am not able to change the data type of an additional column in a Copy activity in an ADF pipeline from String to Datetime.
I am trying to change the source data type for the additional column in the mapping using JSON, but it still doesn't work with the PolyBase command.
When I run my pipeline it gives the same error.
Is it not possible to change the data type of an additional column? By default it takes String only.
Dynamic (additional) columns return strings.
Try to put the value (e.g. utcnow()) in the dynamic content of the query and cast it to the required target data type.
Otherwise you can use a Data Flow derived column:
https://learn.microsoft.com/en-us/azure/data-factory/data-flow-derived-column
Since your source is a query, you can choose to bring the current date in the source SQL query itself, in the desired format, rather than adding it as an additional column; a sketch follows below.
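For example, a minimal sketch of that source-query approach, assuming a SQL Server source (the table and column names are hypothetical):
-- Derive the load date in the source query itself, so it arrives
-- at the sink already typed as a date rather than a string.
SELECT s.*,
       CAST(GETDATE() AS date) AS LoadDate
FROM dbo.SourceTable AS s;
Because the column is produced by the query, it goes through the Copy activity mapping with a real date type, unlike an additional column.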
Try to use formatDateTime in the dynamic content and define the desired date format, as in the sketch below.
With the format 'yyyy-dd-MM' given here, the result is the current UTC timestamp rendered in that pattern.
Note: the output will be of string type only, as in the Copy activity we cannot cast the additional column to a date type.
We can either create the current date in the source SQL query, or use the expression below, so that the data loads into the sink in the expected format.
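A minimal sketch of the additional-column definition in the Copy activity source, as it appears in the pipeline JSON (the column name LoadDate is hypothetical):
"additionalColumns": [
    {
        "name": "LoadDate",
        "value": "@formatDateTime(utcnow(), 'yyyy-dd-MM')"
    }
]
formatDateTime(utcnow(), 'yyyy-dd-MM') renders the current UTC time with the year first, then the day, then the month; the result still lands in the sink as a string, as noted above.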

How to add Dynamic GroupBy Column in Data Flow Aggregate Activity in Azure Data Factory

I am using Data Flow (preview). My "Aggregate" activity requires a Group By column, which is not dynamic; hence, I can't group by that column. I just want to map the column by name.
For example:
These are the two schemas:
1) Columns: M Id, Date/Time, Data Type, Values
2) Columns: MID, Date, DataType, Units
Both actually have the same data types and structure. I want to group by DataType and take avg(Units).
But the name of the field is "Data Type" in one and "DataType" in the other. How do I map them together?
I have created a "Derived Column" transformation with this:
Column: DataType
Expression: case(startsWith(toString(byPosition(7)), 'D'), toString(byName('Data Type')), toString(byName('DataType')))
But it doesn't work. Any help is highly appreciated.
I just want to know how I map the column by name.
You can write a dynamic expression directly in the Group by field in the Aggregate transformation. Hover over the Group By field and select "Computed Column" to enter the Expression Builder.
Are you trying to determine whether to use the column called "Data Type" or the one called "DataType"? If so, just enter your conditional expression directly into the expression builder on the Aggregate group-by, as in the sketch below. Note that in your expression above you are using byPosition(), which takes a number representing the incoming columns counted left to right, starting at position 1. Is that what you intended?
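For example, a minimal sketch of such a group-by expression, assuming exactly one of the two columns exists in each incoming stream and that iifNull() picks the first non-null argument:
toString(iifNull(byName('Data Type'), byName('DataType')))
byName() resolves to null when the named column is absent from the stream, so the expression falls through to whichever column actually exists, and the Aggregate can group on it under a single name. The aggregate itself can be built the same way, e.g. avg(toFloat(iifNull(byName('Values'), byName('Units')))).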

Cassandra 2.2.11 add new map column from text column

Let's say I have a table with 2 columns:
primary key: id - type varchar
and non-primary-key: data - type text
The data column consists only of JSON values, for example:
{
"name":"John",
"age":30
}
I know that I can't alter this column to a map type, but maybe I can add a new map column with the values from the data column, or maybe you have some other idea?
What can I do about it? I want to get a map column in this table with the values from data.
You might want to make use of the CQL COPY command to export all your data to a CSV file.
Then alter your table and create a new column of type map.
Convert the exported data to another file containing UPDATE statements, where you only update the newly created column with values converted from JSON to a map. For the conversion, use a tool or language of your choice (be it bash, python, perl or whatever); a sketch follows below.
BTW be aware that with a map you specify the data type of the map's key and the data type of the map's value. So you will most probably be limited to strings if you want to stay generic, i.e. a map<text, text>. Consider whether this is appropriate for your use case.
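A minimal sketch of that pipeline, assuming keyspace ks and table t (both names are hypothetical). First in cqlsh:
COPY ks.t (id, data) TO 'export.csv';
ALTER TABLE ks.t ADD data_map map<text, text>;
Then a small Python script to turn each JSON blob into an UPDATE statement:
import csv
import json

with open('export.csv', newline='') as f:
    for row_id, data in csv.reader(f):
        entries = json.loads(data)
        # Render every key and value as text, since map<text, text> is
        # generic; double single quotes to escape them for CQL.
        pairs = ', '.join(
            "'{}': '{}'".format(k, str(v).replace("'", "''"))
            for k, v in entries.items()
        )
        print("UPDATE ks.t SET data_map = {{{}}} WHERE id = '{}';"
              .format(pairs, row_id))
Feeding the printed statements back in with cqlsh -f populates the new column.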

Generic Inquiry time format

In a Generic Inquiry, I'm trying to format the time part of a DateTime field in the results. I don't currently see any way to do this without parsing the date as a string, but I must be missing something. Using the Format() function, running the query tells me "The method or operation is not implemented". Using the Minute() function gets the minutes part, but using the Hour() function says "Unsupported formula operator Hour".
Add the CRCase table in your tables and don't join it with any table under the Relations tab; then, in the results grid, under the Schema field, use CRCase.CreatedDateTime. With this you will get the result in DateTime format.
Let me know if this is a wrong assumption about your question.

Auto infer schema from parquet/ selectively convert string to float

I have a parquet file with 400+ columns; when I read it, the default data type attached to a lot of columns is String (maybe due to the schema specified by someone else).
I was not able to find a parameter similar to
inferSchema=True  # available for spark.read.csv, but not for spark.read.parquet
I tried changing
mergeSchema=True  # but it doesn't improve the results
To manually cast the columns to float, I used
from pyspark.sql.functions import col
df_temp.select(*(col(c).cast("float").alias(c) for c in df_temp.columns))
This runs without error, but converts all the actual string column values to null. I can't wrap this in a try/except block, as it doesn't throw any error.
Is there a way where i can check whether the columns contains only 'integer/ float' values and selectively cast those columns to float?
Parquet columns are typed, so there is no such thing as schema inference when loading Parquet files.
Is there a way where i can check whether the columns contains only 'integer/ float' values and selectively cast those columns to float?
You can use the same logic as Spark: define a preferred type hierarchy and attempt to cast, until you get to the most selective type that parses all values in the column (a sketch follows the links below). See also:
How to force inferSchema for CSV to consider integers as dates (with "dateFormat" option)?
Spark data type guesser UDAF
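For the narrower "selectively cast string columns that hold only numbers" case, a minimal PySpark sketch (df_temp is the DataFrame from the question; the float-only hierarchy is an assumption, extend it to int/double/date as needed):
from pyspark.sql import functions as F

def cast_numeric_strings(df, target_type="float"):
    # For each string column, cast it and look for rows where the cast
    # produced null although the original value was non-null; such rows
    # mean the column holds non-numeric text, so leave it alone.
    for name, dtype in df.dtypes:
        if dtype != "string":
            continue
        casted = F.col(name).cast(target_type)
        has_bad_rows = df.where(
            F.col(name).isNotNull() & casted.isNull()
        ).limit(1).count() > 0
        if not has_bad_rows:
            df = df.withColumn(name, casted)
    return df

df_casted = cast_numeric_strings(df_temp)
This triggers one Spark job per string column, so with 400+ columns it may be worth deciding the casts on a sample first (e.g. df_temp.sample(0.01)) and then applying them to the full DataFrame.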
There's no easy way currently; there's an existing GitHub issue that can be referred to:
https://github.com/databricks/spark-csv/issues/264
Something like https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchema.scala
exists for Scala; the same could be created for PySpark.
