I have generated a Hive extract. For instance, it has the below columns:
fields --> a1, a2, a3, b, c, d, e1, e2, f1, f2
I need to combine the a1, a2, a3 fields into one field, 'a'.
Once they are combined, I have to take each record and produce vector elements for some fields when it is migrated to the mainframe. Since vector fields are not applicable in Hive, we used to create the source table with separate columns for the number of vector occurrences, like e1, e2, f1, f2.
For example, this is the format which I need:
record
ebcdic string e;
ebcdic string f;
end [2]
Now what I need to do is write a Hive query to transform the normal file layout in Hive into the above format. Since I am not familiar with this, can anyone suggest some logic to solve it?
Thanks in advance.
I have a huge data file and one of the columns is text and has a large data set in that column.
I tried to create the column with the text data type, but it is not supported.
How do I bring text data over to Databricks?
Please guide.
Here's a reference: Databricks data types
For CHAR, VARCHAR, NVARCHAR, TEXT and, in general, character strings of any size, just use STRING.
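For instance, here is a minimal PySpark sketch of that; the table name, column names and path below are made up for illustration. A StringType column simply becomes a Databricks STRING column, regardless of how long the text is:
from pyspark.sql.types import StructType, StructField, StringType, LongType

# Hypothetical schema: the large free-text column is just declared as a string.
schema = StructType([
    StructField("id", LongType(), nullable=False),
    StructField("body_text", StringType(), nullable=True),
])

df = spark.read.schema(schema).csv("/mnt/raw/documents.csv", header=True)  # assumed path
df.write.saveAsTable("documents_raw")  # StringType is stored as Databricks STRING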
The Parquet documentation explicitly mentions that the design supports splitting the metadata and data into different files, including the possibility that different column groups can be stored in different files.
However, I could not find any instructions on how to achieve that. In my use case I would like to store the metadata in one file, store the data for columns 1-100 in one file, and columns 101-200 in a second file.
Any idea how to achieve this?
If you are using PySpark, it's as easy as this:
df = spark.createDataFrame(...)
df.write.parquet('file_name.parquet')
and it will create a folder called file_name.parquet in the default location in HDFS. You can just create two dataframes, one with columns 1-100 and the other with columns 101-200, and save them separately. It will automatically save the metadata, if by that you mean the dataframe schema.
You can select a range of columns like this:
df_first_hundred = df.select(df.columns[:100])
df_second_hundred = df.select(df.columns[100:])
Save them as separate files:
df_first_hundred.write.parquet('df_first_hundred')
df_second_hundred.write.parquet('df_second_hundred')
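To confirm that the schema travels with each folder, you can read the two datasets back and inspect them, for example:
# The schema stored in each parquet folder's footers is picked up automatically on read.
df_first = spark.read.parquet('df_first_hundred')
df_second = spark.read.parquet('df_second_hundred')
df_first.printSchema()
df_second.printSchema()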
Let's say I have a table with 2 columns:
primary key: id - type varchar
and non-primary-key: data - type text
The data column consists only of JSON values, for example:
{
"name":"John",
"age":30
}
I know that I cannot alter this column to a map type, but maybe I can add a new map column with the values from the data column, or maybe you have some other idea?
What can I do about it? I want to get a map column in this table with the values from data.
You might want to make use of the CQL COPY command to export all your data to a CSV file.
Then alter your table and create a new column of type map.
Convert the exported data to another file containing UPDATE statements where you only update the newly created column with values converted from JSON to a map. For conversion use a tool or language of your choice (be it bash, python, perl or whatever).
BTW, be aware that with map you specify the data type of the map's key and the data type of the map's value. So you will most probably be limited to using strings only if you want to stay generic, i.e. a map<text, text>. Consider whether this is appropriate for your use case.
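As a rough illustration of the conversion step described above, here is a small Python sketch that assumes the generic map<text, text>; the file names, the keyspace/table name (my_keyspace.my_table) and the data_map column name are made-up placeholders, not taken from the question:
import csv
import json

# Assumed prior steps in cqlsh:
#   COPY my_keyspace.my_table (id, data) TO 'export.csv';
#   ALTER TABLE my_keyspace.my_table ADD data_map map<text, text>;
with open('export.csv', newline='') as src, open('updates.cql', 'w') as out:
    for row_id, data in csv.reader(src):
        doc = json.loads(data)
        # Build a CQL map literal; every key and value is treated as text.
        entries = ', '.join("'{}': '{}'".format(k, str(v).replace("'", "''"))
                            for k, v in doc.items())
        out.write("UPDATE my_keyspace.my_table SET data_map = {{{}}} "
                  "WHERE id = '{}';\n".format(entries, row_id))
The resulting updates.cql file can then be run with cqlsh -f updates.cql.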
I have a parquet file with 400+ columns; when I read it, the default datatype attached to a lot of the columns is String (maybe due to the schema specified by someone else).
I was not able to find a parameter similar to
inferSchema=True  # for spark.read.parquet; this option exists for spark.read.csv
I tried changing
mergeSchema=True #but it doesn't improve the results
To manually cast columns as float, I used
df_temp.select(*(col(c).cast("float").alias(c) for c in df_temp.columns))
this runs without error, but converts all the actual string column values to null. I can't wrap this in a try/catch block as it's not throwing any error.
Is there a way I can check whether a column contains only integer/float values and selectively cast those columns to float?
Parquet columns are typed, so there is no such thing as schema inference when loading Parquet files.
Is there a way I can check whether a column contains only integer/float values and selectively cast those columns to float?
You can use the same logic as Spark does: define a preferred type hierarchy and attempt to cast until you reach the most selective type that parses every value in the column (a rough sketch of this follows the links below).
How to force inferSchema for CSV to consider integers as dates (with "dateFormat" option)?
Spark data type guesser UDAF
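As a rough PySpark sketch of that idea, reusing the df_temp name from the question (note that this scans the data once per string column, which is not cheap with 400+ columns):
from pyspark.sql import functions as F

# A string column is considered numeric-like if casting it to float introduces
# no new nulls, i.e. every non-null value parses as a number.
def castable_to_float(df, column):
    casted = F.col(column).cast("float")
    return df.filter(F.col(column).isNotNull() & casted.isNull()).limit(1).count() == 0

numeric_like = [c for c, t in df_temp.dtypes
                if t == "string" and castable_to_float(df_temp, c)]

df_casted = df_temp.select(
    *[F.col(c).cast("float").alias(c) if c in numeric_like else F.col(c)
      for c in df_temp.columns])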
There's no easy way currently. There's an existing GitHub issue which can be referred to:
https://github.com/databricks/spark-csv/issues/264
Something like https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchema.scala
exists for Scala; this could be created for PySpark.
I am using this as a resource to get me started - http://www.pantz.org/software/sqlite/sqlite_commands_and_general_usage.html
Currently I am working on creating an AIR program making use of the built-in SQLite database. I could be considered a complete noob at making SQL queries.
table column types
I have a rather large Excel file (14K rows) that I have exported to a CSV file. It has 65 columns of varying data types (mostly ints, floats and short strings, MAYBE a few bools). I have no idea about the proper way of importing so as to preserve the column structure, nor do I know the best data types to choose per DB column. I could use some input on this.
table creation utils
Is there a util that can read an XLS file and, based on the column headers, generate a quick query statement to ease the pain of making the query manually? I saw this post, but it seems geared towards a pre-existing CSV file and makes use of Python (something I am also a noob at).
Thank you in advance for your time.
J
SQLite3's column types basically boil down to:
TEXT
NUMERIC (REAL, FLOAT)
INTEGER (the various lengths of integer; but INT will normally do)
BLOB (binary objects)
Generally in a CSV file you will encounter strings (TEXT), decimal numbers (FLOAT), and integers (INT). If performance isn't critical, those are pretty much the only three column types you need. (CHAR(80) is smaller on disk than TEXT but for a few thousand rows it's not so much of an issue.)
As far as putting data into the columns is concerned, SQLite3 uses type coercion to convert the input data type to the column type wherever the conversion makes sense. So all you have to do is specify the correct column type, and SQLite will take care of storing it in the correct way.
For example, the number -1230.00, the string "-1230.00", and the string "-1.23e3" will all coerce to the number -1230 when stored in a FLOAT column.
Note that if SQLite3 can't apply a meaningful type conversion, it will just store the original data without attempting to convert it at all. SQLite3 is quite happy to insert "Hello World!" into a FLOAT column. This is usually a Bad Thing.
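A quick sketch of that behaviour using Python's standard sqlite3 module (the table and column names are made up):
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE t (label TEXT, amount FLOAT, qty INT)')

# The string '-1230.00' looks like a number, so it is coerced to -1230.0 in the
# FLOAT column; 'Hello World!' cannot be converted, so it is stored as text anyway.
con.execute('INSERT INTO t VALUES (?, ?, ?)', ('ok', '-1230.00', '7'))
con.execute('INSERT INTO t VALUES (?, ?, ?)', ('oops', 'Hello World!', '7'))

for row in con.execute('SELECT label, amount, typeof(amount) FROM t'):
    print(row)
# ('ok', -1230.0, 'real')
# ('oops', 'Hello World!', 'text')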
See the SQLite3 documentation on column types and conversion for gems such as:
Type Affinity
In order to maximize compatibility between SQLite and other database engines, SQLite supports the concept of "type affinity" on columns. The type affinity of a column is the recommended type for data stored in that column. The important idea here is that the type is recommended, not required. Any column can still store any type of data. It is just that some columns, given the choice, will prefer to use one storage class over another. The preferred storage class for a column is called its "affinity".