In Thrift, you could have composite columns of the form string:bytearray, integer:bytearray, and decimal:bytearray. Once defined, you could store values in an integer:bytearray column like so:
{empty}.somebytearray
{empty}.somebytearray
5.somebytearray
10.somebytearray
I could then query and get all the columns that were prefixed with {empty}.
It seems this cannot be done in CQL3, so we cannot port our code to CQL3 at this time. Is there a ticket for this, or will it ever be resolved?
thanks,
Dean
The empty column name isn't null.
A good example is the CQL3 row marker, which looks like this when exported via sstable2json:
//<--- row marker ----->
{"key": "6b657937","columns": [["","",1375804090248000], ["value","value7",1375804090248000]]}
It looks like the column name is empty, but it's actually a 3-byte composite encoding of a single empty component. So say we want to add a column with an empty name:
// column name: a single empty composite component encodes to 3 zero bytes
columnFamily.addColumn(ByteBuffer.wrap(new byte[3]), value, timestamp);
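For reference, those 3 bytes are what the CompositeType encoding of a single empty component looks like: a 2-byte big-endian length, the component bytes, then a one-byte end-of-component marker. A quick Python sketch of that encoding, purely for illustration:

import struct

# CompositeType encodes each component as:
#   2-byte big-endian length | component bytes | 1-byte end-of-component marker
def composite_component(value: bytes) -> bytes:
    return struct.pack(">H", len(value)) + value + b"\x00"

# A single empty component gives the same 3 zero bytes wrapped above as new byte[3]
assert composite_component(b"") == b"\x00\x00\x00"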
I am new to Power Query and I would like to learn a bit more about it. I am facing the following problem. My table looks like this (empty fields already removed):
What I'm trying to get is a new table where "Spalte2" holds my list of ISINs, and "Spalte8" (but also "Spalte9" and "Spalte10") holds my portfolio share (comma-separated).
EDIT: For clarification, I hope to get something like the result shown further below.
EDIT: I'll try to get the table in here as text; hope it works:
Spalte1                | Spalte2      | Spalte8           | Spalte9 | Spalte10
Bâloise Holding AG     | CH0012410517 | 1,04              | Null    | Null
Barry Callebaut AG     | CH0009002962 | 0,63              | Null    | Null
Galenica AG            | CH0360674466 | 0,58              | Null    | Null
Givaudan SA            | CH0010645932 | 1,24              | Null    | Null
HelloFresh SE          | DE000A161408 | 527.705,26        | 1,85    | Null
Kering S.A.            | FR0000121485 | 431.145,00        | 1,51    | Null
Standard Chartered PLC | GB0004082847 | 4,610 117.699,50  | Null    | 0,41
Unilever PLC           | GB00B10RZP78 | 42,305 315.241,76 | Null    | 1,11
What I'm trying to get is this:
Spalte2      | Spalte8
CH0012410517 | 1,04
CH0009002962 | 0,63
CH0360674466 | 0,58
CH0010645932 | 1,24
DE000A161408 | 1,85
FR0000121485 | 1,51
GB0004082847 | 0,41
GB00B10RZP78 | 1,11
Which approach can I use in PQ to match each ISIN with its portfolio share? Thanks a lot!
Thomas
Am I correct in understanding that you simply want to consolidate the information from the rightmost populated column of each row into one column, and disregard any other information between it and the first column?
If so, then this might be one possible approach.
Starting with a sample table called Table1 in Power Query:
I just add a new column and use if-then statements to select the rightmost populated column's information:
(In that custom column's M code, I check that each column is both not null and not blank, to be thorough.)
I get this result:
Then I select the Spalte2 and Custom columns and remove the other columns to get this:
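If it helps to see the logic outside of M, here is a rough Python/pandas equivalent of that "take the rightmost populated column" coalesce (column names borrowed from the sample above; this is only an illustration of the idea, not the Power Query solution itself):

import pandas as pd

# Illustrative sample shaped like the question's table (only a few rows/columns)
df = pd.DataFrame({
    "Spalte2":  ["CH0012410517", "DE000A161408", "GB0004082847"],
    "Spalte8":  ["1,04", "527.705,26", "4,610 117.699,50"],
    "Spalte9":  [None, "1,85", None],
    "Spalte10": [None, None, "0,41"],
})

share_cols = ["Spalte8", "Spalte9", "Spalte10"]

def rightmost_populated(row):
    # Walk from the rightmost column to the left and return the first non-empty value
    for col in reversed(share_cols):
        value = row[col]
        if pd.notna(value) and str(value).strip() != "":
            return value
    return None

df["Share"] = df[share_cols].apply(rightmost_populated, axis=1)
result = df[["Spalte2", "Share"]]
print(result)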
Context: I have a data flow that extracts data from a SQL DB. The data arrives as just one column containing a tab-separated string, so in order to manipulate the data properly, I've tried to separate it into individual columns with their corresponding data.
Firstly, to 'rebuild' the table properly, I used a 'Derived Column' activity, replacing tabs with semicolons instead (1):
dropLeft(regexReplace(regexReplace(regexReplace(descripcion,[\t],';'),[\n],';'),[\r],';'),1)
So, after that, I use the 'split()' function to get an array and build the columns (2):
split(descripcion, ';')
Problem: When I try to use the 'Flatten' activity (as here: https://learn.microsoft.com/en-us/azure/data-factory/data-flow-flatten), it just doesn't work: the data flow gives me just one column, or, if I add an additional column in the 'Flatten' activity, I just get another column with the same data as the first one.
Expected output:
column2 | column1                            | column3
2000017 | ENVASE CORONA CLARA 24/355 ML GRAB | PC13
2004297 | ENVASE V FAM GRAB 12/940 ML USADO  | PC15
Could you tell me what I'm doing wrong, guys? Thanks, by the way.
You can use the Derived Column activity itself; try as below.
After the first derived column, what you have can simply be split again using the Derived Column schema modifier, where firstc represents the source column equivalent to your column descripcion:
Column1: split(firstc, ';')[1]
Column2: split(firstc, ';')[2]
Column3: split(firstc, ';')[3]
Optionally, you can select only the columns you need to write to the SQL sink.
Relatively new to Blue Prism,
I have a collection that looks like this, with 100+ rows:
Results   | Answer
Timestamp | 8 Apr 2021
Name      | ABC
I'd like to manipulate the data such that if Results = 'Name', I get the Answer (i.e. ABC) and put it into a data item.
Is there any way to do this?
I understand I could hardcode it, i.e. get the value based on row index and column index, but my data is complex and may not always have the same row index.
Can you use the collection filter to get a collection output? The utility has a filter action where you can input a collection and then use
[FieldName] Like "some value"
This would result in a collection of every complete row that matches the filter.
Can you please provide the query to append data to an existing value in a column of type text? Something similar to this:
UPDATE cycling.upcoming_calendar SET events = events + ['Tour de France Stage 10'] WHERE year = 2015 AND month = 06;
The above query will update a list. My column datatype is text.
In my case, if the column "events" has the value "Test", I want to update it to the value "Test, Test1".
Appending data to a text column is not possible in Cassandra. The only possible options I can think of are:
Option 1: Change the column data type to a list.
Option 2: Fetch the data from the column in your application, append the new value to the existing value, and finally update the DB (a rough sketch of this follows below).
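A minimal sketch of option 2 with the DataStax Python driver, assuming the keyspace, table, and keys from the question's example but with events as a text column (illustrative only, and not safe against concurrent writers):

from cassandra.cluster import Cluster

# Assumed contact point; keyspace/table/keys borrowed from the question's example
cluster = Cluster(["127.0.0.1"])
session = cluster.connect("cycling")

# 1. Fetch the existing value
row = session.execute(
    "SELECT events FROM upcoming_calendar WHERE year = %s AND month = %s",
    (2015, 6),
).one()
current = row.events if row and row.events else ""

# 2. Append the new value in the application
updated = current + ", Test1" if current else "Test1"

# 3. Write the combined string back
session.execute(
    "UPDATE upcoming_calendar SET events = %s WHERE year = %s AND month = %s",
    (updated, 2015, 6),
)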
I'm accessing a Cassandra database and I only know the table names.
I want to discover the names & types of the columns.
This will give me the column names:
select column_name
from system.schema_columns
where columnfamily_name = 'customer'
allow filtering;
Is this reasonable?
Does anyone have suggestions about determining column types?
Depending on what driver you're using, you should be able to use the metadata API.
A couple of examples:
http://datastax.github.io/python-driver/api/cassandra/metadata.html#schemas
https://datastax.github.io/java-driver/features/metadata/#schema-metadata
The drivers query the system schema metadata to create these models.
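For example, with the Python driver (the keyspace name here is a placeholder, the 'customer' table comes from the question, and cql_type is the column-type attribute in current driver versions):

from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

# The driver keeps the schema in cluster.metadata; look the table up by keyspace name
table = cluster.metadata.keyspaces["my_keyspace"].tables["customer"]

for name, column in table.columns.items():
    # column.cql_type holds the CQL type name, e.g. 'text', 'bigint', 'set<text>'
    print(name, column.cql_type)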
You can infer the column types by looking at the classes used for the validator. The validator column is just a string.
The string has one of three formats (a small parsing sketch follows the list):
org.apache.cassandra.db.marshal.XXXType for simple column types, where XXX is the Java type for the column (e.g. for bigint columns, XXX is "Long", for varchar/text, XXX is "UTF8", etc.)
org.apache.cassandra.db.marshal.SetType(org.apache.cassandra.db.marshal.XXXType) for set columns, where the type in parenthesis is the type of each set element
org.apache.cassandra.db.marshal.MapType(org.apache.cassandra.db.marshal.XXXType,org.apache.cassandra.db.marshal.XXXType) for maps
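A small, illustrative Python helper for turning those validator strings into shorter names, based only on the three formats listed above (a sketch, not an exhaustive mapping):

MARSHAL_PREFIX = "org.apache.cassandra.db.marshal."

def short_type(validator: str) -> str:
    """Turn a validator class string into a short, readable type name."""
    validator = validator.strip()
    if validator.startswith(MARSHAL_PREFIX):
        validator = validator[len(MARSHAL_PREFIX):]
    if "(" in validator:
        # SetType(...) / MapType(..., ...): shorten each parameter as well
        outer, inner = validator.split("(", 1)
        params = [short_type(p) for p in inner.rstrip(")").split(",")]
        return outer + "<" + ", ".join(params) + ">"
    return validator

# Examples based on the formats described above
print(short_type("org.apache.cassandra.db.marshal.LongType"))  # LongType
print(short_type("org.apache.cassandra.db.marshal.SetType(org.apache.cassandra.db.marshal.UTF8Type)"))  # SetType<UTF8Type>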
Quite an old but still valid question. There is a class variable of your model that describes the columns (field name and column class):
class Tweet(cqldb.Model):
    """
    Object representing the tweet column family in Cassandra
    """
    __keyspace__ = 'my_ks'
    # model field definitions follow
    ...
    ...

print(Tweet._defined_columns)
# output
OrderedDict([('tweetid',
<cassandra.cqlengine.columns.Text at 0x7f4a4c9b66a0>),
('tweet_id',
<cassandra.cqlengine.columns.BigInt at 0x7f4a4c9b6828>),
('created_at',
<cassandra.cqlengine.columns.DateTime at 0x7f4a4c9b6748>),
('ttype',
<cassandra.cqlengine.columns.Text at 0x7f4a4c9b6198>),
('tweet',
<cassandra.cqlengine.columns.Text at 0x7f4a4c9b6390>),
('lang',
<cassandra.cqlengine.columns.Text at 0x7f4a4c9b3d68>)])
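From there it is easy to turn that OrderedDict into a plain name-to-type mapping, for example using the column objects' class names (just a convenience snippet building on the output above):

# Map each field name to its column class name, e.g. 'Text', 'BigInt', 'DateTime'
column_types = {name: col.__class__.__name__
                for name, col in Tweet._defined_columns.items()}
print(column_types)
# {'tweetid': 'Text', 'tweet_id': 'BigInt', 'created_at': 'DateTime', ...}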