Check [Axis.Icon] for null values in Spotfire

I am learning to use Spotfire and am creating a graphical table with icons. I would like to represent null values with an icon instead of showing ---. Is it possible to do this?
I also tried to write a custom expression:
If([Axis.Icon] is null, 0)
for which I get an error saying "All parts of the expression have to be aggregated".
Can anybody help me fix this issue? Many thanks!

Values and expressions on an aggregated axis must be aggregated in their entirety to maintain consistency; otherwise the visualization could misrepresent the data. In cases like yours, you can usually either aggregate the entire expression (or each of its parts), or handle the logic in your table data itself.
Something like If(SUM([Axis.Icon]) is null, SUM(0), SUM([Axis.Icon]))
Many people prefer to replace NULL with 0 in their data instead. For this, create a calculated column or use a transformation that replaces the missing values in your data with 0, using a similar expression: If([columnName] is null, 0, [columnName]).
In both cases you may still see ---, which looks like null but is actually the missing value of a specific grouping you are using: there are no rows that conform to that grouping, so you cannot force a value into it.
For your specific case, we would need a sample data set.

Related

Missing values for nominal attribute in Weka

I have a data set and I am doing classification using the Weka NaiveBayes classifier. I have 14 attributes, some of which are nominal.
In only one of these attributes, I have some missing values. What I have done so far is leave them as missing values, and I know that Weka replaces those values automatically (a question was asked here about that).
I mean, the values for this attribute are empty in my feature file, and when I create the ARFF file, I see "?" between the two commas.
Now, I have two possibilities:
1) Let them be filled by Weka automatically.
2) Replace them by "NULL".
The problem is that in the first case the classifier works better. Now I am wondering whether it is acceptable to let Weka replace them, or whether I should use the second approach even though I get worse results.
In other words, when should we let Weka replace the missing values, and when not?
For context, the feature with missing values represents the WordNet supersense of the words; when it is empty, it means that the instance is, for example, a preposition or a WH question.
Thanks in advance,
Well, about missing values: Weka doesn't replace them by default; you have to use a filter (exactly as in the post you linked first in your question). Some classifiers can handle missing values; I think Naive Bayes can, simply by not counting them in the probability calculation. So you basically have three options:
1) Use the ReplaceMissingValues filter to replace missing values with the mode.
2) Don't use a filter and keep the dataset with missing values (in this case I recommend you have a look at how Naive Bayes works, to understand how your missing values will be treated and whether that is good for you).
3) Replace your missing values with your own label, such as "other values".
The key to the correct choice is probably in your last paragraph, which suggests that your missing values mean something. If that is so, I would use the third approach: your own label. On the other hand, if the missing values don't mean anything and are just the result of some fault in data collection, I would consider the first two approaches. Good luck.
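If the missing supersense really carries meaning (option 3), a minimal Python sketch of labelling it explicitly before the ARFF file is generated could look like this; the file and column names are hypothetical:

import pandas as pd

# hypothetical feature file in which the supersense column is empty for some rows
df = pd.read_csv("features.csv")

# treat a missing supersense as its own category instead of a missing value
df["supersense"] = df["supersense"].fillna("NO_SUPERSENSE")

df.to_csv("features_labelled.csv", index=False)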

cql binary protocol and named bound variables in prepared queries

Imagine I have a simple CQL table:
CREATE TABLE test (
k int PRIMARY KEY,
v1 text,
v2 int,
v3 float
)
There are many cases where one would want to make use of Cassandra's schema-less nature and set only some of the values, doing, for example:
INSERT into test (k, v1) VALUES (1, 'something');
When writing an application against such a CQL table in a Cassandra cluster, the need to do this using prepared statements immediately arises, for performance reasons.
This is handled in different ways by different drivers. The Java driver, for example, has introduced (with the help of a modification to the CQL binary protocol) the option of using named bound variables. Very practical: CASSANDRA-6033
What I am wondering is what is the correct way, from a binary protocol point of view, to provide values only for a subset of bound variables in a prepared query?
Values in fact are provided to a prepared query by building a values list as described in
4.1.4. QUERY
[...]
Values. In that case, a [short] <n> followed by <n> [bytes]
values are provided. Those value are used for bound variables in
the query.
Please note the definition of [bytes]
[bytes] A [int] n, followed by n bytes if n >= 0. If n < 0,
no byte should follow and the value represented is `null`.
From this description I get the following:
"Values" in QUERY offers no ways to provide a value for a specific column. It is just an ordered list of values. I guess the [short] must correspond to the exact number of bound variables in a prepared query?
All values, no matter what types they are, are represented as [bytes]. If that is true, any interpretation of the [bytes] value is left to the server (conversion to int, short, text,...)?
Assuming I got this all right, I wonder if a 'null' [bytes] value can be used to just 'skip' a bound variable and not assign a value for it.
I tried this and patched the cpp driver (which is what I am interested in). Queries get executed, but when I perform a SELECT from cqlsh I don't see the 'null' string representation for the empty fields, so I wonder whether this is a hack that for some reason just doesn't crash, or the intended way to do it.
I am sorry, but I really don't think I can just download the Java driver and see how named bound variables are implemented! :(
---------- EDIT - SOLVED ----------
My assumptions were right, and support for skipping a field in a prepared query has now been added to the cpp driver (see here) by using a null [bytes] value.
What I am wondering is what is the correct way, from a binary protocol point of view, to provide values only for a subset of bound variables in a prepared query?
You need to prepare a query that only inserts/updates the subset of columns that you're interested in.
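For illustration, a minimal sketch with the DataStax Python driver (the contact point and keyspace are hypothetical); the prepared statement simply names only the columns you intend to set:

from cassandra.cluster import Cluster

# connect to a hypothetical single-node cluster and keyspace
cluster = Cluster(['127.0.0.1'])
session = cluster.connect('my_keyspace')

# prepare a statement covering only the subset of columns being written
insert_k_v1 = session.prepare("INSERT INTO test (k, v1) VALUES (?, ?)")
session.execute(insert_k_v1, (1, 'something'))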
"Values" in QUERY offers no ways to provide a value for a specific column. It is just an ordered list of values. I guess the [short] must correspond to the exact number of bound variables in a prepared query?
That's correct. The ordering is determined by the column metadata that Cassandra returns when you prepare a query.
All values, no matter what types they are, are represented as [bytes]. If that is true, any interpretation of the [bytes] value is left to the server (conversion to int, short, text,...)?
That's also correct. The driver will use the returned column metadata to determine how to convert native values (strings, UUIDs, ints, etc.) to a binary (bytes) format. Cassandra does the inverse of this operation server-side.
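For concreteness, a small Python sketch of the [short]/[bytes] framing quoted from the spec above, assuming network byte order as the protocol uses; this is purely illustrative, not any driver's actual code:

import struct

def encode_bytes(value):
    # [bytes]: a signed [int] length followed by the raw bytes;
    # a negative length (-1 here) means null
    if value is None:
        return struct.pack('>i', -1)
    return struct.pack('>i', len(value)) + value

def encode_values(values):
    # a [short] count <n> followed by <n> [bytes] values, in bound-variable order
    return struct.pack('>H', len(values)) + b''.join(encode_bytes(v) for v in values)

# e.g. bind k = 1 (a 4-byte int) and pass null for v1
payload = encode_values([struct.pack('>i', 1), None])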
Assuming I got this all right, I wonder if a 'null' [bytes] value can be used to just 'skip' a bound variable and not assign a value for it.
A null column insertion is interpreted as a deletion.
An implementation of what I was trying to achieve has been done (see here), based on the principle I described.

Updating TimeUUID columns in Cassandra

I'm trying to store some time series data on the following column family:
create column family t_data with comparator=TimeUUIDType and default_validation_class=UTF8Type and key_validation_class=UTF8Type;
I'm successfully inserting data this way:
data={datetime.datetime(2013, 3, 4, 17, 8, 57, 919671):'VALUE'}
key='row_id'
col_fam.insert(key,data)
As you can see, when a datetime object is used as the column name, pycassa correctly converts it to a TimeUUID object.
[default#keyspace] get t_data[row_id];
=> (column=f36ad7be-84ed-11e2-af42-ef3ff4aa7c40, value=VALUE, timestamp=1362423749228331)
Sometimes, the application needs to update some data. The problem is that when I try to update that column, passing the same datetime object, pycassa creates a different UUID object (the time part is the same), so instead of updating the column it creates another one.
[default#keyspace] get t_data[row_id];
=> (column=f36ad7be-84ed-11e2-af42-ef3ff4aa7c40, value=VALUE, timestamp=1362423749228331)
=> (column=**f36ad7be**-84ed-11e2-b2fa-a6d3e28fea13, value=VALUE, timestamp=1362424025433209)
The question is: how can I update TimeUUID-based columns with pycassa, passing the datetime object? Or, if this is not the correct way to do it, what is the recommended way?
Unless you do a read-modify-write you can't. UUIDs are by their nature unique. They exist to solve the problem of how to get unique IDs that sort in chronological order but at the same time avoid collisions for things that happen at exactly the same time.
So to update that column you need to first read it, so you can find its column key, change its value and write it back again.
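A minimal pycassa sketch of that read-modify-write, reusing the names from the question (picking the right column key is simplified here):

import pycassa

pool = pycassa.ConnectionPool('keyspace')       # hypothetical keyspace
col_fam = pycassa.ColumnFamily(pool, 't_data')

# read the row to recover the exact TimeUUID column keys already stored
columns = col_fam.get('row_id')
col_key = list(columns.keys())[0]               # choose the column to change

# write the new value back under the same TimeUUID key
col_fam.insert('row_id', {col_key: 'NEW_VALUE'})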
It's not a particularly elegant solution. You should really avoid read-modify-write in Cassandra. Perhaps TimeUUID isn't the right type for your column keys? Or perhaps there's another way you can design your application to avoid having to go back and change things.
Without knowing what your query patterns look like I can't say exactly what you should do instead, but here are some suggestions that hopefully are relevant:
Don't update values, just write new values. If something was true at time T, it will always have been true for time T, even if it changes at time T + 1. When things change you write a new value with the time of the change and let the old values be. When you read the timeline you resolve these conflicts by picking the most recent value -- and since the values are sorted in chronological order, the most recent value will always be the last one. This is very similar to how Cassandra does things internally, and it's a very powerful pattern.
Don't worry that this will use up more disk space or require some extra CPU when reading the time series; it will most likely be tiny in comparison with the read-modify-write complexity that you would otherwise have to implement.
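A sketch of this append-only pattern with pycassa; because the comparator is TimeUUIDType, the columns stay sorted chronologically, so the most recent value is simply the last (or first reversed) column:

import datetime
import pycassa

pool = pycassa.ConnectionPool('keyspace')       # hypothetical keyspace
col_fam = pycassa.ColumnFamily(pool, 't_data')

# write a new column for every change; never touch the old ones
col_fam.insert('row_id', {datetime.datetime.utcnow(): 'NEW_VALUE'})

# read the most recent value: reverse the column order and fetch a single column
latest = col_fam.get('row_id', column_reversed=True, column_count=1)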
There might be other ways to solve your problem, and if you give us some more details maybe we can come up with something that fits better.

Check if values of two string-type items are equal in a Zabbix trigger

I am monitoring an application using Zabbix and have defined a custom item which returns a string value. Since my item's values are actually checksums, they will only contain the characters [0-9a-f]. Two mirror copies of my application are running on two servers for the sake of redundancy. I would like to create a trigger which would take the item values from both machines and fire if they are not the same.
For now, let's set aside the moment when the values change (it's not an atomic operation, so the system may briefly see an inconsistent state, which is not a real error), since I could work around that by looking at several previous values.
The crux is: how do I write a Zabbix trigger expression that compares the string values of two items for equality (actually the same item on two mirrored hosts)?
Both according to the fine manual and as I have confirmed in practice, the standard operators = and # only work on numeric values, so I can't just write the natural {host1:myitem[param].last(0)} # {host2:myitem[param].last(0)}. Functions such as change() or diff() can only compare values of the same item at different points in time. Functions such as regexp() can only compare the item's value with a constant string/regular expression, not with another item's value. This is very limiting.
I could move the comparison logic into the script which my custom item executes, but it's a bit messy and not elegant, so if at all possible, I would prefer to have this logic inside my Zabbix trigger.
Perhaps despite the limitations listed above, someone can come up with a workaround?
Workaround:
{host1:myitem[param].change(0)} # {host2:myitem[param].change(0)}
When only one of the servers sees a modification since the previously received value, an event is triggered.
From the Zabbix Manual,
change (float, int, str, text, log)
Returns difference between last and previous values.
For strings:
0 - values are equal
1 - values differ
I believe, and am struggling with this EXACT situation myself, that the correct way to do this is via calculated items.
You want to create a new ITEM, not a trigger (yet!), that performs a calculated comparison on multiple item values (string difference, numbers within a range, etc.).
Once you have that item, have the calculation give you a value you can trigger off of. You can use ANY trigger functions in your calculation, along with arithmetic operations.
Now to the issue (for which I've submitted a feature request, because this is extremely limiting): most trigger expressions evaluate to a number or a 0/1 boolean.
I think I have a solution for my own problem, which is that I am tracking a version number from a webpage, e.g. v2.0.1. I believe I can use string manipulation and regex in calculated items to convert my string values into multiple numeric values, since those would be a breeze to compare.
But again, this is convoluted and painful.
If you want my advice, have yourself or a dev look at the code for trigger expressions and see if you can submit a patch adding a trigger function for simple string comparison (difference, length, possible conversion to numeric values using binary and/or hex combinations, etc.).
I'm trying to work on a patch myself, but I don't have the time, as I have so much monitoring to implement. While Zabbix is powerful, it has several huge flaws; I still believe it's the best monitoring system out there.
Simple answer: Create a UserParameter until someone writes a patch.
You could change your items to return numbers instead of strings. Because your items are checksums that use only the characters [0-9a-f], they are numbers written in hexadecimal, so you would need to convert the checksum to a decimal number.
Because the checksum is a big number, you would need to limit the hexadecimal number to 8 characters for the Numeric (unsigned) type before conversion. Or, if you wanted higher precision, you could use a float (but that would be more work):
Numeric (unsigned) - 64bit unsigned integer
Numeric (float) - floating point number
Negative values can be stored.
Allowed range (for MySQL): -999999999999.9999 to 999999999999.9999 (double(16,4)).
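For illustration, a minimal Python sketch of that conversion, which could live in whatever script or UserParameter produces the item value (the checksum string is hypothetical):

checksum = "f36ad7be84ed11e2af42ef3ff4aa7c40"   # hypothetical checksum item value

# keep only the first 8 hex digits so the result fits the Numeric (unsigned) type
print(int(checksum[:8], 16))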
I wish Zabbix had a .hashedUnsigned() function that would compute the hash of a string and return it as a number. Such a function should be easy to write.

Core Data - Optional attributes and performance

Per the Core Data Programming Guide:
You can specify that an attribute is optional—that is, it is not required to have a value. In general, however, you are discouraged from doing so—especially for numeric values (typically you can get better results using a mandatory attribute with a default value—in the model—of 0). The reason for this is that SQL has special comparison behavior for NULL that is unlike Objective-C's nil. NULL in a database is not the same as 0, and searches for 0 will not match columns with NULL.
I have always made numeric values non-optional, but have not for dates and strings. It is convenient in my code to base logic on dates and/or strings being nil.
Based on the above recommendations, I am considering making everything in my database non-optional. For dates I could set the model default to a value of 0 and for strings a model default of nothing (""). Then, in my code I could test dates for [date timeIntervalSince1970] != 0 and strings for string.length != 0.
The question is, for a relatively small database, does this really matter from a Core Data performance standpoint? And what is the tradeoff if the attribute in question will never be directly queried via a predicate?
I have not seen any performance problems on small to medium sized data sets. I suspect that this is something you would deal with in the performance stage of your application.
Personally, I use the same logic of non-numeric attributes being optional when it makes sense, as it does indeed make the code easier, which in turn gives me more time to optimize later.
