Sequence number equivalent in Sybase ASE (SAP ASE)

I have an existing Sybase ASE table which uses an IDENTITY column as its primary key. Now I need to recreate this table, but I want the PK to start from the next value of the IDENTITY PK in the prod environment. E.g. if the current PK = 231, then after re-creating the table I want it to start from 232 onwards, or any other INTEGER value > 231.
In Oracle this is easy: you configure a sequence and give it a START WITH value. But Sybase ASE does not have sequences, so I tried the newid() function, but it returns binary(16) values whereas I want integer values.
Can anyone suggest something?

I am planning to use something like the statement below, and I think it will resolve my problem. Let me know if anyone has a better solution.
select abs(hextoint(newid()))
Any thoughts on this solution? Can it ever generate a number it has already generated?

select next_identity('tablename') will return the identity value that the next insert into a table with an identity column will receive, so you know which ID will be allocated next.
Selecting @@identity immediately after an insert will return the ID that was just given to the inserted row.
However, you need to be careful: identity columns are not the same as sequences and should not be relied upon if you want a sequence with no gaps, because you will get a gap (albeit sometimes a small one) if the database crashes or is shut down with nowait. For that, generating IDs with a number-fountain table or an insert trigger is a better option. Using identity_insert is really only for when you want to bulk-load a whole table - you should not be setting it on every insert, or you will defeat the whole purpose of an identity column, which is fast generation of new key values.
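A small hedged sketch of how these pieces might fit together when recreating the table (table and column names are illustrative; verify the identity_insert behaviour on your ASE version before relying on it):

select next_identity('old_table')        -- e.g. returns 232 on the prod table

-- after recreating the table, burn one explicit identity value above the old max
set identity_insert new_table on
insert into new_table (id, name) values (232, 'seed row')
set identity_insert new_table off
delete from new_table where name = 'seed row'

-- subsequent inserts should then be numbered after the seeded value;
-- check with select next_identity('new_table') before going live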

Related

Why does Cassandra not allow UDFs in UPDATE statements?

I am new to Cassandra. I created a table and inserted some data into it; now I want to select data from it, and in the output I want some calculated columns.
I created a user-defined function concat, which concatenates 2 strings and returns the result. I noticed that this function shows data correctly when I use it in a SELECT statement, but it does not work when I use it in an UPDATE statement.
That is, this works:
select concat(prov,city), year,mnth,acno,amnt from demodb.budgets;
but this does not:
update demodb.budgets set extra=concat(prov,city) where prov='ON';
In addition, the UPDATE also does not work if I simply assign one column's value to another column of the same type (without any calculation), as below:
update demodb.budgets set extra=city where prov='ON';
Even a simple arithmetic calculation doesn't work in an UPDATE statement; that is, this too doesn't work:
update demodb.budgets set amnt = amnt + 20 where prov='ON';
Here amnt is a simple double column.
(When I saw this, all I could do was pull my hair out and say I can't work with Cassandra; I just don't want it if it cannot do simple arithmetic.)
Can someone please help with how I can achieve the desired updates?
I think the basic answer to your question is that read-before-write is a huge anti-pattern in Cassandra.
The issue of concurrency in a distributed environment is a key point there.
More info.
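To make the limitation concrete, here is a hedged sketch (table and column names are illustrative): in-place arithmetic in CQL only works on counter columns, which are integers, so everything else becomes a read in the application followed by a plain write.

CREATE TABLE demodb.budget_counters (
    prov text,
    city text,
    amnt_cents counter,                  -- counters are integers, hence cents
    PRIMARY KEY (prov, city)
);

UPDATE demodb.budget_counters SET amnt_cents = amnt_cents + 2000
    WHERE prov = 'ON' AND city = 'Toronto';

-- For regular columns such as extra or amnt, read the current values in the
-- application, compute the result there, and write it back with an ordinary
-- UPDATE - exactly the read-before-write trade-off described above.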

CQL check if record exists

I'm on my path to learning Cassandra, and the differences between CQL and SQL, but I'm noticing the absence of a way to check whether a record exists in Cassandra. Currently, the best way I have is to use
SELECT primary_keys FROM TABLE WHERE primary_keys = blah
and check whether the result set is empty. Is there a better way to do this, or do I have the right idea for now?
Using count will make Cassandra traverse all the matching rows just to be able to count them. But you only need to check one, so just add a LIMIT and return whatever comes back, then interpret the presence of a result as true and its absence as false. E.g.,
SELECT primary_keys FROM TABLE WHERE primary_keys = blah LIMIT 1
That's the usual way in Cassandra to check if a row exists. You might not want to return all the primary keys if all you care about is whether the row exists, so you could do this:
SELECT count(*) FROM TABLE WHERE primary_keys = blah
This would return 1 if the row exists and 0 if it doesn't.
If you are filtering rows by primary key, all three of the above solutions (including yours) are fine, and I don't think there are real differences.
But if you are filtering in a more general way (for example by an indexed column or just the partition key), you should take the LIMIT 1 approach, which avoids useless network traffic.
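For example, a hedged sketch of the non-primary-key case (table and column names are illustrative, and a secondary index on indexed_col is assumed):

SELECT partition_key FROM my_table WHERE indexed_col = 'blah' LIMIT 1;
-- a non-empty result means at least one matching row exists; LIMIT 1 lets the
-- query stop at the first match instead of materialising every matching row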
There is a related example at:
The best way to check existence of filtered rows in Cassandra? by user-defined aggregate?

Azure Table Storage: Order by

I am building a web site that has a wish list. I want to store the wish list(s) in Azure table storage, but I also want the user to be able to sort their wish list, when viewing it, in a number of different ways - date added, date added reversed, item name, etc. I also want to implement paging, which I believe I can do by making use of the continuation token.
As I understand it, "order by" isn't implemented, and the order in which results are returned from table storage is based on the partition key and row key. Therefore, if I want to implement the paging and sorting I describe, is the best way to do so to store the wish list multiple times with different partition keys / row keys?
In this simple case, the wish list likely won't be that large, and I could in fact restrict the maximum number of items that can appear in the list, drop paging, and sort in memory. However, I have more complex cases for which I also need to implement paging and sorting.
On today's hardware, holding thousands of rows in a list in memory and sorting them is easily supportable. The real issue is how readily you can access the rows in table storage using the keys rather than having to do a table scan. Duplicating rows across multiple tables could get quite cumbersome to maintain.
An alternative solution would be to temporarily stage your rows in SQL Azure and apply an ORDER BY there. This may be effective if your result set is too large to work with in memory. For best results the temporary table would need the necessary indexes.
Azure Storage keeps entities in lexicographical order, indexed by Partition Key as the primary index and Row Key as the secondary index. In general, for your scenario it sounds like the user id would be a good fit for the partition key, leaving the Row Key to optimize for each query.
If you want the user to see the latest wish-list entries on top, you can use the log tail pattern, where the row key is the inverted DateTime ticks of the time the wish-list entry was created by the user.
https://learn.microsoft.com/azure/storage/tables/table-storage-design-patterns#log-tail-pattern
If you want the user to see their wish-list entries ordered by item name, you can use the item name as the row key, and the entities will be naturally sorted by Azure.
When you write the data, you may want to denormalize it and do multiple writes, one per row-key schema. Since they all share the user id as the partition key, you can do this as a batch insert operation and not worry about consistency, because Azure Table batch operations are atomic.
To differentiate the row-key schemas, you may want to prepend each with a constant string. For instance, the inverted-ticks row key value would be something like "InvertedTicks_[InvertedDateTimeTicksOfTheWishList]" and the item-name row key value would be "ItemName_[ItemNameOfTheWishList]".
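As a hedged sketch of what those two row keys might look like in code (all names are illustrative, and the snippet is SDK-agnostic - it only builds the key strings):

using System;

class WishListRowKeys
{
    static void Main()
    {
        string userId = "user-42";          // partition key for both entities
        string itemName = "Telescope";
        DateTime addedUtc = DateTime.UtcNow;

        // Inverted ticks: newest items sort first under lexicographic row-key order.
        long invertedTicks = DateTime.MaxValue.Ticks - addedUtc.Ticks;
        string rowKeyByDate = "InvertedTicks_" + invertedTicks.ToString("D19");

        // Item-name key: entities come back from the partition sorted by item name.
        string rowKeyByName = "ItemName_" + itemName;

        Console.WriteLine(userId + " / " + rowKeyByDate);
        Console.WriteLine(userId + " / " + rowKeyByName);
    }
}

Because both entities share the user id as the partition key, they can be written together in a single atomic batch, as described above.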
Why not do all of this in .NET using a List<T>?
For this type of application I would have thought SQL Azure would have been more appropriate.
Something like this worked just fine for me:
List<TableEntityType> rawData =
    (from c in ctx.CreateQuery<TableEntityType>("insysdata")
     where c.PartitionKey == "PartitionKey" && c.Field == fieldvalue
     select c).AsTableServiceQuery().ToList();

List<TableEntityType> sortedData = rawData.OrderBy(c => c.DateTime).ToList();

How to implement a fixed number of (timeuuid) columns in Cassandra (with CQL)?

Here is an example use case:
You need to store the last N (let's say 1000 as a fixed bucket size) user actions, with all their details, in timeuuid-based columns.
Normally, each user's actions are already in a "UserAction" column family with the user id as the row key and the actions in timeuuid columns. You may also have an "AllActions" column family which stores all actions, with the same timeuuid as the column name and the user id as the column value. It's basically a relationship column family, but unfortunately without any details of the user actions. Querying this column family is expensive, I guess, because of the random partitioner. On the other hand, if you store all the details in the "AllActions" CF, then at some point Cassandra can't handle such a big row properly. This is why I want to store the last N user actions, with all details, in a fixed number of timeuuid-based columns.
Maybe you have a better design solution for this use case... I'd like to hear it...
If not, the question is: how do I implement a fixed number of (timeuuid) columns in Cassandra (with CQL) effectively?
After insertion we could delete the old (overflow) columns if we had some sort of range support in CQL's DELETE. AFAIK there is no support for this.
So, any ideas? Thanks in advance...
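For reference, a hedged sketch of the CQL 3 shape of the data being described (table and column names are illustrative): the wide row per user becomes a partition with a timeuuid clustering column.

CREATE TABLE user_action (
    user_id uuid,
    action_time timeuuid,
    details text,
    PRIMARY KEY (user_id, action_time)
) WITH CLUSTERING ORDER BY (action_time DESC);

-- One partition per user, newest action first. Capping the partition at the
-- last N actions still has to be done by the application (read the Nth-newest
-- action_time, then delete the older entries one by one), since, as noted
-- above, CQL's DELETE offers no range support for this.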
IMHO, this is something that C* should handle itself, like compaction. It's not a good idea to handle this on the client side.
Maybe we need some configuration (storage) options on column families to make them suitable for "most recent data" use cases.

Insert rows into Access db from C# using Microsoft.Jet.OLEDB.4.0, autonumber column is set to zero

I'm using C# and Microsoft.Jet.OLEDB.4.0 provider to insert rows into an Access mdb.
Yes, I know Access sucks. It's a huge legacy app, and everything else works OK.
The table has an autonumber column. I insert the rows, but the autonumber column is set to zero.
I Googled the question and read all the articles I could find on this subject. One suggested inserting -1 for the autonumber column, but this didn't work. None of the other suggestions I could find worked.
I am using OleDbParameters, not concatenating a big SQL text string.
I've tried the insert with and without a transaction. No difference.
How do I get this insert to work (i.e. set the autonumber column contents correctly)?
Thanks very much in advance,
Adam Leffert
In Access it is possible to INSERT an explicit value into an IDENTITY (a.k.a. Autonumber) column. If you (or your middleware) are writing the value zero to the IDENTITY column and there is no unique constraint on that column, that might explain it.
Just to be clear, you should be using the syntax
INSERT INTO <table> (<column list>) ...
where the column list omits the IDENTITY column. Jet SQL will allow you to omit the entire column list, but then it implicitly includes the IDENTITY column, so you should use the explicit column-list syntax in order to leave the IDENTITY column out.
In Access/Jet you can write explicit values to the IDENTITY column, in which case the value will obviously not be auto-generated. Therefore, ensure that neither you nor your middleware (ADO.NET etc.) is explicitly writing a zero value to the IDENTITY column.
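As a hedged illustration (the table and column names are made up), the insert should have this shape, with the Autonumber column left out of the column list and the generated value read back on the same open connection:

INSERT INTO Orders (CustomerName, Amount) VALUES (?, ?)
SELECT @@IDENTITY

The two statements have to be run as two separate commands on the same connection (Jet does not batch them), and because the Autonumber column is omitted from the column list, Jet generates its value.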
BTW, just as an aside, the IDENTITY column in the table below will auto-generate the value zero on every second INSERT:
CREATE TABLE Test1
(
ID INTEGER IDENTITY(0, -2147483648) NOT NULL,
data_col INTEGER
);
When doing the insert, you need to be sure that you are NOT specifying a value for the AutoNumber column, just as in SQL Server you don't insert a value for an identity column.
