Problem with Sybase auto-generated IDs - sap-ase

I have a table whose PK column Id is of type bigint and is populated automatically in increasing order: 1, 2, 3, and so on.
I have noticed that sometimes, all of a sudden, the generated ids jump to very large values. For example: 1, 2, 3, 4, 5, 500000000000001, 500000000000002.
There is a huge jump after 5; ids 6 and 7 were not used at all.
I do perform delete operations on this table, but I am absolutely sure the missing ids were never used.
Why does this occur and how can I fix it?
Many thanks for looking into this.
My environment:
Sybase ASE 15.0.3, Linux

You get this with Sybase when the system is restarted after an improper shutdown. See full description, and what to do about it, here.
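For context, ASE pre-allocates a block of identity values in memory, and an unclean shutdown "burns" the unused part of that block, which produces exactly this kind of jump. A hedged sketch of the usual mitigation (the table name my_table and the numbers are illustrative, not from the original answer):

-- cap the per-table pre-allocation so a crash can only burn this many values
sp_chgattribute 'my_table', 'identity_gap', 1000
go
-- or shrink the server-wide pre-allocation block
sp_configure 'identity burning set factor', 5000
go

These settings limit how large future jumps can be; they do not change identity values that have already been burned.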

Related

Is there an idiomatic way of versioning clients to a database?

I'm supplying client drivers to a database I am maintaining. The DB has lots of tables with well defined schemas. (Cassandra in this case)
From time to time there will be some breaking changes (stemming from product and system requirements) and the clients will "break" in the sense that the queries they were performing until now will not be correct in regards to the newer schemas.
I'm curious to know if there is a good clean way to "version" the clients to work with the corresponding tables?
For instance, a naive implementation could append a version number to the name of every table in the DB.
The clients would always query tables that match this naming convention. Newer breaking versions would change the table name to match the newer version and clients would be upgraded accordingly.
Is there a better way to handle this?
It's also possible to keep one version number for your DB and one stored in your client; when a breaking change is made, you bump the database version.
When the client starts, a version check is performed, and if the versions mismatch an automatic upgrade can be done.
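A minimal CQL sketch of that idea, assuming a single-row table named schema_version (the name and layout are illustrative, not part of the original answer):

CREATE TABLE schema_version (
    name text PRIMARY KEY,   -- e.g. 'current'
    version int
);

-- bump on every breaking change
UPDATE schema_version SET version = 4 WHERE name = 'current';

-- at client start-up, compare against the version the client was built for
SELECT version FROM schema_version WHERE name = 'current';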
I came across the same problem a few months ago. We have to load the schema according to the version our client should support. The solution we found is as follows:
Along with the schema, one more table is created containing the following fields: version_no, ks_name, table_name, column_name, add/drop, is_loaded, with primary key (version_no, (ks_name, table_name, column_name)). Note: if you have a single keyspace you can remove that column, or the table name can itself be written as ks_name.table_name.
Then, whenever we want to load a new version, we log the changes in that table, and when we load the previous schema again the script makes sure the old alterations are applied so that it rolls back to that same previous version of the schema. Make sure you update the is_loaded field, as it is the only way to tell whether a schema was only half loaded or the script failed, so that it does not raise further errors. Hope it helps!
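A hedged CQL sketch of that tracking table as I read the description (the column names and types are my interpretation, not the original poster's exact DDL):

CREATE TABLE schema_changes (
    version_no int,
    ks_name text,
    table_name text,
    column_name text,
    change_type text,     -- 'add' or 'drop'
    is_loaded boolean,
    PRIMARY KEY (version_no, ks_name, table_name, column_name)
);

-- log an alteration that belongs to schema version 4
INSERT INTO schema_changes (version_no, ks_name, table_name, column_name, change_type, is_loaded)
VALUES (4, 'my_ks', 'positions', 'last_updated', 'add', false);

The upgrade/rollback script reads the rows for a version, applies or reverses each change, and flips is_loaded only once the whole batch has been applied.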

Is there an accurate way to get all of the stored procedure column dependencies in Sybase ASE?

I am currently working within a Sybase ASE 15.7 server and need a dependable way to get all of the stored procedures which are dependent upon a particular column. The Sybase system procedure sp_depends is notoriously unreliable for this. I was wondering if anyone out there had a more accurate way to discover these dependencies.
Apparently, the IDs of the columns are supposed to be stored in a bitmap in the varbinary column sysdepends.columns. However, I have not yet found a bitmask which has been effective in decoding these column IDs.
Thanks!
A tedious solution could be to parse the stored procedure code in the system table syscomments to retrieve the table and column references.
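A hedged example of that brute-force search, assuming the column of interest is named my_column (keep in mind syscomments stores the source in fixed-size chunks, so a name split across two rows can be missed):

select distinct o.name
from sysobjects o, syscomments c
where o.id = c.id
and o.type = 'P'
and c.text like '%my_column%'
go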
A partial solution might be to run sp_recompile on all relevant tables and then watch master..monCachedProcedures for changes in the CompileDate. Note that the CompileDate only changes once the stored proc has been executed after the sp_recompile (it actually gets recompiled on first execution).
This would at least give you an idea of which stored procedures in use depend on the specified table.
Not exactly elegant...
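A rough sketch of that approach (the table name is illustrative, and the monCachedProcedures column names are as I recall them, so verify against your server):

-- flag every proc that references the table for recompilation
sp_recompile 'my_table'
go
-- after the procs have been executed again, look for fresh compile dates
select ObjectName, DBName, CompileDate
from master..monCachedProcedures
order by CompileDate desc
go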

Cassandra - how to update a decimal column by adding to the existing value

I have a cassandra table that looks like the following:
create table position_snapshots_by_security(
securityCode text,
portfolioId int,
lastUpdated date,
units decimal,
primary key((securityCode), portfolioId)
)
And I would like to do something like this:
update position_snapshots_by_security
set units = units + 12.3,
lastUpdated = '2017-03-02'
where securityCode = 'SPY'
and portfolioId = '5dfxa2561db9'
But it doesn't work.
Is it possible to do this kind of operation in Cassandra? I am using version 3.10, the latest one.
Thank you!
J
This is not possible in Cassandra (any version) because it would require a read-before-write (anti-)pattern.
You can try counter columns if they suit your needs, or do the caching/counting at the application level.
Otherwise you need to issue the read yourself at the application level, which will hurt your cluster's performance.
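A hedged sketch of the counter-column alternative. Note that Cassandra counters are 64-bit integers, so a decimal quantity has to be scaled to an integer unit, and a counter table cannot hold regular columns such as lastUpdated (the table name, scaling, and values below are illustrative):

CREATE TABLE position_units_by_security (
    securityCode text,
    portfolioId int,
    units_millis counter,   -- units * 1000, since counters are bigint only
    PRIMARY KEY ((securityCode), portfolioId)
);

UPDATE position_units_by_security
SET units_millis = units_millis + 12300   -- 12.3 scaled by 1000
WHERE securityCode = 'SPY' AND portfolioId = 5;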
Cassandra doesn't do a read before a write (except when using Lightweight Transactions), so it doesn't support operations like the one you're trying to do, which rely on the existing value of a column. That said, it's still possible to do this in your application code with Cassandra. If you'll have multiple writers possibly updating this value, you'll want to use the aforementioned LWTs to make sure the value is accurate and writers don't "step on" each other. Basically, the steps to follow are:
Read the current value from Cassandra using a SELECT. Make sure you're doing the read with a consistency level of SERIAL or LOCAL_SERIAL if you're using LWTs.
Do the calculation to add to the current value in your application code.
Update the value in Cassandra with an UPDATE statement. If using a LWT you'll want to do UPDATE ... IF value = previous_value_you_read.
If using LWTs, the UPDATE will be rejected if the previous value that you read changed while you were doing the calculation. (And you can retry the whole series of steps again.) Keep in mind that LWTs are expensive operations, particularly if the keys you are reading/updating are heavily contended.
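A hedged CQL sketch of those steps against the table from the question (the previous value of 100.0 and the portfolioId of 5 are illustrative):

-- step 1: read the current value (set SERIAL or LOCAL_SERIAL consistency in the driver)
SELECT units FROM position_snapshots_by_security
WHERE securityCode = 'SPY' AND portfolioId = 5;

-- steps 2-3: add 12.3 in application code, then write back conditionally
UPDATE position_snapshots_by_security
SET units = 112.3, lastUpdated = '2017-03-02'
WHERE securityCode = 'SPY' AND portfolioId = 5
IF units = 100.0;

If the IF clause fails, the response carries the current value of units, so the application can retry the read-compute-update loop.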
Hope that helps!

Cassandra data corruption: NULL values appearing on certain columns

I'm running a Cassandra 3.9 cluster, and today I noticed some NULL values in some generated reports.
I opened up cqlsh and after some queries I noticed that null values are appearing all over the data, apparently in random columns.
Replication factor is 3.
I've started a nodetool repair on the cluster but it hasn't finished yet.
I searched for this behavior and could not find it described anywhere; apparently the random appearance of NULL values in columns is not a common problem.
Does anyone know what's going on? This kind of data corruption seems pretty serious. Thanks in advance for any ideas.
ADDED Details:
Happens on columns that are frequently updated with toTimestamp(now()) which never returns NULL, so it's not about null data going in.
Happens on immutable columns that are only inserted once and never changed. (But other columns on the table are frequently updated.)
Do updates cause this like deletions do? Seems kinda serious to me, to wake up to a bunch of NULL values.
I also know specifically some of the data that has been lost; three entries I've already identified are important and are missing. They have definitely not been deleted - there are no deletions at all on one specific table, which is full of NULLs everywhere.
I am the sole admin and nobody ran any nodetool commands overnight, 100% sure.
UPDATE
nodetool repair has been running for 6+ hours now and it fully recovered the data in one varchar column, "item description".
It's a Cassandra issue and no, there were no deletions at all. And like I said, columns written with functions that never return null (toTimestamp(now())) had null in them.
UPDATE 2
So nodetool repair finished overnight but the NULLs were still there in the morning.
So I went node by node, stopping and restarting them, and voilà, the NULLs are gone and there was no data loss.
This is a major-league bug if you ask me. I don't have the resources right now to go after it, but if anyone else faces this, here's the simple "fix":
Run nodetool repair -dcpar to fix all nodes in the datacenter.
Restart node by node.
I faced a similar issue some months ago. It's explained quite well in the following blog post (not written by me).
In that case the null values were actually caused by updates.
http://datanerds.io/post/cassandra-no-row-consistency/
Mmmh... I think that if this were a Cassandra bug it would already have been reported. So I smell a code bug in your application, but you didn't post any code, so this will remain only a (wild) guess until you provide some (I'd like to have a look at the update code).
You don't delete data, nor use TTLs. It may seem there is no other way to create NULL values, but there is one more tricky one: failing at binding, that is, explicitly binding a column to NULL. It may seem strange, but it happens...
Since
...null values are appearing all over the data...
I'd expect to catch this very quickly by enabling some debugging or assertion code on the values before issuing any updates.
Check whether the update query updates only the necessary columns, or whether it goes through Java beans that bind the full list of columns in the table. That would explain the NULL updates on columns that were never meant to be changed.
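A hedged CQL illustration of that failure mode with a hypothetical table t: binding NULL for a column you did not mean to touch deletes it, and it then reads back as null:

CREATE TABLE t (id int PRIMARY KEY, col_a text, col_b text);

-- intended: change only col_a, but the mapper binds every column and col_b arrives as null
UPDATE t SET col_a = 'new value', col_b = null WHERE id = 1;

-- col_b now reads back as null, exactly as if it had been deleted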

Cassandra nodejs DataStax driver doesn't return newly added columns via prepared statement execution

After adding a couple of columns to the schema, I want to select them via select *. Instead, select * returns the old set of columns and none of the new ones.
Following the documentation's recommendation, I use {prepare: true} to smooth over the difference between JavaScript floats and Cassandra ints/bigints (I don't really need a prepared statement here; it is just to resolve the ResponseError: Expected 4 or 0 byte int issue without bothering with query hints).
So on the first execution of select * I had 3 columns. After that, I added 2 columns to the schema. select * still returns 3 columns when used with {prepare: true} and 5 columns when used without it.
I want a way to reliably refresh this cache, or to make the Cassandra driver prepare statements on each app start.
I don't consider restarting database cluster a reliable way.
This is actually an issue in Cassandra that was fixed in 2.1.3 (CASSANDRA-7910). The problem is that on a schema update, the prepared statements are not evicted from the cache on the Cassandra side. If you are running a version less than 2.1.3 (which is likely, since 2.1.3 was released last week), there really isn't a way to work around this unless you create another, slightly different prepared statement (for example with extra spaces, so it registers as a separate unique statement).
When running 2.1.3 and changing the table schema, C* will properly evict the relevant prepared statements from the cache, and when the driver sends another query using that statement, Cassandra will respond with an 'UNPREPARED' message, which should prompt the nodejs driver to reprepare the query and resend the request for you.
On the Node.js driver, you can programmatically clear the prepared statement metadata:
client.metadata.clearPrepared();
