Cassandra - Update Timestamp column not working properly

I am trying to update a timestamp column in a Cassandra database:
update sample set date='2016-10-21 19:15:10.000' where rowkey=1;
When I check the result, the stored value shows up 5:30 hours earlier than the one I set.
Output:
2016-10-21 13:45:10.000000+0000
Does it have something to do with locale? I tried the same update programmatically and got the same output.

That's because cqlsh shows timestamps only in UTC, per CASSANDRA-10000, in versions 2.1.9, 2.2.1, and 3.0 (beta). CASSANDRA-10397 changed cqlsh to apply the local timezone offset when displaying timestamps, as of versions 2.2.6, 3.0.4, and 3.4.
If this is an issue for you, an upgrade to a recent version of Cassandra should correct this behavior.
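Note that only the display differs; the stored instant is correct. To make the intent unambiguous regardless of the client's timezone, a CQL timestamp literal can carry an explicit UTC offset (a sketch against the question's table; '+0530' assumes the original value was meant as IST):
-- Same wall-clock value, now pinned to an explicit +05:30 offset so the
-- stored instant no longer depends on the client or server timezone:
update sample set date='2016-10-21 19:15:10.000+0530' where rowkey=1;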

Related

AWS RDS Postgres PostGIS upgrade problems

I have an RDS instance running Postgres 11.16. I'm trying to upgrade it to 12.11 but it's giving me errors on PostGIS. If I try a "modify" I get the following error in the precheck log file:
Upgrade could not be run on Sun Sep 18 06:05:13 2022
The instance could not be upgraded from 11.16.R1 to 12.11.R1 because of following reasons. Please take appropriate action on databases that have usages incompatible with requested major engine version upgrade and try again.
Following usages in database 'XXXXX' need to be corrected before upgrade:
-- The instance could not be upgraded because there are one or more databases with an older version of PostGIS extension or its dependent extensions (address_standardizer, address_standardizer_data_us, postgis_tiger_geocoder, postgis_topology, postgis_raster) installed. Please upgrade all installations of PostGIS and drop its dependent extensions and try again.
----------------------- END OF LOG ----------------------
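In plain terms, the precheck wants PostGIS brought up to a packaged version and its dependent extensions removed before the engine upgrade. That boils down to something like the following in each affected database (a sketch; the DROP statements assume those extensions can be recreated after the upgrade):
-- Bring the postgis extension in line with the installed library version:
ALTER EXTENSION postgis UPDATE;
-- Drop the dependent extensions the log lists, if present:
DROP EXTENSION IF EXISTS postgis_raster;
DROP EXTENSION IF EXISTS postgis_topology;
DROP EXTENSION IF EXISTS postgis_tiger_geocoder;
DROP EXTENSION IF EXISTS address_standardizer_data_us;
DROP EXTENSION IF EXISTS address_standardizer;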
First, I tried simply removing PostGIS so I could upgrade and then add it back again, using drop extension postgis cascade;. However, this generated the same error.
Second, I tried running SELECT postgis_extensions_upgrade();. However, it gives me the following error:
NOTICE: Updating extension postgis_raster from unpackaged to 3.1.5
ERROR: function st_convexhull(raster) does not exist
CONTEXT: SQL statement "ALTER EXTENSION postgis_raster UPDATE TO "3.1.5";"
PL/pgSQL function postgis_extensions_upgrade() line 82 at EXECUTE
SQL state: 42883
Third, I tried to do a manual snapshot and upgrade the snapshot. Same results.
One additional piece of information, I ran SELECT PostGIS_Full_Version(); and this is what it returns:
"POSTGIS=""3.1.5 c60e4e3"" [EXTENSION] PGSQL=""110"" GEOS=""3.7.3-CAPI-1.11.3 b50468f"" PROJ=""Rel. 5.2.0, September 15th, 2018"" GDAL=""GDAL 2.3.1, released 2018/06/22"" LIBXML=""2.9.1"" LIBJSON=""0.12.1"" LIBPROTOBUF=""1.3.0"" WAGYU=""0.5.0 (Internal)"" TOPOLOGY RASTER (raster lib from ""2.4.5 r16765"" need upgrade) (raster procs from ""2.4.4 r16526"" need upgrade)"
As you'll notice, the raster lib is old but I can't really figure out how to upgrade it. I think this is what is causing me problems but I don't know how to overcome it.
I appreciate any thoughts.
After many failed attempts, I finally gave up on the in-place upgrade and solved this by:
Spinning up a new instance on the desired Postgres version
Using pg_dump on the old version (schema and data)
Using pg_restore on the new version
I'm not sure whether I did something wrong with the above, but afterwards the sequences were out of sync on a number of tables, so I wrote some scripts to reset the sequence values. I had to use something like this to re-sync each sequence:
SELECT setval('the_sequence', (SELECT MAX(the_primary_key) FROM the_table)+1);
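If many tables are affected, a query like the following can generate those setval calls in bulk (a sketch, assuming the sequences are attached to their columns so that pg_get_serial_sequence can find them):
-- Emit one setval() statement per serial column; run the generated output.
SELECT 'SELECT setval(' || quote_literal(pg_get_serial_sequence(quote_ident(table_name), column_name))
    || ', COALESCE((SELECT MAX(' || quote_ident(column_name) || ') FROM ' || quote_ident(table_name) || '), 0) + 1, false);'
FROM information_schema.columns
WHERE column_default LIKE 'nextval(%';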
I wasted enough time and this got me past the issue. Hopefully the next upgrade doesn't give me this much trouble.

Warnings trying to read Spark 1.6.X Parquet into Spark 2.X

When attempting to load a Parquet file written by Spark 1.6.x into Spark 2.x, I am seeing many WARN-level statements.
16/08/11 12:18:51 WARN CorruptStatistics: Ignoring statistics because created_by could not be parsed (see PARQUET-251): parquet-mr version 1.6.0
org.apache.parquet.VersionParser$VersionParseException: Could not parse created_by: parquet-mr version 1.6.0 using format: (.+) version ((.*) )?\(build ?(.*)\)
at org.apache.parquet.VersionParser.parse(VersionParser.java:112)
at org.apache.parquet.CorruptStatistics.shouldIgnoreStatistics(CorruptStatistics.java:60)
at org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetStatistics(ParquetMetadataConverter.java:263)
at org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetMetadata(ParquetMetadataConverter.java:567)
at org.apache.parquet.format.converter.ParquetMetadataConverter.readParquetMetadata(ParquetMetadataConverter.java:544)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:431)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:386)
at org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.initialize(SpecificParquetRecordReaderBase.java:107)
at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.initialize(VectorizedParquetRecordReader.java:109)
at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$buildReader$1.apply(ParquetFileFormat.scala:369)
at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$buildReader$1.apply(ParquetFileFormat.scala:343)
at [rest of stacktrace omitted]
I am running the 2.1.0 release and there are multitudes of these warnings. Is there any way, short of changing the logging level to ERROR, to suppress these?
It seems these warnings are the result of a fix that was made, but the warning itself has not yet been removed. Here are some details from that JIRA:
https://issues.apache.org/jira/browse/SPARK-17993
I have built the code from the PR and it indeed succeeds reading the
data. I have tried doing df.count() and now I'm swarmed with
warnings like this (they just keep getting printed endlessly in
the terminal):
Setting the logging level to ERROR is a last-ditch approach: it swallows messages we rely upon for standard monitoring. Has anyone found a workaround to this?
For the time being, i.e. until/unless this Spark/Parquet bug is fixed, I will be adding the following to log4j.properties:
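# demote Parquet's chatty CorruptStatistics / VersionParser warnings (PARQUET-251, SPARK-17993)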
log4j.logger.org.apache.parquet=ERROR
The location is:
when running against external spark server: $SPARK_HOME/conf/log4j.properties
when running locally inside Intellij (or other IDE): src/main/resources/log4j.properties

PostGIS - migration from 1.5 to 2.3 - workaround

Fellows,
I have already had a really hard time trying to migrate from PostGIS 1.5 to 2.3. I have made many attempts with every version of PostGIS along the way: 2.0, 2.1, 2.2 and now 2.3.
As I have spent a few weeks on this, and as I am about ready to give up and keep using my old versions of PostgreSQL and PostGIS, I hope my question finds an echo somewhere...
I only want to migrate one table column, a geometry column. Before trying that, I would like to hear whether any of you have experience with it:
The idea is to select the column from the old PostGIS setup (PostgreSQL 9.2, PostGIS 1.5) and update the table in the new one (PostgreSQL 9.6, PostGIS 2.3).
Can anyone say anything about it?
EDIT---
I have just tried to import the table I needed, but I got an error:
violation of constraint "enforce_srid_the_geom".
:(
Thanks a lot.
If you need to migrate just a single table with a geometry column and you are running into trouble with the hard-upgrade procedure described at http://www.postgis.org/docs/postgis_installation.html#hard_upgrade,
I'd suggest a workaround: create a text column in the old database, e.g. wkt_geom, then execute
update tablename set wkt_geom = st_astext(the_geom)
Now drop the geometry column, export the old database, and import it into the new one. Then recreate the geometry column and run
update tablename set the_geom = st_geomfromtext(wkt_geom, SRID_HERE)
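Put together, the round trip looks something like this (a sketch; the table and column names are from the snippets above, and 4326 stands in for your actual SRID):
-- In the old database (PostGIS 1.5): serialize the geometry to WKT.
ALTER TABLE tablename ADD COLUMN wkt_geom text;
UPDATE tablename SET wkt_geom = ST_AsText(the_geom);
ALTER TABLE tablename DROP COLUMN the_geom;
-- ...dump the old database and restore it into the new cluster...
-- In the new database (PostGIS 2.3): rebuild a typed geometry column.
ALTER TABLE tablename ADD COLUMN the_geom geometry(Geometry, 4326);
UPDATE tablename SET the_geom = ST_GeomFromText(wkt_geom, 4326);
ALTER TABLE tablename DROP COLUMN wkt_geom;
Passing the correct SRID to ST_GeomFromText is also what avoids the enforce_srid_the_geom violation mentioned in the question's edit.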

Cassandra no viable alternative at input LIKE

I am very new to Cassandra and am trying to use the new LIKE feature, but I keep getting the error
Line 1: no viable alternative at input 'LIKE'
I am using DataStax DevCenter and am following the examples at https://docs.datastax.com/en/cql/3.3/cql/cql_using/useSASIIndex.html. I am using Cassandra 3.7.0, CQL 3.4.2, and DataStax DevCenter 1.6.0 community. I have a table named zips with a text field called city that has 10,000 records, and am simply using this CQL code:
SELECT * FROM "MyTable".zips WHERE city LIKE 'M%';
Before that I added an index using
CREATE CUSTOM INDEX fn_prefix ON "MyTable".zips (city) USING 'org.apache.cassandra.index.sasi.SASIIndex';
I know that the index worked because it allowed me to do this query
SELECT * FROM "Exoler".zips WHERE city='Miami';
without using ALLOW FILTERING, and it returns values. Any suggestions would be great; as stated, I am very new to this.
If you use Cassandra 3.9 with DataStax DevCenter version 1.5.0 or 1.6.0, DevCenter won't support LIKE (at least on Windows). The result is only "no viable alternative at input 'LIKE'".
But it works fine if you use the command prompt:
WINDOWS-Key
cmd
"%CASSANDRA_HOME%\bin\cqlsh"
I guess it is just a bug in DataStax DevCenter.
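For reference, the same statements run from cqlsh, with the SASI mode spelled out explicitly (a sketch reusing the question's keyspace and table; PREFIX is also the default mode, and it is what a trailing-wildcard query needs):
CREATE CUSTOM INDEX IF NOT EXISTS fn_prefix ON "MyTable".zips (city)
USING 'org.apache.cassandra.index.sasi.SASIIndex'
WITH OPTIONS = { 'mode': 'PREFIX' };
SELECT * FROM "MyTable".zips WHERE city LIKE 'M%';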

Strange directory structure after upgrading Cassandra DSE to 4.8.5

I've just upgraded my DSE cluster to the newest version of DSE (4.8.5).
After the upgrade and the first backup taken by OpsCenter, I noticed that my keyspace's directory has strange subdirectories. Before, there were only subdirectories whose names corresponded to column family names, but now there are extra subdirectories whose names begin with the column family name and end with some id. For example:
adresse
adresse-489b600c299634da953e3102af80b02b
but I have only one column family: adresse.
Could you explain this strange behaviour?
Thanks
Przemek
What was your previous version of Cassandra? Are you aware of the column family id?
I found the answer. In Cassandra 2.0 the snapshot directory was data_directory_location/keyspace_name/table_name/snapshots/snapshot_name, but in Cassandra 2.1 it is data_directory_location/keyspace_name/table_name-UUID/snapshots/snapshot_name.
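The UUID suffix can be cross-checked against the schema tables; on the Cassandra 2.1 line that DSE 4.8.x ships, something like this works ('mykeyspace' is a placeholder):
-- cf_id, with its dashes removed, matches the directory-name suffix:
SELECT columnfamily_name, cf_id
FROM system.schema_columnfamilies
WHERE keyspace_name = 'mykeyspace';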
