While doing a system update (or ant updatesystem), we are getting the error below (hmchistoryentries doesn't exist). Has anyone faced this before?
Per the documentation, this seems to be a deprecated item type. Though we are not using the hMC, we are not sure which extension is using this item type. We'd appreciate your help.
[java] Caused by: org.apache.ddlutils.DatabaseOperationException: java.sql.SQLSyntaxErrorException: Table 'hybrisD2C.hmchistoryentries' doesn't exist
[java]     at org.apache.ddlutils.platform.PlatformImplBase.readModelFromDatabase(PlatformImplBase.java:1891)
[java]     at org.apache.ddlutils.platform.PlatformImplBase.readModelFromDatabase(PlatformImplBase.java:1869)
[java]     at de.hybris.bootstrap.ddl.HybrisSchemaGenerator.update(HybrisSchemaGenerator.java:225)
[java]     at de.hybris.platform.core.Initialization.initializeSchemaAndTypeSystemFullyNewStyle(Initialization.java:1245)
[java]     at de.hybris.platform.core.Initialization.initialize(Initialization.java:1121)
[java]     at de.hybris.platform.core.Initialization.createEmptySystemOrUpdate(Initialization.java:776)
[java]     at de.hybris.platform.core.Initialization.access$4(Initialization.java:756)
[java]     at de.hybris.platform.core.Initialization$4.call(Initialization.java:563)
[java]     at de.hybris.platform.core.Initialization$4.call(Initialization.java:1)
[java]     at de.hybris.platform.core.Initialization$SessionRecoveryAfterRegistryStartupAwareExecutor.execute(Initialization.java:698)
[java]     at de.hybris.platform.core.Initialization.doInitializeImpl(Initialization.java:566)
[java]     at de.hybris.platform.core.Initialization.access$5(Initialization.java:488)
[java]     at de.hybris.platform.core.Initialization$5.call(Initialization.java:812)
[java]     at de.hybris.platform.core.Initialization$5.call(Initialization.java:1)
[java]     at de.hybris.platform.core.system.InitializationLockHandler.performLocked(InitializationLockHandler.java:80)
[java]     at de.hybris.platform.core.Initialization.doInitialize(Initialization.java:844)
[java]     at de.hybris.ant.taskdefs.InitPlatformAntPerformableImpl.performImpl(InitPlatformAntPerformableImpl.java:106)
[java]     at de.hybris.ant.taskdefs.AbstractAntPerformable.doPerform(AbstractAntPerformable.java:92)
[java]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[java]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[java]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[java]     at java.lang.reflect.Method.invoke(Method.java:498)
[java]     at bsh.Reflect.invokeMethod(Reflect.java:131)
[java]     at bsh.Reflect.invokeObjectMethod(Reflect.java:77)
[java]     at bsh.Name.invokeMethod(Name.java:852)
[java]     at bsh.BSHMethodInvocation.eval(BSHMethodInvocation.java:69)
[java]     ... 16 more
Can you search all *-items.xml files to find out which extension it is coming from? You might also want to check your localextensions.xml file to see whether that extension, or any extension depending on it, is listed there. If it is, you can remove it.
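For example, something along these lines (a sketch; run it from the root of your hybris installation, whose exact layout may differ):

# Search every *-items.xml for the offending deployment table name.
grep -ril --include="*-items.xml" "hmchistoryentries" .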
This error happened due to an incorrect version of the DB connector. We were using MySQL 8.0 with the mysql-connector-java-8.0.19.jar connector, but SAP does not officially support MySQL 8.0 with version 1808. After downgrading to MySQL 5.x and mysql-connector-java-5.1.x-bin.jar, the error no longer occurs.
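For reference, these are the relevant JDBC settings in local.properties (a sketch with placeholder host and credentials; com.mysql.jdbc.Driver is the driver class shipped with the 5.1.x connector):

# local.properties (placeholder values)
db.url=jdbc:mysql://localhost/hybrisD2C?useConfigs=maxPerformance&characterEncoding=utf8
db.driver=com.mysql.jdbc.Driver
db.username=hybris
db.password=changeme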
Thank you.
I have an RDS instance running Postgres 11.16. I'm trying to upgrade it to 12.11 but it's giving me errors on PostGIS. If I try a "modify" I get the following error in the precheck log file:
Upgrade could not be run on Sun Sep 18 06:05:13 2022
The instance could not be upgraded from 11.16.R1 to 12.11.R1 because of following reasons. Please take appropriate action on databases that have usages incompatible with requested major engine version upgrade and try again.
Following usages in database 'XXXXX' need to be corrected before upgrade:
-- The instance could not be upgraded because there are one or more databases with an older version of PostGIS extension or its dependent extensions (address_standardizer, address_standardizer_data_us, postgis_tiger_geocoder, postgis_topology, postgis_raster) installed. Please upgrade all installations of PostGIS and drop its dependent extensions and try again.
----------------------- END OF LOG ----------------------
First, I tried just removing PostGIS so I could upgrade and then add it back again. I used: drop extension postgis cascade;. However, this generated the same error.
Second, I tried running SELECT postgis_extensions_upgrade();. However, it gives me the following error:
NOTICE: Updating extension postgis_raster from unpackaged to 3.1.5
ERROR: function st_convexhull(raster) does not exist
CONTEXT: SQL statement "ALTER EXTENSION postgis_raster UPDATE TO "3.1.5";"
PL/pgSQL function postgis_extensions_upgrade() line 82 at EXECUTE
SQL state: 42883
Third, I tried to do a manual snapshot and upgrade the snapshot. Same results.
One additional piece of information: I ran SELECT PostGIS_Full_Version(); and this is what it returned:
"POSTGIS=""3.1.5 c60e4e3"" [EXTENSION] PGSQL=""110"" GEOS=""3.7.3-CAPI-1.11.3 b50468f"" PROJ=""Rel. 5.2.0, September 15th, 2018"" GDAL=""GDAL 2.3.1, released 2018/06/22"" LIBXML=""2.9.1"" LIBJSON=""0.12.1"" LIBPROTOBUF=""1.3.0"" WAGYU=""0.5.0 (Internal)"" TOPOLOGY RASTER (raster lib from ""2.4.5 r16765"" need upgrade) (raster procs from ""2.4.4 r16526"" need upgrade)"
As you'll notice, the raster lib is old but I can't really figure out how to upgrade it. I think this is what is causing me problems but I don't know how to overcome it.
I appreciate any thoughts.
After many failed attempts, I finally gave up on upgrading in place. I ended up solving this by (a sketch of the commands follows the list):
Spinning up a new instance on the desired postgres version
Using pg_dump on the old version (schema and data)
Using pg_restore on the new version
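A minimal sketch of that migration, assuming placeholder hostnames, user, and database name (mydb); -Fc produces pg_dump's custom format, which is what pg_restore expects:

# Dump schema and data from the old 11.16 instance (placeholders throughout).
pg_dump -Fc -h old-instance.example.amazonaws.com -U myuser -d mydb -f mydb.dump
# Restore into the new instance; --no-owner sidesteps role mismatches between instances.
pg_restore -h new-instance.example.amazonaws.com -U myuser -d mydb --no-owner mydb.dump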
I'm not sure whether I did something wrong with the above, but I found my sequences were out of sync on a number of tables afterwards. I wrote some scripts to reset the sequence values, using something like this for each one:
SELECT setval('the_sequence', (SELECT MAX(the_primary_key) FROM the_table)+1);
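To avoid hand-writing that statement per table, here is a sketch that generates one setval() call for every serial column and feeds the generated statements back into psql (it assumes the sequences are attached to their columns so pg_get_serial_sequence() can resolve them; mydb is a placeholder):

# Generate a setval() per serial column, then execute the generated statements.
psql -d mydb -At -c "
SELECT format('SELECT setval(%L, (SELECT COALESCE(MAX(%I), 0) + 1 FROM %I.%I), false);',
              pg_get_serial_sequence(quote_ident(table_schema) || '.' || quote_ident(table_name), column_name),
              column_name, table_schema, table_name)
FROM information_schema.columns
WHERE column_default LIKE 'nextval%';" | psql -d mydb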
I wasted enough time and this got me past the issue. Hopefully the next upgrade doesn't give me this much trouble.
Why does the function Utf8Helper::setCollatorLanguage in the ArangoDB SDK always return false?
The failing call is Collator::createInstance(), which reports U_FILE_ACCESS_ERROR; that in turn leads to "failed to initialise ICU", with ICU_DATA = "F:\\work_lc\\arangodb-2.6\\Build32\\bin\\..\\share\\arangodb\\". This project was copied from someone else; it works for them, but not for me. I just wonder which configuration file was not produced.
You need to make sure ICU is able to load its locale database.
See our cookbook on Windows compilation for how to achieve this.
Please note that ArangoDB 2.6 is way out of date, and you should work with a more recent version.
More recent versions will also provide better error messages in such situations via the Windows event log.
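For example (a hypothetical fix, assuming your build actually placed ICU's data file in that share directory): point the ICU_DATA environment variable at the directory containing ICU's icudt*.dat file before starting arangod.

# Hypothetical path; adjust to wherever your build put ICU's .dat file.
# In a Cygwin/MSYS shell:
export ICU_DATA="F:/work_lc/arangodb-2.6/Build32/share/arangodb/"
# In a plain Windows cmd shell the equivalent would be:
#   set ICU_DATA=F:\work_lc\arangodb-2.6\Build32\share\arangodb\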
With bdutil, the latest version of the tarball I can find is for Spark 1.3.1:
gs://spark-dist/spark-1.3.1-bin-hadoop2.6.tgz
There are a few new DataFrame features in Spark 1.4 that I want to use. Any chance the Spark 1.4 image could be made available for bdutil, or is there any workaround?
UPDATE:
Following the suggestion from Angus Davis, I downloaded and pointed to spark-1.4.1-bin-hadoop2.6.tgz, and the deployment went well; however, I ran into an error when calling SqlContext.parquetFile(). I cannot explain why this exception is possible, since GoogleHadoopFileSystem should be a subclass of org.apache.hadoop.fs.FileSystem. I will continue investigating this.
Caused by: java.lang.ClassCastException: com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem cannot be cast to org.apache.hadoop.fs.FileSystem
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2595)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:354)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.hive.metastore.Warehouse.getFs(Warehouse.java:112)
at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:144)
at org.apache.hadoop.hive.metastore.Warehouse.getWhRoot(Warehouse.java:159)
at org.apache.hadoop.hive.metastore.Warehouse.getDefaultDatabasePath(Warehouse.java:177)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:504)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:523)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:397)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:356)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:54)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4944)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:171)
I asked a separate question about the exception here.
UPDATE:
The error turned out to be a Spark defect; the resolution/workaround is provided in the question linked above.
Thanks!
Haiying
If a local workaround is acceptable, you can copy spark-1.4.1-bin-hadoop2.6.tgz from an Apache mirror into a bucket that you control. You can then edit extensions/spark/spark-env.sh and change SPARK_HADOOP2_TARBALL_URI='<your copy of spark 1.4.1>' (make certain that the service account running your VMs has permission to read the tarball).
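A sketch of those steps, with a placeholder bucket name (the URL shown is the Apache archive path for this release):

# Copy the tarball from the Apache archive into your own bucket.
wget https://archive.apache.org/dist/spark/spark-1.4.1/spark-1.4.1-bin-hadoop2.6.tgz
gsutil cp spark-1.4.1-bin-hadoop2.6.tgz gs://my-bucket/spark/
# Then, in extensions/spark/spark-env.sh:
# SPARK_HADOOP2_TARBALL_URI='gs://my-bucket/spark/spark-1.4.1-bin-hadoop2.6.tgz'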
Note that I haven't done any testing to see if Spark 1.4.1 works out of the box right now, but I'd be interested in hearing your experience if you decide to give it a go.
I am trying to run the TaskTracker on Cygwin, but the following error occurs:
mapred.TaskTracker: Process Tree implementation is missing on this system. TaskMemoryManager is disabled.
Everything else (i.e. the NameNode, SecondaryNameNode, JobTracker, and DataNode) works properly through Cygwin; the issue is only with the TaskTracker. My Hadoop version is hadoop-19.0.1.
So, how do I get rid of this? If anybody knows, please help!
Your help will be appreciated!
I haven't encountered this specific problem, but...
Make sure that you are using the same Hadoop version that is in use on the cluster.
Update Hadoop to a more recent version if possible.
The following patches may address (or maybe not) your problem:
https://issues.apache.org/jira/browse/HADOOP-6230
https://issues.apache.org/jira/browse/MAPREDUCE-834
When I attempt to add a new file to the solution -- even an empty general C# class -- I get an error:
The requested value 'DoNotChange' was not found. See screenshot.
This just started happening yesterday. I installed monotouch-4.0.0.dmg but have since rolled back to 3.2.6; the problem remains.
I think this may be a fairly widespread issue, as this new Stack Overflow question seems eerily similar.
Anyone have any ideas on how to recover?
Environment:
MonoTouch Professional 3.2.6 (4.0.0)
MonoDevelop 2.4.2 release 20402004
OSX 10.6.7
UPDATE: On a whim I tried to create a new empty .cs file outside of MT, and then add it to the project -- that worked, so at least there is a temporary workaround.
It looks like your formatting policy options are triggering a bug in the code formatter. Try resetting it by removing the file ~/.config/MonoDevelop/DefaultPolicies.xml
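For example (moving the file rather than deleting it, so you can restore it if this turns out not to be the cause):

# Move the policy file out of MonoDevelop's way, then restart MonoDevelop.
mv ~/.config/MonoDevelop/DefaultPolicies.xml ~/.config/MonoDevelop/DefaultPolicies.xml.bak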