Apache Cassandra: UnavailableException while trying to insert a record

I am very new to Cassandra and somehow configured it. I was following this link.
Everything was fine, but at the end, when I try to insert a record, it gives me the following exception. I have been trying to fix this since this afternoon; I have Googled a lot but could not get anywhere.
Any help on this will be greatly appreciated.
[default@DEMO] set Users[1234][name] = scott;
null
UnavailableException()
at org.apache.cassandra.thrift.Cassandra$insert_result.read(Cassandra.java:16077)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_insert(Cassandra.java:801)
at org.apache.cassandra.thrift.Cassandra$Client.insert(Cassandra.java:785)
at org.apache.cassandra.cli.CliClient.executeSet(CliClient.java:909)
at org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:222)
at org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:201)
at org.apache.cassandra.cli.CliMain.main(CliMain.java:328)
[default@DEMO]
Thank you
Achyuth

This is an old one, but I want to share my experience.
I had the same issue when I was setting up the QA environment. Everything was configured fine, including cassandra-topology.properties, but nodetool ring displayed the unknown DC value for all nodes, since the default is set to UNKNOWN. That told me that cassandra-topology.properties was not right in some way. After trying several things with no luck, I decided to create my own cassandra-topology.properties file and re-type everything with vi, and then it started fine.
So if you have this issue, run nodetool ring first to see whether the DC is set to what it should be, as in the sketch below.
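For illustration, here is roughly what a bad entry looks like in nodetool ring and what cassandra-topology.properties entries should look like (the addresses, DC and rack names below are made-up placeholders, not values from any real cluster):

$ nodetool ring
Address     DC        Rack    Status  State   ...
10.0.0.1    UNKNOWN   rack1   Up      Normal  ...   <- topology file not being picked up
10.0.0.2    DC1       RAC1    Up      Normal  ...   <- what a correct entry looks like

# cassandra-topology.properties -- entries have the form IP=DC:RACK
10.0.0.1=DC1:RAC1
10.0.0.2=DC1:RAC1
# fallback for nodes that are not listed explicitly
default=DC1:RAC1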

Related

Failed to execute 'table' on org.apache.spark.sql.SparkSession

I have a Spark + Hive application.
It works fine, but at some point I had to create another Hive environment.
So I ran show create table ..., recreated the same view (with its underlying tables), and added some data.
I can query the data from the Hive CLI, etc.,
but whenever I run my application it fails with
ERROR Failed to execute 'table' on 'org.apache.spark.sql.SparkSession' with args=([Type=java.lang.String, Value: <view name>])
I believe it refers to the line of code where I call sparkSession.table(<view-name>).
What steps can I take to troubleshoot such an issue?
UPD
Session declaration (I have definitely also tried creating a session without this configuration):
.Config("spark.hadoop.google.cloud.auth.service.account.enable", "true")
.Config("spark.hadoop.google.cloud.auth.service.account.json.keyfile", "some.file")
.Config("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
.Config("spark.sql.debug.maxToStringFields", int64 2048)
.Config("spark.debug.maxToStringFields", int64 2048)
Maybe a bit trivial, but when it comes to troubleshooting this kind of issue, really try to get to the root of the problem with a minimal setup (a short spark-shell sketch follows these steps):
I generally start off by launching spark-shell.
Check whether it is possible to run spark.sql("SHOW DATABASES").show(20, false). If this fails, it's probably something with your Hive configuration, indeed.
Try and see whether you can run spark.table("your_table"). If not, it will probably give you a clearer error (such as Table or view not found: ...).
If all of the above works, strip your application down so that it only does that spark.table call which worked in spark-shell at that point in time. If that suddenly doesn't work, it might have to do with how the SparkSession is created in your application.
If that works, uncomment the code piece by piece until you're back to your original code, to better pinpoint where it fails.
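A minimal sketch of those spark-shell checks could look roughly like this (the view name is a placeholder for yours):

// spark-shell provides a ready-made SparkSession named `spark`
spark.sql("SHOW DATABASES").show(20, false)   // can we reach the Hive metastore at all?
spark.sql("SHOW TABLES").show(50, false)      // is the recreated view listed?
val df = spark.table("your_view_name")        // placeholder: use your actual view name
df.printSchema()                              // failing here points at table/view resolution
df.limit(5).show(false)                       // failing here points at the underlying data/storage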

Memory Monitor using ':C XXXX' in RDi not Showing Variable Value

When monitoring memory using the :C XXXX option, the memory values do not show while debugging. (The :C option lets you monitor up to 4000 characters of memory.)
I have run into this problem twice now. I am using IBM Rational Developer for i Version 9.6.0.0, with Java JDK/JRE 8u45.
Here are the values when debugging, and my data structure definition:
dcl-ds dsSQL qualified inz;
fullStmt varchar( 9360 ) inz;
end-ds;
Once I click on the element, all I see is ``. There is nothing in the value but that, yet you can clearly see that dsSQL.fullStmt is not empty. I use this option daily and 99.99% of the time it works fine. I have to restart a million times, reset RDi to start with -c, and recompile the program over and over to get it to work right.
Does anyone have any idea how to fix this? I would give you the 'Error Log', but it is constantly filled and nothing in there seems to point to this issue. When adding or looking at that variable during debug, no errors are thrown.
You probably want to update to the latest 9.6.0.6 release. They have fixed at least a few memory problems with the debugger between your release and the latest.
https://www-01.ibm.com/support/docview.wss?uid=swg27038481

Unable to create or update an Asset\Image when saving a Pimcore object

My goal is to create / update objects from a CSV file. The CSV file is fine and already heavily used.
I wrote a complete script for that, and most setSomething() calls work very well with text, structured data, etc., but I'm facing trouble with images. I read the doc twice:
https://pimcore.com/docs/4.6.x/Development_Documentation/Objects/Object_Classes/Data_Types/Image_Types.html
$image = Asset\Image::getByPath("/examples/example1.jpg");
$object->setImage($image);
$object->save();
and that is exactly what I did to create / update $article:
$marque = Pimcore\Model\Asset\Image::getByPath("/Catalog/Marques/".$image);
$article->setEarg($marque);
$article->save();
Please note that:
$image exists,
$marque is found and is an instance of Asset\Image,
setEarg($marque) is properly defined to set an Image data field,
in the UI everything works fine,
the cache was cleared.
Is it possible I misunderstood something?
I also searched past issues, with no luck:
https://github.com/pimcore/pimcore/issues
Maybe someone has faced the same problem and could give a hint on how to solve it?
Regards,
I'm confused... this code is working as it should.
It seems to be a cache problem, even though I cleared it. In the UI the image did appear after some minutes / hours; now it appears immediately. This question can be closed, but it pushes me to another one: how did this cache / deferral problem happen? If I find something I will update here.
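For anyone hitting the same thing, a small defensive variation of the snippet from the question can make such cache / timing issues easier to spot. The $path variable and the logging are illustrative additions; only the getByPath() / setEarg() / save() calls come from the original code:

// Guarded version of the import step: skip the row and log when the asset
// path does not resolve to an Asset\Image at import time.
$path   = "/Catalog/Marques/" . $image;
$marque = Pimcore\Model\Asset\Image::getByPath($path);

if ($marque instanceof Pimcore\Model\Asset\Image) {
    $article->setEarg($marque);
    $article->save();
} else {
    // getByPath() returns null when the asset is missing or not yet visible
    echo "Asset not found for path: " . $path . PHP_EOL;
}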

MongoDB "open files limit" error during repairDatabase

I'm trying to run repairDatabase on MongoDB on Ubuntu 16.04, but it fails with the error "errno:24 Too many open files" ("code" : 16818).
I've raised "ulimit -n" up to 1024000 and restarted the server, but I'm still getting the same error.
It does not seem possible to raise it higher, and I'm stuck with no ideas. Please help!
We have faced a similar issue. First, please check the number of file descriptors used by the mongod process while the repairDatabase() command is running; you can verify this with "lsof -p <mongod_pid>". Also note that if you want to change the maximum number of processes, you need to edit the "/etc/security/limits.conf" file and add an entry for the user running mongod. An illustrative check and limits.conf entry are shown below.
Edit:
There is also already a feature request to open one file per database, as WiredTiger currently opens one file per collection and one per index. If cost is not a serious issue, one should also seriously look into horizontal scaling via sharding.
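A rough sketch of the check and the limits entries described above (the pid and the "mongodb" user name are placeholders for whatever your installation actually uses):

# Count the file descriptors held by mongod while repairDatabase() is running
lsof -p <mongod_pid> | wc -l

# /etc/security/limits.conf -- illustrative entries for the user that runs mongod
mongodb  soft  nofile  64000
mongodb  hard  nofile  64000
mongodb  soft  nproc   64000
mongodb  hard  nproc   64000

Keep in mind that a ulimit raised in an interactive shell does not automatically apply to a mongod started as a service; the limits.conf entries (or the service manager's own limit settings) are what that process actually sees.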

How Can I Save the Indexed Data with Elasticsearch Permanently?

I followed the Elasticsearch Java API guide. I added JAVA_HOME to Computer -> Settings -> Advanced Settings -> Environment Variables. I ran elasticsearch-service-x64.exe and ran elasticsearch.bat as administrator. After doing these settings I opened localhost:9200 in the browser and got information about my client or node, I guess. After a while I opened localhost:9200 again, but this time nothing showed up.
I don't know whether that is the reason for my problem, but I know I can't keep indexed data permanently. When I index, the data goes away within 5 seconds and search operations don't give me any hits. Meanwhile, please tolerate my poor English.
In contrast to Solr, Elasticsearch has no notion of a commit, so everything you feed to the server without error gets indexed.
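A quick way to confirm whether documents really are being stored, assuming a default local install on port 9200 and a placeholder index name:

# List all indices with their document counts
curl -XGET "localhost:9200/_cat/indices?v"

# Document count for one specific index (placeholder name)
curl -XGET "localhost:9200/myindex/_count?pretty"

If the counts stay stable over time, the data is being persisted and the problem is more likely on the query side; if localhost:9200 itself stops responding, as described above, the node has likely stopped, which is a separate problem from data retention.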
