I'm trying to use OpsCenter with my local multi-node development cluster created with CCM. I manually installed and configured the agents for each node using these instructions. I created my custom keyspace and its column families by executing a CQL source file with the SOURCE command in cqlsh.
I get the following error when clicking on Data > MyKeySpace > MyColumnFamily:
Error loading column family: Call to /test_cluster/keyspaces/flashcardsapp/cf/tag timed out.
I am however able to view the column families in the OpsCenter keyspace.
I am seeing the following in the OpsCenter log:
2015-03-14 07:58:35-0600 [] Unhandled Error
Traceback (most recent call last):
File "/Users/justinrobbins/Documents/dev/cassandra/opscenter-5.1.0/lib/py-osx/2.7/amd64/twisted/internet/defer.py", line 1076, in gotResult
_inlineCallbacks(r, g, deferred)
File "/Users/justinrobbins/Documents/dev/cassandra/opscenter-5.1.0/lib/py-osx/2.7/amd64/twisted/internet/defer.py", line 1063, in _inlineCallbacks
deferred.callback(e.value)
File "/Users/justinrobbins/Documents/dev/cassandra/opscenter-5.1.0/lib/py-osx/2.7/amd64/twisted/internet/defer.py", line 361, in callback
self._startRunCallbacks(result)
File "/Users/justinrobbins/Documents/dev/cassandra/opscenter-5.1.0/lib/py-osx/2.7/amd64/twisted/internet/defer.py", line 455, in _startRunCallbacks
self._runCallbacks()
--- <exception caught here> ---
File "/Users/justinrobbins/Documents/dev/cassandra/opscenter-5.1.0/lib/py-osx/2.7/amd64/twisted/internet/defer.py", line 542, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "build/lib/python2.7/site-packages/opscenterd/TwistedRouter.py", line 226, in controllerSucceeded
File "build/lib/python2.7/site-packages/opscenterd/WebServer.py", line 3953, in default_write
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 250, in dumps
sort_keys=sort_keys, **kw).encode(obj)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 270, in iterencode
return _iterencode(o, 0)
File "build/lib/python2.7/site-packages/opscenterd/WebServer.py", line 261, in default
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 184, in default
raise TypeError(repr(o) + " is not JSON serializable")
exceptions.TypeError: UUID('457d5450-ca0b-11e4-a99a-53fff8597215') is not JSON serializable
My environment is as follows:
Cassandra: dsc-cassandra-2.1.2
OpsCenter: opscenter-5.1.0
Agents: datastax-agent-5.1.0
OS: OSX 10.10.1
There's a known bug in OpsCenter where UUID columns in Cassandra 2.1.x are not handled properly. I'm not aware of any clean workarounds; switching away from UUID columns or downgrading C* to 2.0.x should work, but either might be a bit too much work.
It's going to be fixed in the upcoming patch release of OpsCenter 5.1 (though not in 5.1.1).
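The stack trace shows the root cause: Python's standard json module does not know how to serialize uuid.UUID values, so any response containing a UUID breaks OpsCenter's JSON encoder. A minimal sketch of the failure and of the conventional kind of fix (the UUIDEncoder class here is illustrative, not OpsCenter's actual code):

import json
import uuid

row = {"id": uuid.UUID("457d5450-ca0b-11e4-a99a-53fff8597215")}

try:
    json.dumps(row)
except TypeError as e:
    print(e)  # UUID('457d5450-...') is not JSON serializable, as in the log

class UUIDEncoder(json.JSONEncoder):
    # Fall back to the UUID's canonical string form.
    def default(self, o):
        if isinstance(o, uuid.UUID):
            return str(o)
        return json.JSONEncoder.default(self, o)

print(json.dumps(row, cls=UUIDEncoder))  # now serializes cleanly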
After installing and starting memsql-ops, it shows the following error:
# ./memsql-ops start
Starting MemSQL Ops...
Exception in thread Thread-7:
Traceback (most recent call last):
File "/usr/local/updated-openssl/lib/python3.4/threading.py", line 921, in _bootstrap_inner
File "/usr/local/updated-openssl/lib/python3.4/threading.py", line 869, in run
File "/memsql_platform/memsql_platform/agent/daemon/manage.py", line 200, in startup_watcher
File "/memsql_platform/memsql_platform/network/api_client.py", line 34, in call
File "/usr/local/updated-openssl/lib/python3.4/site-packages/simplejson/__init__.py", line 516, in loads
File "/usr/local/updated-openssl/lib/python3.4/site-packages/simplejson/decoder.py", line 370, in decode
File "/usr/local/updated-openssl/lib/python3.4/site-packages/simplejson/decoder.py", line 400, in raw_decode
simplejson.scanner.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Does anyone know what the issue is?
OS: CentOS 6.7
MemSQL: 5.1.0 Enterprise Trial
It is likely that another server is already running on the port MemSQL Ops tries to start on (default 9000) and is returning data that is not JSON-decodable. The solution is to either start MemSQL Ops on a different port, or kill the server running on that port.
We will fix this bug in an upcoming release of Ops! Thanks for pointing it out.
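A quick way to confirm the diagnosis is to fetch whatever is answering on port 9000 and check whether it speaks JSON. A minimal sketch, assuming the conflicting process serves HTTP on localhost (adjust host and port to your setup):

import json
try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen  # Python 2

body = urlopen("http://localhost:9000/").read().decode("utf-8", "replace")
try:
    json.loads(body)
    print("port 9000 is answering with valid JSON")
except ValueError:
    # Same failure mode as the JSONDecodeError in the trace above.
    print("port 9000 is serving non-JSON data: %r" % body[:80])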
I am trying to load data into Google BigQuery using bq load from a named pipe.
Console Window 1:
$ mkfifo /usr/pipe1
$ cat /dev1/item.dat > /usr/pipe1
Console Window 2:
$ bq load --source_format=CSV projectid:dataset.itemtbl /usr/pipe1 field1:integer,field2:integer
Got the following error:
BigQuery error in load operation: Source path is not a file: /usr/pipe1
The BigQuery client bq.py does not support named pipes. It explicitly requires files:
https://code.google.com/p/google-bigquery-tools/source/browse/bq/bigquery_client.py?r=30df4638ff2ddb01d3f495af5c131ed3c2cfbd04#617
Allowing named pipes is a good feature suggestion. You can request it here:
https://code.google.com/p/google-bigquery/issues/list
It looks like you could tweak your copy of bigquery_client.py pretty easily to make this work as well. Good luck!
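Until then, one workaround is to drain the pipe into a regular, seekable temporary file and load that instead. A minimal sketch in Python, reusing the pipe path, table, and schema from the question:

import shutil
import subprocess
import tempfile

# Drain the pipe's contents into a regular, seekable file first.
with open("/usr/pipe1", "rb") as pipe, \
        tempfile.NamedTemporaryFile(suffix=".csv", delete=False) as tmp:
    shutil.copyfileobj(pipe, tmp)

# Then load the regular file, which bq accepts.
subprocess.check_call([
    "bq", "load", "--source_format=CSV",
    "projectid:dataset.itemtbl", tmp.name,
    "field1:integer,field2:integer",
])

This gives up the streaming behavior, of course: the pipe is fully materialized on disk before the load begins.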
The bq load command doesn't support pipe files.
This is the error you get if you change the code to bypass the pipe-file validation:
== Error trace ==
Traceback (most recent call last):
File "/usr/local/share/google/google-cloud-sdk/platform/bq/bq.py", line 1001, in RunSafely
return_value = self.RunWithArgs(*args, **kwds)
File "/usr/local/share/google/google-cloud-sdk/platform/bq/bq.py", line 1355, in RunWithArgs
job = client.Load(table_reference, source, schema=schema, **opts)
File "/usr/local/share/google/google-cloud-sdk/platform/bq/bigquery_client.py", line 3504, in Load
upload_file=upload_file, **kwds)
File "/usr/local/share/google/google-cloud-sdk/platform/bq/bigquery_client.py", line 2924, in ExecuteJob
location=location)
File "/usr/local/share/google/google-cloud-sdk/platform/bq/bigquery_client.py", line 2901, in RunJobSynchronously
location=location)
File "/usr/local/share/google/google-cloud-sdk/platform/bq/bigquery_client.py", line 2755, in StartJob
resumable=resumable)
File "/usr/local/share/google/google-cloud-sdk/platform/bq/third_party/oauth2client_4_0/_helpers.py", line 134, in positional_wrapper
return wrapped(*args, **kwargs)
File "/usr/local/share/google/google-cloud-sdk/platform/bq/third_party/googleapiclient/http.py", line 562, in __init__
resumable=resumable)
File "/usr/local/share/google/google-cloud-sdk/platform/bq/third_party/oauth2client_4_0/_helpers.py", line 134, in positional_wrapper
return wrapped(*args, **kwargs)
File "/usr/local/share/google/google-cloud-sdk/platform/bq/third_party/googleapiclient/http.py", line 439, in __init__
self._fd.seek(0, os.SEEK_END)
IOError: [Errno 29] Illegal seek
========================================
Unexpected exception in load operation: You have encountered a bug in the
BigQuery CLI. Please file a bug report in our
public issue tracker:
https://issuetracker.google.com/issues/new?component=187149&template=0
Please include a brief description of the steps that led to this issue, as well
as any rows that can be made public from the following information:
Time: 2018-10-05 10:04:02
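The trace pinpoints the real obstacle: the upload code calls seek(0, os.SEEK_END) to determine the upload size, and pipes are not seekable, so bypassing the path validation just moves the failure deeper into the client. A minimal sketch that reproduces the errno (the pipe path here is hypothetical):

import os

path = "/tmp/demo_pipe"
if not os.path.exists(path):
    os.mkfifo(path)

# O_NONBLOCK lets the open return even though no writer is attached yet.
fd = os.open(path, os.O_RDONLY | os.O_NONBLOCK)
try:
    os.lseek(fd, 0, os.SEEK_END)
except OSError as e:
    print(e)  # [Errno 29] Illegal seek, matching the trace above
finally:
    os.close(fd)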
OK, I've got Hue 3.8 pointing at my EMR cluster, and it's mostly working. The one thing I'm missing that I really care about at this point is the Spark notebook.
When I attempt to choose a language for a snippet, there is an error, "No usable value for lang Did not find value which can be converted into java.lang.String (error 400)", and the logs say this:
[03/Jun/2015 11:38:59 -0700] decorators ERROR error running <function create_session at 0x7fe30acd1d70>
Traceback (most recent call last):
File "/usr/local/hue/apps/spark/src/spark/decorators.py", line 77, in decorator
return func(*args, **kwargs)
File "/usr/local/hue/apps/spark/src/spark/api.py", line 44, in create_session
response['session'] = get_api(request.user, snippet).create_session(lang=snippet['type'])
File "/usr/local/hue/apps/spark/src/spark/models.py", line 284, in create_session
response = api.create_session(kind=lang)
File "/usr/local/hue/apps/spark/src/spark/job_server_api.py", line 87, in create_session
return self._root.post('sessions', data=json.dumps(kwargs), contenttype='application/json')
File "/usr/local/hue/desktop/core/src/desktop/lib/rest/resource.py", line 122, in post
return self.invoke("POST", relpath, params, data, self._make_headers(contenttype, headers))
File "/usr/local/hue/desktop/core/src/desktop/lib/rest/resource.py", line 78, in invoke
urlencode=self._urlencode)
File "/usr/local/hue/desktop/core/src/desktop/lib/rest/http_client.py", line 161, in execute
raise self._exc_class(ex)
RestException: No usable value for lang
Did not find value which can be converted into java.lang.String (error 400)
Is this a problem with the software or my config?
This might be tied to the fact that attempting to run sudo ./hue livy_server yields:
Failed to run spark-submit executable: java.io.IOException:
Cannot run program "spark-submit": error=2, No such file or directory
spark-submit does in fact exist and is on my PATH.
The spark-submit command comes from Spark; it needs to be present on the Hue machine and visible on the PATH of the process that launches the Livy server. Note that sudo typically resets PATH to a minimal secure_path (see /etc/sudoers), which would explain why spark-submit is on your PATH but not found under sudo ./hue livy_server.
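One quick way to check what the launching process actually sees is to print its PATH and look up the executable from inside Python (a sketch that works on both Python 2 and 3; run it under the same sudo invocation you use for the Livy server):

import os
from distutils.spawn import find_executable

# The PATH of the current process; under sudo this is often a reduced one.
print(os.environ.get("PATH"))

# None here means this process cannot find spark-submit any more than Livy can.
print(find_executable("spark-submit"))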
I installed the CCM utility for Cassandra, but when I try to create a cluster with the following command:
root@ubuntu2:~# ccm create cluster1 -v=2.0.9
I get the following error:
Traceback (most recent call last):
File "/usr/local/bin/ccm", line 5, in
pkg_resources.run_script('ccm==2.0', 'ccm')
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 528, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 1401, in run_script
exec(script_code, namespace, namespace)
File "/usr/local/lib/python2.7/dist-packages/ccm-2.0-py2.7.egg/EGG-INFO/scripts/ccm", line 72, in
File "build/bdist.linux-x86_64/egg/ccmlib/cmds/cluster_cmds.py", line 125, in run
File "build/bdist.linux-x86_64/egg/ccmlib/cluster.py", line 51, in init
File "build/bdist.linux-x86_64/egg/ccmlib/cluster.py", line 64, in load_from_repository
File "build/bdist.linux-x86_64/egg/ccmlib/repository.py", line 40, in setup
File "build/bdist.linux-x86_64/egg/ccmlib/repository.py", line 211, in download_version
ccmlib.common.ArgumentError: Invalid version =2.0.9 (underlying error is: HTTP Error 404: Not Found)
Changing to any other version of Cassandra in the command does not help, and there is no issue with my Internet connection.
I would appreciate any help.
Try ccm create cluster1 -v 2.0.9, with a space instead of = between the flag and the version.
As the error suggests, the whole string '=2.0.9' is being parsed as the version number, which is not valid.
We recently upgraded to Cassandra 2.0.1 with cqlsh 4.0.1. I am seeing timeout and broken-pipe errors while using the cqlsh client. Please see the error trace below. I have verified that the cluster is up using nodetool, and I am able to read/write using MapReduce. Please advise.
Thanks,
Prateek
Traceback (most recent call last):
File "./bin/cqlsh", line 897, in perform_statement_untraced
self.cursor.execute(statement, decoder=decoder)
File "./bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/cursor.py", line 80, in execute
response = self.get_response(prepared_q, cl)
File "./bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/thrifteries.py", line 77, in get_response
return self.handle_cql_execution_errors(doquery, compressed_q, compress, cl)
File "./bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/thrifteries.py", line 96, in handle_cql_execution_errors
return executor(*args, **kwargs)
File "./bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/cassandra/Cassandra.py", line 1782, in execute_cql3_query
self.send_execute_cql3_query(query, compression, consistency)
File "./bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/cassandra/Cassandra.py", line 1793, in send_execute_cql3_query
self._oprot.trans.flush()
File "./bin/../lib/thrift-python-internal-only-0.9.1.zip/thrift/transport/TTransport.py", line 292, in flush
self.__trans.write(buf)
File "./bin/../lib/thrift-python-internal-only-0.9.1.zip/thrift/transport/TSocket.py", line 128, in write
plus = self.handle.send(buff)
error: [Errno 32] Broken pipe
If you have an open cqlsh session, it will always give you Errno 32 if the Cassandra instance it connected to was stopped or even just restarted. You will have to restart cqlsh to re-establish a connection to the server.
If you see this problem without having stopped or restarted a Cassandra server, then please supply additional details about the conditions that led up to this error.