Dataflow job fails with HttpError, NotImplementedError - python-3.x
I'm running a Dataflow job which I think should work, but it fails after about 1.5 hours with what look like network errors. It works fine when run against a subset of the data.
The first trouble sign is a whole string of warnings like this:
Refusing to split <dataflow_worker.shuffle.GroupedShuffleRangeTracker object at 0x7f2bcb629950> at b'\xa4r\xa6\x85\x00\x01': proposed split position is out of range [b'\xa4^E\xd2\x00\x01', b'\xa4r\xa6\x85\x00\x01'). Position of last group processed was b'\xa4r\xa6\x84\x00\x01'.
Then there are four errors which seem to be about writing CSV files to GCS:
Error in _start_upload while inserting file gs://(redacted).csv:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/apache_beam/io/gcp/gcsio.py", line 565, in _start_upload
    self._client.objects.Insert(self._insert_request, upload=self._upload)
  File "/usr/local/lib/python3.7/site-packages/apache_beam/io/gcp/internal/clients/storage/storage_v1_client.py", line 1156, in Insert
    upload=upload, upload_config=upload_config)
  File "/usr/local/lib/python3.7/site-packages/apitools/base/py/base_api.py", line 731, in _RunMethod
    return self.ProcessHttpResponse(method_config, http_response, request)
  File "/usr/local/lib/python3.7/site-packages/apitools/base/py/base_api.py", line 737, in ProcessHttpResponse
    self.__ProcessHttpResponse(method_config, http_response, request))
  File "/usr/local/lib/python3.7/site-packages/apitools/base/py/base_api.py", line 604, in __ProcessHttpResponse
    http_response, method_config=method_config, request=request)
apitools.base.py.exceptions.HttpError: HttpError accessing <https://www.googleapis.com/resumable/upload/storage/v1/b/(redacted).csv&uploadType=resumable&upload_id=(redacted)>: response: <{'content-type': 'text/plain; charset=utf-8', 'x-guploader-uploadid': '(redacted)', 'content-length': '0', 'date': 'Wed, 08 Jul 2020 22:17:28 GMT', 'server': 'UploadServer', 'status': '503'}>, content <>
Error in _start_upload while inserting file gs://(redacted).csv:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/apache_beam/io/gcp/gcsio.py", line 565, in _start_upload
    self._client.objects.Insert(self._insert_request, upload=self._upload)
  File "/usr/local/lib/python3.7/site-packages/apache_beam/io/gcp/internal/clients/storage/storage_v1_client.py", line 1156, in Insert
    upload=upload, upload_config=upload_config)
  File "/usr/local/lib/python3.7/site-packages/apitools/base/py/base_api.py", line 715, in _RunMethod
    http_request, client=self.client)
  File "/usr/local/lib/python3.7/site-packages/apitools/base/py/transfer.py", line 908, in InitializeUpload
    return self.StreamInChunks()
  File "/usr/local/lib/python3.7/site-packages/apitools/base/py/transfer.py", line 1020, in StreamInChunks
    additional_headers=additional_headers)
  File "/usr/local/lib/python3.7/site-packages/apitools/base/py/transfer.py", line 971, in __StreamMedia
    self.RefreshResumableUploadState()
  File "/usr/local/lib/python3.7/site-packages/apitools/base/py/transfer.py", line 873, in RefreshResumableUploadState
    self.stream.seek(self.progress)
  File "/usr/local/lib/python3.7/site-packages/apache_beam/io/filesystemio.py", line 301, in seek
    offset, whence, self.position, self.last_block_position))
NotImplementedError: offset: 0, whence: 0, position: 411, last: 411
The Dataflow job ID is 2020-07-07_13_08_31-7649894576933400587 -- if anyone from Google Cloud Support is able to look at this I'd be very grateful. Thanks very much.
P.S. I asked a similar question last year (Dataflow job fails at BigQuery write with backend errors); the resolution then was to use --experiments=use_beam_bq_sink, which I am already doing.
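For reference, a minimal sketch of how that flag can be set through Beam's PipelineOptions rather than on the command line (the project, region and bucket below are placeholders, not this job's actual values):

    from apache_beam.options.pipeline_options import PipelineOptions

    # Equivalent to passing --experiments=use_beam_bq_sink on the command line.
    options = PipelineOptions(
        runner="DataflowRunner",
        project="my-project",                 # placeholder
        region="us-central1",                 # placeholder
        temp_location="gs://my-bucket/tmp",   # placeholder
        experiments=["use_beam_bq_sink"],
    )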
You can safely ignore the "Refusing to split" messages. They just mean that the split position the Dataflow service proposed was probably received by the worker after the worker had already read past that position, so the worker has to ignore the split request.
Error "Error in _start_upload while inserting" seems more problematic and seems to be similar to https://issues.apache.org/jira/browse/BEAM-7014. I suspect this to be a rare flake though so I'm not sure if this was the reason for your job failure (the job only fails of the same workitem failed four times).
Can you contact Google Cloud Support so that they can look into your job?
I will mention this in the JIRA.
Related
Encountered an internal AutoML error - ClientException: Message: No objects to concatenate
I am trying to implement hierarchical time series forecasting with Azure AutoML pipelines. I followed this notebook for the implementation: https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb
When I ran the training pipeline on a compute instance it worked, but when I run the same pipeline on a compute cluster it breaks at the hts-proportion-calculation step. This is the error I am getting:

system error: Encountered an internal AutoML error. Error Message/Code: ClientException. Additional Info: ClientException: Message: No objects to concatenate InnerException: None ErrorResponse { "error": { "message": "No objects to concatenate" } }

logs:
Loading arguments for scenario proportions-calculation
adding argument --input-medatadata
adding argument --hts-graph
adding argument --enable-event-logger
Input arguments dict is {'--input-medatadata': '/mnt/azureml/cr/j/85509be625484b6caa3c1d97b7ab2e33/cap/data-capability/wd/INPUT_automl_training_workspaceblobstore/azureml/17ca5ae7-7269-4246-888f-e781071e3f5c/automl_training', '--hts-graph': '/mnt/azureml/cr/j/85509be625484b6caa3c1d97b7ab2e33/cap/data-capability/wd/INPUT_hts_graph_workspaceblobstore/azureml/a2c1b15a-c895-41e8-b6a6-1ca37ebe9e77/hts_graph', '--enable-event-logger': None}
Unknown file to proceed outputs.txt
processing: outputs.txt with type None.
Cleaning up all outstanding Run operations, waiting 300.0 seconds
3 items cleaning up...
Cleanup took 0.001676321029663086 seconds
Traceback (most recent call last):
  File "proportions_calculation_wrapper.py", line 47, in <module>
    runtime_wrapper.run()
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_many_models/automl_pipeline_step_wrapper.py", line 63, in run
    self._run()
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_hts/proportions_calculation.py", line 44, in _run
    proportions_calculation(self.arguments_dict, self.event_logger, script_run=self.step_run)
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_hts/proportions_calculation.py", line 173, in proportions_calculation
    proportion_files_list, forecasting_parameters.time_column_name, graph.label_column_name
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_hts/proportions_calculation.py", line 92, in calculate_time_agg_sum_for_all_files
    df = pd.concat(pool.map(concat_func, files_batches), ignore_index=True)
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/pandas/util/_decorators.py", line 311, in wrapper
    return func(*args, **kwargs)
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/pandas/core/reshape/concat.py", line 304, in concat
    sort=sort,
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/pandas/core/reshape/concat.py", line 351, in __init__
    raise ValueError("No objects to concatenate")
ValueError: No objects to concatenate

Please let me know how I can resolve this issue.
This error occurred because the iteration timeout was not less than the experiment timeout, but the system error and logs are somewhat misleading: the log points at pandas ("No objects to concatenate") via the line

    df = pd.concat(pool.map(concat_func, files_batches), ignore_index=True)

The error can be avoided by setting the iteration timeout to a value smaller than the experiment timeout. I had originally set iteration_timeout_minutes=60, which caused the error. The configuration below, with a smaller iteration timeout, works:

    automl_settings = AutoMLConfig(
        task="forecasting",
        primary_metric="normalized_root_mean_squared_error",
        experiment_timeout_hours=1,
        label_column_name=label_column_name,
        track_child_runs=False,
        forecasting_parameters=forecasting_parameters,
        pipeline_fetch_max_batch_size=15,
        model_explainability=model_explainability,
        n_cross_validations="auto",  # Feel free to set to a small integer (>=2) if runtime is an issue.
        cv_step_size="auto",
        # The following settings are specific to this sample and should be adjusted according to your own needs.
        iteration_timeout_minutes=10,
        iterations=15,
    )
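To make the relationship explicit, a tiny sanity check along these lines can be run before submitting the experiment (purely illustrative; the values mirror the configuration above):

    experiment_timeout_hours = 1
    iteration_timeout_minutes = 10

    # The per-iteration timeout must be strictly less than the overall experiment
    # timeout, otherwise the hts-proportion-calculation step can fail with the
    # misleading "No objects to concatenate" error shown above.
    assert iteration_timeout_minutes < experiment_timeout_hours * 60, (
        "iteration_timeout_minutes must be less than the experiment timeout"
    )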
We are able to run the sample successfully using the compute cluster as given below.

    from azureml.core.compute import ComputeTarget, AmlCompute

    # Name your cluster
    compute_name = "hts-compute"

    if compute_name in ws.compute_targets:
        compute_target = ws.compute_targets[compute_name]
        if compute_target and type(compute_target) is AmlCompute:
            print("Found compute target: " + compute_name)
    else:
        print("Creating a new compute target...")
        provisioning_config = AmlCompute.provisioning_configuration(
            vm_size="STANDARD_D16S_V3", max_nodes=20
        )
        # Create the compute target
        compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)

        # Can poll for a minimum number of nodes and for a specific timeout.
        # If no min node count is provided it will use the scale settings for the cluster
        compute_target.wait_for_completion(
            show_output=True, min_node_count=None, timeout_in_minutes=20
        )

        # For a more detailed view of current cluster status, use the 'status' property
        print(compute_target.status.serialize())
UnicodeDecodeError: invalid start byte in METADATA file at path:
I see that several Python package-related files have gibberish at their end. Because of this, I am unable to run several pip operations (even basic ones like "pip list"). (Usually I use conda, by the way.) For example, when I run pip list, I get the following error:

ERROR: Exception:
Traceback (most recent call last):
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_internal\cli\base_command.py", line 173, in _main
    status = self.run(options, args)
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_internal\commands\list.py", line 179, in run
    self.output_package_listing(packages, options)
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_internal\commands\list.py", line 255, in output_package_listing
    data, header = format_for_columns(packages, options)
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_internal\commands\list.py", line 307, in format_for_columns
    row = [proj.raw_name, str(proj.version)]
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_internal\metadata\base.py", line 163, in raw_name
    return self.metadata.get("Name", self.canonical_name)
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_internal\metadata\pkg_resources.py", line 96, in metadata
    return get_metadata(self._dist)
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_internal\utils\packaging.py", line 48, in get_metadata
    metadata = dist.get_metadata(metadata_name)
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 1424, in get_metadata
    return value.decode('utf-8')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xfd in position 14097: invalid start byte in METADATA file at path: c:\users\shan_jaffry\miniconda3\envs\sql_version\lib\site-packages\hupper-1.10.2.dist-info\METADATA

I went into the METADATA file and found the following gibberish at its end. I found the same has happened in several other files, i.e. the end of the file is appended with gibberish and the actual content is removed. Any help?

> 0.1 (2016-10-21)
> ================
> -
> - Initial rele9ýl·øA
I found that by manually going into the site-packages folder, removing the two folders hupper and hupper-1.10.2.dist-info, and then installing hupper again with "pip install hupper", the problem was solved. The issue was that the hupper package (and hupper-1.10.2.dist-info) was corrupted, so removing and re-installing it helped.
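If you suspect other packages are corrupted as well, a small script along these lines can scan the environment's site-packages for METADATA files that no longer decode as UTF-8 (illustrative sketch only; run it inside the affected environment):

    import pathlib
    import sysconfig

    # Scan every *.dist-info/METADATA file in site-packages and report the ones
    # that can no longer be decoded as UTF-8 (i.e. are likely corrupted).
    site_packages = pathlib.Path(sysconfig.get_paths()["purelib"])
    for metadata in site_packages.glob("*.dist-info/METADATA"):
        try:
            metadata.read_text(encoding="utf-8")
        except UnicodeDecodeError as exc:
            print(f"Corrupted: {metadata} ({exc})")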
TinkerPop crashes when running a very long query
Massive Query 3500~ characters: g.V().hasLabel("Software").filter(hasId(8192,8193,8194,8195,8196,8197,8198,8199,8200,8201,8202,8203,8204,8205,8206,8207,8208,8209,8210,8211,8212,8213,8214,8215,8216,8217,8218,8219,8220,8221,8222,8223,8224,8225,8226,8227,8228,8229,8230,8231,8232,8233,8234,8235,8236,8237,8238,8239,8240,8241,8242,8243,8244,8245,8246,8247,8248,8249,8250,8251,8252,8253,8254,8255,8256,8257,8258,8259,8260,8261,8262,8263,8264,8265,8266,8267,8268,8269,8270,8271,8272,8273,8274,8275,8276,8277,8278,8279,8280,8281,8282,8283,8284,8285,8286,8287,8288,8289,8290,8291,8292,8293,8294,8295,8296,8297,8298,8299,8300,8301,8302,8303,8304,8305,8306,8307,8308,8309,8310,8311,8312,8313,8314,8315,8316,8317,8318,8319,8320,8321,8322,8323,8324,8325,8326,8327,8328,8329,8330,8331,8332,8333,8334,8335,8336,8337,8338,8339,8340,8341,8342,10197,2448,2449,2450,2451,2452,2453,2454,2455,2456,2457,2458,2459,2460,2461,2462,2463,2464,2465,2466,2467,2468,2469,2470,2471,2472,2473,2474,2475,2476,2477,2478,2479,2480,2481,2482,2483,2484,2485,2486,2487,2488,2489,2490,2491,2492,2493,2494,2495,2496,2497,2498,2499,2500,2501,2502,2503,2504,2505,2506,2507,2508,2509,2510,2511,2512,2513,2514,2515,2516,2517,2518,2519,2520,2521,2522,2523,2524,2525,2526,2527,2528,2529,2530,2531,2532,2533,2534,2535,2536,2537,2538,2539,2540,2541,2542,2543,2544,2545,2546,2547,2548,2549,2550,2551,2552,2553,2554,2555,2556,2557,2558,2559,2560,2561,2562,2563,2564,2565,2566,2567,2568,2569,2570,2571,2572,2573,2574,2575,2576,2577,2578,2579,2580,2581,2582,2583,2584,2585,2586,2587,2588,2589,2590,2591,2592,2593,2594,2595,2596,2597,2598,2599,2600,2601,2602,2603,2604,2605,2606,2607,2608,2609,2610,2611,2612,2613,2614,2615,2616,2617,2618,2619,2620,2621,2622,2623,2624,2625,7839,7840,7841,7842,7843,7844,7845,7846,7847,7848,7849,7850,7851,7852,7853,7854,7855,7856,7857,7858,7859,7860,7861,7862,7863,7864,7865,7866,7867,7868,7869,7870,7871,7872,7873,7874,7875,7876,7877,7878,7879,7880,7881,7882,7883,7884,7885,7886,7887,7888,7889,7890,7891,7892,7893,7894,7895,7896,7897,7898,7899,7900,7901,7902,7903,7904,7905,7906,7907,7908,7909,7910,7911,7912,7913,7914,7915,7916,7917,7918,7919,7920,7921,7922,7923,7924,7925,7926,7927,7928,7929,7930,7931,7932,7933,7934,7935,7936,7937,7938,7939,7940,7941,7942,7943,7944,7945,7946,7947,7948,7949,7950,7951,7952,7953,7954,7955,7956,7957,7958,7959,7960,7961,7962,7963,7964,7965,7966,7967,7968,7969,7970,7971,7972,7973,7974,7975,7976,7977,7978,7979,7980,7981,7982,7983,7984,7985,7986,7987,7988,7989,7990,7991,7992,7993,7994,7995,7996,7997,7998,7999,8000,8001,8002,8003,8004,8005,8006,8007,8008,8009,8010,8011,8012,8013,8014,8015,8016,8017,8018,8019,8020,8021,8022,8023,8024,8025,8026,8027,8028,8029,8030,8031,8032,8033,8034,8035,8036,8037,8038,8039,8040,8041,8042,8043,8044,8045,8046,8047,8048,8049,8050,8051,8052,8053,8054,8055,8056,8057,8058,8059,8060,8061,8062,8063,8064,8065,8066,8067,8068,8069,8070,8071,8072,8073,8074,8075,8076,8077,8078,8079,8080,8081,8082,8083,8084,8085,8086,8087,8088,8089,8090,8091,8092,8093,8094,8095,8096,8097,8098,8099,8100,8101,8102,8103,8104,8105,8106,8107,8108,8109,8110,8111,8112,8113,8114,8115,8116,8117,8118,8119,8120,8121,8122,8123,8124,8125,8126,8127,8128,8129,8130,8131,8132,8133,8134,8135,8136,8137,8138,8139,8140,8141,8142,8143,8144,8145,8146,8147,8148,8149,8150,8151,8152,8153,8154,8155,8156,8157,8158,8159,8160,8161,8162,8163,8164,8165,8166,8167,8168,8169,8170,8171,8172,8173,8174,8175,8176,8177,8178,8179,8180,8181,8182,8183,8184,8185,8186,8187,8188,8189,8190,8191)) .values("name") And it crashed badly, my guess is there is some 
kind of limit in the query length. If my assumption of length problem is correct, is there any work around for this??? From Python I am running it like: client = driver.Client(GREMLIN_URL, GREMLIN_VAR) client.submit(query) Stacktrace: Traceback (most recent call last): File "/home/galaxia/Documents/bitbucket repo/ecodrone/ecodrone/test/test2.py", line 263, in <module> """)) File "/home/galaxia/Documents/bitbucket repo/ecodrone/ecodrone/GremlinConnector.py", line 22, in execute_query results = future_results.result() File "/usr/lib/python3.5/concurrent/futures/_base.py", line 405, in result return self.__get_result() File "/usr/lib/python3.5/concurrent/futures/_base.py", line 357, in __get_result raise self._exception File "/home/galaxia/PycharmProjects/helloworld/venv/lib/python3.5/site-packages/gremlin_python/driver/resultset.py", line 81, in cb f.result() File "/usr/lib/python3.5/concurrent/futures/_base.py", line 398, in result return self.__get_result() File "/usr/lib/python3.5/concurrent/futures/_base.py", line 357, in __get_result raise self._exception File "/usr/lib/python3.5/concurrent/futures/thread.py", line 55, in run result = self.fn(*self.args, **self.kwargs) File "/home/galaxia/PycharmProjects/helloworld/venv/lib/python3.5/site-packages/gremlin_python/driver/connection.py", line 77, in _receive self._protocol.data_received(data, self._results) File "/home/galaxia/PycharmProjects/helloworld/venv/lib/python3.5/site-packages/gremlin_python/driver/protocol.py", line 106, in data_received "{0}: {1}".format(status_code, data["status"]["message"])) gremlin_python.driver.protocol.GremlinServerError: 597: startup failed: General error during class generation: 683 java.lang.ArrayIndexOutOfBoundsException: 683 at org.codehaus.groovy.classgen.asm.CallSiteWriter.getCreateArraySignature(CallSiteWriter.java:58) at org.codehaus.groovy.classgen.asm.CallSiteWriter.makeCallSite(CallSiteWriter.java:317) at org.codehaus.groovy.classgen.asm.InvocationWriter.makeCachedCall(InvocationWriter.java:307) at org.codehaus.groovy.classgen.asm.InvocationWriter.makeCall(InvocationWriter.java:397) at org.codehaus.groovy.classgen.asm.InvocationWriter.makeCall(InvocationWriter.java:104) at org.codehaus.groovy.classgen.asm.InvocationWriter.writeInvokeStaticMethod(InvocationWriter.java:515) at org.codehaus.groovy.classgen.AsmClassGenerator.visitStaticMethodCallExpression(AsmClassGenerator.java:807) at org.codehaus.groovy.ast.expr.StaticMethodCallExpression.visit(StaticMethodCallExpression.java:46) at org.codehaus.groovy.classgen.asm.CallSiteWriter.makeCallSite(CallSiteWriter.java:303) at org.codehaus.groovy.classgen.asm.InvocationWriter.makeCachedCall(InvocationWriter.java:307) at org.codehaus.groovy.classgen.asm.InvocationWriter.makeCall(InvocationWriter.java:397) at org.codehaus.groovy.classgen.asm.InvocationWriter.makeCall(InvocationWriter.java:104) at org.codehaus.groovy.classgen.asm.InvocationWriter.makeInvokeMethodCall(InvocationWriter.java:88) at org.codehaus.groovy.classgen.asm.InvocationWriter.writeInvokeMethod(InvocationWriter.java:464) at org.codehaus.groovy.classgen.AsmClassGenerator.visitMethodCallExpression(AsmClassGenerator.java:771) at org.codehaus.groovy.ast.expr.MethodCallExpression.visit(MethodCallExpression.java:66) at org.codehaus.groovy.classgen.asm.CallSiteWriter.prepareSiteAndReceiver(CallSiteWriter.java:235) at org.codehaus.groovy.classgen.asm.CallSiteWriter.prepareSiteAndReceiver(CallSiteWriter.java:224) at 
org.codehaus.groovy.classgen.asm.CallSiteWriter.makeCallSite(CallSiteWriter.java:272) at org.codehaus.groovy.classgen.asm.InvocationWriter.makeCachedCall(InvocationWriter.java:307) at org.codehaus.groovy.classgen.asm.InvocationWriter.makeCall(InvocationWriter.java:397) at org.codehaus.groovy.classgen.asm.InvocationWriter.makeCall(InvocationWriter.java:104) at org.codehaus.groovy.classgen.asm.InvocationWriter.makeInvokeMethodCall(InvocationWriter.java:88) at org.codehaus.groovy.classgen.asm.InvocationWriter.writeInvokeMethod(InvocationWriter.java:464) at org.codehaus.groovy.classgen.AsmClassGenerator.visitMethodCallExpression(AsmClassGenerator.java:771) at org.codehaus.groovy.ast.expr.MethodCallExpression.visit(MethodCallExpression.java:66) at org.codehaus.groovy.classgen.asm.StatementWriter.writeReturn(StatementWriter.java:590) at org.codehaus.groovy.classgen.asm.OptimizingStatementWriter.writeReturn(OptimizingStatementWriter.java:324) at org.codehaus.groovy.classgen.AsmClassGenerator.visitReturnStatement(AsmClassGenerator.java:620) at org.codehaus.groovy.ast.stmt.ReturnStatement.visit(ReturnStatement.java:49) at org.codehaus.groovy.classgen.asm.StatementWriter.writeBlockStatement(StatementWriter.java:85) at org.codehaus.groovy.classgen.asm.OptimizingStatementWriter.writeBlockStatement(OptimizingStatementWriter.java:159) at org.codehaus.groovy.classgen.AsmClassGenerator.visitBlockStatement(AsmClassGenerator.java:570) at org.codehaus.groovy.ast.stmt.BlockStatement.visit(BlockStatement.java:71) at org.codehaus.groovy.ast.ClassCodeVisitorSupport.visitClassCodeContainer(ClassCodeVisitorSupport.java:104) at org.codehaus.groovy.ast.ClassCodeVisitorSupport.visitConstructorOrMethod(ClassCodeVisitorSupport.java:115) at org.codehaus.groovy.classgen.AsmClassGenerator.visitStdMethod(AsmClassGenerator.java:434) at org.codehaus.groovy.classgen.AsmClassGenerator.visitConstructorOrMethod(AsmClassGenerator.java:387) at org.codehaus.groovy.ast.ClassCodeVisitorSupport.visitMethod(ClassCodeVisitorSupport.java:126) at org.codehaus.groovy.classgen.AsmClassGenerator.visitMethod(AsmClassGenerator.java:511) at org.codehaus.groovy.ast.ClassNode.visitContents(ClassNode.java:1081) at org.codehaus.groovy.ast.ClassCodeVisitorSupport.visitClass(ClassCodeVisitorSupport.java:53) at org.codehaus.groovy.classgen.AsmClassGenerator.visitClass(AsmClassGenerator.java:233) at org.codehaus.groovy.control.CompilationUnit$17.call(CompilationUnit.java:825) at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1065) at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:603) at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:581) at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:558) at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298) at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268) at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:254) at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:211) at org.apache.tinkerpop.gremlin.groovy.jsr223.GremlinGroovyScriptEngine$2.lambda$load$0(GremlinGroovyScriptEngine.java:166) at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590) at java.util.concurrent.CompletableFuture.asyncSupplyStage(CompletableFuture.java:1604) at java.util.concurrent.CompletableFuture.supplyAsync(CompletableFuture.java:1830) at 
org.apache.tinkerpop.gremlin.groovy.jsr223.GremlinGroovyScriptEngine$2.load(GremlinGroovyScriptEngine.java:164) at org.apache.tinkerpop.gremlin.groovy.jsr223.GremlinGroovyScriptEngine$2.load(GremlinGroovyScriptEngine.java:159) at com.github.benmanes.caffeine.cache.BoundedLocalCache$BoundedLocalLoadingCache.lambda$new$0(BoundedLocalCache.java:3117) at com.github.benmanes.caffeine.cache.LocalCache.lambda$statsAware$0(LocalCache.java:144) at com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$16(BoundedLocalCache.java:1968) at java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1892) at com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:1966) at com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:1949) at com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:113) at com.github.benmanes.caffeine.cache.LocalLoadingCache.get(LocalLoadingCache.java:67) at org.apache.tinkerpop.gremlin.groovy.jsr223.GremlinGroovyScriptEngine.getScriptClass(GremlinGroovyScriptEngine.java:586) at org.apache.tinkerpop.gremlin.groovy.jsr223.GremlinGroovyScriptEngine.eval(GremlinGroovyScriptEngine.java:393) at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:233) at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.lambda$eval$0(GremlinExecutor.java:263) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 1 error Summary: I am trying to do Vendor independent text search, and posted my problems in stackoverflow and google groups. It seemed pretty clear that there is no solution so such a thing, at the moment. So I attempted to do this, Fetch all values with g.V().hasLabel("software").project("id", "name").by(id()).by("name") In do code perform text search Fetch all those vertices by its mapped ids. Update: This seems arr=[1,2,3,....n].toArray() g.V().filter(hasId(arr)).values("name") and not this g.V().filter(hasId(1,2,3,....n)).values("name")
If you send large scripts to Gremlin Server you can expect to see some problems. Large scripts have long compilation times and they can exceed the maximum byte size the JVM allows for a method. Your really long traversal string doesn't need to be that long if you do something you should be doing anyway: parameterizing your queries. First, let's simplify your traversal:

    g.V().hasLabel("Software").filter(hasId(8192,8193,8194....)).values("name")

is really just:

    g.V().hasId(8192,8193,8194....).values("name")

If you have the actual vertex identifier then you already have the unique id and thus do not require the vertex label filter for "Software". We can then simplify further down to:

    g.V(8192,8193,8194....).values("name")

Now, let's parameterize the script:

    g.V(ids).values("name")

and sent from the gremlin-python driver the code looks like:

    client.submit("g.V(ids).values('name')", {'ids': [8192,8193,8194....]}).next()

You will see a massive improvement in performance (especially on repeated calls) by taking this approach.
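For completeness, a minimal end-to-end sketch of that parameterized call with gremlin-python (the server URL, traversal source and id list are placeholders):

    from gremlin_python.driver.client import Client

    # Placeholders: point these at your own Gremlin Server and traversal source.
    gremlin_client = Client('ws://localhost:8182/gremlin', 'g')

    ids = [8192, 8193, 8194]  # the full id list goes here
    # The ids are sent as a binding, so the script itself stays tiny and the
    # server compiles it only once, no matter how many ids are passed.
    result_set = gremlin_client.submit("g.V(ids).values('name')", {'ids': ids})
    names = result_set.all().result()
    print(names)

    gremlin_client.close()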
stripe.error.APIConnectionError related to OpenSSL
I am dealing with the following error:

Traceback (most recent call last):
  File "once.py", line 1757, in <module>
    once()
  File "once.py", line 55, in once
    stripe.Charge.all()
  File "/Library/Python/2.7/site-packages/stripe/resource.py", line 438, in all
    return cls.list(*args, **params)
  File "/Library/Python/2.7/site-packages/stripe/resource.py", line 450, in list
    response, api_key = requestor.request('get', url, params)
  File "/Library/Python/2.7/site-packages/stripe/api_requestor.py", line 150, in request
    method.lower(), url, params, headers)
  File "/Library/Python/2.7/site-packages/stripe/api_requestor.py", line 281, in request_raw
    method, abs_url, headers, post_data)
  File "/Library/Python/2.7/site-packages/stripe/http_client.py", line 139, in request
    self._handle_request_error(e)
  File "/Library/Python/2.7/site-packages/stripe/http_client.py", line 159, in _handle_request_error
    raise error.APIConnectionError(msg)
stripe.error.APIConnectionError: Unexpected error communicating with Stripe. If this problem persists, let us know at support#stripe.com.

I get this error when running a simple test program which Stripe suggested:

    import stripe
    stripe.api_key = "blah bla"
    stripe.api_base = "https://api-tls12.stripe.com"
    print "stripe.VERSION = ", stripe.VERSION
    if stripe.VERSION in ("1.13.0", "1.14.0", "1.14.1", "1.15.1", "1.16.0", "1.17.0", "1.18.0", "1.19.0"):
        print "Bindings update required."
    try:
        stripe.Charge.all()
        print "TLS 1.2 supported, no action required."
    except stripe.error.APIConnectionError:
        print "TLS 1.2 is not supported. You will need to upgrade your integration."
        raise

I do not understand why I get this error, since my stripe version is high enough:

    stripe.VERSION = 1.55.2

and my openssl version does support TLS:

    >>$ openssl version
    OpenSSL 1.0.2k 26 Jan 2017
    >>$ which openssl
    /usr/bin/openssl

Any ideas how to debug this further? I am lost...
OK, I don't know what exactly caused the problem, but I got it working by changing the HTTP client:

    client = stripe.http_client.PycurlClient()
    stripe.default_http_client = client

I think the requests package is the default. pycurl seems to work in my case...
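For context, combined with the test program from the question the workaround looks roughly like this (a sketch against the old 1.x stripe bindings used in the question; the API key is a placeholder):

    import stripe

    stripe.api_key = "sk_test_placeholder"  # placeholder key
    # Switch the underlying HTTP client from requests to pycurl before making any calls.
    stripe.default_http_client = stripe.http_client.PycurlClient()

    # Same check as in the question: this raises APIConnectionError if the connection still fails.
    stripe.Charge.all()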
Pinterest API search not working anymore
I was looking for Pinterest API endpoints and found this URL:

https://api.pinterest.com/v3/domains/<domains>/search/pins/?query=<query>&access_token=<access_token>

I was able to generate the access_token, but every time I tried a POST on that URL it gave me:

{
  "status":"failure",
  "code":12,
  "host":"ngapi2-b2fc674c",
  "generated_at":"Mon, 09 Feb 2015 17:45:29 +0000",
  "message":"Something went wrong on our end. Sorry about that.",
  "data":"path: /v3/domains/www.vtracker.com.br/search/pins/\nparams: [('access_token', [u'blablahblahblah']), ('query', [u'como'])]\nTraceback (most recent call last):\n File \"/mnt/virtualenv/local/lib/python2.7/site-packages/flask/app.py\", line 1504, in wsgi_app\n response = self.full_dispatch_request()\n File \"/mnt/virtualenv/local/lib/python2.7/site-packages/flask/app.py\", line 1264, in full_dispatch_request\n rv = self.handle_user_exception(e)\n File \"/mnt/virtualenv/local/lib/python2.7/site-packages/flask/app.py\", line 1262, in full_dispatch_request\n rv = self.dispatch_request()\n File \"/mnt/virtualenv/local/lib/python2.7/site-packages/flask/app.py\", line 1248, in dispatch_request\n return self.view_functions[rule.endpoint](**req.view_args)\n File \"../api/pin_api.py\", line 715, in __call__\n self._perform_auth()\n File \"../api/pin_api.py\", line 848, in _perform_auth\n authorization.perform(dictified_values, request.cookies, request.headers)\n File \"../api/pin_api.py\", line 271, in perform\n params, cookies, headers)\n File \"../api/pin_api.py\", line 121, in perform\n headers=headers)\n File \"../api/decorators.py\", line 212, in verify_user_authorization\n core.Consumer.manager.get_scope_as_int(required_scope)):\n File \"../core/managers/consumer_manager.py\", line 479, in check_scope\n scope = migrate_legacy_scope(scope)\n File \"../core/managers/consumer_manager.py\", line 475, in migrate_legacy_scope\n if ~scope & old == 0:\nTypeError: bad operand type for unary ~: 'NoneType'\n"
}

Is the Pinterest API v3 closed, or is some other problem going on? Thanks
You must first ensure that your app has been approved by Pinterest. You may need to reapply for approval (I had to reapply for my app). Once you are approved, you will see a link on your app page called "Visit API docs". This will link to the V3 documentation (https://developers.pinterest.com/docs/redoc/pinner_app). At least, this is the documentation I have been given access to. If you have a different type of app, maybe you will have access to other documentation. After your app has been approved, the section of the documentation you will be interested in is "Search user pins" (https://developers.pinterest.com/docs/redoc/pinner_app/#tag/search). The endpoint is: https://api.pinterest.com/v3/search/user_pins/{user}/ The documentation provides details about the query parameters that are allowed and the response data.
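As a rough illustration only, a call against that endpoint could look like the sketch below; the query and access_token parameter names are carried over from the question's original URL and are assumptions, so verify them against the V3 documentation for your app:

    import requests

    USER = "me"                      # placeholder user identifier
    ACCESS_TOKEN = "your-token"      # placeholder token

    # Hypothetical call; parameter names are assumed, not confirmed from the docs.
    resp = requests.get(
        f"https://api.pinterest.com/v3/search/user_pins/{USER}/",
        params={"query": "como", "access_token": ACCESS_TOKEN},
    )
    print(resp.status_code)
    print(resp.json())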