Reading the medusa_s3_credentials does not work on medusa - python-3.x

Describe the bug
Python raises an error during the initialization of the medusa container.
Environment:
```
apiVersion: v1
kind: Secret
metadata:
  name: medusa-bucket-key
type: Opaque
stringData:
  medusa_s3_credentials: |-
    [default]
    aws_access_key_id = xxxxxx
    aws_secret_access_key = xxxxxxxx
```
medusa-operator version:
0.12.2
Helm charts version info:
```
apiVersion: v2
name: k8ssandra
type: application
version: 1.6.0-SNAPSHOT
dependencies:
  - name: cass-operator
    version: 0.35.2
  - name: reaper-operator
    version: 0.32.3
  - name: medusa-operator
    version: 0.32.0
  - name: k8ssandra-common
    version: 0.28.4
```
Kubernetes version information:
v1.23.1
Kubernetes cluster kind:
EKS
Operator logs:
```
MEDUSA_MODE = GRPC
sleeping for 0 sec
Starting Medusa gRPC service
INFO:root:Init service
[2022-05-10 12:56:28,368] INFO: Init service
DEBUG:root:Loading storage_provider: s3
[2022-05-10 12:56:28,368] DEBUG: Loading storage_provider: s3
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): 169.254.169.254:80
[2022-05-10 12:56:28,371] DEBUG: Starting new HTTP connection (1): 169.254.169.254:80
DEBUG:urllib3.connectionpool:http://169.254.169.254:80 "PUT /latest/api/token HTTP/1.1" 200 56
[2022-05-10 12:56:28,373] DEBUG: http://169.254.169.254:80 "PUT /latest/api/token HTTP/1.1" 200 56
DEBUG:root:Reading AWS credentials from /etc/medusa-secrets/medusa_s3_credentials
[2022-05-10 12:56:28,373] DEBUG: Reading AWS credentials from /etc/medusa-secrets/medusa_s3_credentials
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/cassandra/medusa/service/grpc/server.py", line 297, in <module>
    server.serve()
  File "/home/cassandra/medusa/service/grpc/server.py", line 60, in serve
    medusa_pb2_grpc.add_MedusaServicer_to_server(MedusaService(config), self.grpc_server)
  File "/home/cassandra/medusa/service/grpc/server.py", line 99, in __init__
    self.storage = Storage(config=self.config.storage)
  File "/home/cassandra/medusa/storage/__init__.py", line 72, in __init__
    self.storage_driver = self._connect_storage()
  File "/home/cassandra/medusa/storage/__init__.py", line 92, in _connect_storage
    s3_storage = S3Storage(self._config)
  File "/home/cassandra/medusa/storage/s3_storage.py", line 40, in __init__
    super().__init__(config)
  File "/home/cassandra/medusa/storage/abstract_storage.py", line 39, in __init__
    self.driver = self.connect_storage()
  File "/home/cassandra/medusa/storage/s3_storage.py", line 78, in connect_storage
    profile = aws_config[aws_profile]
  File "/usr/lib/python3.6/configparser.py", line 959, in __getitem__
    raise KeyError(key)
KeyError: 'default'
```
What could be the problem?
Thanks,
Cristian
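For reference, the failing call in the traceback is a plain configparser lookup of the [default] profile in the mounted credentials file. A minimal sketch (not medusa's actual implementation) that reproduces the same KeyError when the mounted file is empty, missing, or has no [default] section:

```python
import configparser

# Path the operator logs show medusa reading the AWS credentials from.
CREDENTIALS_FILE = "/etc/medusa-secrets/medusa_s3_credentials"
AWS_PROFILE = "default"

parser = configparser.ConfigParser()
# Note: read() silently ignores files it cannot open, so an empty or
# missing file also ends in the KeyError below.
parser.read(CREDENTIALS_FILE)

# Raises KeyError: 'default' if the file contains no [default] section,
# e.g. because the Secret's stringData block lost its indentation.
profile = parser[AWS_PROFILE]
print(profile["aws_access_key_id"])
```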

Related

AWS SAM API endpoint using a Lambda is throwing an error when it runs locally with "sam local start-api"

I am building a simple API endpoint using AWS Lambda function and API Gateway. I am orchestrating those resources using SAM. I can deploy my function to the AWS cloud and access the endpoint. It's working as expected.
Here is my lambda function.
exports.lambdaHandler = async (event) => ({
  statusCode: 200,
  body: {
    message: `Hello World!`
  }
})
Here is my SAM CloudFormation template
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: HTTP API
Globals:
  Function:
    Timeout: 5
    Handler: app.lambdaHandler
    Runtime: nodejs16.x
Resources:
  LambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Events:
        Api:
          Type: Api
          Properties:
            Path: /
            Method: GET
Outputs:
  WebEndpoint:
    Description: "API Gateway endpoint URL for Prod stage"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/"
But I am getting an error when I run the endpoint locally in a Docker container.
I run the following command to run the API locally.
sam build --use-container && sam local start-api
When I run that, the Docker image builds successfully and there are no errors in the console. The console gives me a local endpoint, http://127.0.0.1:3000/.
When I visit the endpoint in the browser, I am getting this response.
{"message":"Internal server error"}
I also see the following errors in the console.
Invoking app.lambdaHandler (nodejs16.x)
Image was not found.
Removing rapid images for repo public.ecr.aws/sam/emulation-nodejs16.x
Building image........................
Failed to build Docker Image
NoneType: None
Exception on / [GET]
Traceback (most recent call last):
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/flask/app.py", line 2073, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/flask/app.py", line 1518, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/flask/app.py", line 1516, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/flask/app.py", line 1502, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/apigw/local_apigw_service.py", line 361, in _request_handler
self.lambda_runner.invoke(route.function_name, event, stdout=stdout_stream_writer, stderr=self.stderr)
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/commands/local/lib/local_lambda.py", line 137, in invoke
self.local_runtime.invoke(
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/lib/telemetry/metric.py", line 315, in wrapped_func
return_value = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/lambdafn/runtime.py", line 177, in invoke
container = self.create(function_config, debug_context, container_host, container_host_interface)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/lambdafn/runtime.py", line 73, in create
container = LambdaContainer(
^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/docker/lambda_container.py", line 93, in __init__
image = LambdaContainer._get_image(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/docker/lambda_container.py", line 236, in _get_image
return lambda_image.build(runtime, packagetype, image, layers, architecture, function_name=function_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/docker/lambda_image.py", line 164, in build
self._build_image(
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/docker/lambda_image.py", line 279, in _build_image
raise ImageBuildException("Error building docker image: {}".format(log["error"]))
samcli.commands.local.cli_common.user_exceptions.ImageBuildException: Error building docker image: The command '/bin/sh -c mv /var/rapid/aws-lambda-rie-x86_64 /var/rapid/aws-lambda-rie && chmod +x /var/rapid/aws-lambda-rie' returned a non-zero code: 1
2023-01-21 09:29:27 127.0.0.1 - - [21/Jan/2023 09:29:27] "GET / HTTP/1.1" 502 -
2023-01-21 09:29:27 127.0.0.1 - - [21/Jan/2023 09:29:27] "GET /favicon.ico HTTP/1.1" 403 -
What is wrong with my code and how can I fix it?
We had a similar problem, but only on macOS, so if you are using macOS this might help you (there is Homebrew in your error output, so I guess you are).
In our case, aws-sam-cli was installed via brew.
What we did was delete aws-sam-cli and reinstall it via the "Command Line - All users" arm64.pkg installer:
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/install-sam-cli.html
After that we got
=== ERROR: Error: Protocol error (Target.setDiscoverTargets): Target closed.
but this might be related to our project only.

Can't access Redis from a FastAPI app when they are in the same cluster, same pod, using a single-node cluster via Docker Desktop

2023-01-07 21:05:14 File "/usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 407, in run_asgi
2023-01-07 21:05:14 result = await app( # type: ignore[func-returns-value]
2023-01-07 21:05:14 File "/usr/local/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
2023-01-07 21:05:14 return await self.app(scope, receive, send)
2023-01-07 21:05:14 File "/usr/local/lib/python3.10/site-packages/fastapi/applications.py", line 270, in __call__
2023-01-07 21:05:14 await super().__call__(scope, receive, send)
2023-01-07 21:05:14 File "/usr/local/lib/python3.10/site-packages/starlette/applications.py", line 124, in __call__
2023-01-07 21:05:14 await self.middleware_stack(scope, receive, send)
2023-01-07 21:05:14 File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
2023-01-07 21:05:14 raise exc
2023-01-07 21:05:14 File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
2023-01-07 21:05:14 await self.app(scope, receive, _send)
2023-01-07 21:05:14 File "/usr/local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
2023-01-07 21:05:14 raise exc
2023-01-07 21:05:14 File "/usr/local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
2023-01-07 21:05:14 await self.app(scope, receive, sender)
2023-01-07 21:05:14 File "/usr/local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
2023-01-07 21:05:14 raise e
2023-01-07 21:05:14 File "/usr/local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
2023-01-07 21:05:14 await self.app(scope, receive, send)
2023-01-07 21:05:14 File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 706, in __call__
2023-01-07 21:05:14 await route.handle(scope, receive, send)
2023-01-07 21:05:14 File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
2023-01-07 21:05:14 await self.app(scope, receive, send)
2023-01-07 21:05:14 File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 66, in app
2023-01-07 21:05:14 response = await func(request)
2023-01-07 21:05:14 File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 235, in app
2023-01-07 21:05:14 raw_response = await run_endpoint_function(
2023-01-07 21:05:14 File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 161, in run_endpoint_function
2023-01-07 21:05:14 return await dependant.call(**values)
2023-01-07 21:05:14 File "/app/./main.py", line 49, in login
2023-01-07 21:05:14 stored_password = await redis_client.hget(name=user.username, key="password")
2023-01-07 21:05:14 File "/usr/local/lib/python3.10/site-packages/aioredis/client.py", line 1082, in execute_command
2023-01-07 21:05:14 conn = self.connection or await pool.get_connection(command_name, **options)
2023-01-07 21:05:14 File "/usr/local/lib/python3.10/site-packages/aioredis/connection.py", line 1416, in get_connection
2023-01-07 21:05:14 await connection.connect()
2023-01-07 21:05:14 File "/usr/local/lib/python3.10/site-packages/aioredis/connection.py", line 698, in connect
2023-01-07 21:05:14 raise ConnectionError(self._error_message(e))
2023-01-07 21:05:14 aioredis.exceptions.ConnectionError: Error 111 connecting to redis:6379. 111.
deployment.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-service
  template:
    metadata:
      labels:
        app: auth-service
    spec:
      containers:
        - name: auth-service
          image: localhost:5000/auth-service:latest
          ports:
            - containerPort: 8000
          resources:
            limits:
              cpu: "1"
              memory: "1Gi"
            requests:
              cpu: "0.5"
              memory: "500Mi"
          envFrom:
            - secretRef:
                name: auth-service-fastapi-secrets
        - name: redis
          image: redis:alpine
          ports:
            - containerPort: 6379
          resources:
            limits:
              cpu: "1"
              memory: "1Gi"
            requests:
              cpu: "0.5"
              memory: "500Mi"
---
apiVersion: v1
kind: Service
metadata:
  name: auth-service
spec:
  type: NodePort
  selector:
    app: auth-service
  ports:
    - port: 8000
      targetPort: 8000
      nodePort: 30080
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379
My docker-compose setup did work:
docker-compose.yml:
version: "3"
services:
auth-service:
build: .
ports:
- "8000:8000"
environment:
REDIS_HOST: redis
REDIS_PORT: 6379
env_file: fastapi.env
depends_on:
- redis
redis:
image: "redis:alpine"
volumes:
- redis_data:/data
volumes:
redis_data:
fastapi code:
import os
import bcrypt
import jwt
import aioredis
from datetime import datetime
from fastapi import FastAPI, HTTPException, Cookie, Depends, Response
from pydantic import BaseModel, EmailStr

app = FastAPI()


# user model
class User(BaseModel):
    username: str
    password: str


# env variables
SECRET_KEY = os.environ["JWT_SECRET"]

# redis connection
redis_client = aioredis.from_url(
    "redis://redis:6379", encoding="utf-8", decode_responses=True)
So I am not sure what the problem is.
I have tried asking ChatGPT, which didn't quite work for me. I also tried using the cluster IP of the Redis service in the FastAPI code instead of the name "redis":
aioredis.from_url("redis://redis:6379", encoding="utf-8", decode_responses=True)
It is still not working:
ConnectionRefusedError: [Errno 111] Connect call failed ('10.107.169.72', 6379)
I saw some similar questions on Stack Overflow, but I am still confused after reading them.
When your application code connects to redis:6379, in Kubernetes, it connects to the Service named redis in the same namespace. In the setup you've shown, that forwards requests to Pods with a label app: redis. There aren't any of those Pods, though, which results in your error.
You should also be able to see this comparing kubectl describe service auth-service and kubectl describe service redis. The redis service should end with a line like
Endpoints: <none>
which is usually a sign that the Service's selector: doesn't match the Pods' labels:.
In your case, the right answer is to split the Deployment into two, with only one container each. For the Redis half, roughly:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  ...
spec:
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:alpine
This has a couple of other technical advantages. If you rebuild your application and change the image: tag in your main Deployment, the restart won't also restart Redis, so you'll keep your cache. If you set the main application to have multiple replicas:, they'll all share the same single Redis in the other Deployment.
(If you want to set up your Redis to also persist its data to disk, use a StatefulSet rather than a Deployment. This is a more complicated setup, and comes with requirements like an additional Service. If you're fine with your Redis occasionally losing its state, then a Deployment is fine.)
According to this answer, you can access other containers in the same pod on localhost, so in your case localhost:6379.
I would also consider why you're running both your application and redis on the same pod. The whole point of using kubernetes is to be able to scale your application, and having several containers in the same pod makes it impossible to scale your app without also scaling redis, and vice versa.
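Not part of the original answer, but one small code-side change that pairs well with either layout is reading the Redis location from the environment instead of hard-coding it; REDIS_HOST and REDIS_PORT already exist in the docker-compose file, and in Kubernetes you can set them to the redis Service name (or localhost if you keep both containers in one Pod). A rough sketch:

```python
import os
import aioredis

# Default to the in-cluster Service name "redis"; override via env vars
# (docker-compose already sets REDIS_HOST/REDIS_PORT).
REDIS_HOST = os.environ.get("REDIS_HOST", "redis")
REDIS_PORT = os.environ.get("REDIS_PORT", "6379")

redis_client = aioredis.from_url(
    f"redis://{REDIS_HOST}:{REDIS_PORT}",
    encoding="utf-8",
    decode_responses=True,
)
```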

InvalidClientTokenId error when trying to access SQS from Celery

When I try to connect to SQS using Celery, I'm not able to. The worker crashes with the following message:
[2022-10-28 14:05:39,237: CRITICAL/MainProcess] Unrecoverable error: ClientError('An error occurred (InvalidClientTokenId) when calling the GetQueueAttributes operation: The security token included in the request is invalid.')
Traceback (most recent call last):
File "/home/aditya/.local/lib/python3.10/site-packages/celery/worker/worker.py", line 203, in start
self.blueprint.start(self)
File "/home/aditya/.local/lib/python3.10/site-packages/celery/bootsteps.py", line 116, in start
step.start(parent)
File "/home/aditya/.local/lib/python3.10/site-packages/celery/bootsteps.py", line 365, in start
return self.obj.start()
File "/home/aditya/.local/lib/python3.10/site-packages/celery/worker/consumer/consumer.py", line 332, in start
blueprint.start(self)
File "/home/aditya/.local/lib/python3.10/site-packages/celery/bootsteps.py", line 116, in start
step.start(parent)
File "/home/aditya/.local/lib/python3.10/site-packages/celery/worker/consumer/tasks.py", line 38, in start
c.task_consumer = c.app.amqp.TaskConsumer(
File "/home/aditya/.local/lib/python3.10/site-packages/celery/app/amqp.py", line 274, in TaskConsumer
return self.Consumer(
File "/home/aditya/.local/lib/python3.10/site-packages/kombu/messaging.py", line 387, in __init__
self.revive(self.channel)
File "/home/aditya/.local/lib/python3.10/site-packages/kombu/messaging.py", line 409, in revive
self.declare()
File "/home/aditya/.local/lib/python3.10/site-packages/kombu/messaging.py", line 422, in declare
queue.declare()
File "/home/aditya/.local/lib/python3.10/site-packages/kombu/entity.py", line 606, in declare
self._create_queue(nowait=nowait, channel=channel)
File "/home/aditya/.local/lib/python3.10/site-packages/kombu/entity.py", line 615, in _create_queue
self.queue_declare(nowait=nowait, passive=False, channel=channel)
File "/home/aditya/.local/lib/python3.10/site-packages/kombu/entity.py", line 643, in queue_declare
ret = channel.queue_declare(
File "/home/aditya/.local/lib/python3.10/site-packages/kombu/transport/virtual/base.py", line 523, in queue_declare
return queue_declare_ok_t(queue, self._size(queue), 0)
File "/home/aditya/.local/lib/python3.10/site-packages/kombu/transport/SQS.py", line 633, in _size
resp = c.get_queue_attributes(
File "/home/aditya/.local/lib/python3.10/site-packages/botocore/client.py", line 514, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/aditya/.local/lib/python3.10/site-packages/botocore/client.py", line 938, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidClientTokenId) when calling the GetQueueAttributes operation: The security token included in the request is invalid.
The credentials are correct, and my Cloud Admin insists that my credentials have the required permissions.
If it helps, these are in my settings.py :
from urllib.parse import quote

CELERY_BROKER_URL = 'sqs://{access_key}:{secret_key}@'.format(
    access_key=quote(env.str("AWS_ACCESS_KEY_ID"), safe=''),
    secret_key=quote(env.str("AWS_SECRET_ACCESS_KEY"), safe=''),
)
CELERY_BROKER_TRANSPORT_OPTIONS = {
    "region": "us-east-2",
    "polling_interval": 60,
    'sdb_persistence': False,
    "predefined_queues": {
        'myq': {
            'url': 'https://sqs.us-east-2.amazonaws.com/164782647287/myq',
            'access_key_id': env.str("AWS_ACCESS_KEY_ID"),
            'secret_access_key': env.str("AWS_SECRET_ACCESS_KEY"),
        },
    }
}
CELERY_TASK_DEFAULT_QUEUE = 'myq'
I'm using the latest version of Celery and associated dependencies:
celery[sqs]==5.2.7
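(Not part of the question, but one way to double-check the claim that the credentials are valid is to call STS and SQS directly with the same environment variables Celery is given; if this also fails with InvalidClientTokenId, the problem is the key material itself rather than the Celery/kombu configuration.)

```python
import boto3

# Uses AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment,
# i.e. the same values passed to Celery above.
print(boto3.client("sts").get_caller_identity())

sqs = boto3.client("sqs", region_name="us-east-2")
print(sqs.get_queue_attributes(
    QueueUrl="https://sqs.us-east-2.amazonaws.com/164782647287/myq",
    AttributeNames=["ApproximateNumberOfMessages"],
))
```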

Swagger-UI generated python server not starting due to 'no module named' error

I'm working with an OpenAPI 3.0.1 YAML and I'm unable to get the API web server started due to the error below. I have tried almost everything within my knowledge, but I'm very new to OpenAPI and I followed the documentation as it is written. Any thoughts on what could be wrong here?
This is the error on loading up the server:
Failed to add operation for GET /v2/catalog
Traceback (most recent call last):
File "C:\Programs\Python\Python38\lib\site-packages\connexion\apis\abstract.py", line 209, in add_paths
self.add_operation(path, method)
File "C:\Programs\Python\Python38\lib\site-packages\connexion\apis\abstract.py", line 162, in add_operation
operation = make_operation(
File "C:\Programs\Python\Python38\lib\site-packages\connexion\operations\__init__.py", line 8, in make_operation
return spec.operation_cls.from_spec(spec, *args, **kwargs)
File "C:\Programs\Python\Python38\lib\site-packages\connexion\operations\openapi.py", line 128, in from_spec
return cls(
File "C:\Programs\Python\Python38\lib\site-packages\connexion\operations\openapi.py", line 75, in __init__
super(OpenAPIOperation, self).__init__(
File "C:\Programs\Python\Python38\lib\site-packages\connexion\operations\abstract.py", line 96, in __init__
self._resolution = resolver.resolve(self)
File "C:\Programs\Python\Python38\lib\site-packages\connexion\resolver.py", line 40, in resolve
return Resolution(self.resolve_function_from_operation_id(operation_id), operation_id)
File "C:\Programs\Python\Python38\lib\site-packages\connexion\resolver.py", line 64, in resolve_function_from_operation_id
raise ResolverError(msg, sys.exc_info())
connexion.exceptions.ResolverError: <ResolverError: Cannot resolve operationId "catalog.get"! Import error was "No module named 'catalog'">
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Programs\Python\Python38\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Programs\Python\Python38\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "D:\API\swagger_server\__main__.py", line 25, in <module>
main()
File "D:\API\swagger_server\__main__.py", line 18, in main
app.add_api('D:\API\swagger_server\swagger\swagger.yaml', arguments={'title': 'GPI API Broker'}, pythonic_params=True)
File "C:\Programs\Python\Python38\lib\site-packages\connexion\apps\flask_app.py", line 57, in add_api
api = super(FlaskApp, self).add_api(specification, **kwargs)
File "C:\Programs\Python\Python38\lib\site-packages\connexion\apps\abstract.py", line 141, in add_api
api = self.api_cls(specification,
File "C:\Programs\Python\Python38\lib\site-packages\connexion\apis\abstract.py", line 111, in __init__
self.add_paths()
File "C:\Programs\Python\Python38\lib\site-packages\connexion\apis\abstract.py", line 216, in add_paths
self._handle_add_operation_error(path, method, err.exc_info)
File "C:\Programs\Python\Python38\lib\site-packages\connexion\apis\abstract.py", line 231, in _handle_add_operation_error
raise value.with_traceback(traceback)
File "C:\Programs\Python\Python38\lib\site-packages\connexion\resolver.py", line 61, in resolve_function_from_operation_id
return self.function_resolver(operation_id)
File "C:\Programs\Python\Python38\lib\site-packages\connexion\utils.py", line 110, in get_function_from_name
module = importlib.import_module(module_name)
File "C:\Programs\Python\Python38\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'catalog'
The startup command is basically: python -m swagger_server
Finally, this is part of my YAML where the operationId is mentioned:
openapi: 3.0.1
info:
  title: Open Service Broker API
  description: The Open Service Broker API defines an HTTP(S) interface between Platforms
    and Service Brokers.
  contact:
    name: Open Service Broker API
    url: https://www.openservicebrokerapi.org/
    email: open-service-broker-api@googlegroups.com
  license:
    name: Apache 2.0
    url: http://www.apache.org/licenses/LICENSE-2.0.html
  version: master - might contain changes that are not yet released
externalDocs:
  description: The official Open Service Broker API specification
  url: https://github.com/openservicebrokerapi/servicebroker/
servers:
  - url: http://localhost:80/
  - url: https://localhost:80/
security:
  - basicAuth: []
paths:
  /v2/catalog:
    get:
      tags:
        - Catalog
      summary: get the catalog of services that the service broker offers
      operationId: 'catalog.get'
      parameters:
        - name: X-Broker-API-Version
          in: header
          ...
Thank you all in advance!
The operationId must be relative to where your app is running.
swagger_server
|-- app.py
|-- __init__.py
|-- OpenAPI
| |-- openapi.yml
|-- models
| |-- catalog.py
Given the above folder structure, you start the app in the directory swagger_server. Consequently, the directory of app.py is on your sys.path. Relative to this directory you need to specify the operationId in your openapi.yml:
paths:
  /v2/catalog:
    get:
      tags:
        - Catalog
      summary: get the catalog of services that the service broker offers
      operationId: 'models.catalog.get'
The file catalog.py must contain a function def get(name). Here you can define your service/handler for the endpoint. It must include the parameter name because you have specified the parameter in your YAML file.
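As a rough illustration (the catalog payload below is made up, and parameter handling is simplified to **kwargs), models/catalog.py could look like:

```python
# models/catalog.py -- matches operationId: 'models.catalog.get'

def get(**kwargs):
    """Handle GET /v2/catalog.

    connexion passes the declared request parameters as keyword
    arguments; returning (body, status) lets connexion serialize it.
    """
    # Illustrative, empty catalog response.
    return {"services": []}, 200
```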

ldap3.core.exceptions.LDAPSessionTerminatedByServerError: session terminated by server

I am trying to run the following code, which builds a connection:
from ldap3 import Server, Connection, ALL, SIMPLE

server = Server(host='localhost', port=33389, use_ssl=False, get_info=ALL)
conn = Connection(server, user='uid=admin,ou=people,dc=example,dc=org', password=user-pass, raise_exceptions=False, authentication=SIMPLE)
print(server.info)
print(conn)
Below is the error detail:
None
ldap://localhost:33389 - cleartext - user: uid=admin,ou=people,dc=hadoop,dc=apache,dc=org - not lazy - unbound - closed - <no socket> - tls not started - not listening - SyncStrategy - internal decoder
**************************
Traceback (most recent call last):
File "knox_connect.py", line 116, in <module>
main()
File "knox_connect.py", line 112, in main
print(get_knox_users())
File "knox_connect.py", line 63, in get_knox_users
conn.open()
File "/usr/local/lib/python3.5/dist-packages/ldap3/strategy/sync.py", line 59, in open
self.connection.refresh_server_info()
File "/usr/local/lib/python3.5/dist-packages/ldap3/core/connection.py", line 1325, in refresh_server_info
self.server.get_info_from_server(self)
File "/usr/local/lib/python3.5/dist-packages/ldap3/core/server.py", line 448, in get_info_from_server
self._get_dsa_info(connection)
File "/usr/local/lib/python3.5/dist-packages/ldap3/core/server.py", line 364, in _get_dsa_info
get_operational_attributes=True)
File "/usr/local/lib/python3.5/dist-packages/ldap3/core/connection.py", line 775, in search
response = self.post_send_search(self.send('searchRequest', request, controls))
File "/usr/local/lib/python3.5/dist-packages/ldap3/strategy/sync.py", line 142, in post_send_search
responses, result = self.get_response(message_id)
File "/usr/local/lib/python3.5/dist-packages/ldap3/strategy/base.py", line 345, in get_response
raise LDAPSessionTerminatedByServerError(self.connection.last_error)
ldap3.core.exceptions.LDAPSessionTerminatedByServerError: session terminated by server
Any idea about the error?
You must bind the connection with the conn.bind() method before reading info.
If you use print(conn.result), it will display more details, including "description" and "message" fields, where you can find the actual reason why the session was terminated.
Example:
{'dn': u'', 'saslCreds': None, 'referrals': None, 'description': 'inappropriateAuthentication', 'result': 48, 'message': u'Inappropriate authentication', 'type': 'bindResponse'}
{'dn': u'', 'saslCreds': None, 'referrals': None, 'description': 'success', 'result': 0, 'message': u'', 'type': 'bindResponse'}
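As a minimal sketch of that flow, using the same placeholder host and a made-up password in place of the question's redacted one:

```python
from ldap3 import Server, Connection, ALL, SIMPLE

server = Server(host='localhost', port=33389, use_ssl=False, get_info=ALL)
conn = Connection(server, user='uid=admin,ou=people,dc=example,dc=org',
                  password='admin-password',  # placeholder credential
                  raise_exceptions=False, authentication=SIMPLE)

# Bind first; server.info is only populated once the connection has been
# opened/bound and the DSA info has been fetched (get_info=ALL).
if conn.bind():
    print(server.info)
else:
    # 'description' and 'message' explain why the bind failed.
    print(conn.result)
```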
