I'm using Airflow 2.5.1 and Python 3.10. I need to create an S3 connection in Admin > Connections > Add connection, but the S3 connection type is missing from the dropdown.
I already installed the provider apache-airflow-providers-amazon and also tried restarting, but the issue remains. Any help would be much appreciated.
I ended up using the connection type Amazon Web Services. It worked with the same S3Hook.
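For reference, a minimal sketch of using S3Hook against a connection created with the Amazon Web Services type; the connection ID and bucket name below are just placeholders:

from airflow.providers.amazon.aws.hooks.s3 import S3Hook

# "aws_s3" is a placeholder for a connection created with type "Amazon Web Services"
hook = S3Hook(aws_conn_id="aws_s3")

# List keys in a bucket to confirm the connection and credentials work
print(hook.list_keys(bucket_name="my-example-bucket"))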
I'm trying to find a package, or figure out a way, to use an SMB/Samba client to put files on an FSx share. I found this example: https://aws.amazon.com/blogs/storage/enabling-smb-access-for-serverless-workloads/. The problem is that I'm writing in Node.js, not Python. Does anyone know of a Node.js package, or a way to install smbclient with a custom resource in the CloudFormation template?
I have been building a web application using Python and the AWS console on a borrowed computer for the past month.
I recently obtained a new computer and I am trying to switch from developing my app online in the AWS console to developing offline on my localhost.
Online, I already have an existing API: Lambda functions, API Gateway, and DynamoDB tables.
Offline, I have the following tools installed: Linux, PyCharm, Python 3.9, AWS CLI v2, AWS SAM CLI, and Docker.
My confusion lies in how to replicate the organization of these resources in directories on my local computer.
And is there a simple command to import, clone, or set up my entire app/API locally?
Any advice, direction, documentation or tutorials related to this reverse migration issue would be greatly appreciated.
Thank you.
There are other questions similar to mine, but I think none of them is complete or fits/answers my case.
I'm deploying a Python 3.6 application to AWS Lambda via the Serverless Framework.
In this application I'm using diskcache to do some small file caching (I'm not using sqlite directly at all).
I'm using the "serverless-python-requirements" plugin in order to have all my dependencies (defined in the requirements.txt file; diskcache in this case) packaged up and uploaded.
When the application is live on AWS and I request it, I get back a 500 error, and in my logs I can read:
Unable to import module 'handler': No module named '_sqlite3'
From the answer below, I gather that the sqlite module should not need to be installed:
Python: sqlite no matching distribution found for sqlite
So there's no need (and it won't work) to add sqlite as a requirement...
Then I wonder why AWS Lambda is unable to find sqlite once deployed.
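For context, even a minimal use of diskcache is enough to trigger that import, since diskcache keeps its index in SQLite via the stdlib sqlite3 module. The cache path and key below are just examples:

import diskcache  # importing diskcache pulls in the stdlib sqlite3 module internally

# /tmp is the writable path on Lambda; the directory name is just an example
cache = diskcache.Cache("/tmp/mycache")
cache.set("greeting", "hello")
print(cache.get("greeting"))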
Any hints, please?
Thanks
In my AWS Lambda Python 3.6 function I'd like to use Google Firestore (Cloud Firestore BETA) for caching purposes, but as soon as I add
from google.cloud import firestore
to my Python script and upload the ZIP to the AWS Lambda function, the Lambda test comes back with the error:
Unable to import module 'MyLambdaFunction': cannot import name 'cygrpc'.
The AWS CloudWatch log doesn't contain any details on the error, just that same error message.
The Lambda function works great on my local dev machine (Windows 10), and I can write to Firestore fine. It also works on AWS if I comment out the import and all Firestore-related lines.
Any tips on how I could go about solving this?
The Python client for Firestore relies on the C-based implementation of gRPC. This appears not to work by default on AWS Lambda.
Node.js users have reported similar problems, and they've documented a workaround of building in a Docker image.
This should be similar to getting any other Python package that requires native code to work. Perhaps something like this method for getting scikit to work?
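As a quick sanity check, you could try the import that grpc itself performs at startup; it's the exact name that fails in the error above, so running this in the target environment tells you whether the native extension is usable there:

# Reproduces the failing import; grpc/__init__.py does the same thing internally.
from grpc._cython import cygrpc  # noqa: F401
print("native gRPC extension loaded")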
I hope this is enough to get you going in the right direction, but unfortunately I don't know anything about AWS Lambda :-(.
I ran into the same issue and solved it by using the serverless-python-requirements plugin for the Serverless Framework and setting (in serverless.yml, where the pythonRequirements block lives under custom):

custom:
  pythonRequirements:
    dockerizePip: true
Essentially this installs your C-based packages (and all other packages) in a Docker container, where they build correctly, and then symlinks them into your Lambda function's package.
A helpful guide can be found at: https://serverless.com/blog/serverless-python-packaging/
Plugin: https://github.com/UnitedIncome/serverless-python-requirements
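Putting it together, a minimal serverless.yml along these lines should work; the service name and handler path are placeholders:

service: firestore-example   # placeholder service name

provider:
  name: aws
  runtime: python3.6

functions:
  hello:
    handler: handler.hello   # placeholder module.function

plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: true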
I need to run CRUD operations on my bucket (database) in Couchbase, which is deployed on an EC2 instance, and the code I have runs on AWS Lambda. However, when I try to test this code on Lambda by passing details in the body, I get the error:
"errorMessage": "/usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /var/task/node_modules/couchbase/build/Release/couchbase_impl.node)"
This error occurs because my function requires an npm module called "couchbase", which I use to execute CRUD operations on my Couchbase bucket.
Can you help me figure out what the problem might be? Is the library missing from the Node.js environment running on Lambda, or do I need to implement this in a different way to get it working?
Thanks in advance.
I was able to solve this issue by locally compiling node_modules with the same Node.js version (v0.10.36) that Lambda uses, and then uploading the zip file to Lambda.