Unable to locate credentials celery - python-3.x

Hi, I am using Celery to schedule tasks that read data from files in S3, but I am getting the error "Unable to locate credentials". The same code works fine in my local environment; the error only appears after deploying to production (EC2). Without Celery, the same process is able to connect to S3 and read the files.
The packages I am using:
boto3 1.13.13
botocore 1.16.13
celery 4.4.4
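To illustrate the setup, the task is essentially of the following shape (the broker URL, bucket, and key below are placeholders, not the real values):

import boto3
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")  # placeholder broker URL

@app.task
def read_s3_file(bucket, key):
    # boto3 resolves credentials from the environment of the user running the worker:
    # ~/.aws/credentials, environment variables, or an attached instance profile.
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    return body.decode("utf-8")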
In both environments the credentials are placed in the same location, ~/.aws.
Can anyone help?

I found the cause: in that environment we have multiple users. Our code runs as user1, but the user configured in Supervisor's celery.conf is user2, so the Celery worker was looking for ~/.aws under the wrong user's home directory and could not find the credentials.
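For anyone with the same setup, the fix is to make the Supervisor program section run the worker as the user that actually owns the ~/.aws directory. A rough sketch (program name, command, and paths are placeholders):

[program:celery]
; run the worker as the user whose ~/.aws holds the credentials
user=user1
command=/home/user1/venv/bin/celery -A myapp worker --loglevel=INFO
directory=/home/user1/myapp

After changing celery.conf, reread/update and restart it with supervisorctl so the worker is relaunched under the right user.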

Related

Google Cloud Run Second Flask Application - requirements.txt issue

I have a Google Cloud Run Flask application named "HelloWorld1" already up and running; however, I need to create a second Flask application. I followed the steps below as per the documentation:
1- In the Cloud Shell Editor, clicked "<> Cloud Code" --> "New Application" --> "Cloud Run Application" --> "Basic Cloud Run Application" --> "Python (Flask): Cloud Run", provided a new folder, and the application was created.
2- When I try to run it using "Run on Cloud Run Emulator", I get the following error:
Starting to run the app using configuration 'Cloud Run: Run/Debug Locally' from .vscode/launch.json...
To view more detailed logs, go to Output channel : "Cloud Run: Run/Debug Locally - Detailed"
Dependency check started
Dependency check succeeded
Starting minikube, this may take a while...................................
minikube successfully started
The minikube profile 'cloud-run-dev-internal' has been scheduled to stop automatically after exiting Cloud Code. To disable this on future deployments, set autoStop to false in your launch configuration /home/mian/newapp/.vscode/launch.json
Update initiated
Update failed with error code DEVINIT_REGISTER_BUILD_DEPS
listing files: file pattern [requirements.txt] must match at least one file
Skaffold exited with code 1.
Cleaning up...
Finished clean up.
I tried the following:
1- Tried creating a different type of application, e.g. Django instead of Flask, but I always get the same error.
2- Tried giving the full path of requirements.txt in the Docker settings; no luck.
Can someone please help me understand why I am not able to run a second Cloud Run Flask app because of this error?
It's likely that your Dockerfile references the 'requirements.txt' file, but that file is not in your local directory. So, it gives the error that it's missing:
listing files: file pattern [requirements.txt] must match at least one file
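For reference, the Dockerfile generated by that template normally references the file with lines along these lines (your generated Dockerfile may differ slightly), which is what triggers the dependency check:

COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt

If requirements.txt is simply missing from the new application's folder (/home/mian/newapp in the log above), creating one next to the Dockerfile, for example with pip freeze > requirements.txt or by listing Flask and your other dependencies by hand, should let the Skaffold build dependency check pass.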

django.db.utils.DatabaseError: Error while trying to retrieve text for error ORA-01804

Q1. What versions are we using?
Ans.
Python: 3.6.12
OS: CentOS 7 64-bit
DB: Oracle 18c
Django: 2.2
cx_Oracle: 8.1.0
Q2. Describe the problem
Ans. When running the server with "python3 manage.py runserver", the application is able to contact the Oracle DB, show the Django administration page, and log in successfully.
But when we access the application through the Apache (httpd) based URL over the secure SSL port, we do see the Django page and the admin page as well, but logging in to the admin page fails with an internal server error.
In the logs, we see
"django.db.utils.DatabaseError: Error while trying to retrieve text for error ORA-01804"
cx_Oracle is otherwise able to connect to the database properly; another application uses the same database behind the same httpd proxy and works fine.
Q3. Show the directory listing where your Oracle Client libraries are installed (e.g. the Instant Client directory). Is it 64-bit or 32-bit?
Ans. 64-bit
Q4. Show what the PATH environment variable (on Windows) or LD_LIBRARY_PATH (on Linux) is set to?
LD_LIBRARY_PATH=/srv/vol/db/oracle/product/18.0.0/dbhome_1/lib:/lib:/usr/lib
PATH=$ORACLE_HOME/bin:/srv/vol/db/oracle/product/18.0.0/dbhome_1/lib:$PATH
Q5. Show any Oracle environment variables set (e.g. ORACLE_HOME, ORACLE_BASE).
ORACLE_HOME=/srv/vol/db/oracle/product/18.0.0/dbhome_1
TNS_ADMIN=$ORACLE_HOME/network/admin
NLS_LANG=AMERICAN_AMERICA.AL32UTF8
ORACLE_BASE=/srv/vol/db/oracle
CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/lib
Any suggestions/help is highly appreciated.
Thank you
I found the problem.
First I removed all the variable declarations from /etc/sysconfig/httpd and checked: the application was still able to access the lib files, so those declarations were redundant.
Then I undid all the variable declarations made earlier in the .localsh and .localrc files for the OS users, to start from scratch and go step by step to see where it breaks.
At that point, cx_Oracle was looking for the lib files in the wrong directory,
$ORACLE_HOME/client_1/lib
instead of
$ORACLE_HOME/lib
and it failed with:
DPI-1047: Cannot locate a 64-bit Oracle Client library: "$ORACLE_HOME/client_1/lib/libclntsh.so: cannot open shared object file: No such file or directory". See https://cx-oracle.readthedocs.io/en/latest/user_guide/installation.html for help
I did not have any subfolder named "client_1" inside dbhome_1,
so I just created a symlink client_1 pointing to dbhome_1 (still unsure about this, but at least it works :) ).
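In command form it was roughly this, assuming ORACLE_HOME is /srv/vol/db/oracle/product/18.0.0/dbhome_1 as listed above:

cd $ORACLE_HOME
ln -s ../dbhome_1 client_1   # makes $ORACLE_HOME/client_1/lib resolve to $ORACLE_HOME/lib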
With that, the DPI-1047 error was gone, but ORA-01804 was back. 😑
I had read somewhere that this error can be fixed by adding "libociei.so", but I did not have one on my instance, so I generated it using these commands:
mkdir -p $ORACLE_HOME/rdbms/install/instantclient/light
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk igenliboci
Then I just moved this libociei.so file from
$ORACLE_HOME/instantclient to $ORACLE_HOME/lib
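In other words (the generated file ended up under $ORACLE_HOME/instantclient on my instance; the location may differ on yours):

mv $ORACLE_HOME/instantclient/libociei.so $ORACLE_HOME/lib/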
Now there was a new error (so... progress 😉):
ORA-12546 - TNS Permission Denied.
This was easy to solve 😀
I used this command to address it:
setsebool -P httpd_can_network_connect on
And... that was all! It worked.

How to access logs on StdLib for my nodejs application

I have a Node.js application for a Slack bot deployed on StdLib, which I created using the following tutorial: Build a serverless slack bot in 9 minutes with node js and stdlib.
Now, everything is up and running, but I just want to see the logs of my application from StdLib.
I am already logged in as the authenticated user from my terminal and I am able to execute the command lib up dev successfully.
But now, when I try to view the logs using the command lib logs dev, I get the following error:
Error: You must be signed in as a service's owner or be part of the service's team to to view logs for a service
Can anyone help me understand what I am doing wrong and how to access the dev logs from StdLib?
EDIT: I also tried logging in again by using lib login --email <my email> and then again tried lib logs dev, but it resulted in the same error as above.
Interestingly, even after logging in, if I do lib info dev, it gives me the error Error: Bad Request: "<my username>" does not have permission to access "dev"
So, in case someone else is stuck with the same...
I was able to figure this out by checking out the following documentation:
https://docs.stdlib.com/main/#/creating-services/logging
Basically, I need to give the username and the app name in a specific format, as follows:
lib logs <my username>.<my app>[#dev]
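For example, with a username of jdoe and an app called my-slack-bot (placeholder names, not my real ones), the call becomes:

lib logs jdoe.my-slack-bot#dev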
The error mentioned was kind of confusing and I could not decipher what I was doing wrong based on the error.

AWS EMR - Upload file into the application master

I'm using the AWS CLI and I launch a cluster with the following command:
aws emr create-cluster --name "Config1" --release-label emr-5.0.0 --applications Name=Spark --use-default-role --ec2-attributes KeyName=ChiaveEMR --log-uri 's3://aws-logs-813591802533-us-west-2/elasticmapreduce/' --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m1.medium InstanceGroupType=CORE,InstanceCount=2,InstanceType=m1.medium
after that, I put a file into the master node:
aws emr put --cluster-id j-NSGFSP57255P --key-pair-file "ChiaveEMR.pem" --src "./configS3.txt"
The file is located in /home/hadoop/configS3.txt.
Then I launch a step:
aws emr add-steps --cluster-id ID_CLUSTER --region us-west-2 --steps Type=Spark,Name=SparkSubmit,Args=[--deploy-mode,cluster,--master,yarn,--executor-memory,1G,--class,Traccia2014,s3://tracceale/params/traccia-22-ottobre_2.11-1.0Ale.jar,/home/hadoop/configS3.txt,30,300,2,"s3a://tracceale/Tempi1"],ActionOnFailure=CONTINUE
But I get this error:
17/02/23 14:49:51 ERROR ApplicationMaster: User class threw exception: java.io.FileNotFoundException: /home/hadoop/configS3.txt (No such file or directory)
java.io.FileNotFoundException: /home/hadoop/configS3.txt (No such file or directory)
probably because 'configS3.txt' is located on the master and not on the slaves.
How can I pass 'configS3.txt' to the spark-submit script? I have tried reading it from S3 too, but it doesn't work. Any solutions? Thanks in advance.
Since you are using "--deploy-mode cluster", the driver runs on a CORE/TASK instance rather than the MASTER instance, so yes, it's because you uploaded the file to the MASTER instance but then the code that's trying to access the file is not running on the MASTER instance.
Given that the error you are encountering is a FileNotFoundException, it sounds like your application code is trying to open it directly, meaning that of course you can't simply use the S3 path directly. (You can't do something like new File("s3://bucket/key") because Java has no idea how to handle this.) My assumption could be wrong though because you have not included your application code or explained what you are doing with this configS3.txt file.
Maurizio: you're still trying to fix your previous problem.
On a distributed system, you need files which are visible on all machines (which the s3:// filestore delivers) and an API which works with data in the distributed filesystem, which SparkContext.hadoopRDD() delivers. You aren't going to get anywhere by trying to work out how to get a file onto the local disk of every VM, because that's not the problem you need to fix: the problem is how to get your code to read data from the shared object store.
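For example, the same idea in PySpark (a sketch only; your job is in Scala, but the pattern is identical, and the bucket/key below are placeholders):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ReadConfigFromS3").getOrCreate()
# Read the config from S3 so every executor sees the same data, instead of
# relying on a file that exists only on one node's local disk.
config_lines = spark.sparkContext.textFile("s3://your-bucket/configS3.txt").collect()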
Sorry

Bad SSL Key When Trying to Use spark-ec2 script to launch cluster on EC2?

Version of Apache Spark: spark-1.2.1-bin-hadoop2.4
Platform: Ubuntu
I have been using the spark-1.2.1-bin-hadoop2.4/ec2/spark-ec2 script to create temporary clusters on ec2 for testing. All was working well.
Then I started to get the following error when trying to launch the cluster:
[Errno 185090050] _ssl.c:344: error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib
I have traced this back to the following line in the spark_ec2.py script:
conn = ec2.connect_to_region(opts.region)
Thus, the first time the script interacts with ec2, it is throwing this error. Spark is using the Python boto library (included with the Spark download) to make this call.
I assume the error I am getting is because of a bad cacert.pem file somewhere.
My question: which cacert.pem file gets used when I try to invoke the spark-ec2 script, and why is it not working?
I also had this error with spark-1.2.0-bin-hadoop2.4
SOLVED: the embedded boto library that comes with Spark found a ~/.boto config file I had for another, non-Spark project (actually it was for Google Cloud Services... GCS installed it and I had forgotten about it). That was screwing everything up.
As soon as I deleted the ~/.boto config file GCS installed, everything started working again for Spark!
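If you hit the same error, it is worth checking for a stray boto config before digging further. A quick sketch of what I would do now:

ls ~/.boto               # any file here is picked up by the embedded boto library
mv ~/.boto ~/.boto.bak   # move it aside rather than deleting it, then rerun spark-ec2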
