Just wondering: is there a way to use pyodbc in a Python WebJob?
I want to set up a scheduled WebJob that periodically fetches data from an Azure database.
However, there is always an error loading the pyodbc module, even when I upload the compiled version and add its path in my script. Does anyone know how to use pyodbc in an Azure WebJob?
Thanks!
Get a working pyodbc:
To use pyodbc, you should compile it as a 32-bit build (very important) with Python 2.7 or Python 3.4,
or install a 32-bit Python 2.7 or 3.4 and run the command "pip install pyodbc".
Use it in the Azure WebJob:
Put the pyodbc.pyd file in the root directory of your job and it should work.
Note:
If your pyodbc library isn't a 32-bit build, you will get the error "... Not a valid Win32 application".
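For reference, here is a minimal sketch of what the WebJob script can look like once pyodbc.pyd sits next to it. The server, database, driver, and credential values are placeholders, and the script's own directory is added to sys.path explicitly in case the job is started from a different working directory:

    # run.py -- minimal sketch; the connection values are placeholders.
    import os
    import sys

    # Make the job's root folder (which holds pyodbc.pyd) importable,
    # in case the WebJob is launched from another working directory.
    sys.path.append(os.path.dirname(os.path.abspath(__file__)))

    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={SQL Server Native Client 11.0};"
        "SERVER=<your-server>.database.windows.net;"
        "DATABASE=<your-db>;UID=<user>;PWD=<password>"
    )
    cursor = conn.cursor()
    cursor.execute("SELECT 1")
    print(cursor.fetchone())
    conn.close()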
Related
I have been getting this error on my application since yesterday. It was working fine earlier; some upgrade on the cloud broke the application.
This is the error:
Error: module 'typing' has no attribute '_ClassVar'
My Python environment is 3.7 and my Django version is 2.1.3.
Please check the steps below to see if they fix the issue:
Try downgrading the Python version to 3.6 and check.
Alternatively, when staying on Python 3.7, run the command pip uninstall dataclasses (on 3.7, the backported dataclasses package from PyPI conflicts with the standard-library module and causes this error).
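If you are not sure the backport is the culprit, a quick diagnostic (a sketch; run it with the interpreter your app uses) is to print where the dataclasses module is imported from:

    import sys
    import dataclasses

    print(sys.version)           # confirm you are actually on 3.7
    print(dataclasses.__file__)  # a site-packages path means the PyPI
                                 # backport is installed; remove it with
                                 # "pip uninstall dataclasses"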
As Charlie V said,
check the Python version in your App Service against the version you use for development, and compare the requirements file with what you have in your development environment.
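A quick way to compare the two sides is to print the interpreter details in both places, for example locally and in the App Service console:

    import sys

    # Run this locally and on the App Service, then compare the output.
    print(sys.version)     # exact interpreter version
    print(sys.executable)  # which interpreter is actually running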
I had a similar problem in the past. The first thing is to determine whether your application actually uses this module. If yes, make sure it is listed in your requirements.txt. If not, check the logs to find which Python file references it and remove that reference.
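For illustration, a minimal requirements.txt in the spirit of that advice; the Django pin matches the version the asker mentioned, and your own file should mirror whatever the application actually imports:

    # requirements.txt (hypothetical example): pin every package the app
    # imports, at the versions you develop and test against.
    Django==2.1.3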
I am trying to install mplcursors or mpldatacursors in Python 3.10.0, and it keeps reporting that the packages are not found. I have them installed in a PyCharm virtual environment (Python 3.9), and they work there.
Does anyone have an idea about the support for these APIs?
Can I copy an installed API into another environment?
I am trying to use the pygal library to show a graph in AWS Lambda, but this import error is shown even though I have already installed lxml.
(Screenshots: deployment_package, my_source_code, import_error)
It's because lxml contains pre-compiled binary libraries that it uses. When you install lxml locally on your Windows machine, you install a Windows-compatible version of it. However, this is not compatible with the Lambda execution environment, which is Linux-based.
So you have to create a Lambda-compatible deployment package. You have a couple of options for doing so: you can use sam build --use-container, or you can build the libraries in a Docker environment and then zip them, etc. See this answer as well.
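A quick way to confirm the mismatch the answer describes is to inspect which compiled lxml binary was actually bundled; a sketch you can run both locally and inside Lambda:

    import platform

    import lxml.etree

    # The compiled module's filename reveals the build: a Windows build
    # ends in .pyd, a Linux build in .so -- only the latter loads in
    # Lambda's Linux runtime.
    print(platform.system())
    print(lxml.etree.__file__)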
I have installed the following on my Windows 10 machine to use Apache Spark:
Java,
Python 3.6 and
Spark (spark-2.3.1-bin-hadoop2.7)
I am trying to write PySpark-related code in VSCode. It shows a red underline under the 'from ' and displays the error message:
E0401:Unable to import 'pyspark'
I have also pressed Ctrl+Shift+P and selected "Python:Update workspace Pyspark libraries". It shows the notification message:
Make sure you have SPARK_HOME environment variable set to the root path of the local spark installation!
What is wrong?
You will need to install the pyspark Python package using pip install pyspark. Actually, this is the only package you'll need for VSCode, unless you also want to run your Spark application on the same machine.
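After pip install pyspark, a minimal local sanity check looks like this (a sketch; the pip-installed package bundles Spark itself, so no SPARK_HOME is needed for local mode):

    from pyspark.sql import SparkSession

    # Start a local Spark session from the pip-installed distribution.
    spark = SparkSession.builder.master("local[*]").appName("check").getOrCreate()

    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "letter"])
    df.show()

    spark.stop()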
I'm quite new to Python, but I need to connect from my Python code to a DB2 database. I have been trying to install the Python extension for a few hours and can't get to a result.
When I install the extension with
python ibm_db2 -- install
I get the following exception:
/IBMDB2/CLIDRIVER//include folder not found.
Check if you have set the IBM_DB_HOME environment variable's value correctly
But I can't find any folder containing /include.
I installed the IBM driver: IBM Data Server Driver Package (DS Driver) V10.5.
But the SQLLIB folder also has no /include folder.
OS: Windows 7 64-bit
Attached is a screenshot of the driver folder:
(Screenshot: folder content)
Thanks in advance.