I am trying to connect to MSSQL Server using Airflow hooks, but it throws the following error:
Broken DAG: [/usr/local/airflow/dags/odoo_customer_sql.py] No module named 'pymssql'
My code is:
hook = MsSqlHook(mssql_conn_id='ofo_sql_server')
conn = hook.get_conn()
return conn
P.S.: I am using a Docker container, which includes:
webserver:
  image: puckel/docker-airflow:1.10.1
  build:
    context: https://github.com/puckel/docker-airflow.git#1.10.1
    dockerfile: Dockerfile
    args:
      AIRFLOW_DEPS: gcp_api,s3, mssql, pyodbc
      PYTHON_DEPS: sqlalchemy==1.2.0, pyodbc == 4.0.27, pymssql == 2.1.3
Open the Docker dashboard
Open the Apache Airflow CLI (command line interface)
Run pip install pymssql --upgrade
Restart the webserver
Refresh the browser
This should resolve the issue.
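Installing inside the running container only lasts until the container is recreated. Another thing worth checking (an assumption, not something the error message confirms) is the whitespace in the build args: the puckel image passes these comma-separated lists on to its install step, and stray spaces around `==` and after commas can break dependency parsing so `pymssql` never gets installed into the image. A hypothetical cleaned-up sketch of the build args:

```yaml
# hypothetical cleanup: no spaces inside the comma-separated dependency lists
build:
  context: https://github.com/puckel/docker-airflow.git#1.10.1
  dockerfile: Dockerfile
  args:
    AIRFLOW_DEPS: gcp_api,s3,mssql,pyodbc
    PYTHON_DEPS: sqlalchemy==1.2.0,pyodbc==4.0.27,pymssql==2.1.3
```

With the dependency baked into the image, the fix survives `docker-compose up --build` instead of needing a manual `pip install` after every rebuild.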
Related
I have implemented a Flask app in Python and, using Docker Desktop, built an image of it. I pushed that image to my private Docker Hub. In the Azure portal I created a Docker-container-based App Service, including user and password.
The dockerized app works perfectly on my laptop with this Dockerfile:
FROM python:3.7.3
RUN apt-get update -y
RUN apt-get install -y python3-pip \
    python3-dev \
    build-essential \
    cmake
ADD . /demo
WORKDIR /demo
RUN pip install -r requirements.txt
EXPOSE 80 5000
Here is the docker-compose.yml:
version: "3.8"
services:
  app:
    build: .
    command: python app.py
    ports:
      - "5000:5000"
    volumes:
      - .:/demo
Here is a small part of app.py:
if __name__ == '__main__':
    app.run(host="0.0.0.0")
As said, with these files Docker works on my laptop, but during the deployment stage in Azure I received the following error:
Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here.
INFO - Initiating warmup request to container xxx for site xxx
ERROR - Container xxx for site xxx has exited, failing site start
ERROR - Container xxx didn't respond to HTTP pings on port: 80, failing site start. See container logs for debugging.
INFO - Stopping site xxx because it failed during startup.
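The key line in that log is the port-80 ping: Azure App Service assumes a custom container listens on port 80 unless the `WEBSITES_PORT` app setting says otherwise, while the Flask app above binds to Flask's default port 5000. A minimal sketch of resolving the port from the environment so local runs and the platform can both reach the app (the variable names and lookup order here are an assumption, not from the question):

```python
import os

def resolve_port(default=80):
    """Return the port to bind to.

    Checks WEBSITES_PORT (the Azure App Service setting) first, then a
    generic PORT variable, and falls back to the given default, which
    matches the port Azure health-pings when nothing is configured.
    """
    for var in ("WEBSITES_PORT", "PORT"):
        value = os.environ.get(var)
        if value:
            return int(value)
    return default

# hypothetical usage inside app.py:
#   app.run(host="0.0.0.0", port=resolve_port())
```

Alternatively, setting `WEBSITES_PORT=5000` on the App Service and leaving the code on port 5000 achieves the same thing from the platform side.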
I am trying to run my Python code through docker-compose. It is not a Flask app, so I didn't provide a port number in my YAML file. Here is my docker-compose.yml file:
version: '3'
services:
  main:
    build: .
    image: ddn4
    environment:
      - neo4j_uri=bolt://54.209.5.141:7687
      - neo4j_username=neo4j
      - password=provis234
      - blob_conn_string=httpsxxxx
main.py is my Python code. After running docker-compose build, I get that an image was successfully built. Also, upon checking using docker images, I see that the image ddn4 was built successfully. But upon running docker-compose up, I am getting the following error:
main_1 | Error !!!! File Exception:
main_1 | 'function' object is not subscriptable
main_1 | Error !!!!:
main_1 | 'NoneType' object has no attribute 'columns'
dd-n4_main_1 exited with code 0
dd-n4 is the directory containing my Dockerfile, requirements.txt, Python code, and docker-compose.yml file.
Here is the Python code for the function that seems to be causing the error:
def neo4jconn():
    """
    This code is to create a connection string for connecting to Neo4j
    """
    try:
        neo_conn = Graph(os.getenv['neo4j_uri'], user=os.getenv['neo4j_username'], password=os.getenv['password'])
        return neo_conn
    except Exception as ex:
        print('Error !!!!:')
        print(ex)
Note that when you specify both build and image, Compose builds from the build context and tags the result with the image name; the locally built image is what runs.
Try using docker-compose up to run the docker-compose.yml file; if you would like to re-build, add the --build flag, and for detached mode add -d. It looks like:
docker-compose up -d --build
You can move all of the environment variables to a .env file and additionally install python-dotenv:
pip install python-dotenv
and pass the file to docker-compose:
your_service:
  image: image:1.7
  container_name: container
  env_file:
    - .env
I was able to get it resolved by updating my Python code to use os.getenv() instead of os.getenv[]. Thanks @David Maze
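For reference, a minimal sketch of the corrected lookup. os.getenv is a function, so it must be called with parentheses; indexing it with brackets is exactly what raises "'function' object is not subscriptable". The Graph(...) call is left out so the sketch stays self-contained, and the fallback values are illustrative assumptions:

```python
import os

def neo4j_settings():
    # os.getenv must be *called*; os.getenv['neo4j_uri'] would raise
    # "'function' object is not subscriptable"
    return {
        "uri": os.getenv("neo4j_uri", "bolt://localhost:7687"),
        "user": os.getenv("neo4j_username", "neo4j"),
        "password": os.getenv("password", ""),
    }
```

The variable names match the ones set under environment: in the compose file above, so the values arrive inside the container at run time.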
Overview:
I updated the MySQL Node-RED module, and now I must restart Node-RED to enable it. The message is as follows:
Node-RED must be restarted to enable upgraded modules
Problem:
I am running the official Node-RED docker container using docker-compose, and there is no node-red command available when I enter the container, as suggested in the docs.
Question
How do I manually restart the node-red application without the shortcut in the official nodered docker container?
Caveats:
I have never used node.js and I am new to node-red.
I am fluent in Linux and several other programming languages.
Steps to reproduce:
Install docker and docker-compose.
Create a project directory with the docker-compose.yml file.
Start the service: docker-compose up
Navigate to http://localhost:1880
Click the hamburger menu icon -> [Manage palette] -> Palette tab, then search for and update the MySQL package.
Go into the nodered container: docker-compose exec nodered bash
Execute: node-red
Result: bash: node-red: command not found
File:
docker-compose.yml
version: "2.7"
services:
  nodered:
    image: nodered/node-red:latest
    user: root:root
    environment:
      - TZ=America/New_York
    ports:
      - 1880:1880
    networks:
      - nodered-net
    volumes:
      - ./nodered_data:/data
networks:
  nodered-net:
You will need to bounce the whole container; there is no way to restart Node-RED while keeping the container running, because the running instance is what keeps the container alive.
Run docker ps to find the correct container instance, then run docker restart [container name], where [container name] is likely to be something like nodered-nodered_1.
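Since restarting Node-RED means the container process exits, a restart policy in the compose file (a suggestion beyond the answer above, not something it requires) lets Docker recreate the container automatically whenever the Node-RED process goes down, so a restart from inside effectively works:

```yaml
# hypothetical addition to the nodered service above
services:
  nodered:
    image: nodered/node-red:latest
    restart: unless-stopped   # Docker restarts the container when Node-RED exits
```

With this in place, killing the node process inside the container (or letting an upgrade stop it) brings the whole container straight back up.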
I am getting this error when I try to run the command mongo in the container's bash:
Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :connect#src/mongo/shell/mongo.js:328:13 #(connect):1:6exception: connect failed
I'm trying to set up a new Node.js app in a Mongo docker image. The image builds fine from the Dockerfile on Docker Hub; I pull it, create a container, and everything is good, but when I type the mongo command in bash I get the error.
This is my Dockerfile:
FROM mongo:4
RUN apt-get -y update
RUN apt-get install -y nodejs npm
RUN apt-get install -y curl python-software-properties
RUN curl -sL https://deb.nodesource.com/setup_11.x | bash -
RUN apt-get install -y nodejs
RUN node -v
RUN npm --version
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
CMD [ "npm", "start"]
EXPOSE 3000
When your Dockerfile ends with CMD ["npm", "start"], it is building an image that runs your application instead of running the database.
Running two things in one container is slightly tricky and usually isn't considered a best practice. (You change your application code so you build a new image and delete and recreate your existing container; do you actually want to stop and delete your database at the same time?) You should run this as two separate containers, one running the standard mongo image and a second one based on a Dockerfile similar to this but FROM node. You might look into Docker Compose as a simple orchestration tool that can manage both containers together.
The one other thing that's missing in your example is any configuration that tells the application where its database is. In Docker this is almost never localhost ("this container", not "this physical host somewhere"). You should add a control to pass that host name in as an environment variable. In Docker Compose you'd set it to the name of the services: block running the database.
version: '3'
services:
  mongodb:
    image: mongo:4
    volumes:
      - './mongodb:/data/db'
  app:
    build: .
    ports:
      - '3000:3000'
    environment:
      MONGODB_HOST: mongodb
(https://hub.docker.com/_/mongo is worth reading in detail.)
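To make the "pass the host name as an environment variable" point concrete, here is a sketch of building the connection string from the variable the compose file sets. It is shown in Python for brevity; the Node equivalent would read process.env.MONGODB_HOST. The function name and the database-less URL are illustrative assumptions:

```python
import os

def mongo_url(default_host="localhost", port=27017):
    """Build a Mongo connection string from the MONGODB_HOST variable
    that docker-compose sets; falls back to localhost for local runs
    outside of Docker."""
    host = os.environ.get("MONGODB_HOST", default_host)
    return f"mongodb://{host}:{port}"
```

Inside the compose network, MONGODB_HOST=mongodb resolves to the mongodb service container, which is why localhost (meaning "this app container") never works there.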
I'm using fig to deploy my Node.js app.
fig.yml
web:
  build: .
  command: node app.js
  links:
    - db
  ports:
    - "1337:1337"
db:
  image: dockerfile/mongodb
Running fig run db env gives me the following environment vars:
DB_PORT=tcp://172.17.0.29:27017
DB_PORT_27017_TCP=tcp://172.17.0.29:27017
DB_PORT_27017_TCP_ADDR=172.17.0.29
DB_PORT_27017_TCP_PORT=27017
DB_PORT_27017_TCP_PROTO=tcp
DB_PORT_28017_TCP=tcp://172.17.0.29:28017
DB_PORT_28017_TCP_ADDR=172.17.0.29
DB_PORT_28017_TCP_PORT=28017
DB_PORT_28017_TCP_PROTO=tcp
DB_NAME=/pos_db_run_3/db
My app's Dockerfile looks like this:
# Pull base image.
FROM dockerfile/nodejs
# install mongo client
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/mongodb.list
RUN apt-get update
RUN apt-get install -y dnsutils
RUN apt-get install -y mongodb-org-shell
# copy the source files into the image
COPY . /data/myapp
# Define working directory.
WORKDIR /data/myapp
# Install dependencies
RUN npm install
# Create default database and user
RUN mongo $DB_PORT < seed-mongo
RUN node/seed.js
EXPOSE 1337
# Define default command.
CMD ["bash"]
However the build is failing at RUN mongo $DB_PORT < seed-mongo
MongoDB shell version: 2.6.5
connecting to: test
2014-11-20T20:47:59.193+0000 warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2014-11-20T20:47:59.195+0000 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
exception: connect failed
Service 'web' failed to build: The command [/bin/sh -c mongo $DB_PORT < seed-mongo] returned a non-zero code: 1
According to the fig docs I'd be better off referring to the database simply as db so I tried
RUN mongo db < seed-mongo
But this gave me
MongoDB shell version: 2.6.5
connecting to: db
2014-11-20T21:08:11.154+0000 warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2014-11-20T21:08:11.156+0000 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
exception: connect failed
So I injected a RUN host db in there just to see if db really is a host name. And no, despite the docs above it's not.
Step 10 : RUN host db
---> Running in 73b4dc787c79
Host db not found: 3(NXDOMAIN)
Service 'web' failed to build: The command [/bin/sh -c host db] returned a non-zero code: 1
So I'm stumped.
How am I supposed to talk to the mongo instance running in my linked Docker container?
Your mongo container has the IP address $DB_PORT_27017_TCP_ADDR.
So try mongo --host $DB_PORT_27017_TCP_ADDR < seed-mongo.
You can't use it in a RUN statement, though, because the link's environment variables only exist at run time, not at build time. You could use it in a CMD instead.
My solution was a combination of insights from above, but mainly I stopped trying to seed a specific database and db user in the build phase, and instead used --host db_1 with no username or password. If I really need that functionality later, I'll work it out.
I could then build the project and connect it to mongo in the other image.
In my config/connections.js I included:
development: {
  adapter  : 'sails-mongo',
  host     : 'db_1',
  port     : 27017,
  user     : '',
  password : '',
  database : process.env.DB_NAME
},
and I changed the CMD to be "node data/seeds.js && node app"
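One detail worth flagging (an aside, not part of the original answer): chaining with && only works in the shell form of CMD, because the exec (JSON array) form passes each element to the program literally instead of through a shell. A sketch:

```dockerfile
# shell form: Docker wraps this in /bin/sh -c, so '&&' chains the commands
CMD node data/seeds.js && node app

# exec (JSON) form would NOT work for chaining:
# CMD ["node", "data/seeds.js", "&&", "node", "app"]  # '&&' passed as a literal argument
```

This is also why the earlier CMD ["npm", "start"] example needed no shell: it runs a single program with fixed arguments.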
You never exposed your Mongo port.
web:
  build: .
  command: node app.js
  links:
    - db
  ports:
    - "1337:1337"
db:
  image: dockerfile/mongodb
  ports:
    - "27017:27017"