Access GitHub Secrets in Python workflow - python-3.x

I have an issue with accessing GitHub Secrets in a CI workflow. This is the tests part of my main.yml file:
    # Run our unit tests
    - name: Run unit tests
      env:
        CI: true
        MONGO_USER: ${{ secrets.MONGO_USER }}
        MONGO_PWD: ${{ secrets.MONGO_PWD }}
        ADMIN: ${{ secrets.ADMIN }}
      run: |
        pipenv run python app.py
I have a database.py file in which I access these environment variables:
    import os
    import urllib.parse
    from typing import Dict, List, Union

    import pymongo
    from dotenv import load_dotenv

    load_dotenv()

    print("Mongodb user: ", os.environ.get("MONGO_USER"))


    class Database:
        try:
            client = pymongo.MongoClient(
                "mongodb+srv://" +
                urllib.parse.quote_plus(os.environ.get("MONGO_USER")) +
                ":" +
                urllib.parse.quote_plus(os.environ.get("MONGO_PWD")) +
                "@main.rajun.mongodb.net/myFirstDatabase?retryWrites=true&w=majority"
            )
            DATABASE = client.Main
        except TypeError as NoCredentialsError:
            print("MongoDB credentials not available")
            raise Exception(
                "MongoDB credentials not available"
            ) from NoCredentialsError
...
...
This is the error I get in the build:
    Traceback (most recent call last):
    Mongodb user:  None
    MongoDB credentials not available
followed by urllib raising a "bytes expected" error. I have followed the documentation here, but I still cannot find my mistake.
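For what it's worth, `urllib.parse.quote_plus` raises a `TypeError` when handed `None`, which is exactly what happens when a secret is missing from the environment. A guard that names the missing variable makes the CI log much clearer (a sketch; the cluster hostname is copied from the question):

```python
import os
import urllib.parse


def build_mongo_uri() -> str:
    """Build the MongoDB connection URI, failing fast if a secret is missing."""
    user = os.environ.get("MONGO_USER")
    pwd = os.environ.get("MONGO_PWD")
    missing = [name for name, val in
               [("MONGO_USER", user), ("MONGO_PWD", pwd)] if not val]
    if missing:
        # A named error beats urllib's cryptic TypeError on None
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return (
        "mongodb+srv://"
        + urllib.parse.quote_plus(user)
        + ":"
        + urllib.parse.quote_plus(pwd)
        + "@main.rajun.mongodb.net/myFirstDatabase?retryWrites=true&w=majority"
    )
```

If the guard fires inside GitHub Actions, the usual cause is that the secrets are defined at the wrong scope (e.g. on the repository but the workflow runs from a fork, where secrets are not exposed).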

Related

Why does my script run in local PyCharm, but when I upload it to a VPS (host) I get errors?

Why does my script run fine in local PyCharm, but error when I upload it to the VPS (host)?
My Python code:
    import json
    import socketio

    TOKEN = "my-super-token"  # Your DonationAlerts token

    sio = socketio.Client()

    @sio.on('connect')
    def on_connect():
        sio.emit('add-user', {"token": TOKEN, "type": "alert_widget"})

    @sio.on('donation')
    def on_message(data):
        y = json.loads(data)
        print(y['username'])
        print(y['message'])
        print(y['amount'])
        print(y['currency'])

    sio.connect('wss://socket.donationalerts.ru:443', transports='websocket')
From localhost / PyCharm it all works, but from the host I get this error (screenshot omitted):
    Traceback (most recent call last):
      File "test.py", line 22, in <module>
        sio.connect('wss://socket.donationalerts.ru:443', transports='websocket')
      File "/usr/local/lib/python3.8/site-packages/socketio/client.py", line 338, in connect
        raise exceptions.ConnectionError(exc.args[0]) from None
    socketio.exceptions.ConnectionError: Connection error
I have tried installing and reinstalling Python, the packages, etc.
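Note that the library re-raises the failure with `from None`, hiding the underlying cause. One way to narrow it down from the VPS (a generic sketch, not specific to DonationAlerts) is to check DNS resolution and TCP reachability separately:

```python
import socket


def diagnose_endpoint(host: str, port: int, timeout: float = 5.0) -> str:
    """Separate DNS resolution from TCP reachability, so a bare
    'Connection error' can be narrowed down to its actual cause."""
    try:
        # Resolve the hostname first; VPS DNS is a common culprit
        infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    except socket.gaierror as exc:
        return "DNS resolution failed for {}: {}".format(host, exc)
    addr = infos[0][4]
    try:
        # Then attempt a plain TCP connect; firewalls block this step
        with socket.create_connection(addr[:2], timeout=timeout):
            return "TCP connection to {}:{} succeeded".format(host, port)
    except OSError as exc:
        return "DNS ok ({}) but TCP connect failed: {}".format(addr[0], exc)
```

Running `diagnose_endpoint('socket.donationalerts.ru', 443)` on the VPS should tell you whether the problem is DNS, an outbound firewall, or something at the Socket.IO protocol level.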

Unable to run Python Script from within an Ansible Playbook

I am trying to write an Ansible playbook to crawl a website and then store its contents in a static file in an AWS S3 bucket. Here is the crawler code:
    """
    Handling pages with the Next button
    """
    import sys
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    url = "https://xyz.co.uk/"
    file_name = "web_content.txt"

    while True:
        response = requests.get(url)
        soup = BeautifulSoup(response.text, 'html.parser')
        raw_html = soup.prettify()
        file = open(file_name, 'wb')
        print('Collecting the website contents')
        file.write(raw_html.encode())
        file.close()
        print('Saved to %s' % file_name)
        # print(type(raw_html))
        # Finding next page
        next_page_element = soup.select_one('li.next > a')
        if next_page_element:
            next_page_url = next_page_element.get('href')
            url = urljoin(url, next_page_url)
        else:
            break
This is my ansible-playbook:
    ---
    - name: create s3 bucket and upload static website content into it
      hosts: localhost
      connection: local
      tasks:
        - name: create a s3 bucket
          amazon.aws.aws_s3:
            bucket: testbucket393647914679149
            region: ap-south-1
            mode: create
        - name: create a folder in the bucket
          amazon.aws.aws_s3:
            bucket: testbucket393647914679149
            object: /my/directory/path
            mode: create
        - name: Upgrade pip
          pip:
            name: pip
            version: 21.1.3
        - name: install virtualenv via pip
          pip:
            requirements: /root/ansible/requirements.txt
            virtualenv: /root/ansible/myvenv
            virtualenv_python: python3.6
          environment:
            PATH: "{{ ansible_env.PATH }}:{{ ansible_user_dir }}/.local/bin"
        - name: Run script to crawl the website
          script: /root/ansible/beautiful_crawl.py
        - name: copy file into bucket folder
          amazon.aws.aws_s3:
            bucket: testbucket393647914679149
            object: /my/directory/path/web_content.text
            src: web_content.text
            mode: put
The problem is that when I run this, it runs fine up to the task "install virtualenv via pip" and then throws the following error while executing the task "Run script to crawl the website":
    fatal: [localhost]: FAILED! => {"changed": true, "msg": "non-zero return code", "rc": 2, "stderr_lines": [
        "/root/.ansible/tmp/ansible-tmp-1625137700.8854306-13026-97983643645466/beautiful_crawl.py: line 1: import: command not found",
        "/root/.ansible/tmp/ansible-tmp-1625137700.8854306-13026-97983643645466/beautiful_crawl.py: line 2: from: command not found",
        "/root/.ansible/tmp/ansible-tmp-1625137700.8854306-13026-97983643645466/beautiful_crawl.py: line 3: import: command not found",
        "/root/.ansible/tmp/ansible-tmp-1625137700.8854306-13026-97983643645466/beautiful_crawl.py: line 4: from: command not found",
        "/root/.ansible/tmp/ansible-tmp-1625137700.8854306-13026-97983643645466/beautiful_crawl.py: line 6: url: command not found",
        "/root/.ansible/tmp/ansible-tmp-1625137700.8854306-13026-97983643645466/beautiful_crawl.py: line 7: file_name: command not found",
        "/root/.ansible/tmp/ansible-tmp-1625137700.8854306-13026-97983643645466/beautiful_crawl.py: line 10: syntax error near unexpected token `('",
        "/root/.ansible/tmp/ansible-tmp-1625137700.8854306-13026-97983643645466/beautiful_crawl.py: line 10: `response = requests.get(url)'"
    ], "stdout": "", "stdout_lines": []}
What am I doing wrong here?
You have multiple problems. Check the documentation.
No. 1: The script module runs bash scripts by default, not Python scripts. If you want to run a Python script, you need to add a shebang like #!/usr/bin/env python3 as the first line of the script, or use the executable parameter.
No. 2: You create a venv, so I assume you want to run the script in that venv. You can't do that out of the box with the script module, so you need to work around it.
This should work for you (you don't need the shebang, because you tell the script module to run the script with the Python in the venv via the executable parameter):
    - name: Run script to crawl the website
      script: /root/ansible/beautiful_crawl.py
      args:
        executable: /root/ansible/myvenv/bin/python
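The shebang alternative from No. 1 would look like this (a minimal sketch, not the full crawler): with the shebang on line 1, the script module can execute the copied file directly and the kernel picks the interpreter, so bash never tries to parse the Python source.

```python
#!/usr/bin/env python3
# Without this first line, Ansible's script module hands the file to the
# default shell, which produces exactly the "import: command not found"
# errors shown in the question.
import sys

print("running under", sys.executable)
```

Note this resolves `python3` from the remote PATH, so it would not use the venv; for that, the executable parameter shown above is the right tool.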

requests.exceptions.ConnectionError: HTTPConnectionPool(host='docker.for.linux.localhost', port=8000): Max retries exceeded with url: /api/user

On Ubuntu 20.04 LTS, I am getting this error:

    requests.exceptions.ConnectionError: HTTPConnectionPool(host='docker.for.linux.localhost', port=8000): Max retries exceeded with url: /api/user
I have made two Docker images, for Flask and Django.
In the Flask app, my main.py:
    from dataclasses import dataclass
    from flask import Flask, jsonify
    from flask_sqlalchemy import SQLAlchemy
    from flask_cors import CORS
    from sqlalchemy import UniqueConstraint
    import requests

    app = Flask(__name__)
    app.config["SQLALCHEMY_DATABASE_URI"] = 'mysql://root:root@db/main'
    CORS(app)
    db = SQLAlchemy(app)

    @dataclass
    class Product(db.Model):
        id: int
        title: str
        image: str
        id = db.Column(db.Integer, primary_key=True, autoincrement=False)
        title = db.Column(db.String(200))
        image = db.Column(db.String(200))

    @dataclass
    class ProductUser(db.Model):
        id = db.Column(db.Integer, primary_key=True)
        user_id = db.Column(db.Integer)
        product_id = db.Column(db.Integer)
        UniqueConstraint('user_id', 'product_id', name='user_product_unique')

    @app.route('/api/products')
    def index():
        return jsonify(Product.query.all())

    @app.route('/api/products/<int:id>/like', methods=['POST'])
    def like(id):
        req = requests.get('http://docker.for.linux.localhost:8000/api/user')
        return jsonify(req.json())

    if __name__ == '__main__':
        app.run(debug=True, host='0.0.0.0')
my Dockerfile:
    FROM python:3.8
    ENV PYTHONUNBUFFERED 1
    WORKDIR /app
    COPY requirements.txt /app/requirements.txt
    RUN pip3 install -r requirements.txt
    COPY . /app
my docker-compose.yaml file:
    version: '3.8'
    services:
      backend:
        build:
          context: .
          dockerfile: Dockerfile
        command: 'python main.py'
        ports:
          - 8001:5000
        volumes:
          - .:/app
        depends_on:
          - db
      queue:
        build:
          context: .
          dockerfile: Dockerfile
        command: 'python3 -u consumer.py'
        depends_on:
          - db
      db:
        image: mysql:5.7.22
        restart: always
        environment:
          MYSQL_DATABASE: main
          MYSQL_USER: root
          MYSQL_PASSWORD: root
          MYSQL_ROOT_PASSWORD: root
        volumes:
          - .dbdata:/var/lib/mysql
        ports:
          - 33067:3306
When I POST to the following URL in Postman:

    http://localhost:8001/api/products/1/like

I get the error mentioned above. Can anyone tell me how to fix this?
I am using Windows 10, and for me http://docker.for.win.localhost:8000/api/user worked. You can try using lin, or check and confirm the Linux equivalent of this.

    req = requests.get('http://docker.for.win.localhost:8000/api/user')
    json = req.json()
I am using Ubuntu 22.04 LTS, and for me http://172.17.0.1:8000/api/user worked.
To get 172.17.0.1, I ran ifconfig in Terminal and looked for docker0.
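Hard-coding 172.17.0.1 works, but the bridge IP can differ between setups. On Docker Engine 20.10+, a more portable option is to map host.docker.internal to the host gateway in the compose file (a sketch showing only the relevant keys of the backend service; the rest stays as in the question):

```yaml
services:
  backend:
    # build, command, ports, volumes, depends_on as in the question
    extra_hosts:
      # "host-gateway" resolves to the host's bridge IP (often 172.17.0.1)
      - "host.docker.internal:host-gateway"
```

The Flask code would then request http://host.docker.internal:8000/api/user on both Linux and Windows/macOS.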

How to use global credentials in a Python script invoked by a Jenkins pipeline

I'm very new to working with Jenkins. So far I was able to run a simple pipeline with a simple pip install, but now I need to pass global credentials from Jenkins into a Python script test.py invoked by the Jenkinsfile.
    pipeline {
        options {
            timeout(time: 30, unit: 'MINUTES')
            buildDiscarder(logRotator(numToKeepStr: '30', artifactNumToKeepStr: '30'))
        }
        agent { label 'ops_slave' }
        stages {
            stage('Environment Build') {
                steps {
                    echo "Hello World!"
                    sh "echo Hello from the shell"
                    sh "hostname"
                    sh "uptime"
                    sh "python3 -m venv test_env"
                    sh "source ./test_env/bin/activate"
                    sh "pip3 install pandas psycopg2"
                    sh """echo the script is working"""
                    withCredentials([[
                        $class: 'UsernamePasswordMultiBinding',
                        credentialsId: 98,
                        usernameVariable: 'user',
                        passwordVariable: 'pw',
                    ]])
                    sh """python3 bartek-jenkins-testing/python/test.py"""
                }
            }
        }
    }
I've seen an implementation that uses argparse, but it's way above my level at this point, and I believe there is a way to reference the credentials from the Python script or from Jenkins directly. I've been googling for some time now, but I'm not sure the questions I'm asking are correct.
My Python script should be able to get the username and password from Jenkins global credentials ID 98:
    print('Hello World this is python')
    import pandas as pd
    print(pd.__version__)
    import pyodbc
    import psycopg2

    # can pass environment variables
    connection = psycopg2.connect(
        host="saturn-dv",
        database="saturn_dv",
        port='8080',
        user='saturn_user_bartek_malysz',
        password='')
    connection.set_session(readonly=True)

    query = """
    SELECT table_name FROM information_schema.tables
    WHERE table_schema = 'public'
    ORDER BY table_schema, table_name;"""
    data = pd.read_sql(query, connection)
    print(data)
A straightforward way is to leverage environment variables, as follows:
    // Jenkinsfile
    withCredentials([[
        $class: 'UsernamePasswordMultiBinding',
        credentialsId: 98,
        usernameVariable: 'user',
        passwordVariable: 'pw',
    ]]) {
        sh """
            export DB_USERNAME="${user}"
            export DB_PASSWORD="${pw}"
            python3 bartek-jenkins-testing/python/test.py
        """
    }

    # test.py
    import os

    connection = psycopg2.connect(
        host="saturn-dv",
        database="saturn_dv",
        port='8080',
        user=os.getenv('DB_USERNAME'),
        password=os.getenv('DB_PASSWORD'))
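Two details worth stressing: the sh step that runs test.py must sit inside the withCredentials { ... } block (in the question's pipeline it is outside, so the variables are never set), and on the Python side a small guard makes that failure mode obvious (a sketch; DB_USERNAME/DB_PASSWORD are the names chosen above):

```python
import os
from typing import Tuple


def credentials_from_env() -> Tuple[str, str]:
    """Read the credentials exported by the withCredentials block,
    failing fast with a readable message when they are absent."""
    user = os.getenv('DB_USERNAME')
    pwd = os.getenv('DB_PASSWORD')
    if not user or not pwd:
        # Typical cause: the sh step sits outside withCredentials { ... }
        raise RuntimeError(
            "DB_USERNAME/DB_PASSWORD are not set - is the sh step that runs "
            "this script inside the withCredentials { ... } block?")
    return user, pwd
```

The connect call then becomes `user, pwd = credentials_from_env()` followed by `psycopg2.connect(..., user=user, password=pwd)`.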

Unable to import google logging metric using terraform

I have created the following logging metric resource in Terraform:
    resource "google_logging_metric" "proservices_run" {
      name    = "user/proservices-run"
      filter  = "resource.type=gae_app AND severity>=ERROR"
      project = "${google_project.service.project_id}"

      metric_descriptor {
        metric_kind = "DELTA"
        value_type  = "INT64"
      }
    }
I also have a custom metric named user/proservices-run in Stackdriver.
However the following two import attempts fail:
    $ terraform import google_logging_metric.proservices_run proservices-run
    google_logging_metric.proservices_run: Importing from ID "proservices-run"...
    google_logging_metric.proservices_run: Import complete!
      Imported google_logging_metric (ID: proservices-run)
    google_logging_metric.proservices_run: Refreshing state... (ID: proservices-run)

    Error: google_logging_metric.proservices_run (import id: proservices-run): 1 error occurred:
    * import google_logging_metric.proservices_run result: proservices-run: google_logging_metric.proservices_run: project: required field is not set

    $ terraform import google_logging_metric.proservices_run user/proservices-run
    google_logging_metric.proservices_run: Importing from ID "user/proservices-run"...
    google_logging_metric.proservices_run: Import complete!
      Imported google_logging_metric (ID: user/proservices-run)
    google_logging_metric.proservices_run: Refreshing state... (ID: user/proservices-run)

    Error: google_logging_metric.proservices_run (import id: user/proservices-run): 1 error occurred:
    * import google_logging_metric.proservices_run result: user/proservices-run: google_logging_metric.proservices_run: project: required field is not set
Using Terraform v0.11.14 with:

    provider.google = 2.11.0
    provider.google-beta 2.11.0

Edit: I noticed the "project: required field is not set" part of the error message and added the project field to my TF code, but the outcome is still the same.
I ran into the same issue trying to import a log-based metric.
The solution was to set the environment variable GOOGLE_PROJECT=<your-project-id> when running the command:
    GOOGLE_PROJECT=MyProjectId \
    terraform import \
      "google_logging_metric.create_user_count" \
      "create_user_count"
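If setting an environment variable per command is awkward, the same effect can usually be achieved by pinning the project in the provider block, which terraform import also consults (a sketch; the project id is a placeholder for your own):

```hcl
provider "google" {
  # Placeholder project id - import resolves the metric's project from here
  # instead of requiring GOOGLE_PROJECT in the environment.
  project = "my-project-id"
}
```

Either way, the import ID for a log-based metric is just the metric name (here create_user_count); the project comes from the provider configuration or the environment, not from the ID.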
