I'm running mitmproxy on Windows and trying to run a script that saves requests and responses into PostgreSQL; for this I'm using SQLAlchemy.
But I cannot make it work with mitmproxy for some reason. When it runs, it seems to use a different Python interpreter and my code doesn't work. Does mitmproxy use a different interpreter apart from the one you have installed?
Command, run from the mitmproxy/bin folder:
mitmdump.exe -s C:\users\etc\{FULL_PATH}\mitmproxy.py
I'm getting:
"No module installed named SQLAlchemy"
I already tried installing the module it says is missing (sqlalchemy) via pip and pip3, but it's already installed.
mitmproxy.py:
from mitmproxy import http
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session
from entities.models.Request import RequestModel
from entities.models.Response import ResponseModel
from entities.models.Session import SessionModel

server = 'www.XXX.com'
world = 'XXX'
user = 'XXX'
version = None

engine = create_engine('postgresql://XXX:XXX@localhost:5432/XXX')
Base = declarative_base()


def createSession():
    with Session(engine) as session:
        http_session = SessionModel(server=server, world=world, version=version, user=user)
        session.add(http_session)
        # We add the created object to our DB
        session.flush()
        # At this point, the object has been pushed to the DB,
        # and has been automatically assigned a unique primary key id
        session.refresh(http_session)
        # refresh updates the given object in the session with its state in the DB
        # (and can also refresh only certain attributes - search for documentation)
        return http_session


session_object = createSession()
with Session(engine) as session:
    session.add(session_object)
    session.commit()


def request(flow: http.HTTPFlow) -> None:
    if flow.request.headers['x-ig-client-version'] and session_object.version is None:
        session_object.version = flow.request.headers['x-ig-client-version']
        with Session(engine) as session:
            session.commit()
    request_url = flow.request.url
    request_cookies = None
    if flow.request.cookies:
        request_cookies = flow.request.cookies
    Request = RequestModel(method=flow.request.method, url=request_url)
    Request.headers = flow.request.headers
    Request.cookies = request_cookies
    Request.body = flow.request.content
    Request.timestamp_start = flow.request.timestamp_start
    Request.timestamp_end = flow.request.timestamp_end
    Request.size = len(flow.request.content)
    Response = ResponseModel(headers=flow.response.headers,
                             status_code=flow.response.status_code, body=flow.response.content)
    Response.cookies = None
    if flow.response.cookies:
        Response.cookies = flow.response.cookies
    Request.response = Response
    session_object.requests.append(Request)
    with Session(engine) as session:
        session.commit()
All the SQLAlchemy models are in my related question:
AttributeError: 'set' object has no attribute '_sa_instance_state' - SQLAlchemy
If you want to use Python packages that are not included in mitmproxy's own installation, you need to install mitmproxy via pip or pipx. The normal binaries include their own Python environment.
Source:
https://docs.mitmproxy.org/stable/overview-installation/#installation-from-the-python-package-index-pypi.
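For example, a pipx-based setup could look roughly like this (just a sketch, assuming pipx is available; psycopg2-binary is only added because the script uses a postgresql:// URL and needs a driver):
pipx install mitmproxy
pipx inject mitmproxy sqlalchemy psycopg2-binary
Alternatively, install everything into one virtual environment with pip (python -m pip install mitmproxy sqlalchemy psycopg2-binary) and start the script with that environment's mitmdump instead of the one in the bundled mitmproxy/bin folder:
mitmdump -s C:\users\etc\{FULL_PATH}\mitmproxy.py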
Related
I have the following code:
import sqlalchemy
import testing.postgresql
from sqlalchemy.ext.declarative import declarative_base
from app.config import Settings, mode
from databases import Database
from sqlalchemy import (
    create_engine
)


def get_database():
    if mode == 'prod':
        settings = Settings()
        db_config = {
            "drivername": "postgresql",
            "host": settings.DB_HOST,
            "username": settings.DB_USER,
            "password": settings.DB_PASSWORD,
            "port": settings.DB_PORT,
            "database": settings.DB_DATABASE
        }
        uri = sqlalchemy.engine.url.URL(**db_config)
        engine = create_engine(uri)
        Base = declarative_base()
        database = Database(str(engine.url))
        return engine, Base, database
    else:
        with testing.postgresql.Postgresql() as postgresql:
            engine = create_engine(postgresql.url())
            Base = declarative_base()
            database = Database(str(engine.url))
            return engine, Base, database


engine, Base, database = get_database()
My code runs perfectly when mode == 'prod' but, when mode == 'test', I get this error:
venv\lib\site-packages\testing\postgresql.py:144: in find_program
raise RuntimeError("command not found: %s" % name)
E RuntimeError: command not found: initdb
I can say that Postgres is installed and running, and C:\Program Files\PostgreSQL\13\bin is in PATH.
I can't figure out what I'm missing.
Ran into this trying to solve the issue myself. On macOS, I was able to fix it by adding Postgres.app's bin directory to the PATH: sudo mkdir -p /etc/paths.d && echo /Applications/Postgres.app/Contents/Versions/latest/bin | sudo tee /etc/paths.d/postgresapp. Run this, wait 10-15 minutes, then restart the terminal and try locate initdb; it should return a list of locations. Then make sure psql is on the PATH with which psql, and to be safe also run which initdb. If all of these return locations, try your code again in a new terminal and it should work.
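On Windows it is also worth checking that the Python process itself can see the PostgreSQL binaries, since a directory being on the system PATH does not always carry over to the environment the interpreter or test runner was started from. A quick diagnostic/workaround sketch, assuming the default install path mentioned in the question:
import os
import shutil

# Assumption: default PostgreSQL 13 install location from the question.
PG_BIN = r"C:\Program Files\PostgreSQL\13\bin"

# Prepend it to this process's PATH so child processes inherit it too.
os.environ["PATH"] = PG_BIN + os.pathsep + os.environ.get("PATH", "")

# If this prints None, the interpreter cannot see initdb at all, which
# matches the RuntimeError raised by testing.postgresql's find_program.
print(shutil.which("initdb"))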
I have this function which I want to call in the main function of my Flask app and then pass to another class. The catch is that I want to create the instance only once, when the server starts, and make it available globally to all the classes.
def webdriver_instance():
    from selenium import webdriver
    from selenium.webdriver import FirefoxOptions
    opts = FirefoxOptions()
    opts.add_argument("--headless")
    opts.add_argument("start-maximized")
    opts.add_argument("disable-infobars")
    opts.add_argument("--disable-extensions")
    opts.add_argument('--no-sandbox')
    opts.add_argument('--disable-application-cache')
    opts.add_argument('--disable-gpu')
    opts.add_argument("--disable-dev-shm-usage")
    browser = webdriver.Firefox(firefox_options=opts)
    return browser
You are looking for re-using selenium sessions. You can start the browser once, store the session id and executor url somewhere, and grab them when needed:
from selenium import webdriver

driver = webdriver.Firefox()
executor_url = driver.command_executor._url
session_id = driver.session_id
driver.get("http://tarunlalwani.com")
print(session_id)
print(executor_url)


def create_driver_session(session_id, executor_url):
    from selenium.webdriver.remote.webdriver import WebDriver as RemoteWebDriver

    # Save the original function, so we can revert our patch
    org_command_execute = RemoteWebDriver.execute

    def new_command_execute(self, command, params=None):
        if command == "newSession":
            # Mock the response
            return {'success': 0, 'value': None, 'sessionId': session_id}
        else:
            return org_command_execute(self, command, params)

    # Patch the function before creating the driver object
    RemoteWebDriver.execute = new_command_execute
    new_driver = webdriver.Remote(command_executor=executor_url, desired_capabilities={})
    new_driver.session_id = session_id

    # Replace the patched function with the original function
    RemoteWebDriver.execute = org_command_execute
    return new_driver


driver2 = create_driver_session(session_id, executor_url)
print(driver2.current_url)
source: https://tarunlalwani.com/post/reusing-existing-browser-session-selenium/
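To connect this back to the Flask setup in the question, one possible arrangement (just a sketch built on the two functions above) is to start the browser once at server startup, keep its session_id and executor_url at module level, and let every other class reattach through create_driver_session():
# module-level cache, filled once when the server starts
_executor_url = None
_session_id = None


def init_browser():
    global _executor_url, _session_id
    browser = webdriver_instance()  # the function from the question
    _executor_url = browser.command_executor._url
    _session_id = browser.session_id
    return browser


def get_browser():
    # any class can call this to reattach to the already-running browser
    return create_driver_session(_session_id, _executor_url)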
Using flask-sqlalchemy, how is it possible to connect to a database from within a redis task?
The database connection is created in create_app with:
db = SQLAlchemy(app)
I call a job from a route:
@app.route("/record_occurrences")
def query_library():
    job = queue.enqueue(ApiQueryService(word), word)
Then inside the redis task, I want to make an update to the database
class ApiQueryService(object):
    def __init__(self, word):
        resp = call_api()
        db.session.query(Model).filter_by(id=word.id).update({"count": resp[1]})
I can't find a way to access the db. I've tried importing it with from app import db. I tried storing it in g. I tried reinstantiating it with SQLAlchemy(app), and several other things, but none of these work. When I was using sqlite, all of this worked, and I could easily connect to the db from any module with a get_db method that simply called sqlite3.connect(). Is there some simple way to access it with SQLAlchemy that's similar to that?
This can be solved using the App Factory pattern, as mentioned by @vulpxn.
Let's assume we have our configuration class somewhere like this:
import os


class Config(object):
    DEBUG = False
    TESTING = False
    DEVELOPMENT = False
    API_PAGINATION = 10
    PROPAGATE_EXCEPTIONS = True  # needed due to Flask-Restful not passing them up
    SQLALCHEMY_TRACK_MODIFICATIONS = False  # ref: https://stackoverflow.com/questions/33738467/how-do-i-know-if-i-can-disable-sqlalchemy-track-modifications/33790196#33790196


class ProductionConfig(Config):
    CSRF_COOKIE_SAMESITE = 'Strict'
    SESSION_PROTECTION = "strong"
    SESSION_COOKIE_SECURE = True
    SESSION_COOKIE_HTTPONLY = True
    SESSION_COOKIE_SAMESITE = 'Strict'
    SECRET_KEY = "super-secret"
    INVITES_SECRET = "super-secret"
    PASSWORD_RESET_SECRET = "super-secret"
    PUBLIC_VALIDATION_SECRET = "super-secret"
    FRONTEND_SERVER_URL = "https://127.0.0.1:4999"
    SQLALCHEMY_DATABASE_URI = "sqlite:///%s" % os.path.join(os.path.abspath(os.path.dirname(__file__)), "..",
                                                            "people.db")
We create our app factory:
from flask_sqlalchemy import SQLAlchemy
from flask import Flask
from development.config import DevelopmentConfig, ProductionConfig  # the config classes shown above
from rq import Queue
from email_queue.worker import conn

db = SQLAlchemy()
q = Queue(connection=conn)


def init_app(config=ProductionConfig):
    # app creation
    app = Flask(__name__)
    app.config.from_object(config)

    # plugin initialization
    db.init_app(app)

    with app.app_context():
        # adding blueprints
        from .blueprints import api
        app.register_blueprint(api, url_prefix='/api/v1')

    return app
We will now be able to start our app using the app factory:
app = centrifuga4.init_app()

if __name__ == "__main__":
    with app.app_context():
        app.run()
But we will also be able to do the following in our Redis job:
def my_job():
    app = init_app()
    with app.app_context():
        return something_using_sqlalchemy()
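For completeness, the route from the question would then enqueue the job through the queue created next to db in the factory module; roughly like this (the blueprint name follows the from .blueprints import api line above):
from flask import Blueprint, jsonify

api = Blueprint("api", __name__)


@api.route("/record_occurrences")
def query_library():
    # q is the rq Queue defined alongside db in the factory module above;
    # import both from wherever you placed that module.
    job = q.enqueue(my_job)
    return jsonify(job_id=job.get_id()), 202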
I am trying to write a Python script that uses watchdog to look for file creation and upload the new file to S3 using boto3. However, my boto3 credentials expire every 12 hours, so I need to renew them. I am storing my boto3 credentials in ~/.aws/credentials. Right now I am trying to catch the S3UploadFailedError, renew the credentials, and write them to ~/.aws/credentials. But even though the credentials are renewed and I call boto3.client('s3') again, it still throws an exception.
What am I doing wrong? How can I resolve it?
Below is the code snippet
try:
    s3 = boto3.client('s3')
    s3.upload_file(event.src_path, 'bucket-name', event.src_path)
except boto3.exceptions.S3UploadFailedError as e:
    print(e)
    get_aws_credentials()
    s3 = boto3.client('s3')
I have found a good example of how to refresh the credentials at this link:
https://pritul95.github.io/blogs/boto3/2020/08/01/refreshable-boto3-session/
but there is a little bug inside it. Be careful about that.
Here is the corrected code:
from uuid import uuid4
from datetime import datetime
from time import time

from boto3 import Session
from botocore.credentials import RefreshableCredentials
from botocore.session import get_session


class RefreshableBotoSession:
    """
    Boto Helper class which lets us create a refreshable session, so that we can cache the client or resource.

    Usage
    -----
    session = RefreshableBotoSession().refreshable_session()
    client = session.client("s3")  # we now can cache this client object without worrying about expiring credentials
    """

    def __init__(
        self,
        region_name: str = None,
        profile_name: str = None,
        sts_arn: str = None,
        session_name: str = None,
        session_ttl: int = 3000
    ):
        """
        Initialize `RefreshableBotoSession`

        Parameters
        ----------
        region_name : str (optional)
            Default region when creating a new connection.
        profile_name : str (optional)
            The name of a profile to use.
        sts_arn : str (optional)
            The role arn to sts before creating a session.
        session_name : str (optional)
            An identifier for the assumed role session. (required when `sts_arn` is given)
        session_ttl : int (optional)
            An integer number to set the TTL for each session. Beyond this session, it will renew the token.
            50 minutes by default, which is before the default role expiration of 1 hour
        """
        self.region_name = region_name
        self.profile_name = profile_name
        self.sts_arn = sts_arn
        self.session_name = session_name or uuid4().hex
        self.session_ttl = session_ttl

    def __get_session_credentials(self):
        """
        Get session credentials
        """
        session = Session(region_name=self.region_name, profile_name=self.profile_name)

        # if sts_arn is given, get credentials by assuming the given role
        if self.sts_arn:
            sts_client = session.client(service_name="sts", region_name=self.region_name)
            response = sts_client.assume_role(
                RoleArn=self.sts_arn,
                RoleSessionName=self.session_name,
                DurationSeconds=self.session_ttl,
            ).get("Credentials")

            credentials = {
                "access_key": response.get("AccessKeyId"),
                "secret_key": response.get("SecretAccessKey"),
                "token": response.get("SessionToken"),
                "expiry_time": response.get("Expiration").isoformat(),
            }
        else:
            session_credentials = session.get_credentials().__dict__
            credentials = {
                "access_key": session_credentials.get("access_key"),
                "secret_key": session_credentials.get("secret_key"),
                "token": session_credentials.get("token"),
                "expiry_time": datetime.fromtimestamp(time() + self.session_ttl).isoformat(),
            }

        return credentials

    def refreshable_session(self) -> Session:
        """
        Get refreshable boto3 session.
        """
        # get refreshable credentials
        refreshable_credentials = RefreshableCredentials.create_from_metadata(
            metadata=self.__get_session_credentials(),
            refresh_using=self.__get_session_credentials,
            method="sts-assume-role",
        )

        # attach refreshable credentials to the current session
        session = get_session()
        session._credentials = refreshable_credentials
        session.set_config_variable("region", self.region_name)
        autorefresh_session = Session(botocore_session=session)

        return autorefresh_session
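Applied to the watchdog/S3 case from the question, the idea is to build the session once and keep reusing the same cached client. A minimal sketch following the usage shown in the docstring ("bucket-name" is the placeholder from the question):
# Build the refreshable session once, e.g. when the watcher starts,
# and reuse the cached client for every upload.
session = RefreshableBotoSession().refreshable_session()
s3 = session.client("s3")


def on_created(event):
    # watchdog handler callback
    s3.upload_file(event.src_path, "bucket-name", event.src_path)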
According to the documentation, the client looks in several locations for credentials and there are other options that are also more programmatic-friendly that you might want to consider instead of the .aws/credentials file.
Quoting the docs:
The order in which Boto3 searches for credentials is:
Passing credentials as parameters in the boto.client() method
Passing credentials as parameters when creating a Session object
Environment variables
Shared credential file (~/.aws/credentials)
AWS config file (~/.aws/config)
Assume Role provider
In your case, since you are already catching the exception and renewing the credentials, I would simply pass the new ones to a new instance of the client like so:
client = boto3.client(
    's3',
    aws_access_key_id=NEW_ACCESS_KEY,
    aws_secret_access_key=NEW_SECRET_KEY,
    aws_session_token=NEW_SESSION_TOKEN
)
If instead you are using these same credentials elsewhere in the code to create other clients, I'd consider setting them as environment variables:
import os
os.environ['AWS_ACCESS_KEY_ID'] = NEW_ACCESS_KEY
os.environ['AWS_SECRET_ACCESS_KEY'] = NEW_SECRET_KEY
os.environ['AWS_SESSION_TOKEN'] = NEW_SESSION_TOKEN
Again, quoting the docs:
The session key for your AWS account [...] is only needed when you are using temporary credentials.
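Putting that together with the snippet from the question, the retry could look roughly like this (assuming get_aws_credentials() returns the renewed values as a dict with the usual AccessKeyId/SecretAccessKey/SessionToken keys; that return shape is an assumption, not something from the question):
try:
    s3 = boto3.client('s3')
    s3.upload_file(event.src_path, 'bucket-name', event.src_path)
except boto3.exceptions.S3UploadFailedError as e:
    print(e)
    # Assumption: get_aws_credentials() hands back the fresh temporary credentials.
    creds = get_aws_credentials()
    s3 = boto3.client(
        's3',
        aws_access_key_id=creds['AccessKeyId'],
        aws_secret_access_key=creds['SecretAccessKey'],
        aws_session_token=creds['SessionToken'],
    )
    # retry the upload with the re-created client
    s3.upload_file(event.src_path, 'bucket-name', event.src_path)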
Here is my implementation, which only generates new credentials if the existing ones have expired, using a singleton design pattern:
import boto3
from datetime import datetime
from dateutil.tz import tzutc
import os
import binascii


class AssumeRoleProd:
    __credentials = None

    def __init__(self):
        # this class is not meant to be instantiated
        assert True == False

    @staticmethod
    def __setCredentials():
        print("\n\n ======= GENERATING NEW SESSION TOKEN ======= \n\n")
        # create an STS client object that represents a live connection to the
        # STS service
        sts_client = boto3.client('sts')

        # Call the assume_role method of the STSConnection object and pass the role
        # ARN and a role session name.
        assumed_role_object = sts_client.assume_role(
            RoleArn=your_role_here,
            RoleSessionName=f"AssumeRoleSession{binascii.b2a_hex(os.urandom(15)).decode('UTF-8')}"
        )

        # From the response that contains the assumed role, get the temporary
        # credentials that can be used to make subsequent API calls
        AssumeRoleProd.__credentials = assumed_role_object['Credentials']

    @staticmethod
    def getTempCredentials():
        credsExpired = False

        # Generate the credentials the first time around
        if AssumeRoleProd.__credentials is None:
            AssumeRoleProd.__setCredentials()
            credsExpired = True

        # Regenerate if only 5 minutes are left until expiry. You may set this up
        # for the entire 60 minutes by catching the botocore ClientError instead.
        elif (AssumeRoleProd.__credentials['Expiration'] - datetime.now(tzutc())).seconds // 60 <= 5:
            AssumeRoleProd.__setCredentials()
            credsExpired = True

        return credsExpired, AssumeRoleProd.__credentials
And then I am using the singleton design pattern for the client as well, so that a new client is generated only when a new session is generated. You can add a region as well if required.
class lambdaClient:
    __prodClient = None

    def __init__(self):
        # this class is not meant to be instantiated
        assert True == False

    @staticmethod
    def __initProdClient():
        credsExpired, credentials = AssumeRoleProd.getTempCredentials()
        if lambdaClient.__prodClient is None or credsExpired:
            lambdaClient.__prodClient = boto3.client('lambda',
                                                     aws_access_key_id=credentials['AccessKeyId'],
                                                     aws_secret_access_key=credentials['SecretAccessKey'],
                                                     aws_session_token=credentials['SessionToken'])
        return lambdaClient.__prodClient

    @staticmethod
    def getProdClient():
        return lambdaClient.__initProdClient()
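Usage is then a one-liner wherever a client is needed, for example:
# Each call returns the cached Lambda client, rebuilding it only when the
# assumed-role credentials are about to expire.
client = lambdaClient.getProdClient()
response = client.list_functions()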
I am trying to use Keycloak with Apache Superset. I have spent hours on the links below but have been unable to replace the current login:
1. Using OpenID/Keycloak with Superset
2. Using KeyCloak (OpenID Connect) with Apache SuperSet
3. Using OpenID/Keycloak with Superset
I am using Apache Superset 0.34.5, while the links above cover 0.28 and below.
I am confused at the initial step. Let me explain my steps and see what I am missing.
I installed Superset using pip.
The structure I have is: config.py and security.py sit at the same level (I don't have a security folder).
I renamed the existing security.py to oid_security.py.
I created a security.py with the following content:
from flask import redirect, request
from flask_appbuilder.security.manager import AUTH_OID
from superset.security import SupersetSecurityManager
from flask_oidc import OpenIDConnect
from flask_appbuilder.security.views import AuthOIDView
from flask_login import login_user
from urllib.parse import quote
from flask_appbuilder.views import ModelView, SimpleFormView, expose
import logging


class AuthOIDCView(AuthOIDView):

    @expose('/login/', methods=['GET', 'POST'])
    def login(self, flag=True):
        sm = self.appbuilder.sm
        oidc = sm.oid

        @self.appbuilder.sm.oid.require_login
        def handle_login():
            user = sm.auth_user_oid(oidc.user_getfield('email'))
            if user is None:
                info = oidc.user_getinfo(['preferred_username', 'given_name', 'family_name', 'email'])
                user = sm.add_user(info.get('preferred_username'), info.get('given_name'),
                                   info.get('family_name'), info.get('email'), sm.find_role('Gamma'))
            login_user(user, remember=False)
            return redirect(self.appbuilder.get_url_for_index)

        return handle_login()

    @expose('/logout/', methods=['GET', 'POST'])
    def logout(self):
        oidc = self.appbuilder.sm.oid
        oidc.logout()
        super(AuthOIDCView, self).logout()
        redirect_url = request.url_root.strip('/') + self.appbuilder.get_url_for_login
        return redirect(oidc.client_secrets.get('issuer') + '/protocol/openid-connect/logout?redirect_uri=' + quote(redirect_url))


class OIDCSecurityManager(SupersetSecurityManager):
    authoidview = AuthOIDCView

    def __init__(self, appbuilder):
        super(OIDCSecurityManager, self).__init__(appbuilder)
        if self.auth_type == AUTH_OID:
            self.oid = OpenIDConnect(self.appbuilder.get_app)
I then created a custom manager with the following:
from flask_appbuilder.security.manager import AUTH_OID
from flask_appbuilder.security.sqla.manager import SecurityManager
from flask_oidc import OpenIDConnect


class OIDCSecurityManager(SecurityManager):
    def __init__(self, appbuilder):
        super(OIDCSecurityManager, self).__init__(appbuilder)
        if self.auth_type == AUTH_OID:
            self.oid = OpenIDConnect(self.appbuilder.get_app)
            self.authoidview = AuthOIDCView
I created client_secret.json with my credentials.
I edited the config file as below:
from superset.security import OIDCSecurityManager
AUTH_TYPE = AUTH_OID
OIDC_CLIENT_SECRETS = 'client_secret.json'
OIDC_ID_TOKEN_COOKIE_SECURE = False
OIDC_REQUIRE_VERIFIED_EMAIL = False
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = 'Gamma'
CUSTOM_SECURITY_MANAGER = OIDCSecurityManager
One thing to mention here: Flask-AppBuilder has a manager.py in its security folder, which contains the abstract security manager class. I am getting an error in security.py.
It says: cannot import name 'SupersetSecurityManager' from 'superset.security'
Can anyone help, please?
I suggest you start afresh and follow the steps that worked for me:
Create a virtual environment within your superset directory and activate it.
Install superset and the flask-oidc plugin within your virtual environment: pip install flask-oidc
Have an oidc_security.py file with the script you pasted above (i.e. your security.py).
Have a client_secret.json file with your keycloak config.
Have a superset_config.py with the script you pasted above (see the sketch after these steps).
Add all three of these files to your PYTHONPATH.
Run the superset db upgrade and superset init commands.
Finally, execute superset run. After initialization completes, navigate to http://localhost:8088 in your browser. Expected behaviour: you'll be redirected to keycloak to log in/register; after a successful sign-in, you'll be redirected back to the superset app.
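One detail worth double-checking: in the superset_config.py from the question, OIDCSecurityManager is imported from superset.security, but it is your own class, so it has to be imported from your own module (the oidc_security.py mentioned above). A sketch of the wiring, assuming those file names:
# superset_config.py (sketch; file names follow the steps above)
from flask_appbuilder.security.manager import AUTH_OID
from oidc_security import OIDCSecurityManager  # your own module, not superset.security

AUTH_TYPE = AUTH_OID
OIDC_CLIENT_SECRETS = 'client_secret.json'
OIDC_ID_TOKEN_COOKIE_SECURE = False
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = 'Gamma'
CUSTOM_SECURITY_MANAGER = OIDCSecurityManager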
I hope this helps. Do post back in case you succeed or face an error.
"I then created a custom manager with the following"
Where do I update this?
from flask_appbuilder.security.manager import AUTH_OID
from flask_appbuilder.security.sqla.manager import SecurityManager
from flask_oidc import OpenIDConnect


class OIDCSecurityManager(SecurityManager):
    def __init__(self, appbuilder):
        super(OIDCSecurityManager, self).__init__(appbuilder)
        if self.auth_type == AUTH_OID:
            self.oid = OpenIDConnect(self.appbuilder.get_app)
            self.authoidview = AuthOIDCView