How to run a CherryPy app without screen logging? - cherrypy

Hi, I am looking for a configuration option or flag that silences the logging of requested pages.
When I run python cherrypy_app.py and go to 127.0.0.1:8080, the console where I started the CherryPy app shows:
127.0.0.1 - - [09/Oct/2014:19:10:35] "GET / HTTP/1.1" 200 1512 "" "Mozilla/5.0 ..."
127.0.0.1 - - [09/Oct/2014:19:10:35] "GET /static/css/style.css HTTP/1.1" 200 88 "http://127.0.0.1:8080/" "Mozilla/5.0 ..."
127.0.0.1 - - [09/Oct/2014:19:10:36] "GET /favicon.ico HTTP/1.1" 200 1406 "" "Mozilla/5.0 ..."
I do not want to show this info. Is that possible?

As far as I remember, in my first attempts with CherryPy I had the same desire. So here's a little more to say besides turning off stdout logging per se.
CherryPy has some predefined environments: staging, production, embedded, test_suite that are defined here. Each environment has its own set of configuration. While developing, stdout logging is in fact quite helpful, whereas in a production environment it makes no sense. Setting the environment according to the deployment is the correct way to deal with configuration in CherryPy.
In your particular case the stdout logging is controlled by log.screen. It is already disabled in the production environment.
Here's an example, but note that setting the environment inside your application isn't the best idea. You're better off using cherryd's --environment option instead.
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import cherrypy

config = {
    'global' : {
        'server.socket_host' : '127.0.0.1',
        'server.socket_port' : 8080,
        'server.thread_pool' : 8,
        # Doing it explicitly isn't the recommended way
        # 'log.screen' : False
    }
}

class App:

    @cherrypy.expose
    def index(self):
        return 'Logging example'

if __name__ == '__main__':
    # Better use cherryd (http://cherrypy.readthedocs.org/en/latest/install.html#cherryd)
    # for setting the environment outside the app
    cherrypy.config.update({'environment' : 'production'})
    cherrypy.quickstart(App(), '/', config)
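For reference, the cherryd route mentioned above would look something like this (a sketch; the module name cherrypy_app is assumed, see the cherryd docs linked in the code comment):

```
cherryd -e production -i cherrypy_app
```

This keeps the environment choice out of the code, so the same module can still run with log.screen enabled during development.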

Related

Selecting multiple configs from a Config Group in Hydra without using an explicit nesting level inside each of the configs

The documentation in https://hydra.cc/docs/patterns/select_multiple_configs_from_config_group/ shows how to pick multiple configs from a Config Group and place them in a dictionary-like structure.
However, as mentioned in the very last paragraph there, "example uses an explicit nesting level inside each of the configs to prevent them stepping over one another".
For my use-case, this would prove extremely cumbersome and I would like to avoid it at all costs if possible.
Is there a way to achieve a similar result without resorting to explicitly adding the level in the individual configs? Thanks in advance :)
You can use a defaults-list @package keyword to achieve a similar result.
Let's assume you have changed the yaml files server/site/fb.yaml and server/site/google.yaml to not contain the "explicit nesting", so e.g. server/site/fb.yaml contains only the data domain: facebook.com.
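For instance, the two stripped-down files described above would contain nothing but their payload (a sketch; file contents assumed from the description):

```yaml
# server/site/fb.yaml -- no explicit nesting level
domain: facebook.com
```

```yaml
# server/site/google.yaml
domain: google.com
```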
You can achieve the same output as from the docs webpage using the following defaults list in server/apache.yaml:
# Option 1: With a non-overridable defaults list
defaults:
  - site/fb@site.fb
  - site/google@site.google

# Option 2: With an overridable defaults list
defaults:
  - site@site.fb: fb
  - site@site.google: google
Either option 1 or option 2 above produces this output:
$ python my_app.py
server:
  site:
    fb:
      domain: facebook.com
    google:
      domain: google.com
  host: localhost
  port: 443
The @package directive here could be any compound key that you want. For example, using the following defaults list:
# Option 1: With a non-overridable defaults list
defaults:
  - site/fb@foo
  - site/google@bar

# Option 2: With an overridable defaults list
defaults:
  - site@foo: fb
  - site@bar: google
We get this result:
$ python my_app.py
server:
  foo:
    domain: facebook.com
  bar:
    domain: google.com
  host: localhost
  port: 443
Using option 2 (an overridable defaults list) means you can override the given default option using the CLI:
$ python my_app.py server/site@server.foo=amazon
server:
  foo:
    domain: amazon.com
  bar:
    domain: google.com
  host: localhost
  port: 443

Packets don't have 'http' layer available

Hi all,
I am learning about network packets online and came across Scapy in Python. I am supposed to see an '###[ HTTP ]###' section in the packet results in the terminal, but for some sites I don't. In the video I am learning from, the tutor uses the same code and sees an HTTP layer for every single site he browses, but I can't reproduce his results.
I have Python 2.7.18 and Python 3.9.9 on my Kali box. I tried calling the program with both 'python' and 'python3' in the terminal (no change in finding the HTTP layer in the packets).
I am capturing some of the HTTP packets, but not all. I have been working on a Python script on my Kali VM that looks at packet transmissions for URLs and login info and displays those URLs in the terminal. The tutorial got pretty much my expected result, but I don't get the same. The coach in the tutorial did the same as I did (go to Bing, open a random image).
Am I doing something wrong? I would appreciate help on this issue, please.
...
# CODE:
#!/usr/bin/env python
import scapy.all as scapy
from scapy.layers import http

def sniff(interface):
    # prn = callback function invoked for every captured packet
    scapy.sniff(iface=interface, store=False, prn=process_sniffed_packet)

def get_url(packet):
    return packet[http.HTTPRequest].Host + packet[http.HTTPRequest].Path

def get_login_info(packet):
    if packet.haslayer(scapy.Raw):  # Only show packets that may carry a username and password.
        load = packet[scapy.Raw].load
        keywords = ["uname", "username", "user", "pass", "password", "login", "Email"]
        for keyword in keywords:
            if keyword in str(load):
                return load

def process_sniffed_packet(packet):
    # print(packet.show())
    if packet.haslayer(http.HTTPRequest):
        # print(packet.show())
        URL = get_url(packet)
        print("[+] HTTP >> " + str(URL))
        login_info = get_login_info(packet)
        if login_info:
            print("\n\nPossible username and Password > " + str(login_info) + "\n\n")

sniff("eth0")  # This interface is connected to the internet
...
RESULT IN TERMINAL: I was browsing Bing.com and opening a random image.
I used print(packet.show()) for the final image that I browsed. In the tutorial there was an ###[ HTTP ]### layer, but I didn't have that layer. (Image of the packet info for a random image.)
┌──(venv)─(root💀kali)-[~/PycharmProjects/hello]
└─# python packet_sniffer.py
[+] HTTP >> b'ocsp.digicert.com/'
[+] HTTP >> b'ocsp.pki.goog/gts1c3'
[+] HTTP >> b'ocsp.pki.goog/gts1c3'
[+] HTTP >> b'ocsp.pki.goog/gts1c3'
[+] HTTP >> b'ocsp.pki.goog/gts1c3'
[+] HTTP >> b'ocsp.pki.goog/gts1c3'
[+] HTTP >> b'ocsp.pki.goog/gts1c3'
[+] HTTP >> b'ocsp.digicert.com/'
^C
My expectation: these are exactly the URLs that I visited for the above result.
┌──(venv)─(root💀kali)-[~/PycharmProjects/hello]
└─# python packet_sniffer.py
[+] HTTP >> file:///usr/share/kali-defaults/web/homepage.html
[+] HTTP >> https://www.google.com/search?client=firefox-b-1-e&q=bing
[+] HTTP >> https://www.bing.com/
[+] HTTP >> https://www.bing.com/search?q=test&qs=HS&sc=8-0&cvid=75111DD366884A028FE0E0D9383A29CD&FORM=QBLH&sp=1
[+] HTTP >> https://www.bing.com/images/search?view=detailV2&ccid=3QI4G5yZ&id=F8B496EB517D80EFD809FCD1EF576F85DDD3A8EE&thid=OIP.3QI4G5yZS31HKo6043_GlAHaEU&mediaurl=https%3a%2f%2fwww.hrt.org%2fwp-content%2fuploads%2f2018%2f01%2fGenetic-Testing-Test-DNA-for-Genetic-Mutations-Telomeres-Genes-and-Proteins-for-Risk-1.jpg&cdnurl=https%3a%2f%2fth.bing.com%2fth%2fid%2fR.dd02381b9c994b7d472a8eb4e37fc694%3frik%3d7qjT3YVvV%252b%252fR%252fA%26pid%3dImgRaw%26r%3d0&exph=3500&expw=6000&q=test&simid=608028087796855450&FORM=IRPRST&ck=326502E72BC539777664412003B5BAC2&selectedIndex=80&ajaxhist=0&ajaxserp=0
^C
...
I was running into a similar issue, which turned out to be that the HTTP/1.0 packets I was attempting to analyze were not being sent over port 80. Instead, my packets were being sent over port 5000.
It appears that the Scapy implementation by default only interprets packets as HTTP when they are sent over port 80.
I found the following snippet in a response to a GitHub issue (for a package which should not be installed, per Cukic0d in their answer to a similar question here).
scapy.packet.bind_layers(TCP, HTTP, dport=5000)
scapy.packet.bind_layers(TCP, HTTP, sport=5000)
Adding this snippet before my call to sniff() resolved my issue and allowed me to proceed.
Hope this helps.

{"error":"not_found","reason":"missing"} error while running couchdb-lucene on windows

I am running CouchDB and Couchdb-lucene on Windows Server 2019 version 1809.
I followed all the steps documented at https://github.com/rnewson/couchdb-lucene
My CouchDB local.ini file
[couchdb]
os_process_timeout = 60000
[external]
fti=D:/Python/python.exe "C:/couchdb-lucene-2.2.0/tools/couchdb-external-hook.py --remote-port 5986"
[httpd_db_handlers]
_fti = {couch_httpd_external, handle_external_req, <<"fti">>}
[httpd_global_handlers]
_fti = {couch_httpd_proxy, handle_proxy_req, <<"http://127.0.0.1:5986">>}
couchdb-lucene.ini file
[lucene]
# The output directory for Lucene indexes.
dir=indexes
# The local host name that couchdb-lucene binds to
host=localhost
# The port that couchdb-lucene binds to.
port=5986
# Timeout for requests in milliseconds.
timeout=10000
# Timeout for changes requests.
# changes_timeout=60000
# Default limit for search results
limit=25
# Allow leading wildcard?
allowLeadingWildcard=false
# couchdb server mappings
[local]
url = http://localhost:5984/
Curl outputs
C:\Users\serhato>curl http://localhost:5986/_fti
{"couchdb-lucene":"Welcome","version":"2.2.0-SNAPSHOT"}
C:\Users\serhato>curl http://localhost:5984
{"couchdb":"Welcome","version":"3.1.1","git_sha":"ce596c65d","uuid":"cc1269d5a23b98efa74a7546ba45f1ab","features":["access-ready","partitioned","pluggable-storage-engines","reshard","scheduler"],"vendor":{"name":"The Apache Software Foundation"}}
The design document I defined in CouchDB, which aims to create a full-text search index for the RenderedMessage field:
{
  "_id": "_design/foo",
  "_rev": "11-8ae842420bb4e122514fea6f05fac90c",
  "fulltext": {
    "by_message": {
      "index": "function(doc) { var ret=new Document(); ret.add(doc.RenderedMessage); return ret }"
    }
  }
}
When I navigate to http://localhost:5984/dev-request-logs/_fti/_design/foo/by_message?q=hello
the response is
{"error":"not_found","reason":"missing"}
When I also navigate to http://localhost:5984/dev-request-logs/_fti/
the response is the same:
{"error":"not_found","reason":"missing"}
I think there is a problem with the external integration to the Lucene engine. So out of curiosity I tried executing the Python command to check whether the script runs:
D:/Python/python.exe C:/couchdb-lucene-2.2.0/tools/couchdb-external-hook.py
but the result is
C:\Users\serhato>D:/Python/python.exe C:/couchdb-lucene-2.2.0/tools/couchdb-external-hook.py
  File "C:\couchdb-lucene-2.2.0\tools\couchdb-external-hook.py", line 43
    except Exception, e:
                    ^
SyntaxError: invalid syntax
What might be the problem?
After hours of searching I finally came across this link:
https://github.com/rnewson/couchdb-lucene/issues/265
The query must go directly to Lucene, not through CouchDB itself. The URL below returns the result:
C:\Users\serhato>curl http://localhost:5986/localx/dev-requestlogs/_design/foo/by_message?q=hello
The original documentation is very misleading, as all the examples use the CouchDB default port, not the Lucene one.
Or am I missing something?
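As an aside, the SyntaxError shown in the question is a separate problem: couchdb-external-hook.py is written for Python 2, and "except Exception, e:" is Python 2-only syntax that a Python 3 interpreter rejects before running anything. A minimal illustration of the Python 3 spelling (safe_div is a hypothetical helper, just to show the syntax):

```python
def safe_div(a, b):
    """Return a / b, or None on division by zero."""
    try:
        return a / b
    except ZeroDivisionError as exc:  # Python 2 spelled this: "except ZeroDivisionError, exc:"
        return None
```

So if D:/Python/python.exe points at a Python 3 install, the [external] hook can never start; it needs a Python 2 interpreter (or a ported script).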

using python, selenium and phantomjs on openshift (socket binding permission denied?)

Alright, I'm at the end of my tether trying to get PhantomJS to work with Selenium in an OpenShift environment. I've downloaded the PhantomJS binary over SSH and can even run it in the shell. But when it comes to starting a webdriver service using Selenium, I keep getting this traceback no matter what args I put in.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/var/lib/openshift/576e22027628e1fb13000211/python/virtenv/venv/lib/python3.3/site-packages/selenium/webdriver/phantomjs/webdriver.py", line 50, in __init__
    service_args=service_args, log_path=service_log_path)
  File "/var/lib/openshift/576e22027628e1fb13000211/python/virtenv/venv/lib/python3.3/site-packages/selenium/webdriver/phantomjs/service.py", line 50, in __init__
    service.Service.__init__(self, executable_path, port=port, log_file=open(log_path, 'w'))
  File "/var/lib/openshift/576e22027628e1fb13000211/python/virtenv/venv/lib/python3.3/site-packages/selenium/webdriver/common/service.py", line 33, in __init__
    self.port = utils.free_port()
  File "/var/lib/openshift/576e22027628e1fb13000211/python/virtenv/venv/lib/python3.3/site-packages/selenium/webdriver/common/utils.py", line 36, in free_port
    free_socket.bind(('0.0.0.0', 0))
PermissionError: [Errno 13] Permission denied
Not sure what's going on. Am I supposed to bind to an IP address? If so, I tried using service args, but that hasn't helped.
I came across the same issue trying to run PhantomJS on my OpenShift-hosted Django application, running on a Python 3 gear. I finally managed to make it work; this is how.
The main issue to overcome is that OpenShift does not allow applications to bind on localhost (neither '0.0.0.0' nor '127.0.0.1'). So the point is to bind to the actual IP address of your OpenShift gear instead.
You have to deal with this issue at the ghostdriver level as well as within the Python-selenium binding.
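For context, the call that fails in the traceback above, selenium's utils.free_port(), does essentially the following (a stdlib-only sketch); on a normal machine the bind succeeds, but on OpenShift binding to '0.0.0.0' is exactly what raises PermissionError, hence all the IP plumbing below:

```python
import socket

def free_port(host="0.0.0.0"):
    """Ask the OS for a free port by binding to port 0.

    Mirrors what selenium.webdriver.common.utils.free_port() does. On
    OpenShift the bind to 0.0.0.0 is denied, so host must be the gear's
    own IP (e.g. os.environ['OPENSHIFT_PYTHON_IP']).
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind((host, 0))
        return sock.getsockname()[1]
    finally:
        sock.close()
```
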
ghostdriver (phantomJS binary)
Unfortunately, as explained brilliantly by Paolo Bernardi in this post: http://www.bernardi.cloud/2015/02/25/phantomjs-with-ghostdriver-on-openshift/ you have to use a patched version of phantomjs for this, as the released version doesn't allow binding to a specified IP. The binary linked by Paolo did not work on my Python 3 cartridge, yet this one worked perfectly: https://github.com/jrestful/server/blob/master/seo/phantomjs-1.9.8-patched.tar.gz?raw=true (see the question Trying to run PhantomJS on OpenShift: cannot patch GhostDriver so that it can bind on the server IP address for details)
Upload this phantomjs binary to app-root/data/phantomjs/bin (for example) and make sure it is executable:
> chmod 711 app-root/data/phantomjs/bin/phantomjs
You can now check that you can bind to your IP like this (I chose port 15002 for my app; I reckon you can pick any value you want above 15000):
> echo $OPENSHIFT_PYTHON_IP
127.13.XXX.XXX
> app-root/data/phantomjs/bin/phantomjs --webdriver=127.13.XXX.XXX:15002
PhantomJS is launching GhostDriver...
[INFO - 2017-03-24T13:16:36.031Z] GhostDriver - Main - running on port 127.13.XXX.XXX:15002
OK, now kill this process and proceed to step 2: the Python webdriver.
Custom python-selenium webdriver for PhantomJS
The point is to add the IP address to bind to as a parameter of the PhantomJS Webdriver.
First, I defined new settings in my settings.py to adapt to OpenShift's constraints:
PHANTOMJS_BIN_PATH = os.path.join(os.getenv('OPENSHIFT_DATA_DIR'), 'phantomjs', 'bin', 'phantomjs')
PHANTOMJS_LOG_PATH = os.path.join(os.getenv('OPENSHIFT_LOG_DIR'), 'ghostdriver.log')
(make sure app-root/logs/ is writable, maybe you'll have to chmod it)
Then I had to override the PhantomJS webdriver class to provide the IP address as an argument. Here is my own implementation:
from selenium.webdriver import phantomjs
from selenium.webdriver.common import utils
from selenium.webdriver.remote.webdriver import WebDriver as RemoteWebDriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

class MyPhantomJSService(phantomjs.service.Service):
    def __init__(self, executable_path, port=0, service_args=None, log_path=None, ip=None):
        if ip is None:
            self.ip = '0.0.0.0'
        else:
            self.ip = ip
        phantomjs.service.Service.__init__(self, executable_path, port, service_args, log_path)

    def command_line_args(self):
        return self.service_args + ["--webdriver=%s:%d" % (self.ip, self.port)]

    def is_connectable(self):
        return utils.is_connectable(self.port, host=self.ip)

    @property
    def service_url(self):
        """
        Gets the url of the GhostDriver Service
        """
        return "http://%s:%d/wd/hub" % (self.ip, self.port)

class MyPhantomWebDriver(RemoteWebDriver):
    """
    Wrapper to communicate with PhantomJS through Ghostdriver.
    You will need to follow all the directions here:
    https://github.com/detro/ghostdriver
    """

    def __init__(self, executable_path="phantomjs",
                 ip=None, port=0, desired_capabilities=DesiredCapabilities.PHANTOMJS,
                 service_args=None, service_log_path=None):
        """
        Creates a new instance of the PhantomJS / Ghostdriver.

        Starts the service and then creates new instance of the driver.

        :Args:
         - executable_path - path to the executable. If the default is used it assumes the executable is in the $PATH
         - ip - the IP address to bind to: this is the whole point of this monkeypatch
         - port - port you would like the service to run, if left as 0, a free port will be found.
         - desired_capabilities: Dictionary object with non-browser specific
           capabilities only, such as "proxy" or "loggingPref".
         - service_args : A List of command line arguments to pass to PhantomJS
         - service_log_path: Path for phantomjs service to log to.
        """
        self.service = MyPhantomJSService(
            executable_path,
            port=port,
            service_args=service_args,
            log_path=service_log_path,
            ip=ip)
        self.service.start()

        try:
            RemoteWebDriver.__init__(
                self,
                command_executor=self.service.service_url,
                desired_capabilities=desired_capabilities)
        except Exception:
            self.quit()
            raise

        self._is_remote = False

    def quit(self):
        """
        Closes the browser and shuts down the PhantomJS executable
        that is started when starting the PhantomJS
        """
        try:
            RemoteWebDriver.quit(self)
        except Exception:
            # We don't care about the message because something probably has gone wrong
            pass
        finally:
            self.service.stop()
Finally, invoke this custom webdriver instead of webdriver.PhantomJS(...), like this:
from .myphantomjs import MyPhantomWebDriver
browser = MyPhantomWebDriver(executable_path=settings.PHANTOMJS_BIN_PATH, service_log_path=settings.PHANTOMJS_LOG_PATH, ip=os.getenv('OPENSHIFT_PYTHON_IP'), port=15002)
From then on, you can use the browser object normally

Unable to run python cgi scripts using CGIHTTPRequestHandler in Python 3.3

I am a noob trying to create and use a simple web server in Python that executes CGI scripts written in Python. I am using Windows XP and Python v3.3.0. I have a "myserver" directory which contains "myserver.py", "sample.html" and the directory "cgi-bin", which in turn contains "cgi_demo.py".
myserver.py
from http.server import HTTPServer
from http.server import CGIHTTPRequestHandler
port = 8080
host = '127.0.0.1'
server_address = (host,port)
httpd = HTTPServer(server_address,CGIHTTPRequestHandler)
print("Starting my web server on port "+str(port))
httpd.serve_forever()
cgi_demo.py
import cgi
import cgitb; cgitb.enable()

print("Content-type: text/html")
print
print("<html><body>")
for i in range(0,100):
    print(i,"<br>")
print("</body></html>")
Now the directory listing works fine for "myserver" but not for "cgi-bin"; maybe that is how it is coded - I don't have a problem there. "sample.html" is retrieved fine too. However, "cgi_demo.py" does not execute properly: I get a blank page in the browser, and a console window (which is blank too) appears and disappears. Moreover, on the server's console I get the message
127.0.0.1 - - [29/Nov/2012 12:00:31] "GET /cgi-bin/cgi_demo.py HTTP/1.1" 200 -
127.0.0.1 - - [29/Nov/2012 12:00:31] command: C:\Python33\python.exe -u "D:\python apps\my web server\cgi-bin\cgi_demo.py" ""
127.0.0.1 - - [29/Nov/2012 12:00:32] CGI script exited OK
Please tell me what is wrong! I get the feeling that the output stream of my script is not connected to the server. What am I doing wrong? Don't say that I have to extend CGIHTTPRequestHandler!!
SORRY for the trouble!
Well, it is my fault. Two things to note:
[1] The console window that appeared and disappeared: it only happens when I use IDLE to run the server. If the script is already running in a normal Windows console, this does not happen. My feeling was WRONG.
[2] There is a bug in my CGI script. After printing the HTTP header, the statement I wrote was just "print" instead of "print()". This is so embarrassing! But even then, why didn't the interpreter catch this error?
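To answer that last question: a bare "print" is not an error in Python 3 because it is a valid expression statement; it evaluates to the built-in function object and discards the result, writing nothing. So the blank line that must terminate the CGI headers is never emitted, and the browser sees a malformed response. A quick demonstration:

```python
import io
import contextlib

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    print      # no-op: merely references the function object, prints nothing
    print()    # actually writes the newline that ends the HTTP headers

assert buf.getvalue() == "\n"  # only print() produced output
```
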
