In settings.py, I get var from environment like this:
ROBOTS_STR = os.environ.get('DJANGO_ROBOTS_STR')
My env var is set in a file and looks like:
DJANGO_ROBOTS_STR=User-agent: *\nDisallow: /admin\nDisallow: /api
The problem is that in the view, when I get settings.ROBOTS_STR, the value still contains the escape sequences as literal characters. Its repr is: 'User-agent: *\\nDisallow: /admin\\nDisallow: /api'
How can I change this behaviour? Note that I'm using Python 3.3
In Python 2, decode it with string-escape:
>>> os.environ.get('DJANGO_ROBOTS_STR')
'User-agent: *\\nDisallow: /admin\\nDisallow: /api'
>>> os.environ.get('DJANGO_ROBOTS_STR').decode('string-escape')
'User-agent: *\nDisallow: /admin\nDisallow: /api'
>>> print(os.environ.get('DJANGO_ROBOTS_STR'))
User-agent: *\nDisallow: /admin\nDisallow: /api
>>> print(os.environ.get('DJANGO_ROBOTS_STR').decode('string-escape'))
User-agent: *
Disallow: /admin
Disallow: /api
For Python 3, where the string-escape codec no longer exists, encode to bytes first, then decode with unicode_escape:
>>> os.environ.get('DJANGO_ROBOTS_STR').encode('latin1').decode('unicode_escape')
'User-agent: *\nDisallow: /admin\nDisallow: /api'
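Putting that together in settings.py, a minimal sketch (assuming Python 3, as in the question):

import os

# The env file stores literal backslash-n sequences; latin1 round-trips
# every byte unchanged, and unicode_escape then turns \n into real newlines.
_raw = os.environ.get('DJANGO_ROBOTS_STR', '')
ROBOTS_STR = _raw.encode('latin1').decode('unicode_escape')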
Alternatively, you can set the env var using bash's $'...' quoting, so the shell interprets the \n escapes before Python ever sees them:
$ export DJANGO_ROBOTS_STR=$'User-agent: *\nDisallow: /admin\nDisallow: /api'
>>> import os
>>> os.environ.get('DJANGO_ROBOTS_STR')
'User-agent: *\nDisallow: /admin\nDisallow: /api'
>>> print(os.environ.get('DJANGO_ROBOTS_STR'))
User-agent: *
Disallow: /admin
Disallow: /api
I'm opening a socket server that hosts the Reddit confirmation page, so that you can log in to the application without compromising the bot, using one string called an access token that is in the URL of the socket server. What I want to do is take that access token from the URL and display it on the socket server page, but the server can't detect any variables inside the URL. Here is my code if you need it. It works when implicit is not part of the URL generation, but as soon as I add implicit=True (which I need to, since I plan on this being an app) it no longer works.
What I want (this is for the non-implicit case): GET /?state=state&code=code HTTP/1.1
What I get: GET / HTTP/1.1
That is what happens when I do data = client.recv(1024).decode("utf-8").
It returns the proper data when the URL is generated with url = reddit.auth.url(scopes, state),
but when I generate the URL with url = reddit.auth.url(scopes, state, implicit=True),
the data comes back as GET / HTTP/1.1.
I need the access token to be recognized after the /.
I am using praw 7.1.0
GET / HTTP/1.1
Host: localhost:8080
Connection: keep-alive
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) QtWebEngine/5.15.2 Chrome/83.0.4103.122 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Language: en-US,en;q=0.9
DNT: 1
Sec-Fetch-Site: cross-site
Sec-Fetch-Mode: navigate
Sec-Fetch-User: ?1
Sec-Fetch-Dest: document
Accept-Encoding: gzip, deflate, br
/
/
Traceback (most recent call last):
File "path/prawTest.py", line 81, in <module>
sys.exit(main())
File "path/prawTest.py", line 61, in main
params = {
File "path/prawTest.py", line 62, in <dictcomp>
key: value for (key, value) in [token.split("=") for token in param_tokens]
ValueError: not enough values to unpack (expected 2, got 1)
It expects to see variables after the /.
Without implicit=True I get the following:
GET /?state=50470&code=code HTTP/1.1
Host: localhost:8080
Connection: keep-alive
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) QtWebEngine/5.15.2 Chrome/83.0.4103.122 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Language: en-US,en;q=0.9
DNT: 1
Sec-Fetch-Site: cross-site
Sec-Fetch-Mode: navigate
Sec-Fetch-User: ?1
Sec-Fetch-Dest: document
Accept-Encoding: gzip, deflate, br
The implicit=True url looks like this:
http://localhost:8080/#access_token=token&token_type=bearer&state=39771&expires_in=3600&scope=%2A
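Worth noting: everything after the # in that URL is a fragment, and browsers never send the fragment to the server, which is why the implicit flow yields a bare GET / HTTP/1.1. Separately, the ValueError in the traceback comes from splitting key=value tokens by hand; a sketch with urllib.parse (variable names are illustrative, not from the original script) that tolerates an empty query string:

from urllib.parse import urlsplit, parse_qs

# data is the decoded request text, e.g. 'GET /?state=50470&code=abc HTTP/1.1...'
request_line = data.splitlines()[0]
path = request_line.split(' ')[1]

# parse_qs returns {} for an empty query instead of raising ValueError.
params = {key: values[0] for key, values in parse_qs(urlsplit(path).query).items()}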
In Burp Suite the first line of a captured request is usually GET / HTTP/1.1. However, I am currently practicing Host header injection using the method of supplying an absolute URL, in order to send something like this:
GET https://vulnerable-website.com/ HTTP/1.1
Host: bad-stuff-here
In Python I am using the requests library and am unable to specify the exact GET request I need.
import requests
burp0_url = "https://vulnerable-website.com:443/"
burp0_cookies = {[redacted]}
burp0_headers = {"Host": "bad-stuff-here", "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "en-US,en;q=0.5", "Accept-Encoding": "gzip, deflate", "Referer": "https://vulnerable-website.com/", "Connection": "close", "Upgrade-Insecure-Requests": "1"}
output = requests.get(burp0_url, headers=burp0_headers, cookies=burp0_cookies)
print(output, output.text)
I have tried specifying the GET request in the header dictionary (header = {"GET": " / HTTP/1.1", ...}); however, this only results in a GET header, not a request line, being sent on the 6th line:
GET / HTTP/1.1
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0
Accept-Encoding: gzip, deflate
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Connection: close
GET: /
Host: bad-stuff-here
Accept-Language: en-US,en;q=0.5
Referer: https://vulnerable-website.com/
Upgrade-Insecure-Requests: 1
Cookie: [redacted]
This is a very specific problem and I'm not sure if anyone has had the same issues but any help is appreciated. Maybe a workaround with urllib or something I'm missing. Thanks.
requests uses urllib3 under the hood.
You have to craft the request yourself, because none of the clients (urllib, requests, http.client) will allow you to insert a control character, by design.
You can use a plain socket for this:
import socket
from contextlib import closing
from functools import partial

# Craft the request line and headers by hand; the Host header is yours to forge.
msg = 'GET / HTTP/1.1\r\nHost: bad-stuff-here\r\nConnection: close\r\n\r\n'
s = socket.create_connection(("vulnerable-website.com", 80))
with closing(s):
    s.sendall(msg.encode('latin1'))  # sockets take bytes, not str
    buf = b''.join(iter(partial(s.recv, 4096), b''))  # read until EOF
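The Burp capture targets port 443, so here is the same idea over TLS, reproducing the exact request from the question (absolute URL in the request line plus the forged Host header); a sketch, not a definitive implementation:

import socket
import ssl

msg = ('GET https://vulnerable-website.com/ HTTP/1.1\r\n'
       'Host: bad-stuff-here\r\n'
       'Connection: close\r\n'
       '\r\n')

ctx = ssl.create_default_context()
raw = socket.create_connection(('vulnerable-website.com', 443))
with ctx.wrap_socket(raw, server_hostname='vulnerable-website.com') as s:
    s.sendall(msg.encode('latin1'))
    chunks = []
    while True:               # read until the server closes the connection
        data = s.recv(4096)
        if not data:
            break
        chunks.append(data)
print(b''.join(chunks).decode('latin1', errors='replace'))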
I'd like to pass a raw HTTP request like:
GET /foo/bar HTTP/1.1
Host: example.org
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; fr; rv:1.9.2.8) Gecko/20100722 Firefox/3.6.8
Accept: */*
Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded
X-Requested-With: XMLHttpRequest
Referer: http://example.org/test
Cookie: foo=bar; lorem=ipsum;
And generate the Python requests code, such as:
import requests
burp0_url = "http://example.org:80/foo/bar"
burp0_cookies = {"foo": "bar", "lorem": "ipsum"}
burp0_headers = {"User-Agent": "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; fr; rv:1.9.2.8) Gecko/20100722 Firefox/3.6.8", "Accept": "*/*", "Accept-Language": "fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3", "Accept-Encoding": "gzip,deflate", "Accept-Charset": "ISO-8859-1,utf-8;q=0.7,*;q=0.7", "Keep-Alive": "115", "Connection": "keep-alive", "Content-Type": "application/x-www-form-urlencoded", "X-Requested-With": "XMLHttpRequest", "Referer": "http://example.org/test"}
requests.get(burp0_url, headers=burp0_headers, cookies=burp0_cookies)
Is there a library for that?
I could not find an existing library that does this conversion, but there is a Python library to convert curl commands to python requests code.
https://github.com/spulec/uncurl
e.g.
import uncurl
print(uncurl.parse('curl --header "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7" --compressed --header "Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3" --header "Connection: keep-alive" --header "Content-Type: application/x-www-form-urlencoded" --cookie "foo=bar; lorem=ipsum;" --header "Keep-Alive: 115" --header "Referer: http://example.org/test" --user-agent "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; fr; rv:1.9.2.8) Gecko/20100722 Firefox/3.6.8" --header "X-Requested-With: XMLHttpRequest" https://example.org/foo/bar '))
I haven't found a Python library to transform raw HTTP into such a curl command. However, this Perl program does it.
Like this:
$ cat basic
GET /index.html HTTP/2
Host: example.com
Authorization: Basic aGVsbG86eW91Zm9vbA==
Accept: */*
$ ./h2c < basic
curl --http2 --header User-Agent: --user "hello:youfool" https://example.com/index.html
You could either call it from your Python script, use a Python-Perl bridge, or try to port it.
Postman also allows you to convert raw HTTP requests directly to Python requests code, using its code snippet generator, although it seems this can only be done via the GUI. It's also not open source, so you can't access the code that does the transformation.
I needed something that can generate a request and couldn't find it, so I ended up writing it as a gist:
import requests

CRLF = '\r\n'
DEFAULT_HTTP_VERSION = 'HTTP/1.1'

class RequestParser(object):
    def __parse_request_line(self, request_line):
        request_parts = request_line.split(' ')
        self.method = request_parts[0]
        self.url = request_parts[1]
        self.protocol = request_parts[2] if len(request_parts) > 2 else DEFAULT_HTTP_VERSION

    def __init__(self, req_text):
        req_lines = req_text.split(CRLF)
        self.__parse_request_line(req_lines[0])
        ind = 1
        self.headers = dict()
        # Headers run until the first empty line.
        while ind < len(req_lines) and len(req_lines[ind]) > 0:
            colon_ind = req_lines[ind].find(':')
            header_key = req_lines[ind][:colon_ind]
            header_value = req_lines[ind][colon_ind + 1:].strip()
            self.headers[header_key] = header_value
            ind += 1
        ind += 1
        # Everything after the blank line is the body.
        self.data = req_lines[ind:] if ind < len(req_lines) else []
        self.body = CRLF.join(self.data)

    def __str__(self):
        headers = CRLF.join(f'{key}: {self.headers[key]}' for key in self.headers)
        return f'{self.method} {self.url} {self.protocol}{CRLF}' \
               f'{headers}{CRLF}{CRLF}{self.body}'

    def to_request(self):
        # Pass the joined body string rather than the list of lines.
        req = requests.Request(method=self.method,
                               url=self.url,
                               headers=self.headers,
                               data=self.body)
        return req
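A hypothetical usage sketch (raw request abbreviated from the question). Note the parser keeps only the path from the request line, so an absolute URL has to be rebuilt from the Host header before requests can send it:

raw = ('GET /foo/bar HTTP/1.1\r\n'
       'Host: example.org\r\n'
       'Accept: */*\r\n'
       '\r\n')

parsed = RequestParser(raw)
# requests needs an absolute URL, so prepend the scheme and Host.
parsed.url = 'http://' + parsed.headers['Host'] + parsed.url
with requests.Session() as session:
    response = session.send(parsed.to_request().prepare())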
Apologies if this has been asked before, but I've trawled through a lot of similar questions and wasn't able to figure it out, so here goes:
I have a cURL command that does an HTTP POST.
How do I redirect its output to standard output?
The command I am using inside a Docker container is the following:
curl -v -X POST "http://username:pass123@data-service:8081/api" -H "Content-Type: application/json" -d @postBody
This should give an output something like:
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying 172.18.0.7...
* TCP_NODELAY set
* Connected to data-service (172.18.0.7) port 8081 (#0)
* Server auth using Basic with user 'username'
> POST /v1/sources HTTP/1.1
> Host: data-service:8081
> Authorization: Basic Z38JEsJ65JI9128hhtJlZW21XQ==
> User-Agent: curl/7.52.1
> Accept: */*
> Content-Type: application/json
> Content-Length: 192
>
* upload completely sent off: 192 out of 192 bytes
< HTTP/1.1 200 OK
< Content-Type: application/json
< Transfer-Encoding: chunked
< Server: Jetty(8.1.8.v20121106)
<
{
"action" : "GO"
* Curl_http_done: called premature == 0
* Connection #0 to host data-service left intact
}
How do I redirect all of this to stdout, including the returned body?
I have tried the following but it doesn't redirect everything:
curl -vs POST "http://username:pass123@data-service:8081/api" -H "Content-Type: application/json" -d @postBody 2> dev/console
Any ideas?!
Thank you.
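For reference, curl writes the -v diagnostics to stderr and the response body to stdout, so merging the two streams (2>&1 in a shell) is what puts everything on stdout. A sketch of the same idea from Python with subprocess, reusing the URL and body file from the question:

import subprocess

# stderr=subprocess.STDOUT is the subprocess equivalent of 2>&1: it folds
# curl's verbose diagnostics into the same stream as the response body.
result = subprocess.run(
    ['curl', '-v', '-X', 'POST',
     'http://username:pass123@data-service:8081/api',
     '-H', 'Content-Type: application/json',
     '-d', '@postBody'],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)
print(result.stdout.decode())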
Let's start a simple server:
var http = require('http');
http.createServer(function (req, res) {
  console.log('asdasd');
  res.end('asdasd');
}).listen(8898);
And make a simple request:
curl -v 'localhost:8898/?ab'
* Trying ::1...
* Connected to localhost (::1) port 8898 (#0)
> GET /?ab HTTP/1.1
> Host: localhost:8898
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 13 Oct 2016 20:26:14 GMT
< Connection: keep-alive
< Content-Length: 6
<
* Connection #0 to host localhost left intact
asdasd
Looks like everything is all right.
But if we add a literal space to it...
curl -v 'localhost:8898/?a b'
* Trying ::1...
* Connected to localhost (::1) port 8898 (#0)
> GET /?a b HTTP/1.1
> Host: localhost:8898
> User-Agent: curl/7.43.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
Nothing is logged and no body is written.
I assume literal spaces in URLs are a violation of the HTTP protocol, but is this behavior HTTP-compliant?
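For reference, the HTTP request line is itself delimited by spaces (method, target, version), so a literal space in the target makes the line unparseable; the space has to be percent-encoded before it goes on the wire. A minimal sketch:

from urllib.parse import quote

# quote() percent-encodes the space, keeping the request line well-formed:
# GET /?a%20b HTTP/1.1
url = 'http://localhost:8898/?' + quote('a b')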