So I am using a Flask session with the filesystem session type so that I can store more session data than I otherwise could. My users want to keep session timeouts long, and the site uses minimal server storage, so this is fine. However, when I try to set the session timeout to 24 hours as shown below, the session still times out after 30 minutes.
import os
from datetime import timedelta

from flask import Flask, session
from flask_session import Session

application = Flask(__name__)
SECRET_KEY = os.urandom(32)
application.config['SESSION_PERMANENT'] = True
application.config['SESSION_TYPE'] = 'filesystem'
application.config['PERMANENT_SESSION_LIFETIME'] = timedelta(hours=24)
application.config['SECRET_KEY'] = SECRET_KEY
Session(application)

@application.before_request
def make_session_permanent():
    session.permanent = True
    application.permanent_session_lifetime = timedelta(hours=24)
What am I doing wrong here?
I believe I have found the issue, which was in the line SECRET_KEY = os.urandom(32). Every time the app restarted (and an idle browser window reloaded), the session data was effectively lost because a new secret key was generated, so the browser's session cookie could no longer be validated. I generated one key externally and then hardcoded that value into my code so it is the same every time.
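For example, here is a minimal sketch of the same idea, loading one fixed key from an environment variable instead of pasting it into the source (the variable name FLASK_SECRET_KEY is just an illustration):

import os
from flask import Flask

application = Flask(__name__)
# Read one fixed key instead of generating a new one on every start,
# so existing session cookies remain valid across restarts.
application.config['SECRET_KEY'] = os.environ['FLASK_SECRET_KEY']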
I would change the line session.permanent = True to session.modified = True and see if it works then.
I have Python code which uses a session ID and gets the output from a URL.
I have to log out of the session once I get the results. Is there a way to close the session? I tried options such as:
s = requests.session()
s.config['keep_alive'] = False
r = requests.post(url=url, data=body, headers={'Connection':'close'})
When I print the session ID after closing the connection, it prints the same session ID that was used to make the connection.
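For reference, a minimal sketch of releasing a requests session once the results are in; note this only closes the client-side connection pool, and logging out of a server-side session would normally require calling the site's own logout endpoint (an assumption here, since that endpoint is not shown):

import requests

url = 'https://example.com/api'   # placeholder for the real endpoint
body = {'key': 'value'}           # placeholder payload

with requests.Session() as s:
    r = s.post(url, data=body)
    results = r.text
# The connection pool is released when the "with" block exits;
# calling s.close() explicitly does the same thing.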
As an experiment, I set a cache value with uwsgi.cache_set('test', data) inside a mule process. The cache is set as expected.
Now I spawn a thread from which I access that cache.
Threading is enabled in uwsgi.ini:
[uwsgi]
threads = 4
in mule.py:
import uwsgi
from threading import Thread

# Threaded function
def a_function():
    uwsgi.cache_set('test', b'NOT OK')        # <- nothing happens here
    cache_return = uwsgi.cache_get('test')    # <- returns b'OK', so the cache did not overwrite the previous value

if __name__ == '__main__':
    cache = uwsgi.cache_set('test', b'OK')    # <- works here
    cache_return = uwsgi.cache_get('test')    # <- returns b'OK', as expected
    t = Thread(target=a_function)
    t.start()
The question is: why does this happen, and how do I set cache values from inside a thread?
OK, it seems I used the wrong function: cache_set instead of cache_update.
uwsgi.cache_set(key, value[, expire, cache_name])
Set a value in the cache. If the key is already set but not expired,
it doesn’t set anything.
uwsgi.cache_update(key, value[, expire, cache_name])
Update a value in the cache. This always sets the key, whether it was
already set before or not and whether it has expired or not.
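So a minimal sketch of the corrected threaded function, assuming the rest of mule.py stays as above:

def a_function():
    # cache_update always writes, even if the key was already set
    uwsgi.cache_update('test', b'NOT OK')
    cache_return = uwsgi.cache_get('test')    # now returns b'NOT OK'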
I intend to transfer issues from Redmine to GitLab using this script
https://github.com/sdslabs/redmine-to-gitlab/blob/master/issue-tranfer.py
It works, but I would like to keep the issue IDs during the transition. By default GitLab just starts from #1 and increments. I tried adding newissue['iid'] = issue['id'] and variations to the parameters, but apparently GitLab simply does not permit assigning an ID. Does anyone know if there's a way?
"issue" is the data acquired from Redmine:
newissue = {}
newissue['id'] = pro['id']
newissue['title'] = issue['subject']
newissue['description'] = issue["description"]
if 'assigned_to' in issue:
    auser = con.finduserbyname(issue['assigned_to']['name'])
    if auser:
        newissue['assignee_id'] = auser['id']
print newissue
if 'fixed_version' in issue:
    newissue['milestone_id'] = issue['fixed_version']['id']
newiss = post('/projects/' + str(pro['id']) + '/issues', newissue)
and this is the "post" function
def post(url, load={}):
    load['private_token'] = conf.token
    r = requests.post(conf.base_url + url, params=load, verify=conf.sslverify)
    return r.json()
The API does not allow you to specify an issue ID at creation time. The ID is intended to be sequential. The only way you could potentially accomplish this task is to interact with the database directly. If you choose this route I caution you to be extremely careful, and have backups.
I am working with Windows Azure Websites and Web Jobs.
I have a console application that I use to download an FTP file nightly. They recently switched from passive to active FTP. I do not have any control over this.
The attached code was working with passive FTP, and it works with active FTP on my computer. However, it does not work when I run it as a WebJob on Azure.
In this code I am able to get the content length, so I am logging in correctly and I have the correct URL.
Dim request As FtpWebRequest = DirectCast(FtpWebRequest.Create(strTempFTPUrl), FtpWebRequest)
request.Method = WebRequestMethods.Ftp.GetFileSize
Dim nc As New NetworkCredential(FTPUserName, FTPPassword)
request.Credentials = nc
request.UseBinary = True
request.UsePassive = False
request.KeepAlive = True
request.Proxy = Nothing
' Get the result (size)
Dim resp As FtpWebResponse = DirectCast(request.GetResponse(), FtpWebResponse)
Dim contLen As Int64 = resp.ContentLength
' and now download the file
request = DirectCast(FtpWebRequest.Create(strTempFTPUrl), FtpWebRequest)
request.Method = WebRequestMethods.Ftp.DownloadFile
request.Credentials = nc
request.UseBinary = True
request.UsePassive = False
request.KeepAlive = True
request.Proxy = Nothing
resp = DirectCast(request.GetResponse(), FtpWebResponse)
The error that I receive is this:
The underlying connection was closed: An unexpected error occurred on a receive. This happens on the second "resp = DirectCast(request.GetResponse(), FtpWebResponse)"
Does anyone have any suggestions on what I can do?
Edit: This is not a VM so as far as I know I do not have control over the firewall. This is a standard website.
Thank you very much!
I had this same problem, and I was able to solve it by increasing the connection limit per service point. By default it is set to 2; I increased it to 10:
req.ServicePoint.ConnectionLimit = 10;
If you have timeout problems, also change the Timeout and ReadWriteTimeout properties.
Below is the link for a case similar to ours.
How can I programmatically remove the 2 connection limit in WebClient
Running Python 3 with CherryPy 3.2, and I have been having a host of problems. First of all, to get cookies to work, I had to fake an FQDN in /etc/hosts.
e.g.
http://test:8080 [no cookies]
http://test.local:8080 [cookies work]
After this, I tried to get sessions to work, but I am getting a new session id each time, and no session_id value is being set in a cookie anywhere in the browser.
import cherrypy

class HelloWorld:
    @cherrypy.expose
    def index(self, *args):
        print("\n\n")

        ### test cookies (works fine, no problems)
        print(cherrypy.request.cookie)
        cherrypy.response.cookie['c1'] = 'val1'
        cherrypy.response.cookie['c1']['max-age'] = '3600'
        cherrypy.response.cookie['d1'] = 'val2'
        cherrypy.response.cookie['d1']['max-age'] = '3600'

        ### test sessions (doesn't work)
        print(cherrypy.session.load())      # always returns None
        print(cherrypy.session.id)          # different every refresh
        print(cherrypy.session.get('foo'))  # always returns None
        cherrypy.session['foo'] = 'bar'
        cherrypy.session.save()             # apparently has no effect

        return "Hello world!"
Can anyone offer some advice or suggestions? I see that no cookie with the session ID is being set in Chrome, even though my other cookies are.
My config looks like:
{'/': {'tools.sessions.on': True,
       'tools.sessions.timeout': 7200}}
Any ideas?
I was facing the same problem. I added tools.sessions.name to the CherryPy config and now it works.
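For reference, a minimal sketch of what that looks like, assuming the HelloWorld app above; the cookie name 'my_session_id' is just an illustrative value:

conf = {'/': {'tools.sessions.on': True,
              'tools.sessions.timeout': 7200,
              # give the session cookie an explicit name (the value is arbitrary)
              'tools.sessions.name': 'my_session_id'}}

cherrypy.quickstart(HelloWorld(), '/', conf)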