Does Flask copy app.config for every request? - multithreading

According to the Flask documentation:
"Flask automatically pushes an application context when handling a request. ... Typically, an application context will have the same lifetime as a request."
Does this mean that objects stored inside app.config or g are copied for each request?
If not, I suppose that app.config and g are not thread-safe?

To the best of my knowledge, g is created and destroyed with each request. You can't add things onto g when initializing the application, and it is not reused or copied across requests.
In contrast, app.config can be used to store application-global values. You can add anything onto it when initializing the application, and then access those values wherever app or flask.current_app is available.
I'm still unsure whether app is shared or copied when the Flask dev server or another WSGI server runs multithreaded.
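A minimal sketch of the distinction, assuming a plain Flask app (the config key and the route below are made up purely for illustration):

from flask import Flask, current_app, g

app = Flask(__name__)

# Set once at start-up: app.config lives on the single shared app object,
# so every request sees the same values. Treat it as read-only after
# initialization and sharing it across threads stays safe.
app.config["GREETING"] = "hello"        # hypothetical config key

@app.route("/demo")                     # hypothetical endpoint
def demo():
    # g is a fresh object for every application context (i.e. per request);
    # anything stored on it disappears when the request ends, so this
    # counter never gets past 1.
    g.counter = getattr(g, "counter", 0) + 1
    return f"{current_app.config['GREETING']} ({g.counter})"

As for the last point: a multithreaded WSGI server runs a single app object and all of its threads see that same object, so app.config is shared rather than copied; that is fine as long as request handlers only read it.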

Related

How to access a different module in a multi-target application

I'm new to Cloud Foundry, so I'm not sure if my thoughts and plans are right. Maybe someone can explain or discuss it with me.
What I want to do:
Implement an MTA (Multitarget Application) with an HTML5 module as the frontend and a Node.js module as the backend. Furthermore, there should be a MongoDB instance, which will be accessed from the Node.js module. Later it should also become multi-tenant.
What I already did:
I implemented a simple Node.js app and connected it to the DB. Persisting and retrieving data via REST already works fine. I also implemented a simple SAPUI5 app which consumes data from the DB via AJAX. For now, the Node start script is in the HTML5 module, so it works somehow. But now I want to separate the modules.
So I created an MTA project with the two modules in Web IDE and imported the two apps.
What I expect to do for it:
For now, I have an approuter, which is in my Node.js module, but I cannot access the webapp folder in the HTML5 module from there: file not found error: /home/vcap/app//. Is there a possibility to access the webapp folder of another module over the path "/home/vcap/app/"? Or can I look up the app directory anywhere?
I have read that an approuter module (Node.js) may be needed, but I don't know exactly what it does. I think it serves the index.html file when the URL of the whole app is opened?

How can my Flask gunicorn app remember its state?

I am running a Python 3/Flask app on Gunicorn on Heroku. This app presents the user with a list of items, pulled in from an API call, for the user to accept or reject. Depending on whether the user hits the accept or reject link associated with each item, the app appends the item to either an internal list of accepted items or one of rejected items.
Currently I'm storing each list (suggestions, acceptances and rejections) as a pandas DataFrame object within the app.
I.e., I initialise my app with empty data frames:
app = Flask(__name__)
app.accepted = pd.DataFrame()
app.suggested = pd.DataFrame()
populate the suggestions data frame with an API call:
#app.route("/get_suggestions")
def get_suggestions():
app.suggested = <some data returned from an API>
then append a suggested item to the accepted dataframe once an 'accept' link is hit:
#app.route("/accept/<suggest_id>")
def accept_item(suggest_id):
app.accepted(len(app.accepted)) = app.suggested.loc[int(suggest_id)]
This all works fine running on Gunicorn in my local miniconda virtual environment (running "heroku local web"), but when deployed on Heroku I keep getting "Internal Server Error". When I look at the logs, it looks like the app's internal variables (e.g. app.suggested) are not being preserved, such that when accept_item is run, app.suggested is always empty. Why would they be preserved on the local version but not on the Heroku deployment?
What's the simplest way of preserving this state? I'd like a small number of users to be able to use the app and each build their own temporary lists. Do I need to use SQLite to preserve state? Do I need to drop a cookie into the user's browser so I can tell different users apart? I'd prefer not to require users to create an account on my site.
You can do one of two things:
Either save this state into cookies (which I would not recommend), or
Store this data into a database like Heroku Postgres (for instance), and use SQLAlchemy or some other ORM to retrieve and store that data.
Heroku dynos (which run your web application) are stateless. They reboot at will, don't persist disk, etc.
Storing global state on the app object won't work reliably: each Gunicorn worker process (and each dyno) keeps its own copy of those variables, and they are wiped whenever a process restarts.
You could use the app context to store this state, but since you're running on Heroku this is asking for trouble, because the dynos will restart at random.
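A rough sketch of the database route (the model, route, and table are hypothetical; it assumes the Flask-SQLAlchemy extension and Heroku's DATABASE_URL environment variable, neither of which appears in the question):

import os

from flask import Flask
from flask_sqlalchemy import SQLAlchemy  # assumes Flask-SQLAlchemy is installed

app = Flask(__name__)
# Heroku Postgres exposes its connection string via the DATABASE_URL env var.
app.config["SQLALCHEMY_DATABASE_URI"] = os.environ["DATABASE_URL"]
db = SQLAlchemy(app)

class Acceptance(db.Model):                 # hypothetical table
    id = db.Column(db.Integer, primary_key=True)
    user_id = db.Column(db.String(64))      # e.g. a session id, to tell users apart
    suggest_id = db.Column(db.Integer)

@app.route("/accept/<suggest_id>")
def accept_item(suggest_id):
    # Persist the acceptance instead of mutating in-process state, so it
    # survives dyno restarts and is visible to every worker process.
    db.session.add(Acceptance(user_id="demo-user", suggest_id=int(suggest_id)))
    db.session.commit()
    return "accepted"

# Create the table once before first use (or use a migration tool):
#   with app.app_context():
#       db.create_all()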

Forwarding an image upload request to another server

I'm trying to build a NodeJS REST API project based on the so-called "micro architecture" (basically multiple smaller NodeJS projects that can run totally independently, but at the same time work together).
Currently users are able to upload images from the app, and my NodeJS backend then processes and saves them appropriately.
Now, what I want to do is the following:
User selects an image to upload from the app -> The app makes a request to the "Main API" endpoint -> The Main API endpoint then forwards this request to the "Image Service" -> Once the Image Service (which is a totally different server) has successfully finished, it should return the URL where the image is stored to the Main API server endpoint, which will then return the info back to the app.
My question is, how do I forward the image upload request from one server to another? Ideally, I don't want the Main API to store the image temporarily and then make a request to the Image Service.
What I'd like is to try and forward the data the Main API receives straight to the Image Service server. I guess you could say I want to "stream" the data from one place to another without having to temporarily store it on disk or in memory. I literally just want it to "tunnel" from one server to another.
Is this possible and is this an efficient way? I just want 1 central point for the app to access, I don't want it to know about this Image Service server. I'd like the app to only ever make requests to the Main API, which will then call my other little services as required.
I'm using NodeJS, Express, Multer (for image uploads) and Digital Ocean hosting (if that should make any difference at all).
What you would basically be doing is setting up a proxy server that passes requests straight through to another machine and back. There are a few libraries out there to help with this, and this article in particular, http://blog.vanamco.com/proxy-requests-in-node-js/, explains how to go about setting it up. Even though the authors are really just trying to get around HTTPS, the same concept applies here.
In short, you get the file upload POST, and then immediately just make that same request to another server and when the response is returned, immediately return it back to the front end. Your entry point can be set up as a hub, and you can proxy requests through to other servers or even just handle them on the same server if necessary.
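For illustration only, here is a minimal sketch of that pass-through written in Python (Flask and the requests library are stand-ins for Express and Multer; the route and Image Service URL are placeholders), just to show the shape of the streaming idea:

import requests
from flask import Flask, Response, request

app = Flask(__name__)

IMAGE_SERVICE_URL = "http://image-service.example/upload"   # placeholder

@app.route("/images", methods=["POST"])
def forward_upload():
    # Hand the incoming body to the Image Service as a stream, so the
    # Main API never buffers the whole file on disk or in memory.
    upstream = requests.post(
        IMAGE_SERVICE_URL,
        data=request.stream,   # file-like; read lazily, chunk by chunk
        headers={"Content-Type": request.headers.get("Content-Type", "")},
    )
    # Relay the Image Service's response (e.g. the stored image URL) to the app.
    return Response(
        upstream.content,
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type"),
    )

In Node the same shape is usually achieved by piping the incoming request into an outgoing one (e.g. with the http-proxy package).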

Setting a handlebars helper to return a specific value per request in express

I have an Express-based app serving server-side rendered HTML from Handlebars templates, plus a bundle of the Backbone resources. Theoretically, client side, the app resembles what is happening server side.
This is all fine in development, but when the Node server is handling many requests at the same time, the mechanism by which the helper is defined/redefined breaks: we set the helper (in this case logged in / not logged in, but it could be anything), then serving the rest of the request happens asynchronously, and we don't know and cannot control how long that will take.
I have already figured out that this is because Handlebars on the server is effectively a global: every time a request comes in, the helper being called comes from there, a single object shared between requests.
The question is, how to be able to set a helper per async request that returns that particular value, and does not get polluted by concurrent requests...?
Here's a gist of a test case - hopefully shows the problem:
https://gist.github.com/dazld/023df6e1da7a92387720
(if it is not obvious from that what i am going for, just ping in comments, and i will write up something clearer).
Thanks!
This is because you're using a single instance of Handlebars, and with lots of requests you're polluting one request with another.
I use hbs (https://github.com/donpark/hbs), as it creates a new instance of Handlebars for each request/render for you.

How is a web application started? Where is the entry point (if there's one)?

I am using IIS to develop some web applications. I used to believe that every application should have an entry point. But it seems a web application doesn't have one.
I have read many books and articles addressing how to build an ASP.NET application under IIS, but they are just not addressing the most obvious and basic thing that I want to know.
So could anyone tell me how a web application is started? What's the difference between a traditional desktop application and a web application in terms of their working paradigm, such as the starting and terminating logic?
Many thanks.
Update - 1 - 23:14 2011/1/4
My current understanding is:
When a request arrives, the URL contained in the request will be extracted by IIS. I guess IIS must maintain some kind of internal table which maps a URL to the corresponding physical directory on disk. Let's take the following URL as an example:
http://myhost/webapp/page1.aspx
With the help of the aforementioned internal table, IIS will locate the page1.aspx file on disk. This file is then checked and the code-behind file is located. Then the proper page class instance will be constructed, and its methods defined in the code-behind file will be invoked in a pre-defined order. The output of this series of method invocations will be the response sent to the client.
Update - 2 - 23:32 2011/1/4
The URL is nothing but an identifier that serves as an index into the aforementioned internal table. With this index, IIS (or any other web server technology) can find the physical location of the resource. Then, with some hint (such as a file extension like *.aspx), the web server knows which handler (such as the ASP.NET ISAPI handler) should be used to process that resource. The chosen handler knows how to parse and execute the resource file.
So this also explains why a web server should be extensible.
It depends what language and framework you are using, but broadly there are a number of entry points that will be bound to HTTP requests (e.g. by URL). When the server receives a request that matches one of these bindings, the bound code is executed.
There may also be various filter chains and interceptors that are executed based on other conditions of the request. There will probably also be some set-up code that the server executes when it starts up. Ultimately, there is still a single entry-point - the main() function of the server - but from the web application's perspective it is the request bindings that matter.
Edit in response to question edits
I have never used IIS, but I would assume there is no "lookup table", but instead some lookup rules. I shall talk you through the invocation of a .jsp page on an Apache server, which should be basically the same process.
The webapp is written and placed in the file system - e.g. C:/www/mywebapp
The web server is given a configuration rule telling it that the URL path /webapp/ should be mapped to C:/www/mywebapp
The web server is also configured to recognise .jsp files as being JSP servlets
The web server receives a request for /webapp/page1.jsp, this is dispatched to a worker thread
The web server uses its mapping rules to locate C:/www/mywebapp/page1.jsp
The web server wraps the code in the JSP file in a class with a method serveRequest(request, response) and compiles it (if it has not already done so)
The web server calls the serveRequest function, which is now the entry point of the user code
When the user code is finished, the web server sends the response to the client, and the worker thread terminates
This is the most basic system - resource-based servlets (i.e. .jsp or .aspx files). The binding rules become much more complicated when using technologies like MVC frameworks, but the essential concepts are the same.
Similar to what OrangeDog mentioned in his answer, there is plenty that goes on when serving these pages before you even get to your code.
Not only in asp.net mvc, but in asp.net in general there are various pieces that come into play when you're executing a request.
There is code such as modules, handlers, etc. that again does processing before it gets to the code of the page. Additionally, you can map the same page to process different URLs.
The concept of a handler in ASP.NET is important, as there are various handlers responsible for processing requests that match certain extensions and/or HTTP verbs (GET, HEAD, POST). If you take a look into %systemroot%\Microsoft.NET\Framework64\v4.0.30319\Config\web.config, you can see the handlers section. You can also see the handlers in IIS (these can be changed per site).
For example, the HttpForbiddenHandler is one that just rejects the request. It is configured to be called for special files like source files ("*.cs").
You can define your own handler, which is nothing more than a class that implements the IHttpHandler interface, exposing a ProcessRequest method and an IsReusable property. This is more similar to your CGI program, as the implementation is mainly a method that produces HTML or any other type of output based on the information in the request.
ASP.NET pages build on top of that, and have plenty of extra features meant to make it easier for you to develop pages. You implement a class that inherits from Page, and there are two code files associated with it (.aspx and .cs). The same can be said for ASP.NET MVC, but it is structured differently. There is much more to it; if you want to take advantage of it, you'll need to learn about it.
The downside of those abstractions is that they make some developers lose track of the context they're in and of what's happening underneath. The context is still the same: you're producing an application that takes a request and produces an output. The difference is that there is plenty more code in place intended to make that easier.
In terms of a more detailed list of IIS's ASP.NET request lifecycle, there are quite a few stages in the HttpApplication pipeline. Fairly recently there was a good blog post that I thought summarized them very concisely and well: "HTTP Request Lifecycle Events in IIS Pipeline that every ASP.NET Developer Should Know" by Suprotim Agarwal.
For a more detailed explanation you should check out the MSDN article on the subject. This will also go into information on what happens before that pipeline.

Resources