OpenAI Gym save data as time series - openai-gym

The OpenAI Gym environments can print results to the screen and also render video. Is there an existing command, or a wrapper/monitor, that will save the time series data for the states and the actions? I would like to analyze the performance in the time-frequency domain.
I guess I could write the data to a file each time the animation is updated, but I'm curious how others do this.

What I found is this:
A Gym wrapper which saves video and an episode summary here
Code showing how to use the wrapper here
If you also need to save all the actions and information from each episode, I think you will have to create your own script.
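A minimal sketch of such a script, written as a gym.Wrapper that appends one row per step to a CSV file. This assumes the classic 4-tuple step API (newer Gymnasium releases return five values), and the file name and environment are just placeholders:

```python
import csv
import gym
import numpy as np

class TimeSeriesLogger(gym.Wrapper):
    """Append (observation, action, reward) rows to a CSV file for offline analysis."""

    def __init__(self, env, path="episode_log.csv"):
        super().__init__(env)
        self._file = open(path, "w", newline="")
        self._writer = csv.writer(self._file)
        self._last_obs = None

    def reset(self, **kwargs):
        self._last_obs = self.env.reset(**kwargs)
        return self._last_obs

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        # Flatten everything into one numeric row so it is easy to load for spectral analysis.
        row = list(np.ravel(self._last_obs)) + list(np.ravel(action)) + [reward]
        self._writer.writerow(row)
        self._last_obs = obs
        if done:
            self._file.flush()
        return obs, reward, done, info

    def close(self):
        self._file.close()
        self.env.close()

# Usage sketch:
# env = TimeSeriesLogger(gym.make("CartPole-v1"), path="cartpole_run.csv")
```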

Related

Is it possible to import bokeh figures from the html file they have been saved in?

I've produced a few Bokeh output files as the result of a fairly time-intensive process. It would be really cool to pull the plots together from their respective files and build a new output file where I could visualize them all in a column. I know I should have thought to do this earlier, before producing all the individual plots, but, alas, I did not.
Is there a way to import the pre-existing figures so that I can aggregate them into a new multi-plot output file?
As of Bokeh 1.0.2, there is no existing API for this, and I don't think there is any simple technique that could accomplish it either. I think the only options are: some kind of (probably somewhat fragile) text scraping of the HTML files, or distributing all the HTML files and using something like <iframe> to collect the individual subplot files into one larger view.
Going forward, for reference, there is autoload_static, which allows plots to be encapsulated in "sidecar" JS files that can be individually distributed and embedded, or there is json_item, which produces an isolated JSON representation of the document that can also be individually distributed and embedded.
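A rough sketch of the json_item route (available in newer Bokeh releases; the file name and div id here are just placeholders):

```python
import json
from bokeh.embed import json_item
from bokeh.plotting import figure

# Build (or rebuild) each figure as usual.
p = figure(title="example")
p.line([1, 2, 3], [4, 6, 5])

# Serialize it to a standalone JSON blob that can be shipped alongside the page.
with open("plot_item.json", "w") as f:
    json.dump(json_item(p, target="plot-div"), f)

# In the aggregating page, load the blob and render it into <div id="plot-div">:
#   fetch("plot_item.json").then(r => r.json()).then(item => Bokeh.embed.embed_item(item));
```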

How do recommendation systems work in production?

I am a newbie to this and have started learning Spark. I have a general question regarding how recommendation systems work in production environments, or rather how they are deployed to production.
Below is a small example of a system for e-commerce website.
I understand that once the system is built, we can feed the data to the engine at the start (we can run the jobs, or run the program/process for the engine) and it will produce results, which are stored back in the database against each user. The next time the user logs in, the website can fetch the data previously computed by the engine from the database and show it as recommended items.
The confusion I have is how these systems generate outputs on the fly based on user activity. For example, if I view a video on YouTube and refresh the page, YouTube starts showing me similar videos.
So, are these recommendation engines always running in the background, continuously updating results based on the user's activity? How is it done so fast?
Short answer:
Inference is fast; training is the slow part.
Retraining is a tweak to the existing model, not from scratch.
Retraining is only periodic, rarely "on demand".
Long answer:
They generate this output based on a long baseline of user activity. This is not something the engine derives from your recent view: anyone viewing that video on the same day will get the same recommendations.
Some recommender systems will take your personal history into account, but this is usually just a matter of sorting the recommended list with respect to the ratings predicted for you personally. This is done by applying a generic model to your own ratings by genre and other characteristics of each video.
In general, updates are done once a day, and the changes are small. They don't retrain a model from scratch every time you make a request; the parameters and weights of your personal model are stored under your account. When you add a rating (not just viewing a video), your account gets flagged, and the model will be updated at the next convenient opportunity.
This update will begin with your current model, and will need to run for only a couple of epochs -- unless you've rated a significant number of videos with ratings that depart significantly from the previous predictions.
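As a rough illustration of such a warm-start update in a matrix-factorization-style model (the factor values, learning rate, and item IDs below are made up):

```python
import numpy as np

# Hypothetical stored parameters: this user's factors and fixed item factors
# from the last full training run.
user_factors = np.array([0.4, -0.1, 0.7])
item_factors = {"video_x": np.array([0.6, 0.2, 0.3])}

def warm_start_update(u, new_ratings, lr=0.05, epochs=2):
    """Run a couple of epochs of SGD on just the new ratings, starting from the stored factors."""
    u = u.copy()
    for _ in range(epochs):
        for item_id, rating in new_ratings:
            v = item_factors[item_id]
            err = rating - u @ v   # prediction error on the newly added rating
            u += lr * err * v      # nudge only this user's parameters
    return u

# The user's account gets flagged after a new rating; the update runs later.
user_factors = warm_start_update(user_factors, [("video_x", 4.0)])
```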
To add to Prune's answer, depending on the design of the system it may also be possible to take into account the user's most recent interactions.
There are two main ways of doing so:
Fold-in: your most recent interactions are used to recompute the parameters of your personal model while leaving the remainder of the model unchanged. This is usually quite fast and could be done in response to every request.
A model that takes interactions as input directly: some models can compute recommendations directly by taking a user's interactions as input, without storing user representations that can only be updated via retraining. For example, you can represent the user by simply averaging the representations of the items she most recently interacted with. Recomputing this representation can be done in response to every request.
In fact, this last approach seems to form part of the YouTube system. You can find details in the Covington et al. Deep Neural Networks For YouTube Recommendations paper.
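A toy sketch of that averaging approach (the item embeddings below are made up; a real system would learn them offline):

```python
import numpy as np

# Hypothetical item embeddings learned offline (item_id -> vector).
item_embeddings = {
    "video_a": np.array([0.9, 0.1, 0.0]),
    "video_b": np.array([0.8, 0.2, 0.1]),
    "video_c": np.array([0.0, 0.9, 0.4]),
}

def user_vector(recent_item_ids):
    """Represent the user as the mean of the embeddings of recently watched items."""
    return np.mean([item_embeddings[i] for i in recent_item_ids], axis=0)

def recommend(recent_item_ids, top_k=2):
    u = user_vector(recent_item_ids)
    # Score every candidate by dot product and exclude already-seen items.
    scores = {i: float(u @ v) for i, v in item_embeddings.items() if i not in recent_item_ids}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(recommend(["video_a"]))  # cheap enough to recompute on every request
```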

Best and optimized way to create a web-based interactive choropleth map

I am going to build an interactive choropleth map for Bangladesh. The goal of this project is to build a map system and populate it with different types of data. I read the documentation for OpenLayers, Leaflet, and D3. I need some advice on finding the right path; the solution needs to be well optimized.
The map I am going to create will be something like the following: http://nasirkhan.github.io/bangladesh-gis/asset/base_files/bd_admin_3.html. It is prepared with Leaflet, but it is not mandatory to work with this library. I tried Leaflet because it is easy to use and I found the expected solution within a very short time.
The requirement of the project is to prepare a choropleth map where I can display the related data. For example, I have to show the population of all the divisions of Bangladesh, and at the same time there should be options so that I can show the literacy rate, male-female ratio, and so on.
The solution I am working on now has some issues: the load time is huge, and if I want to load a second dataset I have to load the same huge geolocation data again. How can I optimize or avoid this situation?
Leaflet has a layers control feature. If you cut your data down to just what is required, split it into different layers, and allow the user to select the layers they are interested in viewing, that might cut down on the loading of the data. Another option is to simplify the shapes of the polygons.
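One way to do the simplification offline is with GeoPandas (a sketch; the file names and tolerance are assumptions you would adapt to your data):

```python
import geopandas as gpd

# Load the full-resolution administrative boundaries (hypothetical file name).
gdf = gpd.read_file("bd_admin_3.geojson")

# Douglas-Peucker simplification; the tolerance is in the layer's coordinate units,
# so tune it until the shapes still look right at your zoom levels.
gdf["geometry"] = gdf.geometry.simplify(tolerance=0.001, preserve_topology=True)

# Write a much smaller file that Leaflet loads once; the per-indicator data
# (population, literacy rate, ...) can then be fetched separately and joined by region ID.
gdf.to_file("bd_admin_3_simplified.geojson", driver="GeoJSON")
```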

Is there a way to download all the statistics on events at once from Flurry?

We are bumping into limitations with Flurry. We use events and parameters to track some gameplay info (like the number of KOs per map), but 1) the limit of 15 parameters per event is a problem, and 2) the visualisation is not good (for instance, KOs per map are shown by map, so we have to open each event one after another).
We are trying to build a better visualisation in Excel using the CSV files provided by Flurry, but then we need to download the 50+ CSV files, and it's really not convenient.
Is there a way to get all the information in one CSV or to get the information another way?
As a side note Flurry support is not answering any of our emails. :(
Thanks for your help!
Have you tried checking out Playtomic instead? It sounds like it might match your problem better.
They have an API to access your data, so you should be able to access it in real time.
You might also want to check out www.parse.com
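If you do stay with Flurry's CSV exports, one workaround is to merge the downloaded files locally into a single CSV (a sketch assuming the files sit in one folder and share a column layout):

```python
import glob
import pandas as pd

# Read every exported CSV from the download folder (hypothetical path),
# tagging each row with the file it came from so events stay distinguishable.
frames = []
for path in glob.glob("flurry_exports/*.csv"):
    df = pd.read_csv(path)
    df["source_file"] = path
    frames.append(df)

# One combined table that Excel (or pandas itself) can slice per event/map.
pd.concat(frames, ignore_index=True).to_csv("flurry_combined.csv", index=False)
```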

Making changing text, e.g. an everyday video clip about the weather

We recently bought an LED screen (about 8x3 m) and it allows us to publish videos from AE (obviously). We need to design a goodwill campaign about weather, traffic, and breaking news.
My question is: how can I replace the animated text and images without modifying the original AE file? For example, the weather is sunny and 27 °C; the next day the weather changes and I just have to modify a txt file (something like that), export the .avi file, and be ready to upload it to the screen.
I don't think After Effects is an appropriate solution for this. You would have to re-render your movie every time the weather changes. That would be some heavy CPU usage just to update the news or weather. You might want to look into programming something that would update itself and using After Effects simply to render the media assets that would make up your program.
Maybe researching something like JavaScript or Processing would be beneficial.
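As a rough sketch of that idea in Python, compositing today's text (read from a small text file) onto a background frame rendered once in After Effects (the file names, font, and coordinates are assumptions):

```python
from PIL import Image, ImageDraw, ImageFont

# Background frame rendered once in After Effects (hypothetical file name).
frame = Image.open("weather_background.png").convert("RGB")
draw = ImageDraw.Draw(frame)
font = ImageFont.truetype("DejaVuSans-Bold.ttf", 120)

# The only thing that changes day to day is this small text file.
with open("weather.txt", encoding="utf-8") as f:
    todays_weather = f.read().strip()  # e.g. "Sunny, 27 °C"

draw.text((100, 100), todays_weather, font=font, fill="white")
frame.save("weather_today.png")  # feed this image to whatever plays on the LED screen
```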
