Customize the OpenAI Gym Taxi-v2 Environment - openai-gym

I would like to modify the Taxi-v2 environment in OpenAI Gym.
Is it possible to pick up two passengers before reaching the destination point?

Yes, it is possible: you can modify the taxi.py file under envs in the gym folder. With two passengers, the number of states (the state space) grows from 500 (5*5*5*4) to 10,000 (5*5*5*4*5*4), i.e. an extra factor of 5*4 for the second passenger's location and destination. You can then modify the code accordingly. If you want to keep track of both passengers, you can also grow the action space from 6 to 8 by adding pickup and drop-off actions for the second passenger.
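As a rough illustration, here is how the state encoding from gym's taxi.py could be extended to a second passenger. This is only a sketch: the variable names and factor order are my own choice, and the rest of the environment (transitions, rewards, rendering) would need matching changes.

```python
# Sketch of an encode/decode pair for a two-passenger Taxi variant,
# mirroring the encode() helper in gym's taxi.py. Not the official code.

def encode(taxi_row, taxi_col, pass1_loc, dest1, pass2_loc, dest2):
    # 5 rows * 5 cols, then (5 locations * 4 destinations) per passenger:
    # 5*5 * (5*4) * (5*4) = 10,000 states
    i = taxi_row
    i = i * 5 + taxi_col
    i = i * 5 + pass1_loc
    i = i * 4 + dest1
    i = i * 5 + pass2_loc
    i = i * 4 + dest2
    return i

def decode(i):
    # Reverse the encoding, peeling factors off in the opposite order.
    dest2 = i % 4;      i //= 4
    pass2_loc = i % 5;  i //= 5
    dest1 = i % 4;      i //= 4
    pass1_loc = i % 5;  i //= 5
    taxi_col = i % 5;   i //= 5
    taxi_row = i
    return taxi_row, taxi_col, pass1_loc, dest1, pass2_loc, dest2
```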

Related

Promotions code combination from one pool not possible

Is it possible to use two promotion codes from one promotion-code pool in one order?
The promotion is set to:
Max. total uses: unlimited
Max. uses per customer: unlimited
I can combine one code from this promotion with other promotions in the same order, but I am unable to combine two codes from the same promotion pool.
Example:
We give out promotion codes worth 25 €. If a customer happens to have two or more of them and wants to use them in one order, it is currently not possible, and I don't see where this is limited.
When the customer has a 50 € promotion code and a 25 € promotion code, he can combine them.
How do I have to configure promotions so that users can use more than one code from the same promotion pool?
No, at this point in time it is not possible. One and the same promotion can only be redeemed once per cart.
You should leave "Max. uses" empty. The system will check which specific coupon code was used and will then invalidate that one. You can also see the name of the customer who used it.

SharePoint list item update takes more than 5 seconds

We have SharePoint 2016 hosted on-premises with a minimal set of services running on the server. Resource utilization is very low and the user base is around 100. There are no workflows or other resource-consuming services running.
We use a list to store and update information for certain users, via a form for the end user. Recently, the time consumed has increased to over 6 seconds per list item update.
Example:
https://sitename_url/_api/web/lists/GetByTitle('WFListInfo')/items(15207)
The item has about 15 fields, mostly single-line text, number, or DateTime values.
The indexing is set to automatic.
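For reference, here is roughly how the update can be issued and timed directly against the REST endpoint, outside the form, to separate server processing from page overhead. This is a sketch: the field name, entity type, and NTLM credentials are placeholder values.

```python
# Sketch: time a bare REST MERGE update against the list item.
# "Status" and the ListItemEntityTypeFullName are placeholders.
import time

import requests
from requests_ntlm import HttpNtlmAuth  # pip install requests_ntlm

site = "https://sitename_url"
auth = HttpNtlmAuth("DOMAIN\\user", "password")
verbose = {"Accept": "application/json;odata=verbose"}

# POST requests need a form digest, obtained from /_api/contextinfo.
digest = requests.post(site + "/_api/contextinfo", headers=verbose,
                       auth=auth).json()["d"]["GetContextWebInformation"]["FormDigestValue"]

url = site + "/_api/web/lists/GetByTitle('WFListInfo')/items(15207)"
headers = {
    **verbose,
    "Content-Type": "application/json;odata=verbose",
    "X-RequestDigest": digest,
    "IF-MATCH": "*",
    "X-HTTP-Method": "MERGE",  # update the item in place
}
payload = {"__metadata": {"type": "SP.Data.WFListInfoListItem"},
           "Status": "updated"}

start = time.perf_counter()
resp = requests.post(url, json=payload, headers=headers, auth=auth)
print(resp.status_code, f"{time.perf_counter() - start:.2f}s")
```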
As part of the review, we ran a few checks and rebuilt the database indexes on our cluster; however, there was no improvement.
Looking forward to any help or suggestions. Thank you.

Getting Multiple Last Price Quotes from Interactive Brokers's API

I have a question regarding the Python API of Interactive Brokers.
Can multiple asset and stock contracts be passed into the reqMktData() function to obtain their last prices? (I can set snapshot=True in reqMktData to get the last price; you can assume that I have subscribed to the appropriate data services.)
To put things in perspective, this is what I am trying to do:
1) Call reqMktData, get last prices for multiple assets.
2) Feed the data into my prediction engine, and do something
3) Go to step 1.
When I contacted Interactive Brokers, they said:
"Only one contract can be passed to reqMktData() at one time, so there is no bulk request feature in requesting real time data."
Obviously, one way to get around this is to use a loop, but that is too slow. Another way is multithreading, but that is a lot of work, plus I can't afford the extra expense of a new computer. I am not interested in either one.
Any suggestions?
You can only specify one contract in each reqMktData call, so there is no choice but to use a loop of some kind. Speed shouldn't be an issue, as you can make up to 50 requests per second, maybe even more for snapshots.
The slowness could be because you are asking for too much data (more than 50 requests/s), or because you are using an old version of the IB Python API; check connection.py for lock.acquire calls (I've deleted all of them). Also, if there has been no trade for more than 10 seconds, IB will wait for a trade before sending a snapshot, so test with actively traded symbols.
However, what you should really do is request live streaming data by setting snapshot to False and just keep track of the last price in each stream. You can stream up to 100 tickers with the default limits, and you keep the streams separate by using unique ticker ids.
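A minimal sketch of that streaming approach with the official ibapi package (the symbols, port, and client id below are placeholders, and a running TWS or IB Gateway is assumed):

```python
# Stream last prices for several tickers; one subscription per contract,
# each with its own ticker id so the callbacks stay separable.
from threading import Thread

from ibapi.client import EClient
from ibapi.contract import Contract
from ibapi.wrapper import EWrapper


class LastPriceApp(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)
        self.last_price = {}  # reqId -> most recent trade price

    def tickPrice(self, reqId, tickType, price, attrib):
        # tickType 4 is LAST; keep the newest value per stream.
        if tickType == 4:
            self.last_price[reqId] = price


def stock(symbol):
    c = Contract()
    c.symbol = symbol
    c.secType = "STK"
    c.exchange = "SMART"
    c.currency = "USD"
    return c


app = LastPriceApp()
app.connect("127.0.0.1", 7497, clientId=1)  # paper TWS defaults; adjust
Thread(target=app.run, daemon=True).start()

for req_id, sym in enumerate(["AAPL", "MSFT", "IBM"], start=1):
    app.reqMktData(req_id, stock(sym), "", False, False, [])
```

With this in place, app.last_price always holds the newest trade per ticker id, so the prediction loop can read it directly instead of re-requesting quotes.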

Liferay: huge DLFileRank table

I have a Liferay 6.2 server that has been running for years and is starting to take up a lot of database space, despite limited actual content.
Table            Size     Number of rows
----------------------------------------
DLFileRank       5 GB     16 million
DLFileEntry      90 MB    60,000
JournalArticle   2 GB     100,000
The size of the DLFileRank table seems abnormally large to me (if it is in fact normal, please let me know).
While the file ranking feature of Liferay is nice to have, we would not really mind resetting it if it halves the size of the database.
Question: Would a DELETE FROM DLFileRank be safe? (Stop Liferay, run that SQL command, maybe set dl.file.rank.enabled=false in portal-ext.properties, then start Liferay again.)
Is there any better way to do it?
Bonus if there is a way to keep recent ranking data and throw away only the old data (not a strong requirement).
Wow. According to the documentation here (Ctrl-F rank), I'd not have expected the number of entries to be so high - did you configure those values differently?
Set the interval in minutes on how often CheckFileRankMessageListener
will run to check for and remove file ranks in excess of the maximum
number of file ranks to maintain per user per file. Defaults:
dl.file.rank.check.interval=15
Set this to true to enable file rank for document library files.
Defaults:
dl.file.rank.enabled=true
Set the maximum number of file ranks to maintain per user per file.
Defaults:
dl.file.rank.max.size=5
And according to the implementation of CheckFileRankMessageListener, it should be enough to just trigger DLFileRankLocalServiceUtil.checkFileRanks() yourself (e.g. through the scripting console). Why you accumulate such a large number of file ranks is beyond me...
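For example, from the script console (Control Panel > Server Administration > Script, with Python selected as the language), something like this should kick off the same cleanup the listener performs on its interval; a sketch, assuming the Liferay 6.2 package path:

```python
# Jython sketch for the Liferay 6.2 script console: trigger the
# file-rank cleanup that CheckFileRankMessageListener normally runs.
from com.liferay.portlet.documentlibrary.service import DLFileRankLocalServiceUtil

DLFileRankLocalServiceUtil.checkFileRanks()
```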
As you might know, I will never be quoted as saying that direct database manipulation is the way to go - in fact, I refuse to think about the problem that way.

In New Relic RPM, I get reports with an Apdex index listed. What does the subscript mean?

This sounds ridiculous, but New Relic RPM reports an Apdex index in a form like this:
0.92(3.5)
Where the 3.5 is subscripted.
What does the 3.5 mean? I can't find the definition anywhere, and yet there it is in my reports, staring me in the face.
The bracketed/subscripted number is the threshold (in seconds) for your Apdex score. So, in your case, if the full application response (page load) takes less than 3.5 s, it counts as satisfying. If your app responds slower than the threshold, your Apdex score suffers.
This threshold is customizable, so you can select what is appropriate for your application type.
You can read more about Apdex in our docs.
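To make the arithmetic concrete, here is a toy computation of the standard Apdex formula against a threshold of 3.5 s (the response times are made-up sample data, not New Relic output):

```python
# Toy illustration of how an Apdex score relates to the threshold T
# (the subscripted number). Sample response times are invented.
T = 3.5  # Apdex threshold in seconds

samples = [0.9, 1.2, 3.0, 4.0, 5.1, 16.0, 2.2, 0.4, 3.4, 7.9]

satisfied  = sum(1 for t in samples if t <= T)        # fast enough
tolerating = sum(1 for t in samples if T < t <= 4*T)  # slow but tolerable
# anything above 4*T counts as frustrated

apdex = (satisfied + tolerating / 2) / len(samples)
print(f"Apdex_{T}: {apdex:.2f}")  # 0.75 for this sample
```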
The subscripted number is your target response time for that tier. On the user-agent (browser) tier the high-water mark is 7 seconds; you should check US-Only and set this number to 2 to 4 seconds to be world class.
The app-server tier must respond much faster. The default high-water mark New Relic sets is 0.5 seconds (500 milliseconds); a world-class page buffer flush averages around 50-200 ms.
Remember that all this information is about aggregated averages, not instance data, which will have many outliers and a broad distribution.
