Now that the Yahoo and Google APIs are down and it's no longer possible to retrieve stock pricing data from those websites, what other alternatives are there, and how do I go about using them?
This is the error I got:
raise ImmediateDeprecationError(DEP_ERROR_MSG.format('Yahoo Daily'))
pandas_datareader.exceptions.ImmediateDeprecationError: Yahoo Daily has been immediately deprecated due to large breaks in the API without the introduction of a stable replacement. Pull Requests to re-enable these data connectors are welcome.
Thanks for your help!
The GitHub issue pretty much states what's up.
"Yahoo changed their API so that the old pandas 0.5 code did not work. If you would like to fix this, PRs are welcome."
There was a major break in the API used by pandas-datareader, likely due to changes on Yahoo's side; they are accepting PRs on GitHub to fix this. There's an open issue tracking it: ImmediateDeprecationError.
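In the meantime, one alternative is to point pandas-datareader at a different source. Here is a minimal sketch, assuming a recent pandas-datareader release that ships the Stooq connector; the ticker and date range are placeholders:

    # Minimal sketch: pull daily OHLCV data from Stooq instead of Yahoo.
    # Assumes a pandas-datareader version that includes the 'stooq' source.
    from datetime import datetime

    import pandas_datareader.data as web

    start = datetime(2018, 1, 1)
    end = datetime(2018, 12, 31)

    # Stooq provides free daily data and is not affected by the Yahoo break.
    df = web.DataReader('AAPL', 'stooq', start, end)
    print(df.head())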
We are trying to migrate our old issue "tracking" system to GitLab.
For legacy reasons the issues have relatively large numbers (800 and above), and they are not consecutive.
However, for back-reference it would be great if we could have one number per issue rather than an "old" and a "new" number, since in some contexts issues are referred to by number (e.g. by external parties who in the future will also use GitLab).
I found Set Minimum Issue Number in Gitlab, where dummy issues were created to "fill" the gaps. However, this creates a lot of clutter (especially e-mails; see Gitlab API - Create issue quietly?).
Any ideas how to solve this?
The ideal flow would be:
use the GitLab API to create the issues we have, and
add a parameter to set the number of each issue.
When using GitLab afterwards, the gaps would either be filled by new issues over time, or new issues would count up from the highest issue number currently in the project.
If I could actually set the issue number afterwards in the database (as was hinted at in the linked question above), how would GitLab handle this? (I don't even know where to start looking in the GitLab code base; any hints on that might also answer this question.)
Thanks in advance for any advice on how to tackle this.
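To make the ideal flow concrete, here is a hypothetical sketch with the python-gitlab library. The iid field on creation is an assumption I have not verified (it appears to require administrator or project owner rights), and the URL, token, project ID, and issue data are placeholders:

    # Hypothetical sketch: create issues via python-gitlab with a chosen
    # internal number. The 'iid' attribute is an assumption to verify; it
    # seems to require administrator or project owner rights.
    import gitlab

    gl = gitlab.Gitlab('https://gitlab.example.com', private_token='TOKEN')  # placeholders
    project = gl.projects.get(123)  # placeholder project id

    old_issues = [
        {'iid': 801, 'title': 'Imported issue 801', 'description': 'migrated from old tracker'},
        {'iid': 805, 'title': 'Imported issue 805', 'description': 'migrated from old tracker'},
    ]

    for issue in old_issues:
        # Pass the desired internal number along with the usual fields.
        project.issues.create(issue)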
I found a trick to do that.
GitLab supports exporting issues to .csv, and .csv is an editable format: it can be edited in any Excel-like program, or even in a plain text editor.
Create a first issue and fill it with dummy data (I suggest filling every field, as that's a way to spot which field is responsible for what).
Then fill in your own data in whatever way is suitable, preferably mass-exported from the previous issue tracker, and import it back into GitLab.
If done correctly, this can be partly automated.
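For the "partly automated" step, a minimal sketch of preparing the import file with pandas might look like this; the column names on both sides are assumptions, so compare them against a CSV exported from your own project (the dummy-issue trick above) before importing:

    # Minimal sketch: convert an old-tracker export into a GitLab-importable CSV.
    # Column names ('summary', 'body', 'id', 'Title', 'Description') are
    # hypothetical placeholders -- adjust them to your actual exports.
    import pandas as pd

    old = pd.read_csv('old_tracker_export.csv')  # placeholder export file
    gitlab_csv = pd.DataFrame({
        'Title': old['summary'],
        # Keep the legacy number in the description for back-reference.
        'Description': old['body'] + '\n\nLegacy ID: ' + old['id'].astype(str),
    })
    gitlab_csv.to_csv('gitlab_import.csv', index=False)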
I am building a package that uses the Google Analytics API for Python.
But in several cases, when I have multiple dimensions, the extraction by day is sampled.
I know that if I use sampling_level = LARGE, a more accurate sample will be used.
But does anybody know if there is a way to reduce a request so that one day can be extracted without sampling?
Grateful for any help!
Setting the sampling level to LARGE is the only method we have to influence the amount of sampling, but as you already know, it doesn't prevent it.
The only way to reduce the chances of sampling is to request less data. A reduced number of dimensions and metrics, as well as a shorter date range, are the best ways to ensure that you don't get sampled data.
This is probably not the answer you want to hear, but one way of getting unsampled data from Google Analytics is to use unsampled reports. However, this requires that you sign up for Google Marketing Platform. With these you can create an unsampled report request using the API or the UI.
There is also a way to export the data to BigQuery, but you lose the analysis that Google provides and will have to do that yourself. This too requires that you sign up for Google Marketing Platform.
There are several tactics for building unsampled reports; the most popular is splitting your report into shorter time ranges, down to hours. Mark Edmondson did great work on anti-sampling in his R package, so you might find it useful. You may start with this blog post: https://code.markedmondson.me/anti-sampling-google-analytics-api/
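To illustrate the splitting tactic in Python, here is a minimal sketch, assuming the Analytics Reporting API v4 via google-api-python-client and oauth2client; the view ID, key file, metrics, and dimensions are placeholders:

    # Minimal sketch: request one day at a time so each response covers less
    # data and is less likely to be sampled. Placeholder view id and key file.
    from datetime import date, timedelta

    from googleapiclient.discovery import build
    from oauth2client.service_account import ServiceAccountCredentials

    VIEW_ID = '123456789'              # placeholder view id
    KEY_FILE = 'service-account.json'  # placeholder credentials file

    credentials = ServiceAccountCredentials.from_json_keyfile_name(
        KEY_FILE, ['https://www.googleapis.com/auth/analytics.readonly'])
    analytics = build('analyticsreporting', 'v4', credentials=credentials)

    def fetch_day(day):
        """Fetch a single day's report with the most precise sampling level."""
        body = {'reportRequests': [{
            'viewId': VIEW_ID,
            'dateRanges': [{'startDate': day.isoformat(), 'endDate': day.isoformat()}],
            'metrics': [{'expression': 'ga:sessions'}],
            'dimensions': [{'name': 'ga:sourceMedium'}],
            'samplingLevel': 'LARGE',
        }]}
        return analytics.reports().batchGet(body=body).execute()

    start, end = date(2019, 1, 1), date(2019, 1, 7)
    reports = [fetch_day(start + timedelta(days=n))
               for n in range((end - start).days + 1)]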
The Training tab that has always shown user phrases is suddenly empty. Support has not responded. Logging out and back in, clearing the cache, and exporting/reimporting the agent have done nothing to solve it. Someone else already asked this question here, but I can't upvote or comment on that one, and starring it won't help much if it's already fixed for them and they have moved on:
Dialogflow "Training" menu is empty always
Has anyone else experienced this and resolved it? We have been on the standard V2 edition for the last year or so, and the tab just started being empty last week. We can see questions coming in on the History and Analytics tabs, but the Training tab remains empty. We average 10k questions a week.
There was nothing we could do on our side to fix this. Support finally got us a solution but did not provide any details over what had caused the issue. I asked for more info but all I got was this:
Hi Vanessa,
Thanks for reaching out to Dialogflow Support.
There was some internal issue which was resolved.
We will conduct an internal investigation on the issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.
Please accept our apologies for the inconvenience.
We are bumping into limitations with Flurry. We use events and parameters to track some gameplay info (like the number of KOs per map), but 1) the limit of 15 parameters per event is a problem, and 2) the visualisation is not good (for instance, KOs per map are shown by map, so we have to open each event one after another).
We are trying to build a better visualisation in Excel using the CSV files provided by Flurry, but then we need to download the 50+ CSV files, and that's really not convenient.
Is there a way to get all the information in one CSV or to get the information another way?
As a side note, Flurry support is not answering any of our emails. :(
Thanks for your help!
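Until Flurry offers a combined export, one workaround is to merge the downloaded CSV files yourself. A minimal sketch, assuming the 50+ files have been downloaded into a hypothetical flurry_exports/ folder:

    # Minimal sketch: combine all downloaded Flurry CSV exports into one file
    # that can be opened in Excel. Folder and output names are placeholders.
    import glob

    import pandas as pd

    frames = []
    for path in glob.glob('flurry_exports/*.csv'):  # hypothetical folder
        df = pd.read_csv(path)
        df['source_file'] = path  # keep track of which export each row came from
        frames.append(df)

    combined = pd.concat(frames, ignore_index=True)
    combined.to_csv('flurry_combined.csv', index=False)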
Have you tried checking out Playtomic instead? It sounds like it might match your problem better.
They have an API to access your data, so you should be able to access it in real time.
You might also want to check out www.parse.com.
I know that a lot of this information is probably entirely privatized, but does anyone know of a good source of real time information on what kind of trading activity is where in the market? It doesn't need to be fast enough to actually make informed trading decisions based on it, I'm more looking to aggregate it into some beautiful graphics. For fun. Because I have personal problems.
I'd be grateful for any help!
The best I'm aware of is the Yahoo Finance API. It'll give you delayed prices and some bid/ask stuff. There's a description of how it works here:
http://www.gummy-stuff.org/Yahoo-data.htm
Not sure, but I was of the opinion that Google Finance API was better than Yahoo:
http://code.google.com/apis/finance/
There was a project called OpenTick that planned on giving access to data from the exchanges themselves (e.g., the Chicago Board of Trade), provided you paid the exchanges whatever fees were required. That project quietly died.
You can get some market benchmark data from the St Louis Fed. Aside from that, I haven't found anything better than Yahoo! Finance or Google Finance. Both the NASD and the NYSE give access to historical data on their websites, but I don't see any kind of web service interface.
The Bloomberg Open API (http://www.openbloomberg.com/open-api/), which was recently made free, can be used to get historical market data as well as real-time data. If you are looking for historical stock prices, there is a nice API at http://www.quandl.com/; you can get more than 10 years of stock prices for many companies, in many formats.
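For Quandl, a minimal sketch with its Python package might look like this; the dataset code and API key are placeholders, and availability of any particular dataset should be checked on their site:

    # Minimal sketch: fetch historical prices via the quandl package as a
    # pandas DataFrame. The API key and dataset code are placeholders.
    import quandl

    quandl.ApiConfig.api_key = 'YOUR_API_KEY'  # placeholder key

    prices = quandl.get('WIKI/AAPL', start_date='2008-01-01', end_date='2018-01-01')
    print(prices.tail())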
I would have seconded the suggestion of the Google API, but it is not available anymore.
This post offers the best list of Financial Data accessible from R I've encountered online: http://www.r-bloggers.com/financial-data-accessible-from-r-part-iv/.
Even though this is not an R question, those sources are still useful. Beyond them, I would wholeheartedly recommend TD Ameritrade's Thinkorswim platform (www.thinkorswim.com). It is a trading platform with free real-time data for US financial markets. You can open an account and keep just one cent in it if it's not needed for actual investing/trading.
Furthermore, I would recommend the NinjaTrader platform (http://ninjatrader.com), which offers free end-of-day historical data for US financial markets. You can export data from NinjaTrader to txt format and then import it into R or Python if desired.
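As an illustration of that last step, here is a minimal sketch for loading an exported text file into pandas, assuming a simple delimited layout; adjust the separator and column names to whatever your export actually contains:

    # Minimal sketch: read an exported price file into pandas. The file name,
    # separator, and column layout are assumptions about the export format.
    import pandas as pd

    bars = pd.read_csv(
        'exported_data.txt',  # placeholder file name
        sep=';',
        header=None,
        names=['date', 'open', 'high', 'low', 'close', 'volume'],
        parse_dates=['date'],
    )
    print(bars.head())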