After searching, I have been unable to find a Python 3 module or method for pulling exchange rate data from the internet into my program.
Any ideas?
Thanks
Well, I'm not sure where your data source is located or what format you need the data in, but from your description I can make the following recommendation:
Use a service like:
https://openexchangerates.org/
and afterwards parse the response with a JSON parser (like the one in the standard library):
http://docs.python.org/3.3/library/json.html
and voilà, you have the currency data.
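A minimal sketch of that approach, using only the standard library (the app_id value is a placeholder; you get a real one from a free openexchangerates.org account):

    import json
    import urllib.request

    APP_ID = "YOUR_APP_ID"  # placeholder; sign up at openexchangerates.org for a real one
    URL = "https://openexchangerates.org/api/latest.json?app_id=" + APP_ID

    with urllib.request.urlopen(URL) as response:
        data = json.loads(response.read().decode("utf-8"))

    # "rates" maps currency codes to their rate against the base currency (USD by default).
    print(data["rates"]["EUR"])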
UPDATE
If you have the time now, and the patience to maintain it as time passes, you can pick any site that displays an exchange rate and write an HTML parser for it:
http://www.diveintopython.net/html_processing/extracting_data.html
This solution gives you freedom (you can query basically any site you want), but if a site changes its markup... well, you have to update your code.
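As an illustration, here is a minimal sketch using Python 3's built-in html.parser module (the linked tutorial targets Python 2); it assumes the rate sits in a tag with class="rate", which is a made-up attribute you would adapt to the real site's markup:

    from html.parser import HTMLParser

    class RateParser(HTMLParser):
        """Collects the text of every tag carrying class="rate" (assumed markup)."""
        def __init__(self):
            super().__init__()
            self.in_rate = False
            self.rates = []

        def handle_starttag(self, tag, attrs):
            if ("class", "rate") in attrs:
                self.in_rate = True

        def handle_endtag(self, tag):
            self.in_rate = False

        def handle_data(self, data):
            if self.in_rate:
                self.rates.append(data.strip())

    parser = RateParser()
    parser.feed('<span class="rate">1.9558</span>')  # stand-in for the downloaded page
    print(parser.rates)  # ['1.9558']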
UPDATE 2
You can use a very simple trick with a very simple HTML page from Google. Try calling the following link:
http://www.google.com/finance/converter?a=1&from=BGN&to=AED
and you will get a response for free, without any API key. But be aware that this is not fair play, especially if you poll the service many times a minute!
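If you want to try it, a hedged sketch (the endpoint is unofficial and may be rate-limited or withdrawn at any time, and the class=bld markup is an assumption about how the result page is structured):

    import re
    import urllib.request

    # Unofficial endpoint; it may disappear or block frequent callers without notice.
    url = "http://www.google.com/finance/converter?a=1&from=BGN&to=AED"
    with urllib.request.urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")

    # Assumption: the converted amount is wrapped in <span class=bld>...</span>.
    match = re.search(r"class=bld>([\d.]+\s+\w{3})<", html)
    if match:
        print(match.group(1))  # e.g. "0.51 AED"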
Related
I would like to get images from a search engine, to run some automated tests without the need to go online and pick them by hand.
I found an old example from 5 years ago (ajax.googleapis.com/ajax/services/search/images), which sadly does not work anymore. What is the current method to do so in Python3? Ideally I would like to be able to pass a string with the search name, and retrieve a set amount of images, at full size.
I don't really mind which search engine is used; I just want to be sure that it will remain supported for the time being. Also, I would like to avoid Selenium; I am planning to run this without any UI or browser, entirely from the terminal.
Have you heard of Pixabay? There is a nice Python wrapper for working with it as well.
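For a sketch of the REST API that such wrappers sit on top of (using the requests library; the API key is a placeholder from a free Pixabay account, and the field names below follow Pixabay's documented response format):

    import requests

    API_KEY = "YOUR_PIXABAY_KEY"  # placeholder; obtain one from pixabay.com
    params = {"key": API_KEY, "q": "yellow flowers", "per_page": 5}
    resp = requests.get("https://pixabay.com/api/", params=params)
    resp.raise_for_status()

    # Each hit describes one image; largeImageURL points at a full-size variant.
    for hit in resp.json()["hits"]:
        img = requests.get(hit["largeImageURL"])
        with open(str(hit["id"]) + ".jpg", "wb") as f:
            f.write(img.content)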
Found a pretty good solution using BeautifulSoup.
It does not work on Google directly, since I get a 403, but by faking the header in the request it is sometimes possible to get data. I will have to experiment with various other sites.
So far the workflow is to search in the browser so I can get the URL to pass to BeautifulSoup. Once I have the URL in code, I replace the query part with a variable, so I can set it programmatically.
Then I parse the output of BeautifulSoup to extract the links to the images, and retrieve them using requests.
I wish there were a public API that also exposed parameters like picture size, but I found nothing that currently works.
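Roughly, that workflow might look like the sketch below (the search URL and User-Agent string are stand-ins; the real site's markup will dictate the actual tags to look for):

    import requests
    from bs4 import BeautifulSoup

    headers = {"User-Agent": "Mozilla/5.0"}  # faked header, as described above
    query = "kittens"  # the variable swapped into the query part of the URL
    url = "https://example.com/search?q=" + query  # placeholder search URL

    soup = BeautifulSoup(requests.get(url, headers=headers).text, "html.parser")

    # Parse out the image links and retrieve each one with requests.
    for i, img in enumerate(soup.find_all("img", src=True)):
        src = img["src"]
        if src.startswith("http"):
            with open("image_{}.jpg".format(i), "wb") as f:
                f.write(requests.get(src, headers=headers).content)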
I am a student, and on my school's website I want to busy-wait on a certain URL and check whether the class I want to register for is open. I was wondering if there is a way to constantly check the website (busy-waiting or otherwise) to see if the class is open. There is a table column Rem that shows the number of places remaining in the user interface.
Also what language would you use to solve this problem?
Yes, you can, but for that you will probably need to create a script that fetches the value from that table.
So something like web scraping should work.
I would definitely use PHP for this stuff.
Google "web scraping" and you can code the script.
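Although the answer above suggests PHP, a sketch of the same polling idea in Python (the language used elsewhere in this discussion) might look like this; the URL and the locator for the Rem cell are hypothetical:

    import time

    import requests
    from bs4 import BeautifulSoup

    URL = "https://school.example.edu/classes/12345"  # hypothetical course page

    while True:
        soup = BeautifulSoup(requests.get(URL).text, "html.parser")
        cell = soup.find("td", {"class": "rem"})  # hypothetical locator for the Rem column
        if cell and int(cell.get_text(strip=True)) > 0:
            print("A place has opened up!")
            break
        time.sleep(60)  # poll once a minute instead of truly busy-waiting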
I am not sure if this is the exact thing that will help you, but what you need to do is something similar - See Here
I have created/recorded a script in VuGen; however, the URL of the site has changed recently. Is there any way to make the script work just by replacing the URL with a parameter?
I have tried replacing the URL with parameters. The new URL is:
http://xsx.xxx.xsx.xxx/test99
The parameters I have tried are below:
NewUrl: http://xsx.xxx.xsx.xxx/
NewHost: test99
I have replaced all instances in the script, and when I run it I get the following error:
Error -27651: Attempted read from an unconnected socket (empty response, no HTTP headers received). URL="http://xsx.xxx.xsx.xxx/scripts/uiServer.dll"
What is the solution for this? Should I re-record with the new URL?
Thanks.
Hope I've understood what you're asking for, so here goes. If it's only the URL that has changed, and not the content of the site that you might require later in your script, then this is fairly simple to do.
Since you have created the new parameters, ensure that they are both getting their data from the same DAT file, e.g. newurl.dat, which contains the following:
newurl,newhost
http://xsx.xxx.xsx.xxx/,test99
Assign each parameter to its correct column, and set newhost to "Same line as newurl". This way it's easier to maintain, I believe.
Now that the parameters have been created and properly assigned in your script, you'll need to change the URL you're targeting from:
http://xsx.xxx.xsx.xxx/oldtest to {newurl}{newhost}
This needs to be done for every instance where the URL occurs.
Hope this helps with the problem you're having.
Are you certain that the build level has not also changed at the same time as the host? If so, then your new instance may be out of sync with the request model of scripts built against an earlier build. Developers have a habit of including items behind the scenes that do not affect the site visually but do change the structure of the requests. The error you are receiving is common when you attempt to continue a conversation on a dead connection, resulting from a missed dynamic session component that may have been added in the last build.
When in doubt, quickly record against the second site and take a look at the differences in the requests, even to the point of using WinDiff (included with LoadRunner) for this purpose.
I'm developing a webapp that will need to download the HTML from a website and then iterate through it to find a specific but ever-changing value (in our case, the price of a product).
For this, I was thinking about asking the user (during installation and setup) to provide the system with a few lines of HTML from the page (around the price), and then, every time we need to fetch the price, we would search for those lines and extract the price.
Now, I believe this is a horrible and slow way of doing it, but since there are no rules and the HTML can be totally different from one website to another (even the same website might change), I couldn't find a better way.
One improvement I thought about was to iterate through the page the first time and record the line at which we find the value. On subsequent runs we would then start the search from a few lines before the expected location, as sketched below. Any thoughts on how I can improve on this?
I posted this question on https://cstheory.stackexchange.com/ but they commented that it was off-topic and that I should post it here.
I have the code for the above and if needed I can post it, I'm simply thinking that there must be a better, faster way of doing this.
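A rough sketch of that windowed search (the price pattern and window size are arbitrary choices, not taken from the original code):

    import re

    PRICE_RE = re.compile(r"\$\d+(?:\.\d{2})?")  # example pattern for a price
    WINDOW = 10  # how many lines around the remembered position to scan

    def find_price(lines, last_hit=None):
        """Search near the last known line first, then fall back to a full scan."""
        if last_hit is not None:
            start = max(0, last_hit - WINDOW)
            for i in range(start, min(len(lines), last_hit + WINDOW + 1)):
                match = PRICE_RE.search(lines[i])
                if match:
                    return match.group(), i
        for i, line in enumerate(lines):  # full scan fallback
            match = PRICE_RE.search(line)
            if match:
                return match.group(), i
        return None, None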
This is actually something I tried for a project recently (using BeautifulSoup and Python). The solution that worked for me was to work out CSS selectors (which map to jQuery selectors) targeting the elements that contained the values I was looking for. In my case I was able to narrow the full document down to just the elements containing what I wanted, but if you can't get exactly what you're after, you can combine this with some extra logic, such as testing whether the text looks like a price (via regex) or checking what it sits next to.
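A small sketch of that approach (the selector and the page snippet are made up; BeautifulSoup's select() takes CSS selectors):

    import re

    from bs4 import BeautifulSoup

    html = '<div class="product"><span class="price">$19.99</span></div>'  # stand-in page
    soup = BeautifulSoup(html, "html.parser")

    # Narrow the document down with a CSS selector, then sanity-check with a regex.
    for el in soup.select("span.price"):  # hypothetical selector for the target site
        text = el.get_text(strip=True)
        if re.fullmatch(r"\$\d+(?:\.\d{2})?", text):  # does it look like a price?
            print("Found price:", text)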
On a website to which I have access, there are some input fields. In the sixth field I need to enter an input string from a list of 10000 strings; a new page then appears, and I just need to count its number of lines. Finally I would like to get a table with two columns: input string and number of resulting lines. Since entering the info manually for all 10000 strings is impractical, I wonder what the best approach is to enter a string into a generic form field and get the resulting text. I have heard about curl, but I am not sure whether it is the easiest option.
P.S.
Example of the interactive way: I type some string of words into Google search and then I get a new page with the search results. I have previously entered my Google username and password, so the results will probably be filtered according to my profile.
Example of the non-interactive way: A script somehow submits my user information and search query, and saves the search results to a text file. Imagine the same idea, but for a more complicated website like this.
What you want to do is send an HTTP POST with specific data. This can be done with any proper HTTP client, one of which is libcurl (or its pycurl binding, or even the curl command-line tool). In the response to the POST you will probably get a redirect and then the results, or you may need to make a separate request for the results; then you're done and can go back to do the next POST. Repeat until all POSTs are done.
What you may need to take into account is that you may have to deal with cookies and possibly follow a redirect from the POST. A good approach is to record a "manual session" performed with a browser (using Firebug or LiveHTTPHeaders, etc.) and then use that recording to help you repeat the same thing with an HTTP client.
A decent tutorial to get some starting up details on this kind of work can be found here: http://curl.haxx.se/docs/httpscripting.html
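As a sketch of that loop, here using the requests library instead of pycurl (requests keeps cookies in a Session and follows redirects by default; the URL and field names are placeholders you would take from your recorded session):

    import requests

    strings = ["alpha", "beta"]  # in practice, your list of 10000 strings
    results = []

    with requests.Session() as session:  # a Session carries cookies across requests
        # If a login is required, POST the credentials first (placeholder fields):
        # session.post("https://example.com/login", data={"user": "...", "pass": "..."})
        for s in strings:
            # POST the string into the form; redirects are followed automatically.
            resp = session.post("https://example.com/form", data={"field6": s})
            results.append((s, len(resp.text.splitlines())))

    for s, count in results:
        print(s, count)  # two columns: input string, number of resulting lines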
You could also use JMeter to run all the POSTs. You can use its CSV input to supply the 10000 strings, then save the results as XML and extract the necessary data.