I have this simple twill code
>>> from twill.commands import *
>>> go("http://stackoverflow.com:80")
==> at http://stackoverflow.com:80
'http://stackoverflow.com:80'
>>> showlinks()
1. log in ==> /users/login
2. careers ==> http://careers.stackoverflow.com
3. chat ==> http://chat.stackoverflow.com
4. meta ==> http://meta.stackoverflow.com
5. about ==> /about
I know I can do
>>> follow('careers')
==> at http://careers.stackoverflow.com
'http://careers.stackoverflow.com'
>>>
but how do I specify the link number? For example,
>>> follow(2)
does not work.
The reason is that I want to test a website which has many links, and I want to build the list of links I want to follow.
How would one do this?
Thanks
twill's follow function expects a string as an argument. Try something like the following:
>>> follow('2')
or
>>> follow(str(2))
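Since follow() only accepts strings, you can keep your list of link numbers as integers and convert each one when following. A minimal sketch (the link numbers here are made up, and the back() step is an assumption about how you would return to the listing page):

```python
# Hypothetical link numbers picked from showlinks() output.
link_numbers = [2, 3, 4]

# follow() expects strings, so convert each number first.
targets = [str(n) for n in link_numbers]
print(targets)  # ['2', '3', '4']

# With an active twill session you could then do:
# for target in targets:
#     follow(target)  # visit the numbered link
#     back()          # return to the page with the link list
```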
I want to retain the original list while manipulating it, i.e. I'm using it in a loop and have to perform some operations each iteration, so I need to reset the value of the list. Initially I thought it was a problem with my loops, but I have narrowed it down to this:
inlist=[1,2,3]
a=inlist
a.pop(0)
print(a)
print(inlist)
gives an output of
[2, 3]
[2, 3]
Why am I not getting
[2, 3]
[1, 2, 3]
pop is being applied to both a and inlist.
Let me explain with Interactive console:
>>> original_list = [1, 2, 3, 4] # source
>>> reference = original_list # 'aliasing' to another name.
>>> reference is original_list # check if two are referencing same object.
True
>>> id(reference) # ID of referencing object
1520121182528
>>> id(original_list) # Same ID
1520121182528
To create new list:
>>> copied = list(original_list)
>>> copied is original_list # now referencing different object.
False
>>> id(copied) # now has different ID with original_list
1520121567616
There are multiple ways of copying a list; here are a few examples:
>>> copied_slicing = original_list[::]
>>> id(copied_slicing)
1520121558016
>>> import copy
>>> copied_copy = copy.copy(original_list)
>>> id(copied_copy)
1520121545664
>>> copied_unpacking = [*original_list]
>>> id(copied_unpacking)
1520123822336
... and so on.
This image from the book 'Fluent Python' by Luciano Ramalho might help you understand what's going on:
Rather than a name being a box that contains the respective object, it is a Post-it note stuck on the object in memory.
Try doing it this way:
a = [1, 2, 3]
b = []
b.extend(a)
b.pop(0)
Although what you are doing seems like it should work, you are actually just binding another name to the same list, which is why both names are affected. However, if you define b (in my case) as an empty list and then extend it with a, you are making a copy rather than adding another name that points to the same list.
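Note that list(a), slicing, and b.extend(a) all make shallow copies: they copy the outer list but still share any nested mutable elements. If the list contains other lists, copy.deepcopy is the safe option. A minimal sketch:

```python
import copy

nested = [[1, 2], [3, 4]]

shallow = list(nested)        # new outer list, same inner lists
deep = copy.deepcopy(nested)  # new outer list AND new inner lists

nested[0].append(99)

print(shallow[0])  # [1, 2, 99] -- the inner list is shared
print(deep[0])     # [1, 2] -- fully independent copy
```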
I have some files that need to be sorted by name. Unfortunately I can't use a regular sort, because I also want to sort the numbers in the string, so I did some research and found that what I'm looking for is called natural sorting.
I tried the solution given here and it worked perfectly.
However, for strings like PresserInc-1_10.jpg and PresserInc-1_11.jpg, that specific natural-key algorithm fails: it only matches the first integer, which in this case would be 1 and 1, and so it throws off the sorting. What I think might help is to match all the numbers in the string and group them together, so if I have PresserInc-1_11.jpg the algorithm would give me back 111. So my question is: is this possible?
Here's a list of filenames:
files = ['PresserInc-1.jpg', 'PresserInc-1_10.jpg', 'PresserInc-1_11.jpg', 'PresserInc-10.jpg', 'PresserInc-2.jpg', 'PresserInc-3.jpg', 'PresserInc-4.jpg', 'PresserInc-5.jpg', 'PresserInc-6.jpg', 'PresserInc-11.jpg']
Google: Python natural sorting.
Result 1: The page you linked to.
But don't stop there!
Result 2: Jeff Atwood's blog that explains how to do it properly.
Result 3: An answer I posted based on Jeff Atwood's blog.
Here's the code from that answer:
import re
def natural_sort(l):
    convert = lambda text: int(text) if text.isdigit() else text.lower()
    alphanum_key = lambda key: [convert(c) for c in re.split('([0-9]+)', key)]
    return sorted(l, key=alphanum_key)
Results for your data:
PresserInc-1.jpg
PresserInc-1_10.jpg
PresserInc-1_11.jpg
PresserInc-2.jpg
PresserInc-3.jpg
etc...
See it working online: ideone
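Putting that function together with the file list from the question makes a quick self-contained check (the splitting into [text, number, text, ...] keys is what handles the 1_10 / 1_11 cases):

```python
import re

def natural_sort(l):
    # Split each name into text and number chunks, e.g.
    # 'PresserInc-1_10.jpg' -> ['presserinc-', 1, '_', 10, '.jpg'],
    # so every number in the name is compared numerically.
    convert = lambda text: int(text) if text.isdigit() else text.lower()
    alphanum_key = lambda key: [convert(c) for c in re.split('([0-9]+)', key)]
    return sorted(l, key=alphanum_key)

files = ['PresserInc-1.jpg', 'PresserInc-1_10.jpg', 'PresserInc-1_11.jpg',
         'PresserInc-10.jpg', 'PresserInc-2.jpg', 'PresserInc-3.jpg',
         'PresserInc-4.jpg', 'PresserInc-5.jpg', 'PresserInc-6.jpg',
         'PresserInc-11.jpg']

print(natural_sort(files))
```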
If you don't mind third party libraries, you can use natsort to achieve this.
>>> import natsort
>>> files = ['PresserInc-1.jpg', 'PresserInc-1_10.jpg', 'PresserInc-1_11.jpg', 'PresserInc-10.jpg', 'PresserInc-2.jpg', 'PresserInc-3.jpg', 'PresserInc-4.jpg', 'PresserInc-5.jpg', 'PresserInc-6.jpg', 'PresserInc-11.jpg']
>>> natsort.natsorted(files)
['PresserInc-1.jpg',
'PresserInc-1_10.jpg',
'PresserInc-1_11.jpg',
'PresserInc-2.jpg',
'PresserInc-3.jpg',
'PresserInc-4.jpg',
'PresserInc-5.jpg',
'PresserInc-6.jpg',
'PresserInc-10.jpg',
'PresserInc-11.jpg']
How do I use the new Python 3 "f-string" feature to output the first 50 decimal digits of math.pi?
We can achieve it in the following 2 old ways:
1 ("%.50f" % math.pi)
2 '{:.50f}'.format(math.pi)
But for the new "f-string" feature, I know that we can write f"the value of pi is {math.pi}", but how do I limit it to the first 50 digits?
In [2]: ("%.50f"%math.pi)
Out[2]: '3.14159265358979311599796346854418516159057617187500'
Same formatting as with str.format, but with the expression (here the variable name) placed before the colon:
>>> import math
>>> f"{math.pi:.50f}"
'3.14159265358979311599796346854418516159057617187500'
>>> f"the value of pi is {math.pi:.50f}"
'the value of pi is 3.14159265358979311599796346854418516159057617187500'
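The precision itself can also be a variable: the format-spec part of an f-string may contain nested braces, which helps when the number of digits isn't fixed. A small sketch (the variable name digits is mine):

```python
import math

digits = 5  # chosen arbitrarily for the example
print(f"{math.pi:.{digits}f}")  # 3.14159

# Caveat: a float only carries about 17 significant decimal digits, so
# most of the 50 digits printed by f"{math.pi:.50f}" are artifacts of
# binary floating point, not true digits of pi.
```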
I'm working on migrating my website from the Bing Azure API (v2) to the new Bing v5 Search API.
In the old API, an object used "__next" to indicate whether there was anything after it.
But the new API's JSON no longer returns this.
I'm upgrading my pagination and I don't know how to do it without this element.
Does anyone know what replaces it in the new API?
I can't find any information in their migration guide or in the new v5 API guide.
Thanks.
John is right. You use the count and offset params in conjunction with the totalEstimatedMatches value from the JSON of the first response.
Example: imagine you love rubber-duckies so much that you want every single webpage in existence that contains the term 'rubber-ducky.' Tough luck; that's not how the Internet works. Don't despair, though: Bing knows about a lot of webpages containing 'rubber-ducky,' and all you need to do is paginate through the 'rubber-ducky'-related sites Bing knows about and rejoice.
First, we need to tell the API that we want "some" results by passing 'rubber-ducky' to it (the value of "some" is defined by the count param; 50 is the max).
Next, we'll need to look in the first JSON object returned; this will tell us how many 'rubber-ducky' sites that Bing knows about in a field called totalEstimatedMatches.
Since we have an insatiable hunger for rubber-ducky-related websites, we're going to set up a while-loop that alternates between querying and incrementing offset, and that does not stop until offset comes within count of totalEstimatedMatches.
Here's some python code for clarification:
>>> import SomeMagicalSearcheInterfaceThatOnlyNeeds3Params as Searcher
>>>
>>> SearcherInstance = Searcher()
>>> SearcherInstance.q = 'rubber-ducky'
>>> SearcherInstance.count = 50
>>> SearcherInstance.offset = 0
>>> SearcherInstance.totalEstimatedMatches = 0
>>>
>>> print(SearcherInstance.preview_URL)
'https://api.cognitive.microsoft.com/bing/v5.0/images/search?q=rubber%2Dducky&count=50&offset=0'
>>>
>>> json_return_object = SearcherInstance.search_2_json()
>>>
>>> ## Python just treats JSON as nested dictionaries.
>>> tem = json_return_object['webPages']['totalEstimatedMatches']
>>> print(tem)
9500000
>>> num_links_returned = len(json_return_object['webPages']['value'])
>>> print(num_links_returned)
50
>>>
>>> ## We'll set some vals manually then make our while loop.
>>> SearcherInstance.offset += num_links_returned
>>> SearcherInstance.totalEstimatedMatches = tem
>>>
>>> a_dumb_way_to_store_this_much_data = []
>>>
>>> while SearcherInstance.offset < SearcherInstance.totalEstimatedMatches:
...     json_response = SearcherInstance.search_2_json()
...     a_dumb_way_to_store_this_much_data.append(json_response)
...
...     actual_count = len(json_response['webPages']['value'])
...     SearcherInstance.offset += min(SearcherInstance.count, actual_count)
Hope this helps a bit.
You should read the totalEstimatedMatches value the first time you call the API, then use the &count and &offset parameters to page through the results as described here: https://msdn.microsoft.com/en-us/library/dn760787.aspx.
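The count/offset walk both answers describe reduces to simple arithmetic. Here is a minimal sketch (the generator name is mine, and a real client should also re-check how many results each page actually returned, since totalEstimatedMatches is only an estimate):

```python
def page_offsets(total_estimated_matches, count=50):
    """Yield the &offset value to send with each successive request."""
    offset = 0
    while offset < total_estimated_matches:
        yield offset
        offset += count  # advance by one page of results

# e.g. 120 estimated matches at 50 results per page -> three requests
print(list(page_offsets(120, 50)))  # [0, 50, 100]
```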
Hello, I have a little Flask application that I am trying to develop.
I have a Boat class with boat_name and length attributes that I query on.
Boat.query.filter_by(numberofcabins=numberofcabins)
query works and I get what I'm looking for.
But
db.session.query(Boat).filter_by(length=length)
or
Boat.query.filter_by(length=length)
does not return any result.
I don't know what my mistake is.
Boat.query.filter_by(length=length)
requires an .all() appended to the end.
>>> type(Boat.query.filter_by(length=length))
<class 'flask_sqlalchemy.BaseQuery'>
>>> type(Boat.query.filter_by(length=length).all())
<type 'list'>
Documentation for .all
Documentation for .filter_by