Hello, I have a little Flask application that I am trying to develop.
I have a Boat class, and it has boat_name and length columns that I query.
Boat.query.filter_by(numberofcabins=numberofcabins)
That query works and I get what I am looking for.
But
db.session.query(Boat).filter_by(length = length)
or
Boat.query.filter_by(length=length)
queries do not return any results.
I don't know what my mistake is.
Boat.query.filter_by(length=length)
requires .all() appended to the end; filter_by on its own only builds a query object, it does not execute it.
>>> type(Boat.query.filter_by(length=length))
<class 'flask_sqlalchemy.BaseQuery'>
>>> type(Boat.query.filter_by(length=length).all())
<class 'list'>
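So, a minimal sketch (length here is whatever value you are filtering on):
>>> boats = Boat.query.filter_by(length=length).all()     # list of matching Boat rows
>>> boat = Boat.query.filter_by(length=length).first()    # a single Boat, or None
Nothing hits the database until you call .all(), .first(), or iterate over the query.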
Documentation for .all
Documentation for .filter_by
Hi, I have a list of places:
>>> places = ['NYC','PA', 'SF', 'Vienna', 'Berlin', 'Amsterdam']
I temporarily sort it with
>>> sorted(places)
And finally I want to sort the sorted(places) list in reverse alphabetical order.
Is this ever correct?
>>> sorted(places).reverse()
I thought it was, but Python 3 says the list is None.
Thank you
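The reason sorted(places).reverse() gives you None: list.reverse() reverses the list in place and returns None, so the sorted copy is thrown away. A quick check in the interpreter:
>>> result = sorted(places).reverse()
>>> print(result)
None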
Here are a few examples of how you can do it:
your list:
places = ['NYC','PA', 'SF', 'Vienna', 'Berlin', 'Amsterdam']
using the sort method with the reverse parameter (this sorts places in place):
places.sort(reverse=True)
print(places)
using sorted and then the reverse method on a new variable, since reverse works in place and needs a list to hold on to:
testlist = sorted(places)
testlist.reverse()
print(testlist)
using the sorted function with the reverse parameter:
testlist1 = sorted(places, reverse=True)
print(testlist1)
or directly, without a variable:
print(sorted(places, reverse=True))
I am working (for the first time) on scraping a website. I am trying to pull the latitude (in decimal degrees) from a page. I have managed to pull out the correct parent node that contains the information, but I am stuck on how to extract the actual number from it. All of the searching I have done has only told me how to pull it out if I already know the string (which I don't) or if the string is in a child node, which it isn't. Any help would be great.
Here is my code:
a_string = soup.find(string="Latitude in decimal degrees")
a_string.find_parents("p")
Out[46]: [<p><b>Latitude in decimal degrees</b><font size="-2">
(<u>see definition</u>)
</font><b>:</b> 35.7584895</p>]
test = a_string.find_parents("p")
print(test)
[<p><b>Latitude in decimal degrees</b><font size="-2"> (<u>see definition</u>)</font>
<b>:</b> 35.7584895</p>]
I need to pull out the 35.7584895 and save it as an object so I can append it to a dataset.
I am using Beautiful Soup 4 and Python 3.
The first thing to notice is that, since you have used the find_parents method (plural), test is a list. You need only the first item of it.
I will simulate your situation by doing this.
>>> import bs4
>>> HTML = '<p><b>Latitude in decimal degrees</b><font size="-2"> (<u>see definition</u>)</font><b>:</b> 35.7584895</p>'
>>> item_soup = bs4.BeautifulSoup(HTML, 'lxml')
The simplest way of recovering the textual content is this:
>>> item_soup.text
'Latitude in decimal degrees (see definition): 35.7584895'
However, you want the number. You can get at it in various ways, two of which come to mind. I assign the result of the previous statement to a variable (called text here; naming it str would shadow the built-in) so that I can manipulate it.
>>> text = item_soup.text
One way is to search for the last colon.
>>> text[text.rfind(':') + 1:].strip()
'35.7584895'
The other is to use a regex (importing re directly, rather than reaching for the copy that bs4 happens to expose).
>>> import re
>>> re.search(r'(\d+\.\d+)', text).group(1)
'35.7584895'
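Putting it together with your original soup, the structure shown in your output suggests something like this (a sketch; it assumes the colon-then-number layout holds for every page you scrape):
>>> p_tag = a_string.find_parents("p")[0]           # first (and only) parent <p>
>>> latitude = float(p_tag.text.rsplit(':', 1)[1])  # text after the last colon
>>> latitude
35.7584895
float() both strips the surrounding whitespace and gives you a number you can append to your dataset.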
This is a very strange error, to my mind.
import numpy as np
import pandas as pd
df = pd.DataFrame({'head': [1, 1, 2, 2, 1, 3, 2, 3, 1, 1, 1, 2, 3, 3],
                   'appraisal': [1, 2, 1, 3, 1, 4, 1, 5, 1, 1, 2, 3, 4, 5]})
then
df.loc[df.head, 'appraisal'].mean()
and
TypeError: cannot do slice indexing on <class 'pandas.indexes.range.RangeIndex'> with these indexers
But if I change 'head' to, for example, 'head_id', it works correctly:
df = pd.DataFrame({'head_id': [1, 1, 2, 2, 1, 3, 2, 3, 1, 1, 1, 2, 3, 3],
                   'appraisal': [1, 2, 1, 3, 1, 4, 1, 5, 1, 1, 2, 3, 4, 5]})
df.loc[df.head_id, 'appraisal'].mean()
2.0
What's wrong?
head is a DataFrame method in pandas, so df.head returns the method itself, not your column; you need [] indexing. The same is true of df.sum or df.min, but df['sum'] and df['min'] work:
df.loc[df['head'], 'appraisal'].mean()
# if you rename the column to something non-conflicting, e.g. 'head1', attribute access works:
df.loc[df.head1, 'appraisal'].mean()
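A quick way to see the name collision:
>>> type(df.head)       # the bound method wins over the column
<class 'method'>
>>> type(df['head'])    # bracket indexing always reaches the column
<class 'pandas.core.series.Series'>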
From Attribute Access in the docs:
You can use this access only if the index element is a valid python identifier, e.g. s.1 is not allowed. See here for an explanation of valid identifiers.
The attribute will not be available if it conflicts with an existing method name, e.g. s.min is not allowed.
Similarly, the attribute will not be available if it conflicts with any of the following list: index, major_axis, minor_axis, items, labels.
In any of these cases, standard indexing will still work, e.g. s['1'], s['min'], and s['index'] will access the corresponding element or column.
The Series/Panel accesses are available starting in 0.13.0.
I'm working on the migration of my website from the Bing Azure API (v2) to the new Bing V5 Search API.
With the old API, the response used a "__next" field to tell whether there was anything after the current object or not.
But the new API's JSON no longer returns this.
I'm working on upgrading my pagination, and I don't know how to do it without this element.
Does anyone know what replaces it in the new API?
I can't find any information in their migration guide or in the new V5 API guide.
Thanks.
John is right. You use the count and offset params in conjunction with the totalEstimatedMatches value in the JSON of the first response.
Example: Imagine you love rubber-duckies so much that you want every single webpage in existence that contains the term 'rubber-ducky.' WELL TOUGH LUCK, BC THAT'S NOT HOW THE INTERNET WORKS. Don't despair yet, however: Bing knows a lot about webpages containing 'rubber-ducky,' and all you'll need to do is paginate through the 'rubber-ducky'-related sites that Bing knows about and rejoice.
First, we need to tell the API that we want "some" results by passing 'rubber-ducky' to it (the value of "some" is set by the count param; 50 is the max).
Next, we'll need to look in the first JSON object returned; this will tell us how many 'rubber-ducky' sites Bing knows about, in a field called totalEstimatedMatches.
Since we have an insatiable hunger for rubber-ducky-related websites, we're going to set up a while loop that alternates between querying and incrementing offset, and that does not stop until offset has closed the gap with totalEstimatedMatches.
Here's some Python code for clarification:
>>> import SomeMagicalSearchInterfaceThatOnlyNeeds3Params as Searcher
>>>
>>> SearcherInstance = Searcher()
>>> SearcherInstance.q = 'rubber-ducky'
>>> SearcherInstance.count = 50
>>> SearcherInstance.offset = 0
>>> SearcherInstance.totalEstimatedMatches = 0
>>>
>>> print(SearcherInstance.preview_URL)
https://api.cognitive.microsoft.com/bing/v5.0/images/search?q=rubber%2Dducky&count=50&offset=0
>>>
>>> json_return_object = SearcherInstance.search_2_json()
>>>
>>> ## Python just treats JSON as nested dictionaries.
>>> tem = json_return_object['webPages']['totalEstimatedMatches']
>>> print(tem)
9500000
>>> num_links_returned = len(json_return_object['webPages']['value'])
>>> print(num_links_returned)
50
>>>
>>> ## We'll set some values manually, then write our while loop.
>>> SearcherInstance.offset += num_links_returned
>>> SearcherInstance.totalEstimatedMatches = tem
>>>
>>> a_dumb_way_to_store_this_much_data = []
>>>
>>> while SearcherInstance.offset < SearcherInstance.totalEstimatedMatches:
...     json_response = SearcherInstance.search_2_json()
...     a_dumb_way_to_store_this_much_data.append(json_response)
...     # Bing may return fewer than `count` links per page, so advance
...     # offset by the number of links we actually received.
...     actual_count = len(json_response['webPages']['value'])
...     SearcherInstance.offset += min(SearcherInstance.count, actual_count)
Hope this helps a bit.
You should read the totalEstimatedMatches value the first time you call the API, then use the &count and &offset parameters to page through the results as described here: https://msdn.microsoft.com/en-us/library/dn760787.aspx.
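For illustration, a minimal sketch of that loop using the requests library might look like the following; the endpoint and header name are the standard ones for the v5 web search API, and YOUR_KEY is a placeholder for your subscription key:

import requests

ENDPOINT = "https://api.cognitive.microsoft.com/bing/v5.0/search"
HEADERS = {"Ocp-Apim-Subscription-Key": "YOUR_KEY"}  # placeholder subscription key

def iter_results(query, count=50):
    """Yield web page results, paging with count/offset until
    offset catches up with totalEstimatedMatches."""
    offset, total = 0, None
    while total is None or offset < total:
        params = {"q": query, "count": count, "offset": offset}
        data = requests.get(ENDPOINT, headers=HEADERS, params=params).json()
        pages = data.get("webPages", {})
        if total is None:
            total = pages.get("totalEstimatedMatches", 0)
        links = pages.get("value", [])
        if not links:  # no more results, even if the estimate says otherwise
            break
        for link in links:
            yield link
        offset += len(links)  # advance by what we actually received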
I am following the Intro to Spark course on edX. However, I can't understand a few things; the following is from a lab assignment. FYI, I am not looking for the solution.
I am not able to understand why I am receiving this error:
TypeError: 'Column' object is not callable
Following is the code:
from pyspark.sql.functions import regexp_replace, trim, col, lower

def removePunctuation(column):
    """
    Args:
        column (Column): A Column containing a sentence.
    """
    # The next line is what raises the error. I believe I am selecting all the
    # rows from the DataFrame 'column' where the attribute is named 'sentence'.
    result = column.select('sentence')
    return result

sentenceDF = sqlContext.createDataFrame([('Hi, you!',),
                                         (' No under_score!',),
                                         (' * Remove punctuation then spaces * ',)], ['sentence'])
sentenceDF.show(truncate=False)

(sentenceDF
    .select(removePunctuation(col('sentence')))
    .show(truncate=False))
Can you elaborate a little? TIA.
The column parameter is not a DataFrame object and, therefore, does not have access to the select method. You'll need to use other functions to solve this problem.
Hint: Look at the import statement.
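To illustrate the distinction without giving the lab answer away: the imported functions each take a Column expression and return a new Column expression, which you then hand to the DataFrame's select. A tiny sketch using only trim and lower (the full lab needs more than this):

from pyspark.sql.functions import trim, lower, col

expr = lower(trim(col('sentence')))   # a Column expression, not data
sentenceDF.select(expr).show(truncate=False)

Your removePunctuation should similarly build and return a Column from its column argument, rather than calling DataFrame methods like select on it.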