I'm writing an API request to query OpenWeatherMap's API for weather data. I am using a city_id number to request a unique place in the world. A successful API query looks like this:
r = requests.get('http://api.openweathermap.org/data/2.5/group?APPID=333de4e909a5ffe9bfa46f0f89cad105&id=4456703&units=imperial')
The key part of this is 4456703, which is a unique city_ID.
I want the user to choose a few cities; I'll then look up each city's city_ID in a JSON file and supply the city_IDs to the API request.
I can add multiple city_IDs by hard-coding them. I can also add city_IDs as variables. But what I can't figure out is: if users choose an arbitrary number of cities (could be up to 20), how can I insert them into the API request? I've tried adding lists and tuples via several iterations of something like...
#assume the user already chose 3 cities, city_ids are below
city_ids = [763942, 539671, 334596]
r = requests.get(f'http://api.openweathermap.org/data/2.5/group?APPID=333de4e909a5ffe9bfa46f0f89cad105&id={city_ids}&units=imperial')
Maybe a list is not the right data type to use?
Successful code would look something like...
r = requests.get(f'http://api.openweathermap.org/data/2.5/group?APPID=333de4e909a5ffe9bfa46f0f89cad105&id={city_id1},{city_id2},{city_id3}&units=imperial')
Except, as I stated previously, the user could choose 3 cities or 10, so that part would have to be updated dynamically.
You can use str.join() and a list comprehension to combine all the IDs in the list into a single string, and then format that into the API URL, as follows:
city_ids_list = [763942, 539671, 334596]
city_ids_string = ','.join([str(city) for city in city_ids_list]) # Would output "763942,539671,334596"
r = requests.get('http://api.openweathermap.org/data/2.5/group?APPID=333de4e909a5ffe9bfa46f0f89cad105&id={city_ids}&units=imperial'.format(city_ids=city_ids_string))
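As an aside (an addition beyond the answer above), you can also let requests build the query string for you via its params argument instead of assembling the URL by hand; the API key below is a placeholder, not a real key:

import requests

city_ids_list = [763942, 539671, 334596]
params = {
    'APPID': 'YOUR_API_KEY',  # placeholder: substitute your own key
    'id': ','.join(str(city) for city in city_ids_list),
    'units': 'imperial',
}
# requests URL-encodes the values and appends them as ?APPID=...&id=...&units=...
r = requests.get('http://api.openweathermap.org/data/2.5/group', params=params)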
Hope it helps,
good luck!
I am looking at pages that are structured in the following way, though the exact elements may not be a table. In general, there are key-value pairs where the number of keys is limited to at most 3 per page (but not necessarily in a particular order), and the keys vary from page to page (I otherwise have no way to know what all of the keys may be without pre-scraping every possible page). Also, there should not be repeats of a key on the same page (e.g., A -> 1, B -> 2, A -> 3). I have no issues isolating the keys and values from the page using XPath, just with storing and exporting the values from my Spider.
Approach 1
If I use the dictionary approach with something like this pseudocode:
for th, td in table:
    item[th.text()] = td.text()
Then the result would only show values for A, B, C because those values exist in the first page processed and only the headers and values for the first request are maintained.
Approach 2
If I use the scrapy.item.Item() and scrapy.item.Field() approach with something like this:
class MyItem(Item):
    A = Field()
    B = Field()
    C = Field()
Then I have no way of declaring Fields for the unknown keys (shown as ...). And I'll receive a KeyError when trying to set a value for an undeclared field (either directly or using ItemLoader.add_value()).
I am using Python 3.8 and Scrapy 2.4.1.
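One workaround worth sketching (my addition, not from the post): Scrapy spiders can yield plain dicts instead of Item instances, which sidesteps the pre-declared Field()s entirely; the spider name, URL, and selectors below are hypothetical.

import scrapy

class TableSpider(scrapy.Spider):
    name = 'tables'  # hypothetical spider name
    start_urls = ['https://example.com/page1']  # hypothetical URL

    def parse(self, response):
        item = {}
        # Pair each key cell with its value cell; the XPath here is an assumption.
        for row in response.xpath('//table//tr'):
            key = row.xpath('./th/text()').get()
            value = row.xpath('./td/text()').get()
            if key:
                item[key.strip()] = value
        yield item  # plain dicts need no declared fields, so unknown keys are fine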
I have a Dataframe, which has a bunch of ID name pairs in it. I create it by doing the following:
market_df = pd.DataFrame(markets_info['markets'])
market_df.astype(dict(id=int, name=str))
I received ID numbers from a process and I need to grab the name associated with each ID. I have tried creating an index on the ID and then parsing it, but that doesn't seem to set the ID correctly.
I am now trying the following: exch_name = MARKET_IDS.loc[MARKET_IDS['id'] == exchange_id, 'name']
I have verified that exchange_id is also of type int.
What am I missing here?
I don't know if this is because you left out some crucial information, but from what it sounds like in your post, you're not really altering market_df at all: your second line is not an assignment. It should read market_df = market_df.astype(dict(id=int, name=str))
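For illustration, a minimal sketch with hypothetical market data showing the corrected assignment plus the ID-to-name lookup:

import pandas as pd

# Hypothetical stand-in for markets_info['markets']
markets_info = {'markets': [{'id': 1, 'name': 'NYSE'}, {'id': 2, 'name': 'NASDAQ'}]}

market_df = pd.DataFrame(markets_info['markets'])
market_df = market_df.astype(dict(id=int, name=str))  # assign the result back

exchange_id = 2
# .loc returns a Series of matching names; take the first match
exch_name = market_df.loc[market_df['id'] == exchange_id, 'name'].iloc[0]
print(exch_name)  # NASDAQ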
I have this class in my model:
I want to get the agencys values, which is a many-to-many field on this class, and store them in a list or array. Agency stores agency_id together with the id of my class in a separate table.
Agency has its own table as well.
class GPSpecial(BaseModel):
    hotel = models.ForeignKey('Hotel')
    rooms = models.ManyToManyField('Room')
    agencys = models.ManyToManyField('Agency')
You can make it a bit more compact by using the flat=True parameter:
agencys_spe = list(GPSpecial.objects.values_list('agencys', flat=True))
The list(..) part is not necessary: without it, you have a QuerySet that contains the ids, and the query is postponed. By using list(..) we force the data into a list (and the query is executed).
It is possible that multiple GPSpecial objects have a common Agency; in that case it will be repeated. We can use the .distinct() function to prevent that:
agencys_spe = list(GPSpecial.objects.values_list('agencys', flat=True).distinct())
If, however, you are interested in the Agency objects themselves, for example those of GPSpecials that satisfy a certain predicate, it is better to query the Agency objects directly, like for example:
agencies = Agency.objects.filter(gpspecial__is_active=True).distinct()
will produce all Agency objects for which a GPSpecial object exists where is_active is set to True.
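As a small usage sketch building on that line (my addition; it reuses the models shown above):

# Distinct Agency objects linked to at least one active GPSpecial,
# with their primary keys collected into a plain list.
agencies = Agency.objects.filter(gpspecial__is_active=True).distinct()
agency_ids = list(agencies.values_list('pk', flat=True))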
I think I found the answer to my question:
agencys_sp = GPSpecial.objects.filter(agencys=32, is_active=True).values_list('agencys')
agencys_spe = [i[0] for i in agencys_sp]
I have a large set of Windows event logs in which I am attempting to find a unique listing of users from a single field in a single event ID. This runs, but takes an extremely long time. How would you use the Python elasticsearch-dsl and elasticsearch-py libraries to accomplish this?
es = Elasticsearch([localhostmines], timeout=30)
s = Search(using=es, index="logindex-*").filter('term', EventID="4624")

users = set()
for hit in s.scan():
    users.add(hit.TargetUserName)

print(users)
The TargetUserName field contains names as strings; the EventID field contains Windows event IDs as strings.
You need to use a terms aggregation, which will do exactly what you expect.
s = Search(using=es, index="logindex-*").filter('term', EventID="4624")
s.aggs.bucket('per_user', 'terms', field='TargetUserName')
response = s.execute()

for user in response.aggregations.per_user.buckets:
    print(user.key, user.doc_count)
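One caveat to add (not in the original answer): a terms aggregation returns only its top 10 buckets by default, so with many distinct users you may want to raise the size parameter and skip fetching hits altogether:

s = Search(using=es, index="logindex-*").filter('term', EventID="4624")
s = s.extra(size=0)  # no hits needed, only the aggregation
s.aggs.bucket('per_user', 'terms', field='TargetUserName', size=10000)
response = s.execute()

users = {bucket.key for bucket in response.aggregations.per_user.buckets}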
With Django/Haystack/SOLR, I'd like to be able to restrict the result of a search to those records within a particular range of django_ids. Getting these IDs is not a problem, but trying to filter by them produces some unexpected effects. The code looks like this (extraneous code trimmed for clarity):
def view_results(request, arg):
    # django_ids list is first calculated using arg...
    sqs = SearchQuerySet().facet('example_facet')  # STEP_1
    sqs = sqs.filter(django_id__in=django_ids)  # STEP_2
    view = search_view_factory(
        view_class=SearchView,
        template='search/search-results.html',
        searchqueryset=sqs,
        form_class=FacetedSearchForm
    )
    return view(request)
At the point marked STEP_1 I get all the database records. At STEP_2 the records are successfully narrowed down to the number I'd expect for that list of django_ids. The problem comes when the search results are displayed in cases where the user has specified a search term in the form. Rather than returning all records from STEP_2 which match the term, I get all records from STEP_2 plus all from STEP_1 which match the term.
Presumably, therefore, I need to override one or more of the methods of SearchView in haystack/views.py, but which? Can anyone suggest a means of achieving what is required here?
After a bit more thought, I found a way around this. In the code above, the problem was occurring in the view = search_view_factory... line, so I needed to create my own SearchView class and override the get_results(self) method in order to apply the filtering after the search has been run with the user's search terms. The result is code along these lines:
class MySearchView(SearchView):
    def get_results(self):
        search = self.form.search()
        # The ID I need for the database search is at the end of the URL,
        # but this may have some search parameters on and need cleaning up.
        view_id = self.request.path.split("/")[-1]
        view_query = MyView.objects.filter(id=view_id.split("&")[0])
        # At this point the django_ids of the required objects can be found.
        if len(view_query) > 0:
            view_item = view_query[0]
            django_ids = []
            for thing in view_item.things.all():
                django_ids.append(thing.id)
            search = search.filter_and(django_id__in=django_ids)
        return search
Using search.filter_and rather than search.filter at the end was another thing which turned out to be essential, but which didn't do what I needed when the filtering was being performed before getting to the SearchView.
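For completeness (my addition, reusing the names from the question), the factory call from the original view then just points at the custom class; whether to keep passing a pre-built searchqueryset for the facet is left as in the question:

view = search_view_factory(
    view_class=MySearchView,  # the subclass defined above
    template='search/search-results.html',
    form_class=FacetedSearchForm
)
return view(request)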