In Django REST Framework, I have a POST API that creates a City object.
However, it also receives a list of stadium ids.
What is the best way to validate that each of the ids is a valid stadium id (present and not deleted in the Stadiums table)?
First, put the list of ids into a set to remove possible duplicates:
ids = set(ids)
Then filter the stadiums by these ids:
stadiums = Stadium.objects.filter(id__in=ids)
If some of the stadiums are not present in the database, the number of matching stadiums will be smaller than the size of ids:
if len(ids) != len(stadiums):
    # Handle the error
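If it helps, here is a minimal sketch of how that check could sit inside a DRF serializer. The field name stadium_ids, the is_deleted soft-delete flag, and the import path are assumptions about your schema, so adjust them to match your models:
from rest_framework import serializers

from myapp.models import City, Stadium  # assumed import path

class CitySerializer(serializers.ModelSerializer):
    # write-only list of stadium ids sent alongside the city payload (assumed field name)
    stadium_ids = serializers.ListField(child=serializers.IntegerField(), write_only=True)

    class Meta:
        model = City
        fields = ['name', 'stadium_ids']  # adjust to your actual City fields

    def validate_stadium_ids(self, value):
        ids = set(value)  # drop duplicate ids before counting
        # is_deleted is an assumed flag; use whatever marks a deleted Stadium in your table
        found = Stadium.objects.filter(id__in=ids, is_deleted=False).count()
        if found != len(ids):
            raise serializers.ValidationError("One or more stadium ids are invalid or deleted.")
        return value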
I have a command named demand and need to limit the number of demands per user. There are roles in the server named "team_role", and there are also "2 demands", "1 demand", and "0 demands" roles. After a user demands, I need the bot to move them down one demand role until they eventually hit 0. Once they hit 0, it should send them a message saying they can't demand. Here's my code for one of the teams (the Dallas Cowboys).
@bot.command(aliases=["<:DallasCowboys:788796627161710592>"])
@commands.has_any_role("Dallas Cowboys")
async def t(ctx):
    guild = bot.get_guild(766292887914151949)
    role_name = discord.utils.get(guild.roles, name='Free Agent')
    role = discord.utils.get(guild.roles, name='Dallas Cowboys')
    embed = discord.Embed()
    embed.add_field(name="<a:CheckMark:768095274949935146> Successful Demand", value=f"{ctx.author.mention} has demanded from the <:DallasCowboys:788796627161710592>")
    await ctx.send(embed=embed)
    await ctx.author.add_roles(role_name)
    await ctx.author.remove_roles(role)
Since you said I can just provide the logic:
Create a means of persistently storing the data, whether that be a JSON file or a database.
a) If you choose the JSON file, the file will consist of key-value pairs mapping user ids to the number of demands remaining. For example, the file might look something like this:
{ "UserDemandsRemaining": { "12345": 2, "45678": 1, "09876": 0 } }
b) If you choose to use a SQL database, create a table called UserDemandsRemaining with two fields: UserId and DemandsRemaining. The primary key would be UserId.
Each time a user makes a demand, you will check how many remaining
demands the user has.
If there is no matching UserId stored:
a) If you chose to use a JSON file, and there is no matching key in the object associated with the UserDemandsRemaining key, then create a key-value pair with the key as the UserId that made the demand and the value as 3.
b) If you chose to use a SQL db, and the UserDemandsRemaining table has no row matching the UserId that made the demand, then insert a row containing that UserId and a DemandsRemaining value of 3.
Since the user isn't stored, we know that they have not yet made any demands. Execute the demand, then decrement DemandsRemaining by 1.
Else, if there is a matching UserId:
    if DemandsRemaining > 0:
        # Execute the command. Decrement DemandsRemaining by 1.
    else:
        # Notify the user that they have no more remaining demands.
To reset the number of demands each person has remaining, just set the value associated with each user in the JSON file, or the DemandsRemaining column in the table, back to 3 for everyone.
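Here is a minimal sketch of the JSON-file approach described above. The file name demands.json, the helper names, and the starting allowance of 3 are assumptions for illustration:
import json

DEMANDS_FILE = "demands.json"  # assumed file name

def load_demands():
    # Return the stored data, or an empty structure on first run
    try:
        with open(DEMANDS_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return {"UserDemandsRemaining": {}}

def save_demands(data):
    with open(DEMANDS_FILE, "w") as f:
        json.dump(data, f)

def try_demand(user_id):
    # Return True if the demand is allowed, decrementing the remaining count
    data = load_demands()
    remaining = data["UserDemandsRemaining"]
    key = str(user_id)          # JSON object keys are strings
    if key not in remaining:
        remaining[key] = 3      # first demand: start from the full allowance
    if remaining[key] > 0:
        remaining[key] -= 1
        save_demands(data)
        return True
    return False
In the command, you would call try_demand(ctx.author.id) before doing the role changes, and send the "no demands remaining" message when it returns False.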
I have a Profile table with a huge number of rows. I was trying to filter out profiles based on super_category and account_id (these are fields on the Profile model).
Assume I have the lists of ids bulk_account_ids and super_categories.
list_of_ids = Profile.objects.filter(account_id__in=bulk_account_ids, super_category__in=super_categories).values_list('id', flat=True)
list_of_ids = list(list_of_ids)
SomeTask.delay(ids=list_of_ids)
This particular query is timing out when it gets evaluated on the second line.
Can I use .iterator() at the end of the query to optimize this, i.e. list(list_of_ids.iterator())? If not, what else can I do?
I have an API request I'm writing to query OpenWeatherMap's API to get weather data. I am using a city_id number to submit a request for a unique place in the world. A successful API query looks like this:
r = requests.get('http://api.openweathermap.org/data/2.5/group?APPID=333de4e909a5ffe9bfa46f0f89cad105&id=4456703&units=imperial')
The key part of this is 4456703, which is a unique city_ID.
I want the user to choose a few cities, then I'll look up each city_ID in a JSON file and supply it to the API request.
I can add multiple city_IDs by hard coding. I can also add city_IDs as variables. But what I can't figure out is, if users choose an arbitrary number of cities (could be up to 20), how can I insert these into the API request? I've tried adding lists and tuples via several iterations of something like...
#assume the user already chose 3 cities, city_ids are below
city_ids = [763942, 539671, 334596]
r = requests.get(f'http://api.openweathermap.org/data/2.5/group?APPID=333de4e909a5ffe9bfa46f0f89cad105&id={city_ids}&units=imperial')
Maybe a list is not the right data type to use?
Successful code would look something like...
r = requests.get(f'http://api.openweathermap.org/data/2.5/group?APPID=333de4e909a5ffe9bfa46f0f89cad105&id={city_id1},{city_id2},{city_id3}&units=imperial')
Except, as I stated previously, the user could choose 3 cities or 10 so that part would have to be updated dynamically.
You can use str.join() and a list comprehension to combine all the ids in the list into a single string, then format that into the API URL as follows:
city_ids_list = [763942, 539671, 334596]
city_ids_string = ','.join([str(city) for city in city_ids_list]) # Would output "763942,539671,334596"
r = requests.get('http://api.openweathermap.org/data/2.5/group?APPID=333de4e909a5ffe9bfa46f0f89cad105&id={city_ids}&units=imperial'.format(city_ids=city_ids_string))
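As a side note, requests can also assemble the query string for you via the params argument, which avoids formatting the URL by hand (a sketch using the same endpoint and key as above):
import requests

city_ids_list = [763942, 539671, 334596]
params = {
    'APPID': '333de4e909a5ffe9bfa46f0f89cad105',
    'id': ','.join(str(city) for city in city_ids_list),
    'units': 'imperial',
}
r = requests.get('http://api.openweathermap.org/data/2.5/group', params=params)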
Hope it helps, good luck!
Is there a way to get the index of the results within an AQL query?
Something like
FOR user IN Users sort user.age DESC RETURN {id:user._id, order:{index?}}
If you want to enumerate the result set and store these numbers in an attribute order, then this is possible with the following AQL query:
LET sorted_ids = (
    FOR user IN Users
        SORT user.age DESC
        RETURN user._key
)
FOR i IN 0..LENGTH(sorted_ids)-1
    UPDATE sorted_ids[i] WITH { order: i+1 } IN Users
    RETURN NEW
A subquery is used to sort users by age and return an array of document keys. Then a loop over a numeric range from the first to the last index of that array is used to iterate over its elements, which gives you the desired order value (minus 1) as the variable i. The current array element is a document key, which is used to update the user document with an order attribute.
The above query can be useful for a one-off computation of an order attribute. If your data changes a lot, however, it will quickly become stale, and you may want to move this to the client side.
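For the client-side variant, here is a minimal sketch assuming the python-arango driver and an existing database handle db (connection setup omitted):
# Enumerate the sorted result set on the client instead of storing an order attribute
cursor = db.aql.execute('FOR user IN Users SORT user.age DESC RETURN user._id')
results = [{'id': user_id, 'order': index + 1} for index, user_id in enumerate(cursor)]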
For a related discussion see AQL: Counter / enumerator
If I understand your question correctly - and feel free to correct me, this is what you're looking for:
FOR user IN Users
    SORT user.age DESC
    RETURN {
        id: user._id,
        order: user._key
    }
The _key is the primary key in ArangoDB.
If, however, you're looking for the order the data was entered in (chronological order), then you will have to set the key on your inserts and/or create a date/time attribute and filter using that.
Edit:
Upon doing some research, I believe this link might be of use to you for auto-incrementing the keys: https://www.arangodb.com/2013/03/auto-increment-values-in-arangodb/
OK, I have this class in my model:
I want to get the agencys values, which is a many-to-many field on this class, and store them in a list or array. The relation stores agency_id together with the id of my class in a separate table.
Agency has its own table as well.
class GPSpecial(BaseModel):
    hotel = models.ForeignKey('Hotel')
    rooms = models.ManyToManyField('Room')
    agencys = models.ManyToManyField('Agency')
You can make it a bit more compact by using the flat=True parameter:
agencys_spe = list(GPSpecial.objects.values_list('agencys', flat=True))
The list(..) part is not necessary: without it, you have a QuerySet that contains the ids, and the query is postponed. By using list(..) we force the data into a list (and the query is executed).
It is possible that multiple GPSpecial objects have a common Agency, in that case it will be repeated. We can use the .distinct() function to prevent that:
agencys_spe = list(GPSpecial.objects.values_list('agencys', flat=True).distinct())
If, however, you are interested in the Agency objects themselves, for example those of GPSpecials that satisfy a certain predicate, you are better off querying the Agency objects directly, for example:
agencies = Agency.objects.filter(gpspecial__is_active=True).distinct()
will produce all Agency objects for which a GPSpecial object exists where is_active is set to True.
I think I found the answer to my question:
agencys_sp = GPSpecial.objects.filter(agencys=32,is_active=True).values_list('agencys')
agencys_spe = [i[0] for i in agencys_sp]
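As the answer above notes, the same result can be written with flat=True, which makes the list comprehension unnecessary:
agencys_spe = list(GPSpecial.objects.filter(agencys=32, is_active=True).values_list('agencys', flat=True))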