I have been working on this as a side project for about two months, and for the life of me I cannot get the bot to create roles with permissions. Here is what I have.
levels={
"Admin":{"name":"ADMIN","hoist":"1","colour":"0x6F0A0A","permissions":""},
"Moderator":{"name":"WATCHMEN","hoist":"1","colour":"0xFF611A","permissions":"discord.permissions(manage_roles=True,kick_members=True,ban_members=True,create_instant_invite=True,mention_everyone=True,change_nickname=True,manage_nicknames=True,read_message_history,send_messages=True,embed_links=True,send_tts_messages,attach_files=True,external_emojis=True,add-reactions=True)"},
"Member":{"name":"MEMBER","hoist":"0","colour":"0x52D41A","permissions":"discord.permissions(send_messages=True)"},
"Verify":{"name":"VERIFY","hoist":"1","colour":"0xFFFFFF","permissions":"discord.permissions(send_messages=True)"},
}
and
async def cook_roles(ctx):
    for level in levels.keys():
        guild=ctx.guild
        name=levels[level]['name']
        hoist=levels[level]['hoist']
        colour=levels[level]['colour']
        # perms=levels[level]['permissions']
        if name == "Admin":
            perms=discord.Permissions.all()
        else:
            perms=discord.Permissions(levels[level]['permissions'])
        print=(perms)
        await ctx.send(perms)
        await guild.create_role(name=name,hoist=hoist,permissions=perms,colour=discord.Colour(int(colour,16)))
Any help is appreciated!
I tried taking away the discord.Permissions() and formatting perms like this:
perms=discord.Permissions(levels[level]['permissions'])
but that didn't work either. (I have tried a host of things and just haven't figured it out.)
Here is a traceback for the first provided answer:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/discord/ext/commands/core.py", line 83, in wrapped
ret = await coro(*args, **kwargs)
File "ny.py", line 483, in build
await cook_roles(ctx)
File "ny.py", line 551, in cook_roles
await guild.create_role(name=name,hoist=hoist,permissions=perms,colour=discord.Colour(int(colour,16)))
TypeError: int() can't convert non-string with explicit base
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/discord/ext/commands/bot.py", line 892, in invoke
await ctx.command.invoke(ctx)
File "/usr/local/lib/python3.6/dist-packages/discord/ext/commands/core.py", line 797, in invoke
await injected(*ctx.args, **ctx.kwargs)
File "/usr/local/lib/python3.6/dist-packages/discord/ext/commands/core.py", line 92, in wrapped
raise CommandInvokeError(exc) from exc
discord.ext.commands.errors.CommandInvokeError: Command raised an exception: TypeError: int() can't convert non-string with explicit base
You have string values in your dict, which are not real objects.
You can store objects of any type in a dict, so all you need to do, basically, is make the values in the dict actual objects with actual types.
You were also trying to use the permissions module instead of the Permissions class.
And always make sure the permission names exist: discord.Permissions (your mistake was some missing =True and add-reactions instead of add_reactions).
levels = {
"Admin": {"name": "ADMIN", "hoist": True, "colour": 0x6F0A0A, "permissions": discord.Permissions()},
"Moderator": {
"name": "WATCHMEN",
"hoist": True,
"colour": 0xFF611A,
"permissions": discord.Permissions(manage_roles=True,
kick_members=True,
ban_members=True,
create_instant_invite=True,
mention_everyone=True,
change_nickname=True,
manage_nicknames=True,
read_message_history=True,
send_messages=True,
embed_links=True,
send_tts_messages=True,
attach_files=True,
external_emojis=True,
add_reactions=True),
},
"Member": {
"name": "MEMBER",
"hoist": False,
"colour": 0x52D41A,
"permissions": discord.Permissions(send_messages=True),
},
"Verify": {
"name": "VERIFY",
"hoist": True,
"colour": 0xFFFFFF,
"permissions": discord.Permissions(send_messages=True),
},
}
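For completeness, here is a minimal sketch of cook_roles adapted to the corrected dict (my own adaptation, not your original loop). Note it branches on the dict key "Admin" rather than on name, since name holds "ADMIN" and your if name == "Admin" check could never match:

async def cook_roles(ctx):
    guild = ctx.guild
    for level, settings in levels.items():
        if level == "Admin":
            # Admin gets every permission
            perms = discord.Permissions.all()
        else:
            # The other roles already carry a Permissions object
            perms = settings["permissions"]
        await guild.create_role(
            name=settings["name"],
            hoist=settings["hoist"],  # already a bool
            permissions=perms,
            colour=discord.Colour(settings["colour"]),  # already an int, no int(colour, 16) needed
        )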
I am trying to create a PutMongo processor in nipyapi, connected to a local NiFi instance. I have specified all the required configurations, but it doesn't seem to work.
PutMongoFile = canvas.create_processor(
root_process_group,
processor_PutMongo,
(randrange(0,4000), randrange(0,4000)),
name=None,
config=processor_PutMongo_config)
I get the following error:
AttributeError Traceback (most recent call last)
<ipython-input-48-ef1b815cdbdb> in <module>
----> 1 PutMongoFile = canvas.create_processor(root_process_group,processor_PutMongo,(randrange(0,4000),randrange(0,4000)),name=None,config=processor_PutMongo_config)
~\AppData\Roaming\Python\Python38\site-packages\nipyapi\canvas.py in create_processor(parent_pg, processor, location, name, config)
503 """
504 if name is None:
--> 505 processor_name = processor.type.split('.')[-1]
506 else:
507 processor_name = name
AttributeError: 'list' object has no attribute 'type'
Appreciate any help!!!
We resolved this in an issue discussion on the repo here. The get_processor_type method is greedy by default and will return a list if more than one processor type is found, in this case finding both PutMongo and PutMongoRecord. I have updated the method to allow exact matching only and implemented better checks for this in the next release.
processor_PutMongo = canvas.get_processor_type('org.apache.nifi.processors.mongodb.PutMongo', identifier_type='name')
This returns a list of dictionaries containing details of both processors, PutMongo & PutMongoRecord.
This is the exact JSON you would get if you printed processor_PutMongo:
[
{
"bundle":{
"artifact":"nifi-mongodb-nar",
"group":"org.apache.nifi",
"version":"1.13.2"
},
"controller_service_apis":"None",
"deprecation_reason":"None",
"description":"Creates FlowFiles from documents in MongoDB loaded by a user-specified query.",
"explicit_restrictions":"None",
"restricted":false,
"tags":[
"read",
"get",
"mongodb"
],
"type":"org.apache.nifi.processors.mongodb.GetMongo",
"usage_restriction":"None"
},
{
"bundle":{
"artifact":"nifi-mongodb-nar",
"group":"org.apache.nifi",
"version":"1.13.2"
},
"controller_service_apis":"None",
"deprecation_reason":"None",
"description": "A record-based version of GetMongo that uses the Record writers to write the MongoDB result set.",
"explicit_restrictions":"None",
"restricted":false,
"tags":[
"mongo",
"get",
"fetch",
"record",
"json",
"mongodb"
],
"type":"org.apache.nifi.processors.mongodb.GetMongoRecord",
"usage_restriction":"None"
}
]
The shortest solution is to extract either the first or the second element of the list by index.
For example:
PutMongo = canvas.create_processor(new_processor_group, processor_PutMongo[0],(randrange(0,20000),randrange(0,20000)),name=None,config=processor_PutMongo_config)
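Indexing by position assumes you already know the order of the list, though. Since each entry carries a type attribute (as the traceback and the JSON above show), a slightly more defensive sketch is to pick the exact type name regardless of order:

matches = canvas.get_processor_type('org.apache.nifi.processors.mongodb.PutMongo', identifier_type='name')
if isinstance(matches, list):
    # keep only the exact type, not substring matches like PutMongoRecord
    processor_PutMongo = next(p for p in matches if p.type == 'org.apache.nifi.processors.mongodb.PutMongo')
else:
    processor_PutMongo = matches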
My goal is to restrict access to EC2 using a tag key. It works fine if I remove the condition from the IAM policy. However, if I add the aws:TagKeys condition, I get an UnauthorizedOperation error. I need some assistance in fixing either the IAM policy or the code to work with the tag key.
Here's the IAM policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeInstances",
"ec2:DescribeKeyPairs"
],
"Resource": "*",
"Condition": {
"ForAnyValue:StringEquals": {
"aws:TagKeys": "mytag"
}
}
}
]
}
Here's my python code:
import os
import boto3
import json
os.environ['AWS_DEFAULT_REGION'] = 'ap-south-1'
os.environ['AWS_ACCESS_KEY_ID'] = 'myacceskey'
os.environ['AWS_SECRET_ACCESS_KEY'] = 'secret'
def list_instances_by_tag_value(tagkey, tagvalue):
    # When passed a tag key, tag value this will return a list of InstanceIds that were found.
    ipdict={}
    ec2client = boto3.client('ec2')
    #response = ec2client.describe_key_pairs()
    #print(response)
    response = ec2client.describe_instances(
        Filters=[
            {
                'Name':'tag:' + tagkey,
                'Values':[tagvalue]
            }
        ]
    )
    client_dict = {}
    for reservation in (response["Reservations"]):
        print(reservation)

#boto3.set_stream_logger(name='botocore')
output = list_instances_by_tag_value("mytag", "abcd")
Here's the exception:
Traceback (most recent call last):
File "test.py", line 29, in <module>
output = list_instances_by_tag_value("mytag", "abcd")
File "test.py", line 20, in list_instances_by_tag_value
'Values':[tagvalue]
File "C:\python35\lib\site-packages\botocore\client.py", line 272, in _api_call
return self._make_api_call(operation_name, kwargs)
File "C:\python35\lib\site-packages\botocore\client.py", line 576, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation.
I have checked that tag keys are supported by DescribeInstances - https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstances.html
I also checked a couple of SO threads, after which I changed my action from Describe* to the very specific DescribeInstances.
But it's still not working for me.
Got it: Why does applying a condition to ec2:DescribeInstances in an IAM policy fail?
DescribeInstances does not support resource-level permissions, so a condition like this can never be satisfied and the call is denied.
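So the Condition block simply has to be removed for these Describe* actions; a minimal working policy sketch would be:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:DescribeKeyPairs"
            ],
            "Resource": "*"
        }
    ]
}

Any filtering by tag then has to happen on the client side, which the Filters argument to describe_instances in the code above already does.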
My code is the following
def write_cells(spreadsheet_id, update_data):
    updating = sheet_service.spreadsheets().values().\
        batchUpdate(spreadsheetId=spreadsheet_id, body=update_data)
    updating.execute()

spreadsheet_data = [
    {
        "deleteDimension": {
            "range": {
                "sheetId": sheet_id,
                "dimension": "ROWS",
                "startIndex": 5,
                "endIndex": 100
            }
        }
    }
]

update_spreadsheet_data = {
    'valueInputOption': 'USER_ENTERED',
    'data': spreadsheet_data
}

update_data = update_spreadsheet_data
write_cells(spreadsheet_id, update_data)
I get the following error message:
HttpError Traceback (most recent call last)
<ipython-input-64-0ba8756b8e85> in <module>()
----> 1 write_cells(spreadsheet_id, update_data)
2 frames
/usr/local/lib/python3.6/dist-packages/googleapiclient/http.py in execute(self, http, num_retries)
838 callback(resp)
839 if resp.status >= 300:
--> 840 raise HttpError(resp, content, uri=self.uri)
841 return self.postproc(resp, content)
842
HttpError: <HttpError 400 when requesting https://sheets.googleapis.com/v4/spreadsheets/1lAI8gp29luZDKAS1m3P62sq0kKCn8eaMUvO_M_J8meU/values:batchUpdate?alt=json returned "Invalid JSON payload received. Unknown name "delete_dimension" at 'data[0]': Cannot find field.">
I don't understand this: "Unknown name delete_dimension". I'm unable to resolve it. Any help is appreciated, thanks.
You want to delete rows using the Sheets API with Python.
If my understanding is correct, how about this modification?
Modification points:
To delete rows in a spreadsheet, use spreadsheets().batchUpdate() rather than spreadsheets().values().batchUpdate().
I think that spreadsheet_data itself is correct.
In this case, please modify the request body to {"requests": spreadsheet_data}.
Modified script:
def write_cells(spreadsheet_id, update_data):
    # Modified
    updating = sheet_service.spreadsheets().batchUpdate(
        spreadsheetId=spreadsheet_id, body=update_data)
    updating.execute()

spreadsheet_data = [
    {
        "deleteDimension": {
            "range": {
                "sheetId": sheet_id,
                "dimension": "ROWS",
                "startIndex": 5,
                "endIndex": 100
            }
        }
    }
]

update_spreadsheet_data = {"requests": spreadsheet_data}  # Modified

update_data = update_spreadsheet_data
write_cells(spreadsheet_id, update_data)
Note:
This modified script assumes that you are already able to use the Sheets API.
Reference:
DeleteDimensionRequest
If I misunderstood your question and that was not the result you want, I apologize.
I have the following code from a blog which gets the bitcoin price for today. I could access this Lambda function from the AWS Lex console and test the bot to get the price for today.
"""
Lexbot Lambda handler.
"""
from urllib.request import Request, urlopen
import json
def get_bitcoin_price(date):
    print('get_bitcoin_price, date = ' + str(date))
    request = Request('https://rest.coinapi.io/v1/ohlcv/BITSTAMP_SPOT_BTC_USD/latest?period_id=1DAY&limit=1&time_start={}'.format(date))
    request.add_header('X-CoinAPI-Key', 'E4107FA4-A508-448A-XXX')
    response = json.loads(urlopen(request).read())
    return response[0]['price_close']

def lambda_handler(event, context):
    print('received request: ' + str(event))
    date_input = event['currentIntent']['slots']['Date']
    btc_price = get_bitcoin_price(date_input)
    response = {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "SSML",
                "content": "Bitcoin's price was {price} dollars".format(price=btc_price)
            },
        }
    }
    print('result = ' + str(response))
    return response
But when I test the function from the AWS Lambda console, I get the following error:
Response:
{
"errorMessage": "'currentIntent'",
"errorType": "KeyError",
"stackTrace": [
[
"/var/task/lambda_function.py",
18,
"lambda_handler",
"date_input = event['currentIntent']['slots']['Date']"
]
]
}
Request ID:
"2488187a-2b76-47ba-b884-b8aae7e7a25d"
Function Logs:
START RequestId: 2488187a-2b76-47ba-b884-b8aae7e7a25d Version: $LATEST
received request: {'Date': 'Feb 22'}
'currentIntent': KeyError
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 18, in lambda_handler
date_input = event['currentIntent']['slots']['Date']
KeyError: 'currentIntent'
How do I test the function in the AWS Lambda console? For the 'lambda_handler' function, what format would 'event' and 'context' be? Also, what would the 'context' be here?
What should I pass as 'event' and 'context' in my case?
Your code is failing because the event object is filled in with {'Date': 'Feb 22'}, but your code expects much more than this. It therefore fails when it tries to parse this JSON by accessing currentIntent:
date_input = event['currentIntent']['slots']['Date']
You cannot pass any context to your Lambda when testing from the console, as it is automatically populated by AWS. Also, the context is only used on very specific occasions, so I would not worry about it for now.
You can, however, pass the event as an argument, and there are many ways to do it. The simplest way to do it manually is to go to AWS's Lambda console and click on Test; if you haven't configured any Test Event yet, a configuration screen will pop up.
On the dropdown, you can select a sample event template and AWS will fill it in for you.
You can then customise the event the way you want it.
Once you save it and click on Test, the event object will be populated with the provided JSON.
Another option is to check Sample Events Published By Event Sources, so you can simply grab any JSON event you'd like and tailor it accordingly.
I have grabbed the Lex Sample Event for you, which looks like this:
{
"messageVersion": "1.0",
"invocationSource": "FulfillmentCodeHook or DialogCodeHook",
"userId": "user-id specified in the POST request to Amazon Lex.",
"sessionAttributes": {
"key1": "value1",
"key2": "value2",
},
"bot": {
"name": "bot-name",
"alias": "bot-alias",
"version": "bot-version"
},
"outputDialogMode": "Text or Voice, based on ContentType request header in runtime API request",
"currentIntent": {
"name": "intent-name",
"slots": {
"slot-name": "value",
"slot-name": "value",
"slot-name": "value"
},
"confirmationStatus": "None, Confirmed, or Denied
(intent confirmation, if configured)"
}
}
Use that as your event and you'll be able to test it accordingly.
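For this particular handler, the only part of the sample the code actually reads is currentIntent.slots.Date, so even a trimmed-down test event like this sketch works (the intent name is a placeholder I made up; Lex resolves date slots to ISO format):

{
    "currentIntent": {
        "name": "GetBitcoinPrice",
        "slots": {
            "Date": "2019-02-22"
        }
    }
}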
I want to copy a set of files over from S3 and put them in the /tmp directory while my Lambda function is running, to use and manipulate the contents. The following code excerpt works fine on my PC (which is running Windows):
s3 = boto3.resource('s3')
BUCKET_NAME = 'car_sentiment'
keys = ['automated.csv', 'connected_automated.csv', 'connected.csv',
        'summary.csv']
for KEY in keys:
    try:
        local_file_name = 'tmp/'+KEY
        s3.Bucket(BUCKET_NAME).download_file(KEY, local_file_name)
    except botocore.exceptions.ClientError as e:
        if e.response['Error']['Code'] == "404":
            continue
        else:
            raise
However, when I try to run on AWS lambda, I get:
{
"errorMessage": "[Errno 2] No such file or directory: 'tmp/automated.csv.4Bcd0bB9'",
"errorType": "FileNotFoundError",
"stackTrace": [
[
"/var/task/SentimentForAWS.py",
28,
"my_handler",
"s3.Bucket(BUCKET_NAME).download_file(KEY, local_file_name)"
],
[
"/var/runtime/boto3/s3/inject.py",
246,
"bucket_download_file",
"ExtraArgs=ExtraArgs, Callback=Callback, Config=Config)"
],
[
"/var/runtime/boto3/s3/inject.py",
172,
"download_file",
"extra_args=ExtraArgs, callback=Callback)"
],
[
"/var/runtime/boto3/s3/transfer.py",
307,
"download_file",
"future.result()"
],
[
"/var/runtime/s3transfer/futures.py",
73,
"result",
"return self._coordinator.result()"
],
[
"/var/runtime/s3transfer/futures.py",
233,
"result",
"raise self._exception"
],
[
"/var/runtime/s3transfer/tasks.py",
126,
"__call__",
"return self._execute_main(kwargs)"
],
[
"/var/runtime/s3transfer/tasks.py",
150,
"_execute_main",
"return_value = self._main(**kwargs)"
],
[
"/var/runtime/s3transfer/download.py",
582,
"_main",
"fileobj.seek(offset)"
],
[
"/var/runtime/s3transfer/utils.py",
335,
"seek",
"self._open_if_needed()"
],
[
"/var/runtime/s3transfer/utils.py",
318,
"_open_if_needed",
"self._fileobj = self._open_function(self._filename, self._mode)"
],
[
"/var/runtime/s3transfer/utils.py",
244,
"open",
"return open(filename, mode)"
]
]
}
Why does it think the file name is tmp/automated.csv.4Bcd0bB9 rather than just tmp/automated.csv, and how do I fix it? I've been pulling my hair out on this one, trying multiple approaches, some of which generate a similar error when running locally on my PC. Thanks!
You should save to /tmp, rather than tmp/.
e.g.:
local_file_name = '/tmp/' + KEY
(The random .4Bcd0bB9 suffix in the error is just s3transfer downloading to a temporary file that it renames on completion; the underlying problem is that the relative path tmp/ resolves against Lambda's read-only working directory.)
The reason Lambda gives the above error is that you cannot write into a directory hierarchy that does not exist yet within /tmp/. You can write files directly to /tmp/example.txt, but not to, say, /tmp/dir1/example.txt unless you create dir1 first.
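If you do need a nested path under /tmp, create the directory first; a small sketch (the dir1 name is just an example):

import os

os.makedirs('/tmp/dir1', exist_ok=True)  # /tmp is the only writable location in Lambda
s3.Bucket(BUCKET_NAME).download_file(KEY, '/tmp/dir1/' + KEY)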