API to pull the list of supported languages in AWS Translate - python-3.x

I am currently working on a project where I need to translate customer comments from the source language into English on AWS. This is easy to do with AWS Translate, but before I call the translate API, I want to check whether the source language is supported by AWS Translate at all.
One solution is to put all the language codes supported by AWS Translate into a list and then check the source language against that list. This is easy, but it is going to be messy, and I want to make it more dynamic.
So I am thinking of code like this:
import boto3
def translateUserComment(source_language):
    translate = boto3.client(service_name='translate', region_name='region', use_ssl=True)
    languages_supported = translate.<SomeMethod>()
    if source_language in languages_supported:
        result = translate.translate_text(Text="Hello, World",
                                          SourceLanguageCode=source_language,
                                          TargetLanguageCode="en")
        print('TranslatedText: ' + result.get('TranslatedText'))
        print('SourceLanguageCode: ' + result.get('SourceLanguageCode'))
        print('TargetLanguageCode: ' + result.get('TargetLanguageCode'))
    else:
        print("The source language is not supported by AWS Translate")
The problem is that I am not able to find any API call that returns the list of languages/language codes supported by AWS Translate.
Before I posted this question:
I tried to search for similar questions on Stack Overflow
I went through the AWS Translate Developer Guide, but still no luck
Any suggestion/redirection to the right approach is highly appreciated.

Currently there is no API call for this, but the code below works. It defines a class Translate_lang with all the language names and codes listed in the AWS Translate documentation (https://docs.aws.amazon.com/translate/latest/dg/what-is.html). You can import this class into your program and use it by creating an instance of the class:
translate_lang_check.py
class Translate_lang:
    def __init__(self):
        self.t_lang = {'Afrikaans': 'af', 'Albanian': 'sq', 'Amharic': 'am',
                       'Arabic': 'ar', 'Armenian': 'hy', 'Azerbaijani': 'az',
                       'Bengali': 'bn', 'Bosnian': 'bs', 'Bulgarian': 'bg',
                       'Catalan': 'ca', 'Chinese (Simplified)': 'zh',
                       'Chinese (Traditional)': 'zh-TW', 'Croatian': 'hr',
                       'Czech': 'cs', 'Danish': 'da', 'Dari': 'fa-AF',
                       'Dutch': 'nl', 'English': 'en', 'Estonian': 'et',
                       'Farsi (Persian)': 'fa', 'Filipino Tagalog': 'tl',
                       'Finnish': 'fi', 'French': 'fr', 'French (Canada)': 'fr-CA',
                       'Georgian': 'ka', 'German': 'de', 'Greek': 'el', 'Gujarati': 'gu',
                       'Haitian Creole': 'ht', 'Hausa': 'ha', 'Hebrew': 'he', 'Hindi': 'hi',
                       'Hungarian': 'hu', 'Icelandic': 'is', 'Indonesian': 'id', 'Italian': 'it',
                       'Japanese': 'ja', 'Kannada': 'kn', 'Kazakh': 'kk', 'Korean': 'ko',
                       'Latvian': 'lv', 'Lithuanian': 'lt', 'Macedonian': 'mk', 'Malay': 'ms',
                       'Malayalam': 'ml', 'Maltese': 'mt', 'Mongolian': 'mn', 'Norwegian': 'no',
                       'Persian': 'fa', 'Pashto': 'ps', 'Polish': 'pl', 'Portuguese': 'pt',
                       'Romanian': 'ro', 'Russian': 'ru', 'Serbian': 'sr', 'Sinhala': 'si',
                       'Slovak': 'sk', 'Slovenian': 'sl', 'Somali': 'so', 'Spanish': 'es',
                       'Spanish (Mexico)': 'es-MX', 'Swahili': 'sw', 'Swedish': 'sv',
                       'Tagalog': 'tl', 'Tamil': 'ta', 'Telugu': 'te', 'Thai': 'th',
                       'Turkish': 'tr', 'Ukrainian': 'uk', 'Urdu': 'ur', 'Uzbek': 'uz',
                       'Vietnamese': 'vi', 'Welsh': 'cy'}
    def check_lang_in_translate(self, given_country):
        if given_country in self.t_lang:
            return self.t_lang[given_country]
        else:
            return None

    def check_lang_code_in_translate(self, given_lang):
        if given_lang in list(self.t_lang.values()):
            return True
        else:
            return False
You can check your language codes using the methods of this class:
from translate_lang_check import Translate_lang
tl = Translate_lang()
print(tl.check_lang_code_in_translate('en'))
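For example, here is a minimal sketch of how this could plug into a slightly adapted version of the question's translateUserComment function (assuming translate_lang_check.py is importable; 'us-east-1' is just a placeholder region):
import boto3
from translate_lang_check import Translate_lang

# Placeholder region; use whichever region your project runs in.
translate = boto3.client(service_name='translate', region_name='us-east-1', use_ssl=True)
tl = Translate_lang()

def translateUserComment(text, source_language):
    # The class lookup stands in for the missing "list supported languages" API call.
    if tl.check_lang_code_in_translate(source_language):
        result = translate.translate_text(Text=text,
                                          SourceLanguageCode=source_language,
                                          TargetLanguageCode='en')
        print('TranslatedText: ' + result.get('TranslatedText'))
    else:
        print("The source language is not supported by AWS Translate")

translateUserComment("Hola, mundo", "es")  # sample call with a Spanish comment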

Related

Unique nested dictionary from the 'for' loop Python3

I've got a host that executes commands via subprocess and returns an output list of several parameters. The problem is that the output cannot easily be converted into a dictionary, whether via YAML or JSON. After the list is received, a regexp matches the valuable information and performs grouping. I am interested in getting a unique dictionary, where the repeating keys are put into nested dictionaries.
Here is the code and an example of the output list:
from re import compile, match

# Output can differ from request to request: the "keys" from the
# list_of_values can duplicate or appear more than two times. The values
# mapped to the keys can differ too.
list_of_values = [
    "paramId: '11'", "valueId*: '11'",
    "elementId: '010_541'", 'mappingType: Both',
    "startRng: ''", "finishRng: ''",
    'DbType: sql', "activeSt: 'false'",
    'profile: TestPr1', "specificHost: ''",
    'hostGroup: tstGroup10', 'balance: all',
    "paramId: '194'", "valueId*: '194'",
    "elementId: '010_541'", 'mappingType: Both',
    "startRng: '1020304050'", "finishRng: '1020304050'",
    'DbType: sql', "activeSt: 'true'",
    'profile: TestPr1', "specificHost: ''",
    'hostGroup: tstGroup10', 'balance: all']
re_compile_valueId = compile(
    r"valueId\*:\s.(?P<valueId>\d{1,5})"
    r"|elementId:\s.(?P<elementId>\d{3}_\d{2,3})"
    r"|startRng:\s.(?P<startRng>\d{1,10})"
    r"|finishRng:\s.(?P<finishRng>\d{1,10})"
    r"|DbType:\s(?P<DbType>nosql|sql)"
    r"|activeSt:\s.(?P<activeSt>true|false)"
    r"|profile:\s(?P<profile>[A-Za-z0-9]+)"
    r"|hostGroup:\s(?P<hostGroup>[A-Za-z0-9]+)"
    r"|balance:\s(?P<balance>none|all|priority group)"
)
iterator_loop = 0
uniq_dict = dict()
next_dict = dict()
for element in list_of_values:
    match_result = match(re_compile_valueId, element)
    if match_result:
        temp_dict = match_result.groupdict()
        for key, value in temp_dict.items():
            if value:
                if key == 'valueId':
                    uniq_dict['valueId' + str(iterator_loop)] = ''
                    iterator_loop += 1
                    next_dict.update({key: value})
                else:
                    next_dict.update({key: value})
                    uniq_dict['valueId' + str(iterator_loop - 1)] = next_dict
print(uniq_dict)
This code responds with:
{
    'valueId0': {
        'valueId': '194',
        'elementId': '010_541',
        'DbType': 'sql',
        'activeSt': 'true',
        'profile': 'TestPr1',
        'hostGroup': 'tstGroup10',
        'balance': 'all',
        'startRng': '1020304050',
        'finishRng': '1020304050'
    },
    'valueId1': {
        'valueId': '194',
        'elementId': '010_541',
        'DbType': 'sql',
        'activeSt': 'true',
        'profile': 'TestPr1',
        'hostGroup': 'tstGroup10',
        'balance': 'all',
        'startRng': '1020304050',
        'finishRng': '1020304050'
    }
}
But I was expecting something like this:
{
    'valueId0': {
        'valueId': '11',
        'elementId': '010_541',
        'DbType': 'sql',
        'activeSt': 'false',
        'profile': 'TestPr1',
        'hostGroup': 'tstGroup10',
        'balance': 'all',
        'startRng': '',
        'finishRng': ''
    },
    'valueId1': {
        'valueId': '194',
        'elementId': '010_541',
        'DbType': 'sql',
        'activeSt': 'true',
        'profile': 'TestPr1',
        'hostGroup': 'tstGroup10',
        'balance': 'all',
        'startRng': '1020304050',
        'finishRng': '1020304050'
    }
}
I've also got another version of the code below, which runs and puts values in as expected. But its structure defeats the point of looping over the records, because each resulting key gets its own order number appended. The example is below; list_of_values and re_compile_valueId are the same as in the previous example.
for element in list_of_values:
    match_result = match(re_compile_valueId, element)
    if match_result:
        temp_dict = match_result.groupdict()
        for key, value in temp_dict.items():
            if value:
                if key == 'balance':
                    key = key + str(iterator_loop)
                    uniq_dict.update({key: value})
                    iterator_loop += 1
                else:
                    key = key + str(iterator_loop)
                    uniq_dict.update({key: value})
print(uniq_dict)
The output will look like:
{
    'valueId1': '11', 'elementId1': '010_541',
    'DbType1': 'sql', 'activeSt1': 'false',
    'profile1': 'TestPr1', 'hostGroup1': 'tstGroup10',
    'balance1': 'all', 'valueId2': '194',
    'elementId2': '010_541', 'startRng2': '1020304050',
    'finishRng2': '1020304050', 'DbType2': 'sql',
    'activeSt2': 'true', 'profile2': 'TestPr1',
    'hostGroup2': 'tstGroup10', 'balance2': 'all'
}
Would appreciate any help! Thanks!
It turned out that some documentation reading needed to be done :D
The next_dict assigned under the else statement needs to be stored as a copy(). Thanks to the thread:
Why does updating one dictionary object affect other?
Many thanks to the answer's author @thefourtheye (https://stackoverflow.com/users/1903116/thefourtheye).
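In short, every 'valueIdN' key was pointing at the same next_dict object, so later updates showed up under all of them. A minimal sketch of that effect, with hypothetical values:
outer = dict()
inner = {'valueId': '11'}

outer['valueId0'] = inner          # stores a reference to inner, not a snapshot
inner['valueId'] = '194'           # a later update to the same object
print(outer['valueId0'])           # {'valueId': '194'} - it changed as well

outer['valueId1'] = inner.copy()   # an independent shallow copy
inner['valueId'] = '11'
print(outer['valueId1'])           # {'valueId': '194'} - unaffected by the later update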
The final code:
for element in list_of_values:
    match_result = match(re_compile_valueId, element)
    if match_result:
        temp_dict = match_result.groupdict()
        for key, value in temp_dict.items():
            if value:
                if key == 'valueId':
                    uniq_dict['valueId' + str(iterator_loop)] = ''
                    iterator_loop += 1
                    next_dict.update({key: value})
                else:
                    next_dict.update({key: value})
                    uniq_dict['valueId' + str(iterator_loop - 1)] = next_dict.copy()
Thanks to everyone who got involved.

AttributeError: module 'google.cloud.dialogflow' has no attribute 'types'

I am building a Telegram bot that uses Dialogflow, and I am getting the following error:
2021-11-19 23:26:46,936 - __main__ - ERROR - Update '{'message': {'entities': [],
'delete_chat_photo': False, 'text': 'hi', 'date': 1637344606, 'new_chat_members': [],
'channel_chat_created': False, 'message_id': 93, 'photo': [], 'group_chat_created':
False, 'supergroup_chat_created': False, 'new_chat_photo': [], 'caption_entities': [],
'chat': {'id': 902424541, 'type': 'private', 'first_name': 'Akriti'},
'from': {'first_name': 'Akriti', 'id': 902424541, 'is_bot': False, 'language_code': 'en'}
}, 'update_id': 624230278}' caused error 'module 'google.cloud.dialogflow' has no
attribute 'types''
It appears there is some issue with the Dialogflow attribute "types", but I don't know what I am doing wrong.
Here is the code where I am using it:
import os
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "client.json"

from google.cloud import dialogflow

dialogflow_session_client = dialogflow.SessionsClient()
PROJECT_ID = "get-informed-ufnl"

from gnewsclient import gnewsclient

client = gnewsclient.NewsClient()

def detect_intent_from_text(text, session_id, language_code='en'):
    session = dialogflow_session_client.session_path(PROJECT_ID, session_id)
    text_input = dialogflow.types.TextInput(text=text, language_code=language_code)
    query_input = dialogflow.types.QueryInput(text=text_input)
    response = dialogflow_session_client.detect_intent(session=session, query_input=query_input)
    return response.query_result

def get_reply(query, chat_id):
    response = detect_intent_from_text(query, chat_id)
    if response.intent.display_name == 'get_news':
        return "get_news", dict(response.parameters)
    else:
        return "small_talk", response.fulfillment_text

def fetch_news(parameters):
    client.language = parameters.get('language')
    client.location = parameters.get('geo-country')
    client.topic = parameters.get('topic')
    return client.get_news()[:5]

topics_keyboard = [
    ['Top Stories', 'World', 'Nation'],
    ['Business', 'Technology', 'Entertainment'],
    ['Sports', 'Science', 'Health']
]
I figured it out: the problem lies in the import statement. The correct import should be:
import google.cloud.dialogflow_v2 as dialogflow
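With that import, the two constructors that raised the error resolve as written, since dialogflow_v2 still exposes a types module. A minimal sketch (the text and language code are just placeholder values):
import google.cloud.dialogflow_v2 as dialogflow

# dialogflow_v2 exposes a `types` module, so these attribute lookups succeed.
text_input = dialogflow.types.TextInput(text="hi", language_code="en")
query_input = dialogflow.types.QueryInput(text=text_input)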
I recommend deactivating your current error handler, or using one similar to this example, so that you can see the full traceback of the exception :)
Disclaimer: I'm currently the maintainer of python-telegram-bot

Create cloudwatch alarm with multiple metrics and math expression (python3)

Here is an example of how to create a CloudWatch alarm that combines multiple metrics with a math expression using boto3.
asg_name: the Auto Scaling group name
import boto3
cloudwatch = boto3.client('cloudwatch', region_name="eu-west-1")
def create_cloudwatch_alarm(asg_name):
    cloudwatch.put_metric_alarm(
        ActionsEnabled=True,
        AlarmName=asg_name,
        ComparisonOperator='GreaterThanThreshold',
        DatapointsToAlarm=3,
        EvaluationPeriods=3,
        Metrics=[{'Expression': 'IF(m1 > m2, 1, 0)',
                  'Id': 'e2',
                  'Label': 'Compare Running vs desired capacity',
                  'ReturnData': True},
                 {'Id': 'm1',
                  'MetricStat': {'Metric': {'Dimensions': [{'Name': 'AutoScalingGroupName',
                                                            'Value': asg_name}],
                                            'MetricName': 'GroupDesiredCapacity',
                                            'Namespace': 'AWS/AutoScaling'},
                                 'Period': 300,
                                 'Stat': 'Average'},
                  'ReturnData': False},
                 {'Id': 'm2',
                  'MetricStat': {'Metric': {'Dimensions': [{'Name': 'AutoScalingGroupName',
                                                            'Value': asg_name}],
                                            'MetricName': 'GroupInServiceInstances',
                                            'Namespace': 'AWS/AutoScaling'},
                                 'Period': 300,
                                 'Stat': 'Average'},
                  'ReturnData': False}],
        Threshold=0.1,
        TreatMissingData='missing'
    )
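For example, to create the alarm for a hypothetical Auto Scaling group:
create_cloudwatch_alarm('my-example-asg')  # 'my-example-asg' is a placeholder ASG name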
According to AWS's documentation, you can only create an alarm on up to 10 metrics with metric math: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create-alarm-on-metric-math-expression.html

how to make parse of list(string) not list(char) in parse argument of list?

I use flask_restful with Flask.
My code looks like this:
from flask_restful import Resource, reqparse

apilink_parser = reqparse.RequestParser()
apilink_parser.add_argument('provider_id', type=int, required=True)
apilink_parser.add_argument('name', type=str, required=True)
apilink_parser.add_argument('func_id', type=int)
apilink_parser.add_argument('method', type=str)
apilink_parser.add_argument('url', type=str)
apilink_parser.add_argument('parameter', type=list)
apilink_parser.add_argument("expectreturn", type=list)

#marshal_with(apilink_fields)
def post(self):
    args = apilink_parser.parse_args()
    print(args)
    # user owns the task
    task = APILink.create(**args)
    return task, 201
My JSON post data looks like this:
{
    "name": "riskiqwhois",
    "provider_id": 1,
    "func_id": 1,
    "url": "myurl",
    "parameter": ["query"], //******//
    "expectreturn": [],
    "method": "post"
}
but when I print the args, the result is:
{
    'provider_id': 1,
    'name': 'riskiqwhois',
    'func_id': 1,
    'method': 'post',
    'url': 'myurl',
    'parameter': ['q', 'u', 'e', 'r', 'y'], //******//
    'expectreturn': None
}
You can see that I want parameter to be a list of strings with only one element, "query", but the value that actually gets written to the database is ['q', 'u', 'e', 'r', 'y']. How can I make parameter a list of strings rather than a list of characters? How can I make sure the data is list(string)?
Well, you forgot to set action="append", and you should change type=list to type=str.
If you keep type=list, you will still get a result like [['q', 'u', 'e', 'r', 'y']].
...
apilink_parser.add_argument('parameter',type=str, action='append')
apilink_parser.add_argument("expectreturn", type=str, action='append')
You can solve this problem by adding action="append" to your request parser, like below:
apilink_parser.add_argument('parameter',type=str,action="append")
apilink_parser.add_argument("expectreturn", type=list,action="append")
This will return the output below:
{
    'provider_id': 1,
    'name': 'riskiqwhois',
    'func_id': 1,
    'method': 'post',
    'url': 'myurl',
    'parameter': ['query'],
    'expectreturn': None
}

Python nested json to csv

I can't convert this JSON to CSV. I have been trying different solutions posted here using pandas or other parsers, but none of them solved this.
This is a small extract of the big JSON:
{'data': {'items': [{'category': 'cat',
'coupon_code': 'cupon 1',
'coupon_name': '$829.99/€705.79 ',
'coupon_url': 'link3',
'end_time': '2017-12-31 00:00:00',
'language': 'sp',
'start_time': '2017-12-19 00:00:00'},
{'category': 'LED ',
'coupon_code': 'code',
'coupon_name': 'text',
'coupon_url': 'link',
'end_time': '2018-01-31 00:00:00',
'language': 'sp',
'start_time': '2017-10-07 00:00:00'}],
'total_pages': 1,
'total_results': 137},
'error_no': 0,
'msg': '',
'request': 'GET api/ #2017-12-26 04:50:02'}
I'd like to get an output like this with the columns:
category, coupon_code, coupon_name, coupon_url, end_time, language, start_time
I'm running python 3.6 with no restrictions.
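One possible approach is a short sketch with pandas (assuming pandas 1.0+ is installed, that payload is the parsed response shown above as a Python dict, and that the rows you want live under data -> items; the coupons.csv filename is just a placeholder):
import pandas as pd

# payload is the parsed response shown above (a dict)
items = payload['data']['items']

# Each item is already a flat dict, so json_normalize yields one row per coupon.
df = pd.json_normalize(items)
df.to_csv('coupons.csv', index=False,
          columns=['category', 'coupon_code', 'coupon_name', 'coupon_url',
                   'end_time', 'language', 'start_time'])
Since the items are already flat, csv.DictWriter from the standard library would work just as well if you prefer to avoid pandas.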
