odoo create database via xmlrpc - python-3.x

I'm trying to script the creation of a new database plus the import of data from other sources in Odoo.
I am at the first step, creating a new database.
I have the following code, but it doesn't work:
import xmlrpc.client
print("Db name : ", end="")
db_name = input()
with xmlrpc.client.ServerProxy('127.0.0.1:8070/xmlrpc/2/db') as mod:
    RES = mod.create_database(db_name, False, 'en_US')
(note that my test server does run on localhost port 8070)
The result is :
$ python3 baseodoo.py
Db name : please_work
Traceback (most recent call last):
File "baseodoo.py", line 5, in <module>
with xmlrpc.client.ServerProxy('127.0.0.1:8070/xmlrpc/2/db') as mod:
File "/usr/lib/python3.8/xmlrpc/client.py", line 1419, in __init__
raise OSError("unsupported XML-RPC protocol")
OSError: unsupported XML-RPC protocol
I am unsure of the URL ending in /db; I got it from dispatch_rpc(…) in http.py, which tests service_name for "common", "db" and "object".
Also, in dispatch(…) from db.py, a method name is prefixed with "exp_", so calling create_database should execute the exp_create_database function in db.py.
I guess my reasoning is flawed but I don't know where. Help!
EDIT :
Ok I'm stupid, the URL should start with "http://". Still, I now get
xmlrpc.client.Fault: <Fault 3: 'Access Denied'>
EDIT2 :
There was a typo in the password I gave, so closing the question now.
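For anyone landing here, a minimal sketch of what the working call can look like, assuming the db endpoint expects the master password as the first argument (which matches the dispatch(…) code referenced above); the master password and port are placeholders for your own setup:
import xmlrpc.client

MASTER_PASSWORD = 'admin'  # hypothetical; use your server's master password
print("Db name : ", end="")
db_name = input()
# Note the explicit http:// scheme; without it ServerProxy raises "unsupported XML-RPC protocol".
with xmlrpc.client.ServerProxy('http://127.0.0.1:8070/xmlrpc/2/db') as db:
    db.create_database(MASTER_PASSWORD, db_name, False, 'en_US')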

Related

Cannot use custom key words with pvporcupine

I have already created an account in Picovoice and received an access key, but when I try to put in the path of my ppn file, it shows an error:
[ERROR] loading keyword file at 'C:\Users\Priyam\Desktop\hey keyazip' failed with 'IO_ERROR'
Traceback (most recent call last):
File "e:\Personal Project\import struct.py", line 13, in <module>
porcupine = pvporcupine.create(access_key='access key',
File "C:\Users\Priyam\AppData\Roaming\Python\Python310\site-packages\pvporcupine\__init__.py", line 77, in create
return Porcupine(
File "C:\Users\Priyam\AppData\Roaming\Python\Python310\site-packages\pvporcupine\porcupine.py", line 158, in __init__
raise self._PICOVOICE_STATUS_TO_EXCEPTION[status]()
pvporcupine.porcupine.PorcupineIOError
the code is:
import struct
import pyaudio
import pvporcupine

porcupine = None
paud = None
audio_stream = None
try:
    access_key = "access key"
    porcupine = pvporcupine.create(access_key='access key',
        keyword_paths=['C:\\Users\Priyam\Desktop\hey keyazip'], keywords=['hey keya'])  # pvporcupine.KEYWORDS for all keywords
    paud = pyaudio.PyAudio()
    audio_stream = paud.open(rate=porcupine.sample_rate, channels=1, format=pyaudio.paInt16, input=True, frames_per_buffer=porcupine.frame_length)
    while True:
        keyword = audio_stream.read(porcupine.frame_length)
        keyword = struct.unpack_from("h" * porcupine.frame_length, keyword)
        keyword_index = porcupine.process(keyword)
        if keyword_index >= 0:
            print("hotword detected")
finally:
    if porcupine is not None:
        porcupine.delete()
    if audio_stream is not None:
        audio_stream.close()
    if paud is not None:
        paud.terminate()
I tried the code above and the code provided by Picovoice itself, yet I am facing the same issue.
It looks like Porcupine is having trouble finding your keyword file: [ERROR] loading keyword file at 'C:\Users\Priyam\Desktop\hey keyazip' failed with 'IO_ERROR'.
The Picovoice console provides the keyword inside a compressed .zip file. You will need to decompress the file and update the path in your code so it points to the .ppn file inside. For example: C:\\Users\Priyam\Desktop\hey-keya_en_windows_v2_1_0\hey-keya_en_windows_v2_1_0.ppn
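A minimal sketch of the corrected call, using the unzipped path from the example above (the access key is still a placeholder); keyword_paths alone should be enough here:
import pvporcupine

porcupine = pvporcupine.create(
    access_key='access key',  # your real AccessKey from the Picovoice console
    keyword_paths=[r'C:\Users\Priyam\Desktop\hey-keya_en_windows_v2_1_0\hey-keya_en_windows_v2_1_0.ppn'])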

AWS secret manager time out sometimes

I am fetching a secret from Secrets Manager in a Lambda. The request fails sometimes, which is strange: it works fine, and then a couple of hours later I check and I am getting a timeout.
def get_credentials(self):
    """Retrieve credentials from the Secrets Manager service."""
    boto_config = BotoConfig(connect_timeout=3, retries={"max_attempts": 3})
    secrets_client = self.boto_session.client(
        service_name="secretsmanager",
        region_name=self.boto_session.region_name,
        config=boto_config,
    )
    secret_value = secrets_client.get_secret_value(SecretId=self._secret_name)
    secret = secret_value["SecretString"]
I tried to debug the Lambda and later it seems to be working again, without any change; those state changes happen over hours. Any hint why that could happen?
Traceback (most recent call last):
File "/opt/python/botocore/endpoint.py", line 249, in _do_get_response
http_response = self._send(request)
File "/opt/python/botocore/endpoint.py", line 321, in _send
return self.http_session.send(request)
File "/opt/python/botocore/httpsession.py", line 438, in send
raise ConnectTimeoutError(endpoint_url=request.url, error=e)
botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://secretsmanager.eu-central-1.amazonaws.com/"
You are using the legacy retry mode (the default in boto3), which has very limited functionality: it only retries a small set of errors.
You should try changing your retry strategy to the standard retry mode or the adaptive mode. In that case your code would look like:
from botocore.config import Config as BotoConfig

boto_config = BotoConfig(
    connect_timeout=3,
    retries={
        "max_attempts": 3,
        "mode": "standard"
    }
)
secrets_client = self.boto_session.client(
    service_name="secretsmanager",
    region_name=self.boto_session.region_name,
    config=boto_config,
)

Trouble sending a batch create entity request in dialogflow

I have defined the following function. The purpose is to make a batch create-entity request with the Dialogflow client. I am using this approach because sending many individual requests did not scale well.
The problem seems to be the line that defines EntityType. It seems "entityTypes" is not valid, but that is what is in the Dialogflow v2 documentation, which is the version I am using.
Any ideas on what the issue is?
def create_batch_entity_types(self):
    client = self.get_entity_client()
    print(DialogFlowClient.batch_list)
    EntityType = {
        "entityTypes": DialogFlowClient.batch_list
    }
    response = client.batch_update_entity_types(parent=AGENT_PATH, entity_type_batch_inline=EntityType)

    def callback(operation_future):
        # Handle result.
        result = operation_future.result()
        print(result)

    response.add_done_callback(callback)
After running the function I received this error
Traceback (most recent call last):
File "df_client.py", line 540, in <module>
create_entity_types_from_database()
File "df_client.py", line 426, in create_entity_types_from_database
df.create_batch_entity_types()
File "/Users/andrewflorial/Documents/PROJECTS/curlbot/dialogflow/dialogflow_accessor.py", line 99, in create_batch_entity_types
response = client.batch_update_entity_types(parent=AGENT_PATH, entity_type_batch_inline=EntityType)
File "/Users/andrewflorial/Documents/PROJECTS/curlbot/venv/lib/python3.7/site-packages/dialogflow_v2/gapic/entity_types_client.py", line 767, in batch_update_entity_types
update_mask=update_mask,
ValueError: Protocol message EntityTypeBatch has no "entityTypes" field.
The argument for entity_type_batch_inline must have the same form as EntityTypeBatch.
Look at how that type is defined: https://dialogflow-python-client-v2.readthedocs.io/en/latest/gapic/v2/types.html#dialogflow_v2.types.EntityTypeBatch
It has to have an entity_types field, not entityTypes.
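So the call from the question only needs the field name changed; a sketch, reusing the names from the question:
entity_type_batch = {
    "entity_types": DialogFlowClient.batch_list  # snake_case, matching EntityTypeBatch
}
response = client.batch_update_entity_types(
    parent=AGENT_PATH, entity_type_batch_inline=entity_type_batch)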

Symfony/Process exception when running Python3 from Laravel controller on remote server (Debian)

I'm trying to execute a Python script (3.6.5) which is located in a folder inside my Laravel app folder. The script is called from a controller, which retrieves the script's output. I'm using Symfony/Process to execute the script, as in the code below:
public static function searchAnswers($input)
{
    $process = new Process(array('dir', base_path() . '/app/SearchEngine'));
    $process->setWorkingDirectory(base_path() . '/app/SearchEngine');
    $process->setCommandLine('python3 SearchEngine.py ' . '"'. $input .'"');
    $process->setTimeout(2 * 3600);
    $process->run();

    if (!$process->isSuccessful()) { // Executes after the command finishes
        throw new ProcessFailedException($process);
    }

    $list_ids = array_map('intval', explode(' ', $process->getOutput()));
    info($list_ids);

    $solicitations = Solicitation::join('answers', 'solicitations.id', '=', 'answers.solicitation_id')
        ->whereIn('solicitations.id', $list_ids)
        ->limit(20)
        ->get();
    info($solicitations);

    return $solicitations;
}
On my localhost, the script is called with no issues, either from my terminal or from my running application via HTTP requests. But after I uploaded the app to my remote server, which runs Debian, I'm getting the following exception:
"""
The command "python3 SearchEngine.py "O que é diabetes?"" failed.\n
\n
Exit Code: 1(General error)\n
\n
Working directory: /var/www/plataformaTS/app/SearchEngine\n
\n
Output:\n
================\n
\n
\n
Error Output:\n
================\n
Traceback (most recent call last):\n
File "/usr/local/lib/python3.4/dist-packages/nltk/corpus/util.py", line 80, in __load\n
try: root = nltk.data.find('{}/{}'.format(self.subdir, zip_name))\n
File "/usr/local/lib/python3.4/dist-packages/nltk/data.py", line 675, in find\n
raise LookupError(resource_not_found)\n
LookupError: \n
**********************************************************************\n
Resource \e[93mstopwords\e[0m not found.\n
Please use the NLTK Downloader to obtain the resource:\n
\n
\e[31m>>> import nltk\n
>>> nltk.download('stopwords')\n
\e[0m\n
Searched in:\n
- '/var/www/nltk_data'\n
- '/usr/share/nltk_data'\n
- '/usr/local/share/nltk_data'\n
- '/usr/lib/nltk_data'\n
- '/usr/local/lib/nltk_data'\n
- '/usr/nltk_data'\n
- '/usr/share/nltk_data'\n
- '/usr/lib/nltk_data'\n
**********************************************************************\n
\n
\n
During handling of the above exception, another exception occurred:\n
\n
Traceback (most recent call last):\n
File "SearchEngine.py", line 20, in <module>\n
stopwords = stopwords.words('portuguese')\n
File "/usr/local/lib/python3.4/dist-packages/nltk/corpus/util.py", line 116, in __getattr__\n
self.__load()\n
File "/usr/local/lib/python3.4/dist-packages/nltk/corpus/util.py", line 81, in __load\n
except LookupError: raise e\n
File "/usr/local/lib/python3.4/dist-packages/nltk/corpus/util.py", line 78, in __load\n
root = nltk.data.find('{}/{}'.format(self.subdir, self.__name))\n
File "/usr/local/lib/python3.4/dist-packages/nltk/data.py", line 675, in find\n
raise LookupError(resource_not_found)\n
LookupError: \n
**********************************************************************\n
Resource \e[93mstopwords\e[0m not found.\n
Please use the NLTK Downloader to obtain the resource:\n
\n
\e[31m>>> import nltk\n
>>> nltk.download('stopwords')\n
\e[0m\n
Searched in:\n
- '/var/www/nltk_data'\n
- '/usr/share/nltk_data'\n
- '/usr/local/share/nltk_data'\n
- '/usr/lib/nltk_data'\n
- '/usr/local/lib/nltk_data'\n
- '/usr/nltk_data'\n
- '/usr/share/nltk_data'\n
- '/usr/lib/nltk_data'\n
**********************************************************************\n
\n
"""
The exception points to import errors in the Python script. But when I execute the script from the terminal on the remote server, either directly or through Artisan commands, everything works fine. Any ideas what is happening?
Thanks in advance!
I found the solution.
The error was that the NLTK stopwords corpus was installed as the root user, and my application was trying to access the files as another user with no permissions. So I logged in as that user and installed the libraries again. Now everything works like a charm.
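One way to avoid the ownership problem is to download the corpus into one of the system-wide directories NLTK already searches (the list appears in the error output above); a sketch, assuming the web-server user can read /usr/share/nltk_data:
import nltk

# Put the stopwords corpus somewhere every user can read it.
nltk.download('stopwords', download_dir='/usr/share/nltk_data')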

Google datalab fails to query and create table

I'm trying to query a large amount of data in BigQuery and then upload the table into the desired dataset (datasetxxx) using "datalab" in PyCharm as the IDE. Below is my code:
query = bq.Query(sql=myQuery)
job = query.execute_async(
    output_options=bq.QueryOutput.table('datasetxxx._tmp_table', mode='overwrite', allow_large_results=True))
job.result()
However, I ended up with "No project ID found". The project ID is provided through a .json file, as os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = path to the file. I also tried to explicitly declare the project ID, as follows.
self.project_id = 'xxxxx'
query = bq.Query(sql=myQuery, context = self.project_id)
This time I ended up with the following error:
TypeError: __init__() got an unexpected keyword argument 'context'.
I'm also using an up-to-date version. Thanks for your help.
Re: The project ID is specified in the "FROM" clause, and I'm also able to see the path to the .json file using the "echo" command. Below is the stack trace:
Traceback (most recent call last):
File "xxx/Queries.py", line 265, in <module>
brwdata._extract_gbq()
File "xxx/Queries.py", line 206, in _extract_gbq
, allow_large_results=True))
File "xxx/.local/lib/python3.5/site packages/google/datalab/bigquery/_query.py", line 260, in execute_async
table_name = _utils.parse_table_name(table_name, api.project_id)
File "xxx/.local/lib/python3.5/site-packages/google/datalab/bigquery/_api.py", line 47, in project_id
return self._context.project_id
File "xxx/.local/lib/python3.5/site-packages/google/datalab/_context.py", line 62, in project_id
raise Exception('No project ID found. Perhaps you should set one by running'
Exception: No project ID found. Perhaps you should set one by running "%datalab project set -p <project-id>" in a code cell.
So, if you do "echo $GOOGLE_APPLICATION_CREDENTIALS" you can see the path of your JSON.
Could you make sure the "FROM" clause in the query specifies the right external project?
Also, if your QueryOutput destination is in the very same project, you are doing it right:
table('dataset.table'.....)
But otherwise you should specify:
table('project.dataset.table'....)
I don't know exactly how you are doing the query, but the error might be there.
I reproduced this and it worked fine for me:
import google.datalab
from google.datalab import bigquery as bq
import os
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] ="./bqauth.json"
myQuery="SELECT * FROM `MY_EXAMPLE_PROJECT.MY_EXAMPLE_DATASET.MY_EXAMPLE_TABLE` LIMIT 1000"
query = bq.Query(sql=myQuery)
job = query.execute_async(
    output_options=bq.QueryOutput.table('MY_EXAMPLE_PROJECT.MY_EXAMPLE_DATASET2.MY_EXAMPLE_TABLE2', mode='overwrite', allow_large_results=True))
job.result()
Here's the updated way, in case someone needs it:
You can now use the Context in the latest version:
from google.datalab import bigquery as bq
from google.datalab import Context as ctx
ctx.project_id = 'PROJECT_ID'
df = bq.Query(query).execute()
...
