Can't import a new Agent in Dialogflow ES - dialogflow-es

I'm trying to import a Dialogflow ES agent as part of a deployment script. I'm using the gcloud alpha dialogflow agent import command described here:
gcloud alpha dialogflow agent import --source="path/to/archive.zip" --replace-all
If the agent is already in place, the command succeeds in updating/replacing it with the definition from the zip file. If the agent has not been created yet, then I get this error:
ERROR: (gcloud.alpha.dialogflow.agent.import) Projects instance [*redacted_project_name*] not found: com.google.apps.framework.request.NotFoundException: No DesignTimeAgent found for project '*redacted_project_name*'.
Is there a command I'm missing in order to be able to use the agent import command?
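One hedged possibility, assuming the google-cloud-dialogflow Python client (whose AgentsClient exposes set_agent for creating or updating an ES agent), is to create the agent programmatically before running the import; the project ID, display name, language and time zone below are placeholders:
from google.cloud import dialogflow_v2 as dialogflow

# Placeholder project settings -- replace with your own.
agents_client = dialogflow.AgentsClient()
agent = dialogflow.Agent(
    parent="projects/my-project-id",
    display_name="my-agent",
    default_language_code="en",
    time_zone="Europe/Paris",
)
agents_client.set_agent(request={"agent": agent})
# Once the agent exists, the gcloud import command above should succeed.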


Cannot create Repo with Databricks CLI

I am using Azure DevOps and Databricks. I created a simplified CI/CD Pipeline which triggers the following Python script:
import json
import time
from datetime import datetime

from databricks_cli.configure.config import _get_api_client
from databricks_cli.configure.provider import EnvironmentVariableConfigProvider
from databricks_cli.sdk import JobsService, ReposService

existing_cluster_id = 'XXX'
notebook_path = './'
repo_path = '/Repos/abc@def.at/DevOpsProject'
git_url = 'https://dev.azure.com/XXX/DDD/'

# Build the API client from the DATABRICKS_HOST / DATABRICKS_TOKEN environment variables
config = EnvironmentVariableConfigProvider().get_config()
api_client = _get_api_client(config, command_name="cicdtemplates-")

repos_service = ReposService(api_client)
repo = repos_service.create_repo(url=git_url, provider="azureDevOpsServices", path=repo_path + "_new")
When I run the pipeline I always get an error (from the last line):
2022-12-07T23:09:23.5318746Z raise requests.exceptions.HTTPError(message, response=e.response)
2022-12-07T23:09:23.5320017Z requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://adb-XXX.azuredatabricks.net/api/2.0/repos
2022-12-07T23:09:23.5321095Z Response from server:
2022-12-07T23:09:23.5321811Z { 'error_code': 'BAD_REQUEST',
2022-12-07T23:09:23.5322485Z 'message': 'Remote repo not found. Please ensure that:\n'
2022-12-07T23:09:23.5323156Z '1. Your remote Git repo URL is valid.\n'
2022-12-07T23:09:23.5323853Z '2. Your personal access token or app password has the correct '
2022-12-07T23:09:23.5324513Z 'repo access.'}
In Databricks, I connected my repo with Azure DevOps: in Azure DevOps I created a full-access personal access token, added it to Databricks' Git integration, and I am able to pull and push in Databricks.
For my CI/CD pipeline, I created variables containing my Databricks host address and my token. When I change the token, I get a different error message (403 HTTP code), so the token seems to be fine.
Here is a screenshot of my variables.
I really have no clue what I am doing wrong. I tried to run a simplified version of the official Databricks code here.
I tried to reproduce the error with the Databricks CLI and found out that simply "_git" was missing from the Git repo URL.
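For illustration, Azure DevOps clone URLs include a "_git" segment before the repository name, so with the placeholders from the script above the fixed value would look something like this (the repository name is an assumption):
# Azure DevOps clone URLs follow https://dev.azure.com/<org>/<project>/_git/<repo>
git_url = 'https://dev.azure.com/XXX/DDD/_git/DDD'  # repository name is a placeholder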

elastic_enterprise_search.AppSearch client fails in python sdk on GCloud Dataflow with urllib3 certificate error

I'm working on a DoFn that writes to Elastic App Search (elastic_enterprise_search.AppSearch). It works fine when I run my pipeline using the DirectRunner.
But when I deploy to Dataflow, the Enterprise Search client fails because, I suppose, it can't access a certificate store:
File "/usr/local/lib/python3.8/site-packages/urllib3/util/ssl_.py", line 402, in ssl_wrap_socket
context.load_verify_locations(ca_certs, ca_cert_dir, ca_cert_data)
FileNotFoundError: [Errno 2] No such file or directory
Any advice on how to overcome this sort of problem? I'm finding it difficult to get any traction on how to solve this by searching Google.
Obviously urllib3 is set up properly on my local machine for the DirectRunner. I have "elastic-enterprise-search" in the REQUIRED_PACKAGES key of setup.py for my package, along with all my other dependencies:
REQUIRED_PACKAGES = [
    'PyMySQL',
    'sqlalchemy',
    'cloud-sql-python-connector',
    'google-cloud-pubsub',
    'elastic-enterprise-search',
]
Can I package certificates up with my pipeline? How? Should I look into creating a custom docker image? Any hints on what it should look like?
Yes, creating a custom container that has the necessary credentials in it would work well here.
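As a sketch only: such an image could be built from one of the apache/beam_pythonX.Y_sdk base images with your dependencies and CA certificates installed, and then handed to Dataflow through Beam's sdk_container_image pipeline option (project, region, bucket and image URI below are placeholders):
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder project, region, bucket and image URI -- substitute your own values.
options = PipelineOptions([
    '--runner=DataflowRunner',
    '--project=my-project',
    '--region=us-central1',
    '--temp_location=gs://my-bucket/tmp',
    '--sdk_container_image=gcr.io/my-project/beam-appsearch:latest',
])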

How can we run Google App Engine with Python 3 and ndb locally?

I am using Python on Google App Engine.
Could you please tell me how I can run a Python 3 Google App Engine app with ndb on my local system?
https://cloud.google.com/appengine/docs/standard/python3
Please try this:
Go to the service account page: https://cloud.google.com/docs/authentication/getting-started
Create a JSON key file, then install this pip package:
$ pip install google-cloud-ndb
Now, on Linux, open a terminal and run:
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/credentials.json"
On Windows, open a command prompt and run:
set GOOGLE_APPLICATION_CREDENTIALS=C:\path\to\credentials.json
Then run this code with Python 3 in your terminal/command prompt (a minimal Contact model is defined here so the snippet runs):
from google.cloud import ndb

# A minimal model so the snippet runs; adjust the properties to your data.
class Contact(ndb.Model):
    name = ndb.StringProperty()
    phone = ndb.StringProperty()
    email = ndb.StringProperty()

client = ndb.Client()
with client.context():
    contact1 = Contact(name="John Smith",
                       phone="555 617 8993",
                       email="john.smith@gmail.com")
    contact1.put()
You should then see the result in Datastore in the Google Cloud console.
App Engine is a Serverless service provided by Google Cloud Platform where you can deploy your applications and configure Cloud resources like instances' CPU, memory, scaling method, etc. This will provide you the architecture to run your app.
This service is not meant to be used on local environments. Instead, it is a great option to host an application that (ideally) has been tested on local environments.
Let's say: you don't run a Django application with Datastore dependencies using App Engine locally; you run a Django application with Datastore (and other) dependencies locally and then deploy it to App Engine once it is ready.
Most GCP services have their own client libraries so we can interact with them via code, even in local environments. The ndb library you asked about belongs to Google Cloud Datastore and can be installed in Python environments with:
pip install google-cloud-ndb
After installing it, you will be ready to interact with Datastore locally. Please find details about setting up credentials and code snippets in the Datastore Python Client Library reference.
Hope this is helpful! :)
You can simply create an emulator instance of Datastore on your local machine:
gcloud beta emulators datastore start --project test --host-port "0.0.0.0:8002" --no-store-on-disk --consistency=1
And then use it in the code in your main app file:
from google.cloud import ndb
import google.auth.credentials

def get_ndb_client(namespace):
    # config.ENVIRONMENT and ENVIRONMENTS.LOCAL are this app's own settings/constants.
    if config.ENVIRONMENT != ENVIRONMENTS.LOCAL:
        # production
        db = ndb.Client(namespace=namespace)
    else:
        # localhost: the emulator accepts mocked credentials
        import mock
        credentials = mock.Mock(spec=google.auth.credentials.Credentials)
        db = ndb.Client(project="test", credentials=credentials, namespace=namespace)
    return db

ndb_client = get_ndb_client("ns1")
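One detail worth noting (an assumption about your setup, since it is not shown above): the Datastore client libraries find the emulator through the DATASTORE_EMULATOR_HOST environment variable, which must match the --host-port passed to gcloud, for example:
import os

# Must match the --host-port used when starting the emulator above.
os.environ.setdefault("DATASTORE_EMULATOR_HOST", "0.0.0.0:8002")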

Why am I getting: Unable to import module 'handler': No module named 'paramiko'?

I needed to move files with an AWS Lambda from an SFTP server to my AWS account,
and then I found this article:
https://aws.amazon.com/blogs/compute/scheduling-ssh-jobs-using-aws-lambda/
It talks about paramiko as an SSH client candidate for moving files over SSH.
So I wrote this class wrapper in Python to be used from my serverless handler file:
import paramiko
import sys

class FTPClient(object):
    def __init__(self, hostname, username, password):
        """
        Creates the SFTP connection.
        Args:
            hostname (string): endpoint of the ftp server
            username (string): username for logging in on the ftp server
            password (string): password for logging in on the ftp server
        """
        try:
            self._host = hostname
            self._port = 22
            # Lets you save results of the download into a log file.
            # paramiko.util.log_to_file("path/to/log/file.txt")
            self._sftpTransport = paramiko.Transport((self._host, self._port))
            self._sftpTransport.connect(username=username, password=password)
            self._sftp = paramiko.SFTPClient.from_transport(self._sftpTransport)
        except:
            print("Unexpected error", sys.exc_info())
            raise

    def get(self, sftpPath):
        """
        Downloads a file from the SFTP server and returns its contents.
        Args:
            sftpPath = "path/to/file/on/sftp/to/be/downloaded"
        """
        localPath = "/tmp/temp-download.txt"
        self._sftp.get(sftpPath, localPath)
        self._sftp.close()
        with open(localPath, 'r') as tmpfile:
            return tmpfile.read()

    def close(self):
        self._sftpTransport.close()
On my local machine it works as expected (test.py):
import ftp_client

sftp = ftp_client.FTPClient(
    "host",
    "myuser",
    "password")
file = sftp.get('/testFile.txt')
print(file)
But when I deploy it with serverless and run the handler.py function (same as the test.py above) I get back the error:
Unable to import module 'handler': No module named 'paramiko'
It looks like the deployed function is unable to import paramiko (from the article above it seemed like it should be available for Lambda Python 3 on AWS), shouldn't it?
If not, what's the best practice for this case? Should I include the library in my local project and package/deploy it to AWS?
A comprehensive guide/tutorial exists at:
https://serverless.com/blog/serverless-python-packaging/
It uses the serverless-python-requirements package as a Serverless Node plugin.
A virtual env and a running Docker daemon will be required to package up your serverless project before deploying it to AWS Lambda.
If you use
custom:
  pythonRequirements:
    zip: true
in your serverless.yml, you have to use this code snippet at the start of your handler:
try:
    import unzip_requirements
except ImportError:
    pass
All the details can be found in the Serverless Python Requirements documentation.
You have to create a virtualenv, install your dependencies and then zip all files under site-packages/, with your handler at the root of the archive:
sudo pip install virtualenv
virtualenv -p python3 myvirtualenv
source myvirtualenv/bin/activate
pip install paramiko
cd myvirtualenv/lib/python3.6/site-packages/
zip -r ../../../../package.zip .
cd ../../../../
zip -g package.zip handler.py
Then upload package.zip to Lambda.
You have to provide all dependencies that are not installed in AWS' Python runtime.
Take a look at Step 7 in the tutorial. It looks like he is adding the dependencies from the virtual environment to the zip file. So I'd expect your ZIP file to contain the following:
your worker_function.py at the top level
a folder paramiko with the files installed in the virtual env
Please let me know if this helps.
I tried various blogs and guides like:
web scraping with lambda
AWS Layers for Pandas
spending hours trying things out, facing SIZE issues and being unable to import modules, etc.
I nearly reached the end (that is, invoking my handler function LOCALLY), but even though my function was fully deployed correctly and could be invoked LOCALLY with no problems, it was impossible to invoke it on AWS.
The most comprehensive and by far the best guide or example that is ACTUALLY working is the one mentioned above by @koalaok! Thanks buddy!
actual link
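To tie the answers back to the question, here is a minimal sketch (not the poster's actual code) of what handler.py could look like once paramiko is packaged with the deployment; the host and credentials are placeholders, and the unzip_requirements import only applies if you use the zip: true option shown above:
try:
    import unzip_requirements  # only needed with serverless-python-requirements and zip: true
except ImportError:
    pass

import ftp_client  # the FTPClient wrapper from the question

def handler(event, context):
    # Placeholder host and credentials -- in a real deployment read them from
    # environment variables or a secrets store.
    sftp = ftp_client.FTPClient("host", "myuser", "password")
    try:
        body = sftp.get('/testFile.txt')
    finally:
        sftp.close()
    return {"statusCode": 200, "body": body}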

Testing the Chatbots using botium and testmybot packages

How do I test chatbots using the Botium or TestMyBot Node packages?
I am not able to find any end-to-end sample to understand this.
There are several samples included in the Github repositories for Testmybot and Botium.
The Botium Wiki contains some useful information and a Walkthrough.
The basic steps for running a Botium script are as follows (from one of the samples):
Install requirements
Install Node.js
Install docker
Install docker-compose
There are other samples available which don't require docker.
Initialize Botium directory
Open a command line window, create a directory, initialize NPM and download the Botium package.
mkdir botium
cd botium
npm init
npm install --save botium-core
Load botium library
First, load the botium library and required classes.
const BotDriver = require('botium-core').BotDriver
const Capabilities = require('botium-core').Capabilities
const Source = require('botium-core').Source
Configure capabilities
Tell Botium what kind of chatbot is under test and how to connect to it. In this sample, the chatbot should be loaded into a docker container and Botium has to hook into the Microsoft Bot Framework.
const driver = new BotDriver()
.setCapability(Capabilities.PROJECTNAME, 'core-CreateNewConversation')
.setCapability(Capabilities.CONTAINERMODE , 'docker')
.setCapability(Capabilities.BOTFRAMEWORK_API, true)
.setCapability(Capabilities.BOTFRAMEWORK_APP_ID, 'my microsoft app id')
.setCapability(Capabilities.BOTFRAMEWORK_CHANNEL_ID, 'facebook')
Configure chatbot repository
Botium retrieves the chatbot code directly from the source Github repository. As an alternative, the repository could be cloned first and loaded from a local directory. In a CI environment, loading from Git usually makes more sense.
Additionally, the command to prepare the cloned repository ("npm install"), the command to start the chatbot service ("npm start") and some environment variables are required to run the sample chatbot.
driver.setSource(Source.GITURL, 'https://github.com/Microsoft/BotBuilder-Samples.git')
.setSource(Source.GITDIR, 'Node/core-CreateNewConversation')
.setSource(Source.GITPREPARECMD, 'npm install')
.setCapability(Capabilities.STARTCMD, 'npm start')
.setEnv('MICROSOFT_APP_ID', 'my microsoft app id')
.setEnv('MICROSOFT_APP_PASSWORD', 'my microsoft app password')
.setEnv('NODE_DEBUG', 'botbuilder')
.setEnv('DEBUG', '*')
Running a conversation and evaluate response
Botium provides a "fluent interface".
First, the Botium driver is initialized (work directory created, repository downloaded, docker network constructed, ...) and started.
driver.BuildFluent()
.Start()
...
Then, a conversation is started by sending input to the chatbot ("UserSaysText") or by waiting for a reaction from the chatbot ("WaitBotSaysText"). The conversation is tailored to the chatbot in use. In case the chatbot doesn't react or shows an unexpected reaction, the conversation is ended immediately.
...
.UserSaysText('hi bot')
.WaitBotSaysText((text) => assert('You\'ve been invited to a survey! It will start in a few seconds...', text))
.WaitBotSaysText(null, 10000, (text) => assert('Hello... What\'s your name?', text))
.UserSaysText('John')
.WaitBotSaysText((text) => assert('Hi John, How many years have you been coding?', text))
.UserSaysText('5')
.WaitBotSaysText((text) => assert('What language do you code Node using?', text))
.UserSaysText('CoffeeScript')
.WaitBotSaysText((text) => assert('Got it... John you\'ve been programming for 5 years and use CoffeeScript.', text))
...
In the end, Botium is stopped and some cleanup tasks are performed.
Don't forget the "Exec" call, otherwise nothing will be executed at all!
...
.Stop()
.Clean()
.Exec()
...
Run the program and watch output
Now run the program as usual in a command line window.
[ec2-user@ip-172-30-0-104 botframework]$ node botiumFluent.js
SUCCESS: Got Expected <You've been invited to a survey! It will start in a few seconds...>
SUCCESS: Got Expected <Hello... What's your name?>
SUCCESS: Got Expected <Hi John, How many years have you been coding?>
SUCCESS: Got Expected <What language do you code Node using?>
SUCCESS: Got Expected <Got it... John you've been programming for 5 years and use CoffeeScript.>
READY
[ec2-user@ip-172-30-0-104 botframework]$
TestMyBot
Botium is comparable to what Selenium/Appium are doing (unified API and "Page Object Model"). TestMyBot is a layer above Botium to integrate Botium with CI/CD pipelines and test runners like Mocha and Jasmine. The conversations don't have to be coded as above, but are "scripted" in text files, excel files or yml files, for example:
survey
#me
hi
#bot
You've been invited to a survey! It will start in a few seconds...
#bot
Hello... What's your name?
#me
John
#bot
Hi John, How many years have you been coding?
#me
10
#bot
What language do you code Node using?
#me
C#
#bot
I didn't understand. Please choose an option from the list.
#me
JavaScript
#bot
Got it... John you've been programming for 10 years and use JavaScript.
All of these files have to be placed in the directory spec/convo, and the test cases for Jasmine or Mocha (or any other test runner) are created on-the-fly with just a short scriptlet (placed into spec/testmybot.spec.js):
const bot = require('testmybot');
bot.helper.jasmine().setupJasmineTestSuite(60000);
It really helps to have knowledge of Jasmine or Mocha. When correctly set up, the only command to run is:
npm run test
