Have Dialogflow's fulfillment webhook enabled on every intent

I'm using fulfillment webhooks to store analytics data on my servers, so I need the webhook enabled on every possible intent. So far I've been doing it by manually checking "Enable webhook call for this intent" on every intent. That is error-prone though, as it would be easy to forget to do it on an intent. Is there any global way to have it enabled for all intents?

There is no direct way to do this, but I have made a Python script that does it for you.
You need to follow the steps below to get it done:
1. Export your agent: go to your agent's settings, select the Export and Import tab, and choose Export as zip. This gives you a zip file of your agent.
2. Put the zip file in the same folder where your Python script file is.
3. Run the Python script. A folder named zipped will be created.
4. Go inside that folder, select all the files and folders present in it, and zip them (or use the optional snippet after the script below).
5. Restore your agent: go to your agent's settings, select the Export and Import tab, choose Restore from zip, and select the zip file you created in the previous step.
Python code:

import zipfile
import json
import os
import glob

cwd = os.getcwd()

# extract the exported agent into a folder named 'zipped'
zip_ref = zipfile.ZipFile(cwd + '/your_agent.zip', 'r')
zip_ref.extractall('zipped')
zip_ref.close()

# turn on the webhook in every intent file (the usersays files are skipped)
intents_dir = cwd + '/zipped/intents'
files = glob.glob(intents_dir + "/*.json")
for file in files:
    print(file)
    if "usersay" not in file:
        json_data = json.loads(open(file).read())
        json_data['webhookUsed'] = True
        with open(file, 'w') as outfile:
            json.dump(json_data, outfile)

print('Done')
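If you would rather not zip the extracted files by hand in step 4, a small follow-up snippet can do it. This is a sketch; agent_webhooks is an arbitrary output name:

import os
import shutil

# zip the *contents* of the 'zipped' folder (not the folder itself), so the
# archive has the layout Dialogflow expects on restore
shutil.make_archive('agent_webhooks', 'zip', os.path.join(os.getcwd(), 'zipped'))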
Hope it helps.

Related

How to access the Hydra config object at runtime

I need to change the output/working directory of the hydra config framework in such a way that it lies outside of my project directory. According to my understanding and the doc, config.yaml would need to look like this:
exp_nr: 0.0.0.0
condition: something

hydra:
  run:
    dir: /absolute/path/to/folder/${exp_nr}/${condition}/
In my code, I then tried to access and set the path like this:
import os

import hydra
from omegaconf import DictConfig

@hydra.main(config_path="../../config", config_name="config", version_base="1.3")
def main(cfg: DictConfig):
    print(cfg)
    cwd = os.getcwd()
    print(f"The current working directory is {cwd}")
    owd = hydra.utils.get_original_cwd()
    print(f"The Hydra original working directory is {owd}")
    work_dir = cfg.hydra.run.dir
    print(f"The work directory should be {work_dir}")
But I get the following output and error:
{'exp_nr': '0.0.0.0', 'condition': 'something'}
The current working directory is /project/path/subdir/subsubdir
The Hydra original working directory is /project/path/subdir/subsubdir
Error executing job with overrides: ['exp_nr=1.0.0.0', 'condition=somethingelse']
Traceback (most recent call last):
  File "/project/path/subdir/subsubdir/model.py", line 13, in main
    work_dir = cfg.hydra.run.dir
omegaconf.errors.ConfigAttributeError: Key 'hydra' is not in struct
    full_key: hydra
    object_type=dict
I see that hydra.run.dir doesn't appear in the cfg dict printed first, but how can I access the path through the config if os.getcwd() isn't already set to it? Or what did I do wrong?
The path itself is correct: I already saved files to the folder before integrating Hydra, and if the process isn't killed by the error, the folder does get created. However, Hydra doesn't save any files to it, not even the log file with the parameters that it should save by default. I also tried setting the path relative to the standard output path, and adding an extra config parameter work_dir: ${hydra.run.dir} (which returns an interpolation error).
You can access the Hydra config via the HydraConfig singleton documented here.
import hydra
from omegaconf import DictConfig
from hydra.core.hydra_config import HydraConfig

@hydra.main()
def my_app(cfg: DictConfig) -> None:
    print(HydraConfig.get().job.name)
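Applied to the question, the run directory should be reachable the same way. A minimal sketch, assuming Hydra 1.x, where HydraConfig.get().run.dir mirrors the hydra.run.dir setting:

import hydra
from omegaconf import DictConfig
from hydra.core.hydra_config import HydraConfig

@hydra.main(config_path="../../config", config_name="config", version_base="1.3")
def main(cfg: DictConfig) -> None:
    # the resolved output directory Hydra is using for this run
    work_dir = HydraConfig.get().run.dir
    print(f"The work directory should be {work_dir}")

if __name__ == "__main__":
    main()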

Copy files from blob container to another container using python

I am trying to copy specific files from one folder to another. When I use a wildcard (*) at the end of the prefix, the copy does not happen.
But if I provide just the folder name, then all the files from the source folder are copied to the target folder without any issues.
Problem: the file copy does not happen when a wildcard is used.
Can you please help me fix the problem?
from azure.storage.blob import BlockBlobService

def copy_blob_files(account_name, account_key, copy_from_container, copy_to_container, copy_from_prefix):
    try:
        blob_service = BlockBlobService(account_name=account_name, account_key=account_key)
        files = blob_service.list_blobs(copy_from_container, prefix=copy_from_prefix)
        for f in files:
            # print(f.name)
            blob_service.copy_blob(copy_to_container, f.name.replace(copy_from_prefix, ""),
                                   f"https://{account_name}.blob.core.windows.net/{copy_from_container}/{f.name}")
    except:
        print('Could not copy files from source to target')

copy_from_prefix = 'Folder1/FileName_20191104*.csv'
copy_blob_files(accountName, accesskey, copy_fromcontainer, copy_to_container, copy_from_prefix)
The copy_blob method does not support wildcards.
1. If you want to copy blobs matching a specific pattern, you can filter the blobs in the list_blobs() method with a prefix (which also does not support wildcards). In your case, the prefix would look like copy_from_prefix = 'Folder1/FileName_20191104'; note that there is no wildcard.
The code below works on my side; all files matching the pattern are copied, with the prefix stripped from the blob name:
from azure.storage.blob import BlockBlobService

account_name = "xxx"
account_key = "xxx"
copy_from_container = "test7"
copy_to_container = "test4"
# remove the wildcard
copy_from_prefix = 'Folder1/FileName_20191104'

def copy_blob_files(account_name, account_key, copy_from_container, copy_to_container, copy_from_prefix):
    try:
        block_blob_service = BlockBlobService(account_name, account_key)
        files = block_blob_service.list_blobs(copy_from_container, copy_from_prefix)
        for file in files:
            block_blob_service.copy_blob(copy_to_container, file.name.replace(copy_from_prefix, ""),
                                         f"https://{account_name}.blob.core.windows.net/{copy_from_container}/{file.name}")
    except:
        print('could not copy files')

copy_blob_files(account_name, account_key, copy_from_container, copy_to_container, copy_from_prefix)
2. Another way, as others mentioned, is to call AzCopy from Python (use AzCopy v10, which is just a .exe file). AzCopy does support wildcards; you can follow this doc for the syntax. Write your own AzCopy command, then call it from your Python code like this:

import subprocess

# the path of azcopy.exe, v10 version
exepath = "D:\\azcopy\\v10\\azcopy.exe"
myscript = "your azcopy command"

# call the azcopy command
subprocess.call(myscript)
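For illustration, a sketch of what such a command might look like for this case. The account name, containers, and <SAS> tokens are placeholders to replace with real values, and the exact flags depend on your AzCopy version:

import subprocess

# hypothetical AzCopy v10 invocation; the wildcard lives in the source URL
azcopy = r"D:\azcopy\v10\azcopy.exe"
src = "https://account.blob.core.windows.net/test7/Folder1/FileName_20191104*.csv?<SAS>"
dst = "https://account.blob.core.windows.net/test4?<SAS>"
subprocess.call([azcopy, "copy", src, dst])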
AzCopy supports wildcards, so you could execute AzCopy from your Python code.
An example of how to do this can be found here: How to run Azure CLI commands using python?

How can I allow a Python code to open "%temp%" folder for any user?

I have several users on my PC, and I'm trying to write a Python script that opens the %temp% folder, but it only works under my own account. When I run the same code under a different account on the same PC, it does not work.
My folder path is C:\Users\MyAccount\AppData\Local\Temp; the problem is that the path is hard-coded to the user 'MyAccount'.
This is my code:
import webbrowser

# note: a raw string (r'...') keeps the backslashes from being read as escapes
webbrowser.open(r'C:\Users\MyAccount\AppData\Local\Temp')
I need to pass the correct user folder to my code. Example:
on my account the path is     C:\Users\MyAccount\AppData\Local\Temp
on a different account it is  C:\Users\?\AppData\Local\Temp
where ? should be the name of the logged-in user.
Could you please advise me?
If pathlib is an option (comes with Python 3.4+) you can use
from pathlib import Path
Path.home() / 'AppData' / 'Local' / 'Temp'
if not, try
from os import path
path.expanduser('~/AppData/Local/Temp')
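Either expression resolves to the current user's Temp path, so applied to the question it might look like this (a sketch; it assumes the default Windows Temp location under the user profile):

import webbrowser
from os import path

# resolves to C:\Users\<current user>\AppData\Local\Temp for whoever is logged in
temp_dir = path.expanduser(r'~\AppData\Local\Temp')
webbrowser.open(temp_dir)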

Creating a Spark RDD from a file located in Google Drive using Python on Colab.Research.Google

I have been successful in running a Python 3 / Spark 2.2.1 program on Google's Colab.Research platform:
!apt-get update
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q http://apache.osuosl.org/spark/spark-2.2.1/spark-2.2.1-bin-hadoop2.7.tgz
!tar xf spark-2.2.1-bin-hadoop2.7.tgz
!pip install -q findspark
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.2.1-bin-hadoop2.7"
import findspark
findspark.init()
from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local[*]").getOrCreate()
This works perfectly when I upload text files from my local computer to the Unix VM using
from google.colab import files
datafile = files.upload()
and read them as follows:
textRDD = spark.read.text('hobbit.txt').rdd
so far so good ..
My problem starts when I try to read a file that is lying in my Google Drive Colab directory.
Following the instructions, I authenticated the user and created a drive service:
from google.colab import auth
auth.authenticate_user()
from googleapiclient.discovery import build
drive_service = build('drive', 'v3')
after which I have been able to access the file lying in the drive as follows:
file_id = '1RELUMtExjMTSfoWF765Hr8JwNCSL7AgH'
import io
from googleapiclient.http import MediaIoBaseDownload
request = drive_service.files().get_media(fileId=file_id)
downloaded = io.BytesIO()
downloader = MediaIoBaseDownload(downloaded, request)
done = False
while done is False:
    # _ is a placeholder for a progress object that we ignore.
    # (Our file is small, so we skip reporting progress.)
    _, done = downloader.next_chunk()

downloaded.seek(0)
print('Downloaded file contents are: {}'.format(downloaded.read()))
Downloaded file contents are: b'The king beneath the mountain\r\nThe king of ......
Even this works perfectly:
downloaded.seek(0)
print(downloaded.read().decode('utf-8'))
and gets the data
The king beneath the mountain
The king of carven stone
The lord of silver fountain ...
Where things FINALLY GO WRONG is when I try to grab this data and put it into a Spark RDD:
downloaded.seek(0)
tRDD = spark.read.text(downloaded.read().decode('utf-8'))
and I get the error:
AnalysisException: 'Path does not exist: file:/content/The king beneath the mountain\ ....
Evidently, I am not using the correct method / parameters to read the file into Spark. I have tried quite a few of the methods described.
I would be very grateful if someone can help me figure out how to read this file for subsequent processing.
A complete solution to this problem is available in another StackOverflow question at this URL.
Here is the notebook where this solution is demonstrated.
I have tested it and it works!
It seems that spark.read.text expects a file name, but you give it the file content instead. You can try either of these (a sketch of the first follows below):
save it to a file, then pass the file name
use just downloaded instead of downloaded.read().decode('utf-8')
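A sketch of the first option, reusing the names from the question (the local file name is arbitrary):

# write the downloaded bytes to a local file, then hand Spark the file name
downloaded.seek(0)
with open('hobbit.txt', 'wb') as f:
    f.write(downloaded.read())

textRDD = spark.read.text('hobbit.txt').rdd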
You can also simplify downloading from Google Drive with pydrive. I gave an example here.
https://gist.github.com/korakot/d56c925ff3eccb86ea5a16726a70b224
Downloading is just:
fid = drive.ListFile({'q':"title='hobbit.txt'"}).GetList()[0]['id']
f = drive.CreateFile({'id': fid})
f.GetContentFile('hobbit.txt')
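Once GetContentFile has written the file to the Colab VM's local disk, the earlier pattern from the question works unchanged:

textRDD = spark.read.text('hobbit.txt').rdd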

Py2exe: Embed static files in exe file itself and access them

I found a solution for adding files to library.zip via: Extend py2exe to copy files to the zipfile where pkg_resources can load them.
I can access my file when library.zip is not included in the exe.
I added a file text.txt in the directory foo/media inside library.zip,
and I use this code:
import pkg_resources
import zipfile
from cStringIO import StringIO
my_data = pkg_resources.resource_string(__name__,"library.zip")
filezip = StringIO(my_data)
zip = zipfile.ZipFile(filezip)
data = zip.read("foo/media/text.txt")
I tried to use pkg_resources, but I think I am misunderstanding something, because I could open "library.zip" directly.
My question is: how can I do this when library.zip is embedded in the exe?
Best Regards
Jean-Michel
I cobbled together a reasonably neat solution to this, but it doesn't use pkg_resources.
I need to distribute productivity tools as standalone EXEs, that is, all bundled into one .exe file. I also need to send out notifications when these tools are used, which I do via the logging API, using file-based configuration. I embed the logging.cfg file to make it harder to effectively switch off these notifications, i.e. by deleting the loose file... which would probably break the app anyway.
So the following are the interesting bits from my setup.py:

LOGGING_CFG = open('main/resources/logging.cfg').read()

setup(
    name='productivity-tool',
    ...
    # py2exe extras
    console=[{'script': productivity_tool.__file__.replace('.pyc', '.py'),
              'other_resources': [(u'LOGGINGCFG', 1, LOGGING_CFG)]}],
    zipfile=None,
    options={'py2exe': {'bundle_files': 1, 'dll_excludes': ['w9xpopen.exe']}},
)
Then in the startup code for productivity_tool.py:
from win32api import LoadResource
from StringIO import StringIO
from logging.config import fileConfig
...

if __name__ == '__main__':
    if is_exe():
        logging_cfg = StringIO(LoadResource(0, u'LOGGINGCFG', 1))
    else:
        logging_cfg = 'main/resources/logging.cfg'
    fileConfig(logging_cfg)
    ...
Works a treat!!!
Thank you, but I found the solution:

my_data = pkg_resources.resource_stream("__main__", sys.executable)  # get the lib.zip file
zip = zipfile.ZipFile(my_data)
data = zip.read("foo/media/doc.pdf")  # get my data from lib.zip
file = open(output_name, 'wb')
file.write(data)  # write it to a file
file.close()
Best Regards
You shouldn't be using pkg_resources to retrieve the library.zip file. You should use it to retrieve the added resource.
Suppose you have the following project structure:
setup.py
foo/
    __init__.py
    bar.py
    media/
        image.jpg
You would use resource_string (or, preferably, resource_stream) to access image.jpg:
img = pkg_resources.resource_string(__name__, 'media/image.jpg')
That should "just work". At least it did when I bundled my media files in the EXE. (Sorry, I've since left the company where I was using py2exe, so don't have a working example to draw on.)
You could also try using pkg_resources.resource_filename(), but I don't think that works under py2exe.
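For larger files, a sketch of the resource_stream variant mentioned above (the output file name is arbitrary); it streams the bundled resource out to disk rather than asking for a filesystem path, which py2exe's zip bundling can't provide:

import shutil
import pkg_resources

# copy the bundled resource out of the zip/exe to a real file on disk
src = pkg_resources.resource_stream(__name__, 'media/image.jpg')
with open('image.jpg', 'wb') as dst:
    shutil.copyfileobj(src, dst)
src.close()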
