How to make a server-to-server Dialogflow API call? - dialogflow-es

I have a Docker container in a swarm running on CentOS.
I need to make a Dialogflow POST call to export a Dialogflow agent from the Docker container.
I tried using the Dialogflow developer token, but that didn't work.
I also tried making the same call via the API using an auth token:
curl --request POST \
'https://dialogflow.googleapis.com/v2/projects/testProject/agent:export?key=PROJECT_KEY' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--data '{}' \
--compressed
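For reference, a hedged sketch of the same export call authenticated with a service-account access token instead of an API key (gcloud, the key-file name sa-key.json, and the IAM permissions are assumptions; testProject stays a placeholder):
# assumption: sa-key.json is a service-account key with Dialogflow admin permissions
gcloud auth activate-service-account --key-file=sa-key.json
TOKEN=$(gcloud auth print-access-token)
curl --request POST \
'https://dialogflow.googleapis.com/v2/projects/testProject/agent:export' \
--header "Authorization: Bearer ${TOKEN}" \
--header 'Content-Type: application/json' \
--data '{}'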

Related

Why do I need to install certificates for an external URL when installing gitlab?

I am confused.
For now, I just want to self-host GitLab in my local home network without exposing it to the internet. Is this possible? If so, can I do it without installing ca-certificates?
Why does GitLab force (?) me to expose my GitLab server to the internet?
Nothing else I've installed locally on my NAS/server requires CA certificates for me to connect to its web service: I can just go to xyz.456.abc.123:port in Chrome.
e.g. in this article, the public url is referenced: https://www.cloudsavvyit.com/2234/how-to-set-up-a-personal-gitlab-server/
You don't need to install certificates to use GitLab and you do not have to have GitLab exposed to the internet to have TLS security.
You can also opt to not use TLS/SSL at all if you really want. In fact, GitLab does not use HTTPS by default.
Using Docker is probably the easiest way to demonstrate that it's possible:
mkdir -p /opt/gitlab
export GITLAB_HOME=/opt/gitlab
docker run --detach \
--hostname localhost \
--publish 443:443 --publish 80:80 --publish 22:22 \
--name gitlab \
--volume $GITLAB_HOME/config:/etc/gitlab \
--volume $GITLAB_HOME/logs:/var/log/gitlab \
--volume $GITLAB_HOME/data:/var/opt/gitlab \
-e GITLAB_OMNIBUS_CONFIG='external_url "http://localhost"' \
gitlab/gitlab-ee:latest
# give it 15 or 20 minutes to start up
curl http://localhost
You can replace http://localhost in the external_url configuration with the computer hostname you want to use for your local server or even an IP address.
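For example, to serve GitLab at a LAN address instead, only the GITLAB_OMNIBUS_CONFIG line in the docker run command above changes (a sketch; 192.168.1.50 is a placeholder for your server's IP address):
-e GITLAB_OMNIBUS_CONFIG='external_url "http://192.168.1.50"' \
After it has started, curl http://192.168.1.50 (or opening that address in a browser) should return the GitLab sign-in page.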

How to mount docker container home directory to Azure Storage

I am new to Docker. I'm trying to get the atmoz/sftp container to work with Azure Storage.
My goal is to have multiple SFTP users who will upload files to their own folders which I can then find on Azure Storage.
I used the following command:
az container create \
--resource-group test \
--name testsftpcontainer \
--image atmoz/sftp \
--dns-name-label testsftpcontainer \
--ports 22 \
--location "East US" \
--environment-variables SFTP_USERS="ftpuser1:yyyy:::incoming ftpuser2:xxx:::incoming" \
--azure-file-volume-share-name test-sftp-file-share \
--azure-file-volume-account-name storagetest \
--azure-file-volume-account-key "zzzzzz" \
--azure-file-volume-mount-path /home
The container is created and runs, but when I try (unsuccessfully) to connect via FileZilla, I get this in the log:
Accepted password for ftpuser2 from 10.240.xxx.xxx port 64982 ssh2
bad ownership or modes for chroot directory component "/home/"
If I use /home/ftpuser1/incoming it works for one of the users.
Do I need to change permissions on the /home directory first? If so, how?
You can mount the Azure file share to the container directory /home, and it works on my side. I also made a test with the image atmoz/sftp and it works fine. The command:
az container create -g myResourceGroup \
-n azuresftp \
--image atmoz/sftp \
--ports 22 \
--ip-address Public \
-l eastus \
--environment-variables SFTP_USERS="ftpuser1:yyyy:::incoming ftpuser2:xxx:::incoming" \
--azure-file-volume-share-name fileshare \
--azure-file-volume-mount-path /home \
--azure-file-volume-account-name xxxxxx \
--azure-file-volume-account-key xxxxxx
Update:
Given your requirements, the error shows bad ownership, and it is currently not possible to control the permissions when you mount the Azure file share to the path /home or /home/user. So I recommend mounting the Azure file share to the /home/user/upload path of each user; that achieves the same result you need.
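A hedged sketch of that recommendation for a single user (the share, account and key values are placeholders copied from the command above; as far as I know these CLI flags define only one volume, so covering several users this way would need one mount per user, e.g. via a YAML deployment):
az container create -g myResourceGroup \
-n azuresftp \
--image atmoz/sftp \
--ports 22 \
--ip-address Public \
-l eastus \
--environment-variables SFTP_USERS="ftpuser1:yyyy:::upload" \
--azure-file-volume-share-name fileshare \
--azure-file-volume-mount-path /home/ftpuser1/upload \
--azure-file-volume-account-name xxxxxx \
--azure-file-volume-account-key xxxxxx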
I could not find a solution to the problem. In the end I used another approach:
- I mounted the Azure storage into another, unrelated folder /mnt/sftpfiles
- After the container was built, I ran these commands:
apt update
apt-get -y install lsyncd
lsyncd -rsync /home /mnt/sftpfiles
These commands install a tool called lsyncd, which watches for file-system changes and copies files to another folder when a change occurs.
This solves my requirement but it has a side effect of duplicating all files (that's not a problem for me).
I'm still open to other suggestions that would help me make this cleaner.

How to back up & restore Spinnaker pipelines

I am new to Spinnaker and trying to use it for the client I am working with. I am somewhat familiar with the Spinnaker architecture.
I know the Front50 microservice is responsible for this task. I am not sure how I can safely back up the pipeline data and restore it into a new instance.
I want to be able to continuously back up these pipelines as they are added, so that when I recreate the Spinnaker instance (i.e. destroy the infra and recreate it from scratch) I am able to restore them.
I am currently using Azure as the cloud provider and using Azure Container service.
I found this page: https://www.spinnaker.io/setup/install/backups/
but it does not indicate whether the pipelines will also be backed up.
Many thanks in advance
I am not sure about the standard method, but you can copy the configurations for pipelines and applications from Front50 manually.
For pipelines, just do a curl to http://<front50-IP>:8080/pipelines
curl http://<front50-IP>:8080/pipelines -o pipelines.json
For applications config:
curl http://<front50-IP>:8080/v2/applications -o applications.json
To push the pipeline config to Spinnaker, you can do:
cat pipelines.json | curl -d @- -X POST \
--header "Content-Type: application/json" \
--header "Accept: */*" http://<Front50_URL>:8080/pipelines
P.S.: My Spinnaker version is 1.8.1, and both the v1 and v2 Kubernetes providers are supported.
Update-2: If you are using AWS S3 or GCS, you can back up the buckets directly.

How to call WebServices in Puppet manifests?

We need to invoke an external web service via Puppet. The steps provided to me consist of invoking the web service with a curl command. Is there any way to invoke web services natively via Puppet instead of calling curl commands?
The curl example is provided below:
curl -k -v -H "Content-Type:application/json" --data some_data_file https://service.server.com/some_service

CouchDB 2.0 installation and single node setup

After installing CouchDB 2.0, the docs ask you to do this:
After installation and initial startup, visit Fauxton at
http://127.0.0.1:5984/_utils#setup. You will be asked to set up
CouchDB as a single-node instance or set up a cluster.
This gets in the way when automating the installation process.
What is actually going on when you decide for one option or the other?
Can the same results be achieved via API calls?
Thanks for any insights
volker
Of course :)
Documentation from the couchdb-documentation repository.
The Cluster Setup API
If you would prefer to manually configure your CouchDB cluster, CouchDB exposes
the _cluster_setup endpoint for that. After installation and initial setup,
we can set up the cluster. On each node we need to run the following command to
set up the node:
curl -X POST -H "Content-Type: application/json" http://admin:password@127.0.0.1:5984/_cluster_setup -d '{"action": "enable_cluster", "bind_address":"0.0.0.0", "username": "admin", "password":"password"}'
After that we can join all the nodes together. Choose one node
as the "setup coordination node" to run all these commands on.
This is a "setup coordination node" that manages the setup and
requires all other nodes to be able to see it and vice versa.
Setup will not work with unavailable nodes.
The notion of "setup coordination node" will be gone once the setup is finished.
From then onwards the cluster will no longer have a "setup coordination node".
To add a node run these two commands:
curl -X POST -H "Content-Type: application/json" http://admin:password@127.0.0.1:5984/_cluster_setup -d '{"action": "enable_cluster", "bind_address":"0.0.0.0", "username": "admin", "password":"password", "port": 15984, "remote_node": "<remote-node-ip>", "remote_current_user": "<remote-node-username>", "remote_current_password": "<remote-node-password>" }'
curl -X POST -H "Content-Type: application/json" http://admin:password@127.0.0.1:5984/_cluster_setup -d '{"action": "add_node", "host":"<remote-node-ip>", "port": "<remote-node-port>", "username": "garren", "password":"password"}'
This will join the two nodes together.
Keep running the above commands for each
node you want to add to the cluster. Once this is done run the
following command to complete the setup and add the missing databases:
curl -X POST -H "Content-Type: application/json" http://admin:password#127.0.0.1:5984/_cluster_setup -d '{"action": "finish_cluster"}'
Your CouchDB cluster is now set up.
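To verify (a quick sanity check using the same placeholder credentials and host as above), you can query the setup state and the cluster membership:
curl http://admin:password@127.0.0.1:5984/_cluster_setup
curl http://admin:password@127.0.0.1:5984/_membership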
Source : https://github.com/apache/couchdb-documentation/blob/master/src/cluster/setup.rst
