PHP Code for Search Engine Bot Detection Like Google Bot

Looking for a better and more effective PHP snippet to detect search engine bots like Googlebot.
Is the code below reliable, or is there another approach that performs better?
function _bot_detected() {
    return (
        isset($_SERVER['HTTP_USER_AGENT'])
        && preg_match('/bot|crawl|slurp|spider|mediapartners/i', $_SERVER['HTTP_USER_AGENT'])
    );
}
Regards

Related

Is there a way to intercept the query in nlp.js WebChat API server?

Just getting started with nlp.js, and I'd like to be able to test out some ideas with their Express API server package.
As far as I can tell, there's no way to "intervene" in the QnA bot exchange. For instance, if I'd like to format the output to contain the user's name or a time or whatever.
Say my corpus was a tsv file with:
some question \t welcome, #name
And I wanted to swap out that #name tag? Right now, I just get that string exactly as is.
In the conf.json:
"api-server": {
"port": 3000,
"serveBot": true
}
Maybe there's pipeline logic to do that?
I can't seem to find much reference material on the available pipeline events, or on how to intervene in the WebChat flow.
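One workaround (a sketch only, not a hook into the built-in WebChat pipeline) is to call nlp.process yourself and rewrite the answer before it reaches the user. This assumes the @nlpjs/basic dockStart setup and a hypothetical ask helper:

const { dockStart } = require('@nlpjs/basic');

(async () => {
  // dockStart() reads ./conf.json, the same file that configures the api-server.
  const dock = await dockStart();
  const nlp = dock.get('nlp');
  await nlp.train();

  // Hypothetical helper: run the question through the model, then swap the
  // #name placeholder in the canned answer before handing it to your own transport.
  async function ask(question, userName) {
    const result = await nlp.process('en', question);
    return (result.answer || '').replace('#name', userName);
  }

  console.log(await ask('some question', 'Ada'));
})();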

How to get URL of shop in Shopify embedded app?

I'm building a Shopify App using this template: https://github.com/Shopify/shopify-app-template-node/tree/cli_three and I was wondering how I can get the URL of the shop that is using my app (meaning the URL of the document outside the iframe). I had a working approach using location.ancestorOrigins (https://developer.mozilla.org/en-US/docs/Web/API/Location/ancestorOrigins), but since that is not supported in Firefox, I was wondering if there is another approach. Thanks in advance for your help!
Okay, so I came up with a solution by myself, but thanks for your help @TwistedOwl and @David Lazar. I'm posting my code here in case anyone has the same problem in the future:
import { useAppBridge } from "@shopify/app-bridge-react";

function DemoPage() {
  const bridge = useAppBridge();
  // Strip the protocol and any www. prefix to get the bare shop domain.
  const shopUrl = bridge.hostOrigin.replace('https://', '').replace('www.', '');
  console.log(shopUrl);
}
All calls to your app provide a shop parameter (and now a host parameter too), ensuring you always know which Shopify store is using your app. There is no situation where Shopify calls your app without the shop parameter being part of the equation.
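To illustrate that last point, a minimal server-side sketch (the route name is made up; the shop query parameter itself is what the paragraph above refers to):

import express from "express";

const app = express();

// Hypothetical route: embedded app requests arrive with
// ?shop=<store>.myshopify.com&host=..., so the shop domain can be read
// straight from the query string.
app.get("/api/shop-info", (req, res) => {
  const shop = req.query.shop; // e.g. "example-store.myshopify.com"
  if (!shop) {
    return res.status(400).send("Missing shop parameter");
  }
  res.json({ shop });
});

app.listen(3000);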

Is accessing the Customer Profile API possible using the Cloud9 code editor in the AWS Lambda web console? If so, how?

First off, I'm new to Alexa skill development, so I have much to learn. I've been banging my head off the desk trying to figure this out. I've found various tutorials and have gone over the information provided by Amazon for accessing the Customer Profile API via an Alexa skill but still can't manage to obtain the customer's phone number.
I'm using the AWS console in-line code editor (Cloud9). Most, if not all, instructions use modules like 'axios', 'request', or 'https', which I don't think is possible unless you use the ask-cli (please correct me if I'm wrong). Also, I followed a tutorial to initially create the skill, which had me use Skillinator.io to create an AWS Lambda template based on the skill's JSON in the Amazon Developer console. The format of the code in the Customer Profile API tutorials does not match what was provided by the Skillinator.io tool. The way the intent handlers are set up is different, which is where I believe my confusion is coming from. Here's an example:
Skillinator.io code:
const handlers = {
  'LaunchRequest': function () {
    welcomeOutput = 'Welcome to the Alexa Skills Kit!';
    welcomeReprompt = 'You can say, Hello!';
    this.emit(':ask', welcomeOutput, welcomeReprompt);
  },
};
Tutorial code:
const LaunchRequestHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'LaunchRequest';
  },
  handle(handlerInput) {
    const speechText = 'Welcome to the Alexa Skills Kit!';
    return handlerInput.responseBuilder
      .speak(speechText)
      .reprompt(speechText)
      .withSimpleCard('Hello World', speechText)
      .getResponse();
  }
};
Can anyone shed some light and help me understand why there is a difference in the way the handlers are formatted, and how (if possible) to create the request to the Customer Profile API?
I've already completed the steps for the necessary permissions/account linking.
Thanks in advance.
EDIT:
I've learned that the difference in syntax is due to the different versions of the SDK: Skillinator uses 'alexa-sdk' (v1) and the various tutorials use 'ask-sdk' (v2).
I'm still curious whether using modules like 'axios' or 'request' is possible via the in-line code editor in the AWS console, and whether it's even possible to access the Customer Profile API using SDK v1.
I've decided to answer the question with what I've learned in hopes that others won't waste as much time as I have trying to understand it.
Basically, it is possible to use the above-mentioned modules with SDK v1 from the AWS console's in-line code editor, but you must first create a .zip file of your code and any necessary modules and upload that .zip to Lambda.
I've edited my original answer to include my findings on the difference in syntax in the intent handlers.
From what I can tell (and please correct me if I'm wrong), it is not possible to access the Customer Profile API using SDK v1.
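For anyone who lands here on SDK v2, here is a minimal sketch of what the Customer Profile call can look like with ask-sdk-core (the intent name is made up, and the mobile-number permission must already be granted, as mentioned above):

const Alexa = require('ask-sdk-core');

const PhoneNumberIntentHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      && handlerInput.requestEnvelope.request.intent.name === 'PhoneNumberIntent'; // hypothetical intent
  },
  async handle(handlerInput) {
    try {
      // The UPS service client wraps the Customer Profile API.
      const upsClient = handlerInput.serviceClientFactory.getUpsServiceClient();
      const number = await upsClient.getProfileMobileNumber();
      return handlerInput.responseBuilder
        .speak(`Your phone number is ${number.phoneNumber}`)
        .getResponse();
    } catch (error) {
      // A permissions error usually means the user has not granted access yet.
      return handlerInput.responseBuilder
        .speak('Please grant permission to read your phone number in the Alexa app.')
        .withAskForPermissionsConsentCard(['alexa::profile:mobile_number:read'])
        .getResponse();
    }
  },
};

exports.handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(PhoneNumberIntentHandler)
  .withApiClient(new Alexa.DefaultApiClient()) // required for serviceClientFactory
  .lambda();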

Is it possible to lookup a database in a Dialogflow intent?

I'm trying to make an app using DialogFlow which finds a specific object in a specific place.
This is a generic example.
The user would say something like "Where do I find Dog in Europe" and the app would reply with "Dog can be found in Europe via: breeding, finding it out in the wild or by buying it",
considering Dog as input1 and Europe as input2.
Ideally the app should be able to cross reference input1 and input2 to find the correct response. Can I implement a database like structure and do this?
You can't access a database from Dialogflow directly, but you can build your own fulfillment backend that can do anything you want. It communicates with Dialogflow via HTTP requests/responses in the Dialogflow Webhook format.
Here is an example fulfillment that reads data from Firebase database - https://github.com/actions-on-google/dialogflow-updates-nodejs
You can't access a database directly in Dialogflow, but you can build your own fulfillment back end. I have been using Airtable as a database, with Integromat and webhooks to query the database and pass the results back to Dialogflow. As a novice coder, I found this to be the simplest way.
KaySubb is right, you can make a fulfillment that reads data from a Firebase database (or Firestore).
You can do this by turning on fulfillment at the bottom of the intent page.
First, go to https://console.firebase.google.com/ (log in with your Google account) and you should be able to see your Google Cloud Platform project.
To use Firebase, you first need to install its CLI. Get Node.js, as you need npm. Whichever OS you're on, open a command line or terminal and type:
npm install -g firebase-tools
then type:
firebase login
this will authenticate your login and connect your project when you deploy.
Then go to the directory you want to create your project in and run:
firebase init functions
Select your project and JavaScript, and install all dependencies.
Now go into the functions folder and open the index.js file. This is where you write the code you need in JS.
Write your functions and type:
firebase deploy
in the command line, opened in the project directory. When it completes, it will give you a link. Use this as the webhook URL in Dialogflow (it should start with https://us-central). If you only see one link, which says console.firebase.google.com..., then open that link in a browser, click on "functions" on the left side of the screen, and get the link from there.
This should get you started with Firebase; now you can link your project to the fulfillment. There is a great Firestore explanation here:
https://www.youtube.com/watch?v=kdk6MhhI8oc
But I'll give you a brief explanation:
On the top of your index.js file you will need:
const functions = require('firebase-functions');
var admin = require("firebase-admin");
admin.initializeApp(functions.config().firebase);
var firestore = admin.firestore();
The basic code is here:
exports.webhook = functions.https.onRequest((request, response) => {
  // The action name set in Dialogflow decides which branch runs.
  switch (request.body.result.action) {
    case 'saveData': {
      const params = request.body.result.parameters;
      // Write one document into the 'colName' collection.
      firestore.collection('colName').doc('docName').set({
        name: params.name,
        age: params.age
      }).then(() => {
        response.send({
          speech: `this is a response for "${params.name}".`
        });
      })
      .catch(e => {
        console.log('Error writing document', e);
        response.send({
          speech: 'Sorry, something has gone wrong. Try again and if the problem persists, please report it.'
        });
      });
      break;
    }
    default:
  }
});
I'll explain what it does:
You need the switch to decide which intent to handle. request.body.result.action returns the action name (you set this in Dialogflow just above the parameters).
Once that is decided, request.body.result.parameters gives you the parameters from the intent; params.______ gives you an individual parameter.
I would definitely recommend reading the official documentation:
https://firebase.google.com/docs/firestore/quickstart
to help understand the data structure and create the ideal database for you. Essentially, a collection is a list and, within it, a doc is one entry. You can name them yourself or use the entries from params.
response.send is what the bot replies to the user; I've also shown how to use the parameters in the response.
.catch will just store any errors in the log. You can read the log at console.firebase.google.com...: open your project, click on "functions", and there is a place to read logs where you can check any errors encountered.
default: will output whatever default response you wrote in Dialogflow at the bottom of the intent.
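For the original question (cross-referencing an animal and a place), a read might look like the sketch below. It assumes a hypothetical animals collection where each doc id is the animal name and each doc holds a places map, and it uses the firestore handle from the setup code above:

// Sketch only: e.g. doc 'Dog' containing
// { places: { Europe: 'breeding, finding it in the wild or buying it' } }.
function findAnimal(animal, place) {
  return firestore.collection('animals').doc(animal).get()
    .then(doc => {
      const places = doc.exists ? doc.data().places : {};
      const how = places && places[place];
      return how
        ? `${animal} can be found in ${place} via: ${how}`
        : `Sorry, I have no data for ${animal} in ${place}.`;
    });
}

Inside the webhook you would call it from its own case, e.g. findAnimal(params.animal, params.place).then(speech => response.send({ speech })).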
Hope this helps; comment with any questions. I have gone through a huge amount as concisely as I could. This will take some time to get used to and become good at, so follow the docs and the YouTube videos if you have a lot of trouble!
If you're having even more trouble, there is a slack that helps people that I can direct you to.

Chrome text-to-speech API doesn't work

I'm trying the Chrome text-to-speech API, but even the demo provided by Google
https://developer.chrome.com/trunk/extensions/examples/extensions/ttsdemo/ttsdemo.html
doesn't work for me; I can't hear any sound. Do you?
I don't think it is a problem with my browser, because translate.google.com (which I guess is based on the same technology) works for me if I try the listening mode.
Any idea?
Thanks
As of Chrome 33, Chrome's speech synthesis API is available in JavaScript.
Quick example:
window.speechSynthesis.speak(
  new SpeechSynthesisUtterance('Oh why hello there.')
);
Details:
HTML5 Rocks: Introduction to the Speech Synthesis API
Hi, Eugenio.
This API is only available to extensions. You can port your logic into an extension (people would have to install it to use it, of course), create an extension that exposes the functions to the "outside world" (people would still need to install the extension to use your app correctly), or simply use a client-side synthesizer (speak.js, for example).
You can use the WebAudio API (or even audio tags) and calls to the Google Translate TTS endpoint, but that's not a public API and it has no guarantees. It can simply stop working because of some limitation from Google; they can change the API or endpoints and yadda yadda. If it's only for testing, that'll probably do, but if it's a bigger project (or a commercial one), I strongly advise against that option.
Good luck.
Today (October 2015), 55% of devices have Speech Synthesis API support: http://caniuse.com/#feat=speech-synthesis
Here's the example:
// Create the utterance object
var utterance = new SpeechSynthesisUtterance();
utterance.text = 'Hello, World!';
// optional parameters
utterance.lang = 'en-GB'; // language, default is 'en-US'
utterance.volume = 0.5; // volume, from 0 to 1, default is 1
utterance.rate = 0.8; // speaking rate, default is 1
// speak it!
window.speechSynthesis.speak(utterance);
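If you also want to pick a specific voice, note that speechSynthesis.getVoices() can return an empty list until the voiceschanged event fires. A small sketch, assuming an en-GB voice is installed:

window.speechSynthesis.onvoiceschanged = function () {
  var voices = window.speechSynthesis.getVoices();
  var utterance = new SpeechSynthesisUtterance('Hello again.');
  // Use the first British English voice if available, otherwise the default.
  utterance.voice = voices.find(function (v) { return v.lang === 'en-GB'; }) || voices[0];
  window.speechSynthesis.speak(utterance);
};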
Just to add some links, because I was also lost trying to find the right information.
You can use the so-called Speech Synthesis API of Chrome; see the demo: https://www.audero.it/demo/speech-synthesis-api-demo.html
Further info:
Talking Web Pages and the Speech Synthesis API
Getting Started with the Speech Synthesis API
Using HTML5 Speech Recognition and Text to Speech
Hope that helps, and hope the links will survive the future.
