According to this article https://cloud.google.com/dialogflow/docs/intents-rich-messages, we can add up to 10 text responses in the UI, which are then sent sequentially to the user.
However, in my web demo only one of the responses is picked at random.
Link to web demo: https://bot.dialogflow.com/8195cb64-1104-46e5-8d43-b153828d7205
Just say hi to reproduce. Below is my intent setup.
Update: when I test exactly the same intent in the test console on the right-hand side, it works as expected.
You have not read it properly.
It says "Text responses are available in all platforms. Your agent can send up to 10 sequential text messages in response to a user input (assuming no other message types are defined in the intent).To add a new line in the UI, press Shift+Enter."
1) The text response variants you add are picked at random by Dialogflow.
2) To solve your query: your sequential responses have to be separated by line breaks. That means you enter response variant 1, press Shift+Enter to add a line break, write variant 2, and so on. Screenshot of messages with line breaks below.
Note: the message will be sent as a single message, with line breaks between the texts you call variations.
If you want to send the variations as separate sequential messages, you will have to use fulfillment responses (https://cloud.google.com/dialogflow/docs/fulfillment-overview).
You can use:
agent.add('Message 1');
agent.add('Message 2');
agent.add('Message 3');
// ... as many as you want.
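If you would rather not use the Node.js fulfillment library, a plain webhook that returns several entries in fulfillmentMessages should have the same effect. Below is a minimal sketch using Flask (the framework, route name and port are my own choices, assuming the Dialogflow ES webhook response format):

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    # Each entry in fulfillmentMessages is returned as its own text response.
    return jsonify({
        "fulfillmentMessages": [
            {"text": {"text": ["Message 1"]}},
            {"text": {"text": ["Message 2"]}},
            {"text": {"text": ["Message 3"]}},
        ]
    })

if __name__ == "__main__":
    app.run(port=5000)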
@bot.inline_handler(func=lambda query: len(query.query) > 0)
def query_text(query):
    sleep(6)
    text = query.query
    html = requests.get(f'https://google.com/search?q={text}')
    # print(html.status_code)
    open('index.html', 'w', encoding='utf-8').write(html.text)
    soup = BeautifulSoup(html.text, 'html.parser').find_all('div', {"class": "***********"})
    for i in soup:
        fk.append(types.InlineQueryResultArticle(
            id=str(len(fk)),
            title=f"{i.find('h3').get_text()}",
            description=f"{i.find('div', {'class': '**********'}).get_text()}",
            input_message_content=types.InputTextMessageContent(
                message_text=i.find('a').get('href').replace('/url?q=', 'https://google.com/url?q=')),
            hide_url=True,
            url=i.find('a').get('href').replace('/url?q=', 'https://google.com/url?q='),
            thumb_url='https://w7.pngwing.com/pngs/338/520/png-transparent-g-suite-google-play-google-logo-google-text-logo-cloud-computing.png',
            thumb_width=30, thumb_height=30))
        print(i.find('a').get('href').replace('/url?q=', '') + '\n')
    sleep(2)
    bot.answer_inline_query(query.id, fk)
When I type an inline query like @bot google request,
the bot takes it as g, go, goo, google (one query per keystroke).
What is causing this error?
"A request to the Telegram API was unsuccessful. Error code: 400. Description: Bad Request: query is too old and response timeout expired or query ID is invalid"
How can I add a timeout to the text input so that the bot doesn't respond to every letter?
I think the error lies in the way you parse the data. It takes at least 8 seconds (based on the sleep calls) just to reach the answer method. Telegram inline queries are only valid for a few seconds before they are considered old, so it is better to call bot.answer_inline_query() first, process the data afterwards, and then send it to the user with bot.send_message().
I am not certain how it works with async code though.
If you find another solution, please let me know :)
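A rough sketch of that idea, assuming pyTelegramBotAPI (telebot) as in the question: answer immediately with a lightweight result so the query cannot expire, and keep the slow scraping out of the inline handler entirely.

import telebot
from telebot import types

bot = telebot.TeleBot("TOKEN")  # placeholder token

@bot.inline_handler(func=lambda query: len(query.query) > 0)
def query_text(query):
    # Reply within Telegram's time limit: no sleeps, no slow scraping here.
    placeholder = types.InlineQueryResultArticle(
        id="0",
        title=f"Search Google for '{query.query}'",
        input_message_content=types.InputTextMessageContent(
            message_text=f"https://google.com/search?q={query.query}"
        ),
    )
    bot.answer_inline_query(query.id, [placeholder], cache_time=1)

bot.infinity_polling()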
I'm trying to connect Python with Supercollider through OSC, but it's not working.
I'm using Python3 and the library osc4py3.
The original idea was to send a text word by word, but upon trying I realized the connection was not working.
Here's the SC code:
(
OSCdef.new(\texto, {
    |msg, time, addr, port|
    [msg, time, addr, port].postIn;
},
'/texto/supercollider',
n
)
)
OSCFunc.trace(true);
o = OSCFunc(\texto);
And here's the Python code:
osc_startup()
osc_udp_client("127.0.0.1", 57120, "supercollider")

# here goes a function called leerpalabras to separate the words into rows
with open("partitura.txt", "r") as f:
    for palabra in leerpalabras(f):
        msg = oscbuildparse.OSCMessage("/texto/supercollider", ",s", palabra)
        osc_send(msg, "supercollider")
        sleep(2)

osc_terminate()
I've also tried this, to see if maybe there was something wrong with my for loop (with the startup and terminate, of course):
msg = oscbuildparse.OSCMessage("/texto/supercollider", ",s", "holis")
osc_send(msg, "supercollider")
I run the trace method in SC, but nothing appears in the post window when I run the Python script from the terminal, and no error appears on either side, so I'm a bit lost about what I can test to make sure the message is getting anywhere.
It doesn't print in the SC post window; it just says OSCdef(texto, /texto/supercollider, nil, nil, nil).
When I run the SuperCollider piece of your example, and then run:
n = NetAddr("127.0.0.1", 57120);
n.sendMsg('/texto/supercollider', 1, 2, 3);
... I see the message printed immediately (note that you used postIn instead of postln, if you don't fix that you'll get an error instead of a printed message).
Like you, I don't see anything when I send via the Python library, so my suspicion is that something is wrong on the Python side. There's a hint in this response that you have to call osc_process() after sends, but that still doesn't work for me.
You can try three things:
Run OSCFunc.trace in SuperCollider and watch for messages (this will print ALL incoming OSC messages), to see if your OSCdef is somehow not receiving messages.
Try a network analyzer like Packet Peeper (http://packetpeeper.org/) to watch network traffic on your local loopback network lo0. When I do this, I can clearly see messages sent by SuperCollider, but I don't see any of the messages I send from Python, even when I send in a loop and call osc_process().
If you can't find any sign of Python sending OSC packets, try a different Python library - there are many others available.
(I'm the osc4py3 author.)
osc4py3 stores messages to send in internal lists and returns immediately. These lists are processed during osc_process() calls, or directly by background threads (depending on the selected threading model).
So, if you have selected the as_eventloop threading model, you need to call osc_process() a few times, like:
…
with open("partitura.txt", "r") as f:
    for palabra in leerpalabras(f):
        msg = oscbuildparse.OSCMessage("/texto/supercollider", ",s", palabra)
        osc_send(msg, "supercollider")
        for missme in range(4):
            osc_process()
            sleep(0.5)
…
See doc: https://osc4py3.readthedocs.io/en/latest/userdoc.html#threading-model
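For reference, a self-contained version of that pattern might look like the sketch below (assuming the as_eventloop scheduling policy; the client name and OSC address come from the question, and a single test string stands in for the file loop):

from time import sleep
from osc4py3.as_eventloop import *
from osc4py3 import oscbuildparse

osc_startup()
osc_udp_client("127.0.0.1", 57120, "supercollider")

msg = oscbuildparse.OSCMessage("/texto/supercollider", ",s", ["holis"])
osc_send(msg, "supercollider")

# osc_send() only queues the message; osc_process() actually flushes it.
for _ in range(4):
    osc_process()
    sleep(0.5)

osc_terminate()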
I want to clear all pending updates (pending_update_count) for my bot.
The output of the command below:
https://api.telegram.org/botxxxxxxxxxxxxxxxx/getWebhookInfo
(obviously I replaced the real API token with xxx)
is this:
{
  "ok": true,
  "result": {
    "url": "",
    "has_custom_certificate": false,
    "pending_update_count": 5154
  }
}
As you can see, I have 5154 unread updates so far! (I'm pretty sure these pending updates are errors, because no one uses this bot; it's just a test bot.)
By the way, the pending_update_count number is increasing fast!
While I was writing this post the number increased by 51 and reached 5205!
I just want to clear these pending updates.
I'm pretty sure this bot is stuck in an infinite loop!
Is there any way to get rid of it?
P.S.: I also cleared the webhook URL, but nothing changed!
UPDATE:
The output of getWebhookInfo is this :
{
  "ok": true,
  "result": {
    "url": "https://somewhere.com/telegram/webhook",
    "has_custom_certificate": false,
    "pending_update_count": 23,
    "last_error_date": 1482910173,
    "last_error_message": "Wrong response from the webhook: 500 Internal Server Error",
    "max_connections": 40
  }
}
Why do I get "Wrong response from the webhook: 500 Internal Server Error"?
I think you have two options:
Set a webhook that does nothing and just replies 200 OK to Telegram's servers. Telegram will send all pending updates to this URL and the queue will be cleared.
Disable the webhook, fetch the pending updates with the getUpdates method, and then turn the webhook back on.
Update:
The problem is with the webhook on your side. You can try to emulate Telegram's POST request to your URL. It can look something like this:
{
  "message_id": 1,
  "from": {"id": 1, "first_name": "FirstName", "last_name": "LastName", "username": "username"},
  "chat": {"id": 1, "first_name": "FirstName", "last_name": "LastName", "username": "username", "type": "private"},
  "date": 1460957457,
  "text": "test message"
}
You can send this text as a POST request body with Postman, for example, and then try to debug your backend.
For anyone looking at this in 2020 and beyond, the Telegram API now supports clearing the pending messages via a drop_pending_updates parameter in both setWebhook and deleteWebhook, as per the API documentation.
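For example, a quick one-off call with Python's requests (the token is a placeholder; drop_pending_updates works the same way on setWebhook):

import requests

TOKEN = "xxxxxxxxxxxxxxxx"  # placeholder bot token

# Remove the webhook and discard every queued update in one call;
# re-register the webhook afterwards with setWebhook if you still need it.
resp = requests.post(
    f"https://api.telegram.org/bot{TOKEN}/deleteWebhook",
    json={"drop_pending_updates": True},
)
print(resp.json())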
Just add return 1; at the end of your hook method.
Update:
Commonly this happens because of query delays with the database.
I solved it like this:
POST tg.api/bottoken/setWebhook with an empty "url"
POST tg.api/bottoken/getUpdates
POST tg.api/bottoken/getUpdates with "offset" set to the last update_id that appeared before (I did this several times)
POST tg.api/bottoken/getWebhookInfo to check whether everything is gone
POST tg.api/bottoken/setWebhook with the real "url" filled in
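The same sequence, sketched with Python's requests (token and webhook URL are placeholders; the offset passed to getUpdates must be greater than the last update_id so the old updates get confirmed):

import requests

TOKEN = "xxxxxxxxxxxxxxxx"  # placeholder
API = f"https://api.telegram.org/bot{TOKEN}"

requests.post(f"{API}/setWebhook", json={"url": ""})           # detach the webhook
updates = requests.get(f"{API}/getUpdates").json()["result"]   # fetch pending updates
if updates:
    last_id = updates[-1]["update_id"]
    # Confirm everything up to and including last_id.
    requests.get(f"{API}/getUpdates", params={"offset": last_id + 1})

print(requests.get(f"{API}/getWebhookInfo").json())            # verify the queue is empty
requests.post(f"{API}/setWebhook", json={"url": "https://example.com/telegram/webhook"})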
If you are using a webhook, you can follow these steps.
In your web browser, enter the following URL with the right value for your bot token:
https://api.telegram.org/bot<token>/getWebhookInfo
You will get a result like this on your screen:
{"ok":true,"result":{"url":"url_value",...}}
From the displayed result, copy the entire url_value without quotes and substitute it into this second URL:
https://api.telegram.org/bot<token>/setWebhook?url=url_value&drop_pending_updates=True
Enter the second URL, with the right bot token and url_value, in your web browser and press ENTER.
Done!
I solved it by changing the file access permissions (setting the file permissions to 755), and second by increasing the memory limit in the php.ini file.
A quick&dirty way is to get a temporary webhook here: https://webhook.site/ and
set your webhook to that (it will answer with a HTTP/200 code everytime, reseting your pending messages to zero)
I faced the same issue with my Telegram bot after a user edited an existing message. My bot kept receiving an update with editedMessage, but update.hasMessage() was empty. As a result, the number of pending updates rocketed and my bot got stuck.
I solved this issue by handling the case where the message is missing and returning a 200 code:
public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent event, Context context) {
    update = MAPPER.readValue(event.getBody(), Update.class);
    if (!update.hasMessage()) {
        return new APIGatewayProxyResponseEvent()
                .withStatusCode(200) // -> !!!!!! return code 200
                .withBody("message is missing")
                .withIsBase64Encoded(false);
    }
... ... ...
I am using poplib in Python 3.3 to fetch emails from a gmail account and everything is working well, except that the mails are not marked as read after retrieving them with the retr() method, despite the fact that the documentation says "Retrieve whole message number which, and set its seen flag."
Here is the code:
pop = poplib.POP3_SSL("pop.gmail.com", 995)
pop.user("recent:mymail@gmail.com")
pop.pass_("mypassword")
numMessages = len(pop.list()[1])
for i in range(numMessages):
    for j in pop.retr(i + 1)[1]:
        print(j)
pop.quit()
Am I doing something wrong or does the documentation lie? (or, did I just misinterpret it?)
The POP protocol has no concept of "read" or "unread" messages; the LIST command simply shows all existing messages. You may want to use another protocol, like IMAP, if the server supports it.
You could delete messages after successful retrieval, using the DELE command. Only after a successful QUIT command will the server actually delete them.
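A minimal sketch of that delete-after-retrieval approach, based on the code from the question (credentials are placeholders):

import poplib

pop = poplib.POP3_SSL("pop.gmail.com", 995)
pop.user("recent:mymail@gmail.com")
pop.pass_("mypassword")

numMessages = len(pop.list()[1])
for i in range(numMessages):
    for line in pop.retr(i + 1)[1]:
        print(line)
    pop.dele(i + 1)  # only marks the message for deletion

pop.quit()  # QUIT is what actually commits the deletions on the server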
Netty gurus,
I've been wondering if there is a shortcut/Netty utility/smart trick for connecting the input of one channel to the output of another channel. In more detail, consider the following:
1) Set up a Netty (HTTP) server.
2) For an incoming MessageEvent, get its ChannelBuffer.
3) Pipe its contents to a Netty-client ChannelBuffer (which is set up along the lines of the Netty server).
I'm interested in how to achieve point 3, since my first thoughts along the lines of
// mock messageReceived(ChannelHandlerContext ctx, MessageEvent e):
ChannelBuffer bufIn = (ChannelBuffer) e.getMessage();
ChannelBuffer bufOut = getClientChannelBuffer(); // set up somewhere else
bufOut.writeBytes(bufIn);
seem awkward to me because:
A. I have to determine the target ChannelBuffer for each and every messageReceived event.
B. There is too much low-level tinkering.
My wish/vision would be to connect the input of one channel to the output of another channel and let them do their I/O without any additional coding.
Many thanks in advance,
Traude
P.S.: The issue has arisen because I'm trying to dispatch the various HTTP requests hitting the server (one entry point) to several other servers, depending on the input content (the mapping is based on the first HTTP request line). Obviously, I also need to do the inverse trick, piping the client back to the server, but I guess it will be similar to the solution of the question above.
It looks like you need to use a multiplexer in your business handler. The business handler could hold a map with the "first HTTP request line" as the key and the outbound channel to the corresponding server as the value. Once you do the lookup, you just call channel.write(channelBuffer);
Also take a look at Bruno de Carvalho's TCP tunnel, which may give you more ideas on how to deal with these kinds of requirements.