Azure function runtime does not start

I have a set of Azure Functions that have been running in Azure for over a year, shovelling huge amounts of data, and everything has been fine so far. The last code update was about 3 weeks ago, and since then the functions have again processed a lot of data. Two days ago we suddenly noticed they had stopped working, and the Azure Function does not start at all. On my dev PC, obviously, everything works like a charm.
Error message #1
I'm using DI in the Azure Function; I load an assembly during function startup and register it with the IoC container. The error message says that the type is not assignable to the interface, which is clearly nonsense because:
a. It is assignable.
b. It has been working this way forever.
Since I got really desperate, I decided to run an experiment and modified the code (essentially removing the interface) so the function runtime could no longer throw this particular error - and it was promptly replaced by another one.
Error message #2
So the Azure Functions runtime stopped complaining about object inheritance, but now it displays a different error message:
Microsoft.Azure.WebJobs.Extensions.DurableTask: Value cannot be null. (Parameter 'hostConfiguration').
I googled it like crazy and went through the DurableTask documentation, but I still have no ideas. I can restart the function or redeploy it, but nothing helps.
My host.json is:
{
  "version": "2.0",
  "functionTimeout": "00:15:00",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 20,
        "excludedTypes": "Request"
      }
    },
    "console": {
      "isEnabled": true
    }
  },
  "extensions": {
    "http": {
      "routePrefix": ""
    },
    "durableTask": {
      "hubName": "IngressFunctionHubVS"
    }
  }
}
Literally any idea why a perfectly running Azure Function (it ran for ~18 months and processed tens of GBs of data) can suddenly stop working without any intervention on our part (I checked the logs - no one touched it) would be highly appreciated.

Related

Jest: Expected done to be called once, but it was called multiple times

I recently upgraded to Jest v28 from v24, installed jest-environment-jsdom, and my configuration is "jest": { "testEnvironment": "jsdom", "setupFiles": [ "<rootDir>/setup.js" ] }
I was using done - it('test', (done) => { done() }) - in many places in the same test file. I am getting the error "Expected done to be called once, but it was called multiple times." when I enable multiple test cases; it works fine with a single test case.
When you use your spies/mocks multiple times in different tests, they don't get reset by default after every test.
This answer should help you: https://stackoverflow.com/a/55873465/16068019
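For example, one way to make sure mock state is reset between tests (a minimal sketch; clearMocks and restoreMocks are standard Jest config options, and the afterEach variant does the same thing per test file):

"jest": {
  "testEnvironment": "jsdom",
  "setupFiles": ["<rootDir>/setup.js"],
  "clearMocks": true,
  "restoreMocks": true
}

or, per test file:

afterEach(() => {
  jest.clearAllMocks();
  jest.restoreAllMocks();
});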

Problems playing, pausing & resuming on Google Assistant (Actions on Google) with live streaming Audio/MP3s using Actions Builder?

This is my first post on StackOverflow (long-time lurker, first-time poster), so go easy on me. ^__^;
For those having trouble implementing play/pause/resume functionality with a STATIC mp3, I'm assuming the process is the same, so hopefully this post will help you as well.
I'm working on building a live mp3 streaming Google Action, and I seem to be having issues implementing it in the new Actions Console: https://console.actions.google.com/
According to the Google Actions documentation found here:
https://developers.google.com/assistant/conversational/prompts-media - Last updated 2021-03-10 UTC.
I should be able to invoke a Media Response to play an mp3 back to the user using the YAML/JSON example provided in the above link; however, it seems that playing, pausing, and resuming don't work correctly with a streaming mp3 URL.
TLDR; Here's a shorter version of the write up:
https://imgur.com/a/FIgOsl8
For a more detailed analysis see below:
STEPS TO REPRODUCE
Starting with the example provided in the documentation and popping the JSON sample code (posted here for convenience) into the On Enter section of the scene, I was able to play the media fine.
{
  "candidates": [
    {
      "first_simple": {
        "variants": [
          {
            "speech": "This is a media response."
          }
        ]
      },
      "content": {
        "media": {
          "optional_media_controls": [
            "PAUSED",
            "STOPPED"
          ],
          "media_objects": [
            {
              "name": "Media name",
              "description": "Media description",
              "url": "https://storage.googleapis.com/automotive-media/Jazz_In_Paris.mp3",
              "image": {
                "large": {
                  "url": "https://storage.googleapis.com/automotive-media/album_art.jpg",
                  "alt": "Jazz in Paris album art"
                }
              }
            }
          ],
          "media_type": "AUDIO"
        }
      }
    }
  ]
}
Note: In the above JSON I removed the start_offset node because it's currently not supported on iOS and is probably only included as an example for testing purposes.
Here’s an example of the static mp3 media response playing for reference:
https://downloaddave.com/reviews/clients/momentum-br/ga-sr/Screenshot_streaming_playing_no_error_with_test_mp3.png
I noticed that pausing and resuming the static mp3 do not work unless you enable the following System Intents:
MEDIA_STATUS_PAUSED
MEDIA_STATUS_STOPPED
MEDIA_STATUS_FAILED
MEDIA_STATUS_FINISHED
Otherwise, if you click the “pause” icon on the Media Response Player or invoke the pause earcon (earcon = ear + icon), you will encounter the following errors:
Sorry, [Your Action’s Display Name] isn't responding right now. Please try again soon.
Did not find any handling for intent event 'actions.intent.MEDIA_STATUS_PAUSED' on scene 'playStreamingAudio'
{
  "endConversation": {}
}
Under the Error and status handling section of the scene I added the system intents as seen in the following screenshot.
https://downloaddave.com/reviews/clients/momentum-br/ga-sr/playStreamingAudio_Scene_Configuration_000.png
Note that if I just set the MEDIA_STATUS_PAUSED transition to “No Transition”, it gives me the error message: Event handler for ‘playStreamingAudio’ has an empty function call and/or empty transition.
If it goes to “End Conversation” it ends the test and exits out of the Media Response Card rather than giving me the option to resume (which seems like a bad user/conversational flow and probably won't pass review).
Tapping the “pause” icon, typing, or saying “pause” doesn’t work unless the MEDIA_STATUS_PAUSED transitions to another Scene which I’ve called pauseStreamingAudio.
In the pauseStreamingAudio scene, I added a prompt letting the user know they can say “play” or “cancel” along with suggestions indicating the same.
{
  "candidates": [
    {
      "first_simple": {
        "variants": [
          {
            "speech": "You can say play to resume audio or cancel to quit."
          }
        ]
      },
      "suggestions": [
        {
          "title": "Play"
        },
        {
          "title": "Cancel"
        }
      ]
    }
  ]
}
From the pauseStreamingAudio Scene, I added a custom intent “play” to go back to the previous Scene, playStreamingAudio.
I’m not sure if I’m doing this right BUT IT WORKS!
Streaming mp3
Now that I have the foundation working, I swapped out the static mp3 for the streaming audio. Here is the sample JSON with “start_offset” removed and the streaming mp3 link swapped in:
{
  "candidates": [
    {
      "first_simple": {
        "variants": [
          {
            "speech": "This is a media response."
          }
        ]
      },
      "content": {
        "media": {
          "optional_media_controls": [
            "PAUSED",
            "STOPPED"
          ],
          "media_objects": [
            {
              "name": "Media name",
              "description": "Media description",
              "url": "https://prod-35-230-37-193.wostreaming.net/momentum-kvmifmaac-ibc2",
              "image": {
                "large": {
                  "url": "https://storage.googleapis.com/automotive-media/album_art.jpg",
                  "alt": "Jazz in Paris album art"
                }
              }
            }
          ],
          "media_type": "AUDIO"
        }
      }
    }
  ]
}
The URL of the streaming file that I'm testing with doesn't specifically end in *.mp3, and when I check the Content-Type it reads audio/aacp.
Codec: ADTS
Type: Audio
Channels: Stereo
Sample Rate: 44100 Hz
Bits per Sample: 32
AAC Extension: SBR+PS
This works, and I'm able to stream audio from the source file. See the screenshot below.
https://downloaddave.com/reviews/clients/momentum-br/ga-sr/Smart_Display_Time_Error_Infinit_NaN_Nan_redlined.png
However, there is a display error on the Media Response Player at the time index in the bottom right: Infinity:NaN:NaN (highlighted in the red box).
Likely related: I can no longer trigger the Pause System Intent. Instead, I get the following error:
https://downloaddave.com/reviews/clients/momentum-br/ga-sr/Screenshot_streaming_pause_error.png
Notice that the drop-down is open and there is no response for me to use and troubleshoot.
I also tried looking through the Actions on Google documentation to see if there could be something wrong with the audio stream I was providing; the best thing I could find was:
“Audio for playback must be in a correctly formatted MP3 file. MP3 files must be hosted on a web server and be publicly available through an HTTPS URL. Live streaming is only supported for the MP3 format.”
I found some info on mp3 specs on the SSML page here, but I’m not sure if this applies to the Media Response https://developers.google.com/assistant/conversational/ssml#audio - Last updated 2021-05-25 UTC.
Does anyone have any ideas on how I can get this working or even troubleshoot this?
Could some of these circumstances be an issue with the Media Player itself? How would one go about fixing this?
Anyway, I hope this helps somebody out there & thanks very much in advance. Any help is most appreciated.

HtmlResponse is not supported on this device

I have developed an Interactive Canvas application and it was working fine on devices with a display.
Suddenly, when I checked it today, it says "Application is not responding, try again later." When I checked in the test simulator and went through the debug output, I received the following error:
"sharedDebugInfoList": [
{
"name": "ResponseValidation",
"debugInfo": "",
"subDebugEntryList": [
{
"name": "MalformedResponse",
"debugInfo": "expected_inputs[0].input_prompt.rich_initial_prompt.items[1].html_response: HtmlResponse is not supported on this device..",
"subDebugEntryList": []
}
]
}
],
It was working and users were using it on their mobile devices, but this sudden error has left me blind. I have not changed even one line of my code. I have even checked the cloud logs and there is nothing there. Here is what I am doing when the user enters my action:
app.intent('welcome', (conv) => {
  console.log('capabilities = ', conv.surface.capabilities);
  if (!conv.surface.capabilities.has('actions.capability.SCREEN_OUTPUT')) {
    conv.close('Sorry, this device does not support Interactive Canvas!');
    return;
  }
  conv.ask(`Welcome! Let's start the game`);
  conv.ask(new HtmlResponse({
    url: '<url of my website having interactive canvas support>',
  }));
});
Here is the action where I am facing the error.
The action seems to be working ok for me.
The most common reason I've seen for it not working is that the category wasn't set to "Games & Fun" (or was cleared for some reason) or the Interactive Canvas checkbox wasn't set.
To make sure you are still in the "Games & Fun" category, go to the Actions Console, the "Deploy" tab, and the "Directory" information.
Then, towards the bottom of that same page, make sure you have the Interactive Canvas checkbox set.

VSCode stops on invisible breakpoint on "async_hooks.js" while debugging a node.js script

So I built a script in Node.js which is supposed to take CSV files, parse them, and insert the data into the DB.
Sometimes, when I debug my code, it stops on what looks like an invisible breakpoint in the async_hooks.js file, on the first line of the "emitHookFactory" function (line 163).
The call stack shows only one call - "emitBeforeNative" - in the same file.
I noticed a few things in my trials:
I have 3 types of files I need to parse and put into the DB. It happens only with one of the file types, which is extremely large (~3.1M lines of CSV, while the others have 50-200K lines). I tried to load it partially - only the first 20K lines (copied them to a new file, no changes in code) - and it didn't break, which suggests the size has something to do with the debugger stopping.
I tried to reproduce it by other means but without success. Also, it doesn't always happen (even when run on the same file) - but about 80-85% of the time.
My script goes like this: query the DB and AWS to find a new file > download the file locally > stream the file from local disk > on each line event, parse the line and perform data manipulations > on the end event, loop through all the manipulated data, build the queries, and insert everything into the DB (a rough sketch of this flow is included below). I've put a few breakpoints in key places and found that the break SEEMS to happen somewhere in the middle of emitting the line events. The callback function is a normal function, not async, and there are no async operations inside. In fact, there are only array and string manipulation operations inside - not even 3rd-party calls or anything unusual.
I tried searching the internet for a solution. I didn't find any clear way to completely get rid of it, only workarounds which I didn't really understand (I'm kind of new to JS environments, so I couldn't grasp how to disable or ignore it...).
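To give a rough idea, the relevant part of the script looks something like this (a simplified sketch - readline stands in here for the actual stream handling, and parseRow/insertRows are placeholders for the real parsing and DB code):

const fs = require('fs');
const readline = require('readline');

function processCsv(localPath) {
  const rows = [];
  const rl = readline.createInterface({
    input: fs.createReadStream(localPath),
    crlfDelay: Infinity,
  });

  rl.on('line', (line) => {
    // synchronous string/array manipulation only - no async work here
    rows.push(parseRow(line));
  });

  rl.on('close', () => {
    // once the whole file has been read, build the queries and insert the rows
    insertRows(rows);
  });
}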
Thanks for the help in advance.
Based on https://github.com/nodejs/node/issues/15464, there's a way to avoid stepping into Node internals. In your launch.json, add the following skipFiles directive:
"skipFiles": [
"<node_internals>/**"
]
Or you can specifically ignore /internal/async_hooks with this:
"skipFiles": [
"<node_internals>/internal/async_hooks.js",
"<node_internals>/internal/inspector_async_hook.js"
]
In the end, your config may look like this:
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "pwa-node",
      "request": "launch",
      "name": "Debug",
      "runtimeExecutable": "<your executable path>",
      "console": "integratedTerminal",
      "cwd": "${workspaceFolder}",
      "timeout": 30000,
      "skipFiles": [
        "<node_internals>/**"
      ]
    }
  ]
}
This also might be related to a known NodeJS bug: https://github.com/nodejs/node/issues/36022
Could you please try whether our new JavaScript debugger still has this problem? For details, see the release notes of VS Code 1.42: https://code.visualstudio.com/updates/v1_42#_new-javascript-debugger.

tasks for stack in VS code

I am new to VS Code (1.21.1) with HIE 0.1.0.0, installed using stack. I have been able to define a task for testing:
{
  // See https://go.microsoft.com/fwlink/?LinkId=733558
  // for the documentation about the tasks.json format
  "version": "2.0.0",
  "tasks": [
    {
      "label": "test",
      "type": "shell",
      "command": "stack build --test",
      "group": {
        "kind": "build",
        "isDefault": true
      }
    }
  ]
}
I checked the documentation mentioned but did not understand much.
This task works only in the project I specified it for; how can I make it usable for any project I happen to have open in the editor? And how can I further automate the selection of tasks?
I assume that there is a collection of tasks somewhere for running stack with HIE - could somebody point me to it?
I'm not sure if this is the problem you've run into, but Visual Studio Code gives you the ability to edit settings per workspace/project in addition to overall settings:
Make sure that you're editing the USER SETTINGS tab instead of the WORKSPACE SETTINGS tab if you want the settings to apply to all projects.
Apologies if this is a trivial answer, and the problem is something completely different.
