Availability Test in Application Insights automatically disables - azure

We have a very simple Application Insights Availability Test that hits an HTTPS URL (basically our App Service) from 4 US regions.
What we have observed is that this Availability Test stops automatically, on no fixed schedule; it abruptly stops, and to restart it we have to go back to the Availability Test, edit it, and save it again.
This is really weird, as we have also checked the Activity log and nothing in it reports anyone stopping the test.
Any help on how this can be tackled? The abrupt stoppage of an Availability Test (the test gets grayed out) is really serious, as we won't know about an outage until someone reports the service is down.

The behavior was caused by two issues and should not be widespread:
An issue with the ARM template
A possible bug in the App Insights Availability configuration handling
This is an interesting corner case, caused by a bug in the ARM template and a discrepancy in how our front end and back end handled it. In the provided ARM template, two tags were defined:
"tag": "[concat('hidden-link:/subscriptions/',subscription().id,'/resourceGroups/',resourceGroup().name,'/providers/microsoft.insights/components/',parameters('appInsightsName'))]",
"linkToAiResource": "[concat('hidden-link:', resourceId('microsoft.insights/components', parameters('appInsightsName')))]",
"tags": {
"[variables('tag')]": "Resource",
"[variables('linkToAiResource')]": "Resource"
},
These two tags were instantiated into this:
"hidden-link:/subscriptions//subscriptions/xxx/resourceGroups/yyy/providers/microsoft.insights/components/zzz": "Resource",
"hidden-link:/subscriptions/xxx/resourceGroups/yyy/providers/microsoft.insights/components/zzz": "Resource"
Note the duplicated "/subscriptions/" in the first tag. So, two issues:
There were two such links in the first place
The first of them is invalid (I guess subscription().id already includes the "/subscriptions/" part).
The fix should be straightforward: keep only the second tag (we validated that the test then starts working).
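For reference, a minimal sketch of the corrected fragment, assuming the same variable and parameter names as above (only the resourceId-based tag is kept):
"variables": {
    "linkToAiResource": "[concat('hidden-link:', resourceId('microsoft.insights/components', parameters('appInsightsName')))]"
},
"tags": {
    "[variables('linkToAiResource')]": "Resource"
}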
Now, the bug on our side is that the logic for handling this case differs between the front end and back end:
The front end successfully finds a valid "Resource" hidden-link (ignoring the invalid one)
The back end (I guess) uses the first one, finds that it points to an (apparently) deleted resource, and marks the test as deleted as well. This is what stops the test
We will adjust the logic: either the deployment will start failing at the front end, or the back end will be changed to use the same logic (the fix will be rolled out some time in January).

Related

Logic App error - ActionFailed. An action failed. No dependent actions succeeded

I'm running into this error - ActionFailed. An action failed. No dependent actions succeeded - when trying to run this logic app to add an IP to be blocked.
[screenshot of the error]
I'm not sure where to start. The input looks ok. Help? Thanks in advance!
p.s. - sorry, it won't allow me to post the pics due to not having enough points.
Tried changing some parts of the body. Not sure what to change really.
According to Microsoft's documentation on the Submit or Update Indicator API, the request body should be as follows:
{
    "indicatorValue": "220e7d15b011d7fac48f2bd61114db1022197f7f",
    "indicatorType": "FileSha1",
    "title": "test",
    "application": "demo-test",
    "expirationTime": "2020-12-12T00:00:00Z",
    "action": "AlertAndBlock",
    "severity": "Informational",
    "description": "test",
    "recommendedActions": "nothing",
    "rbacGroupNames": ["group1", "group2"]
}
Since the error you get is generic, it isn't clear enough to pinpoint the exact cause.
You are not passing in recommendedActions and rbacGroupNames; they may not be required, but you may want to pass the properties even if no value is included.
I would also validate calling this API with manual values (even the exact values from the documentation), and if that works, use a process of elimination to figure out which property is giving you trouble.
E.g., application might not accept a value containing a space, or combining the two values for description should be done outside of the HTTP call using a Compose action, with the output then passed in as a single value.
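For example, a minimal sketch of such a Compose action in the workflow definition; the two variables are hypothetical placeholders for wherever your description parts actually come from:
"Compose_description": {
    "type": "Compose",
    "inputs": "@{concat(variables('alertTitle'), ' - ', variables('alertDetails'))}"
}
The HTTP action's body can then reference @{outputs('Compose_description')} as the single description value.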

'Delay until' finish time of 'Queue a new build' not working in Azure Logic App

I'm triggering an Azure Logic App from an HTTPS webhook for a Docker image in Azure Container Registry.
The workflow is roughly:
When a HTTP request is received
Queue a new build
Delay until
FinishTime of Queue a new build
See: Workflow image
The Delay until action doesn't work, in that the queried FinishTime is 0001-01-01T00:00:00.
It complains about the wrong format, so I manually added a Z after the FinishTime keyword.
Now the timestamp is in the right format; however, the timestamp 0001-01-01T00:00:00Z obviously doesn't make sense, and subsequent steps are executed without delay.
Anything that I am missing?
Edit: Queue a new build queues an Azure Pipelines build, i.e. the FinishTime property comes from the pipeline.
You need to set a timestamp in the future; the timestamp 0001-01-01T00:00:00Z you passed to the "Delay until" action is not a future time. If you set a timestamp such as 2020-04-02T07:30:00Z, the "Delay until" action will take effect.
Update:
I don't think "Delay until" can do what you expect, but maybe you can refer to the operations below. Just add a "Condition" action to check whether the FinishTime is greater than the current time.
The expression in the "Condition" is:
sub(ticks(variables('FinishTime')), ticks(utcNow()))
In a word: if the FinishTime is greater than the current time --> do the "Delay until" action; if the FinishTime is less than the current time --> do anything else you want. (By the way, you need to pay attention to the time zone of your timestamps; you may need to convert them all to UTC.)
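As a rough sketch in the workflow definition language (the structure below is illustrative, not your exact workflow; "Delay until" corresponds to a Wait action with an "until" input):
"Condition": {
    "type": "If",
    "expression": {
        "and": [
            { "greater": [ "@sub(ticks(variables('FinishTime')), ticks(utcNow()))", 0 ] }
        ]
    },
    "actions": {
        "Delay_until_FinishTime": {
            "type": "Wait",
            "inputs": { "until": { "timestamp": "@variables('FinishTime')" } }
        }
    },
    "else": { "actions": {} }
}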
I've been in touch with an Azure support engineer, who confirmed that the Delay until action should work the way I intended to use it, but also that the FinishTime property will not hold a value that I can use.
In the meantime, I have found a workaround, where I'm using some logic and quite a few additional steps. Inconvenient but at least it does what I want.
Here are the most important steps that are executed after the workflow gets triggered from a webhook (docker base image update in Azure Container Registry).
Essentially, I'm initializing the following variables and queuing a new build:
buildStatusCompleted: String value containing the target value completed
jarsBuildStatus: String value containing the initial value notStarted
jarsBuildResult: String value containing the default value failed
Then, I'm using an Until action to monitor when jarsBuildStatus switches to completed.
In the Until action, I'm repeating the following steps until jarsBuildStatus changes its value to buildStatusCompleted:
Delay for 15 seconds
HTTP request to the Azure DevOps build, authenticating with a personal access token
Parse the JSON body of the previous raw HTTP output for the status and result keywords
Set jarsBuildStatus = status
After breaking out of the Until action (loop), jarsBuildResult is set to the parsed result.
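For reference, a rough sketch of how such a polling loop can look in the underlying workflow definition; the action names, loop limits, and the placeholder comment are illustrative, not my exact workflow:
"Until_jars_build_completed": {
    "type": "Until",
    "expression": "@equals(variables('jarsBuildStatus'), variables('buildStatusCompleted'))",
    "limit": { "count": 240, "timeout": "PT1H" },
    "actions": {
        "Delay_15_seconds": {
            "type": "Wait",
            "inputs": { "interval": { "count": 15, "unit": "Second" } }
        }
        // the HTTP request, Parse JSON, and Set variable actions follow here
    }
}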
All these steps are part of a larger build orchestration workflow, where I'm repeating the given steps multiple times for several different Azure DevOps build pipelines.
The final action in the workflow is sending all the status, result and other relevant data as a build summary to Azure DevOps.
To me, this is only a workaround and I'll leave this question open to see if others have suggestions as well or in case the Azure support engineers can give more insight into the Delay until action.
Here's an image of the final workflow (at least, the part where I implemented the Delay until action):
Edit: It turns out I can simplify the workflow, because there's a dedicated Azure DevOps action in Logic Apps called Send an HTTP request to Azure DevOps, which removes the need for manual authentication (the Azure support engineer pointed this out).
The workflow now looks like this:
That is, I can query the build status directly and set the jarsBuildStatus as
#{body('Send_an_HTTP_request_to_Azure_DevOps:_jar''s')['status']}
The code snippet above is automagically converted to a value for the Set variable action. Thus, there is no need for an additional Parse JSON action.

GameMaker variable using portals

I am remaking Portal in GameMaker for my semester final, and I was wondering how you find an object. If I have one portal down and go into it, the game crashes, as the 2nd portal isn't placed and it can't get its .x/.y position. How do I set a variable to fix this?
We don't know how you determine the destination teleporter; you should clarify that. But one variant could be to check whether the number of portals is >= 2, so you have at least one place to go:
if (instance_number(your_portal_name) >= 2)
{
    // proceed with the portal mechanics
}
I assume that in some event you have a piece of code that does the teleporting. You just have to place this piece of code in an "if" statement that checks whether the second portal exists. This way, you will attempt teleportation only if the needed exit instance exists. You can use the "instance_exists" function.
For example:
if (instance_exists(exit_portal_or_whatever_you_name_it))
{
    your_teleportation_code;
}
I would say that, based on the information you gave us, German Gorodnev's answer is correct. If you only have one portal and you try to get the position of a non-existent portal, you will get an error. So you should include an if statement that makes sure the needed portals are there before retrieving their positions.
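Putting both answers together, a minimal sketch in GML (the object name obj_exit_portal is hypothetical; substitute your own exit portal object):
// only teleport if the exit portal has actually been placed
if (instance_exists(obj_exit_portal))
{
    // safe to read the exit portal's position and move the player there
    x = obj_exit_portal.x;
    y = obj_exit_portal.y;
}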

Selecting endpoints depending on which level tests are run at

I have a bit of a structural dilemma in SoapUI. Tests can be run at the project, test suite, or test case level.
Currently we can run a whole project at the project level, and it displays a prompt box to select an endpoint (through a project-level setup script) and produces a project report (using the project-level teardown script).
However, the tester may not want to run a whole project and may only want to run a test suite or even a single test case. It would be a hassle to disable the suites or cases you don't want to run.
The problem I have is that if I start putting endpoint-selection prompt boxes at the suite or case level, every time we run a suite or case it will ask for an endpoint. I am also thinking of not creating suite or test case reports, because when running many suites or cases one by one, it is just overkill on reporting.
I like your thinking on this, but I was speaking with a professional colleague, and what we're thinking is this:
Add the code below at test suite and test case level in their relevant setup scripts, where it asks for the endpoint (this is the same code used in the project setup script for selecting the endpoint):
import com.eviware.soapui.support.*
def alert = com.eviware.soapui.support.UISupport
def urls = []
project.properties.each
{
    if (it.value.name.startsWith("BASE_URL_"))
    {
        urls.push(it.value.name.replace("BASE_URL_", ""))
    }
}
def urlName = alert.prompt("Please select the environment URL", "Enter URL", urls)
if (urlName)
{
    def url = project.getPropertyValue("BASE_URL_" + urlName)
    def urlBase = "BASE_URL_" + urlName
    project.setPropertyValue("BASE_URL", url)
    switch (urlBase)
    {
        case "BASE_URL_TEST":
            project.setPropertyValue("DOMAIN_NAME", "TEST");
            break;
        case "BASE_URL_STAGE":
            project.setPropertyValue("DOMAIN_NAME", "STAGE");
            break;
        default:
            project.setPropertyValue("DOMAIN_NAME", "NO DOMAIN");
            break;
    }
}
else
{
    log.warn 'haven\'t received user input'
    log.warn 'No base URL is selected or cancelled, try again'
    assert false
}
Now what we add is the following (we may need to use properties, but again, see what you think is best):
If the test is run at project level, it will prompt for an endpoint through the project setup script, but it will not ask again through the test suite or test case setup scripts. So there is only a single endpoint selection.
If the test is run at suite level, it will prompt for an endpoint through the test suite setup script, but it will not ask again through the test case setup script. So again there is only a single endpoint selection.
For a run at test case level, only that test case runs, so it asks for an endpoint just for that test case, it being the lowest level.
We can't disable setup scripts at any level, because there may be other code in those setup scripts that needs to be executed; we just need a way to say, depending on which level the run started at, don't ask for endpoint selection at lower levels (a sketch of such a guard follows below).
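A minimal sketch of the kind of guard we have in mind, assuming a flag stored in a hypothetical project property named ENDPOINT_SELECTED: whichever setup script runs first sets the flag, and lower-level setup scripts skip the prompt while it is set.
// in a test suite setup script; in a test case setup script use testCase.testSuite.project instead
def project = testSuite.project
// ENDPOINT_SELECTED is a hypothetical flag property: skip the prompt if a higher level already selected the endpoint
if (project.getPropertyValue("ENDPOINT_SELECTED") != "true")
{
    // ... run the endpoint prompt code shown above ...
    project.setPropertyValue("ENDPOINT_SELECTED", "true")
}
The project-level teardown script would then clear ENDPOINT_SELECTED so that the next run prompts again.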
It seems complicated to implement, but does anyone know the best way to implement this, or have a better idea than this theory?
Thanks
For a moment, let us assume you get it done for all levels (project, suite, and each case). Maybe you forgot about the step level ;-)
Are there any pros in your approach? For me, no.
Cons in your approach:
Each time a user executes a test (be it a project / suite / any test case), the engineer needs to select a value from the drop-down, which is unwanted when testing against the same server as the previous test case, and a little annoying.
Test execution requires manual intervention each time it is invoked.
A user interface is required, since a drop-down is used.
It will become a roadblock / hurdle for end-to-end automation.
Test execution can't be done in headless mode, which is important if you need to use Continuous Integration tools.
Proposed approach:
If I had to do the above, I would do the following. It is clean and simple, and none of the complications you mentioned in the long summary would arise.
It looks like the following project properties are defined with the addresses of the test servers:
BASE_URL_TEST
BASE_URL_STAGE
There is also another project property, BASE_URL, and all the above logic exists to let the user copy one of the above properties' values into BASE_URL.
Now all the user has to do is change the value of the project property BASE_URL by hand to one of the below values, whichever is needed, before proceeding with the tests:
${#Project#BASE_URL_TEST} or
${#Project#BASE_URL_STAGE}
NOTE that a property value can be referenced from another property by using Property Expansion, as above.
With the above, the user can set whatever is needed and change it only when required, i.e., when the test server changes.
No setup script is required at any level any more; just change the value of the property.
Properties exist to make life simple; they can be used in any number of places and make the project easy to maintain.
Most importantly, this overcomes the cons mentioned at the beginning.
It is general practice to use SoapUI to design the tests and the SOAPUI_HOME/bin/testrunner.bat (or .sh) utility to execute them in command-line mode; that is the way to achieve Continuous Integration.
That's why the use of properties helps to achieve the above without any issues.
Even simpler:
Just have one project property, BASE_URL (remove the others); the user only has to edit the property value with the test server name / IP address, say http://testjuniper, and it is done once. Isn't that dead simple?
And I believe the engineer would definitely know which server he / she is using to execute the tests.
Having said that, the user does not have to bother at all, irrespective of whether a project / suite / test case is executed, as long as testing is carried out against the same server / environment.
Once test execution against the TEST environment is finished and the engineer moves on to another environment, say STAGING, just change the BASE_URL property value accordingly.

Exception of type 'Microsoft.WindowsAzure.StorageClient.StorageClientException' was thrown

Exception of type 'Microsoft.WindowsAzure.StorageClient.StorageClientException' was thrown.
Sometimes even if we have the fabric running and the role manager is up, we get an exception of this sort.
The code breaks at the line:
emailAddressClient.CreateTableIfNotExist("EmailAddress");
public EmailAddressDataContext(CloudStorageAccount account) :
    base(account.TableEndpoint.AbsoluteUri, account.Credentials)
{
    this.storageAccount = account;
    CloudTableClient emailAddressClient =
        new CloudTableClient(storageAccount.TableEndpoint.AbsoluteUri,
                             storageAccount.Credentials);
    emailAddressClient.CreateTableIfNotExist("EmailAddress");
}
I give Windows Azure tables camel-cased names all the time without issues.
I wonder if by chance you already used this table name and recently deleted it? For a time after deletion (when the table is still being deleted asynchronously), you won't be able to recreate it. I believe 409 Conflict is the error code to expect in that case.
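If that is the cause, a hedged sketch of a retry using the legacy StorageClient types from the question (the System.Net and System.Threading namespaces are assumed; the loop bound and delay are arbitrary):
// retry table creation while the old table is still being deleted asynchronously
for (int attempt = 0; attempt < 30; attempt++)
{
    try
    {
        emailAddressClient.CreateTableIfNotExist("EmailAddress");
        break; // table created (or already existed)
    }
    catch (StorageClientException e)
    {
        if (e.StatusCode != HttpStatusCode.Conflict) throw; // only retry on 409 Conflict
        Thread.Sleep(TimeSpan.FromSeconds(10)); // give the delete time to finish
    }
}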
I agree with Steve Marx; casing does not seem to affect this issue. In fact, Microsoft's Azure diagnostics tables are created with unusual casing, e.g. WADPerformanceCounters. I get the problem even in the dev environment, so in my opinion it is something else entirely.
Error fixed in my case: the problem was an error with (or lack of) the connection string as defined in the web role or worker role project properties.
Fix:
Right-click on the web role under the "Roles" folder in your cloud application and select "Properties" from the context menu.
Select the "Settings" tab.
Verify or add a setting for the connection string that you will use to initialize table storage.
Mine was a simple error - no setting for my connection string.
An easy fix is to change "EmailAddress" to "Emailaddress". For some reason it would not allow CamelCasing, so please make sure you have just one capital letter in the table name, at the beginning. Since table names are case-insensitive, you can also name it 'emailaddress'.
