I just started using Nightwatch with BrowserStack, and I'm noticing that when a test fails, Nightwatch registers the failure but BrowserStack does not.
Here is the sample test I am using.
I am also using the free trial version of BrowserStack.
My question is:
Are there any ideas on how to tell BrowserStack when a test run has failed?
From the BrowserStack docs:
REST API
It is possible to mark tests as either a pass or a fail, using the following snippet:
var request = require("request");
request({
  uri: "https://user:key@www.browserstack.com/automate/sessions/<session-id>.json",
  method: "PUT",
  form: {
    "status": "completed",
    "reason": ""
  }
});
The two potential values for status can either be completed or error. Optionally, a reason can also be passed.
My questions are:
How can I get the session-id after test execution?
What if I can already see the "completed" status in the dashboard?
A session on BrowserStack has only three types of statuses: Completed, Error, or Timeout. Selenium (and hence BrowserStack) has no way of knowing whether a test has passed or failed; you infer that from the multiple assertions that appear on your console. These assertions, however, do not reach BrowserStack. As you rightly identified, you can use the REST API to change the status of the session to 'Error' if you see a failure in your console.
I would suggest fetching the session ID while the test is being executed, since fetching it after the test execution is a lengthy process. In Nightwatch, you can fetch the session ID as follows:
browser.session(function(session) {
  console.log(session.sessionId);
});
Yes, you can certainly change the status of the session once it is completed. That's where the REST API comes in handy!
If you came here searching for a solution in Python, you could use:
requests.put(
    "https://api.browserstack.com/automate/sessions/{}.json".format(driver.session_id),
    auth=(USERNAME, ACCESS_KEY),
    json={"status": "failed", "reason": "test failed"})
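Putting the pieces together, a small helper can map a boolean test result onto the statuses BrowserStack accepts and build the session URL in one place. This is a sketch only: the credentials are placeholders, and the actual PUT call (commented out) requires the `requests` package and a real session ID.

```python
# Sketch only: BROWSERSTACK_USER / BROWSERSTACK_KEY are placeholders.
BROWSERSTACK_USER = "your_user"
BROWSERSTACK_KEY = "your_access_key"

def session_url(session_id):
    # Same endpoint as the snippets above, with the session ID filled in.
    return "https://api.browserstack.com/automate/sessions/{}.json".format(session_id)

def build_status_payload(passed, reason=""):
    # The documented statuses are "completed" and "error"; map a boolean
    # test result onto them and attach an optional reason.
    return {"status": "completed" if passed else "error", "reason": reason}

# Hypothetical usage (requires the `requests` package and a real session ID):
# import requests
# requests.put(session_url(session_id),
#              auth=(BROWSERSTACK_USER, BROWSERSTACK_KEY),
#              json=build_status_payload(False, "assertion failed"))
```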
Related
We are experimenting with Design Automation for Revit and have gotten stuck on a failure that is hard to debug:
basically, our code finishes and then Forge takes over and fails.
Here is part of the log; I have marked in green which parts are "our" logs:
Basically, the app takes an RFA and should output a JSON file.
I have defined the activity like this:
What can I do to investigate what is causing this issue?
I found the issue:
you need to explicitly set the Succeeded property to true on the DesignAutomationReadyEventArgs; otherwise the work item will report as failed.
I'm using Azure's Runs API to get a pipeline run result as described here:
https://learn.microsoft.com/en-us/rest/api/azure/devops/pipelines/runs/get?view=azure-devops-rest-6.0#runresult
I can see in the documentation how to get the state and final result so I can know if the run was a success or a failure. However, in case of a failure, I don't see how I can get the error that occurred in that run as a string.
How can I get the actual error which caused the pipeline run to fail?
You can use the REST API "Timeline - Get" to list the issues (errors and warnings) associated with a run.
Note:
This API only lists the first 10 issues; if the run has more than 10, the rest will not appear in the response. To get the complete list of issues, you can use the "Builds - Get Build Log" or "Logs - Get" API to retrieve the full logs, which contain all the issues.
[UPDATE]
The buildId is the same as the runId, and you can find it in the URL of the pipeline (build) run.
The timelineId is not required in the API request; you can use a request URI like the one below.
GET https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}/timeline/?api-version=6.0
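The timeline response is a list of records, each of which may carry an "issues" array. A small helper can flatten those into a list of error/warning messages. The extraction is a sketch based on the documented response shape; the commented usage (organization, project, build ID, and personal access token are all placeholders) shows how it would plug into the GET request above.

```python
def extract_issues(timeline):
    """Collect error/warning issues from a 'Timeline - Get' response body."""
    issues = []
    for record in timeline.get("records", []):
        # Records without issues may omit the key or set it to null.
        for issue in record.get("issues") or []:
            issues.append({
                "task": record.get("name"),
                "type": issue.get("type"),
                "message": issue.get("message"),
            })
    return issues

# Hypothetical usage (requires the `requests` package and a PAT):
# import requests
# resp = requests.get(
#     "https://dev.azure.com/{}/{}/_apis/build/builds/{}/timeline/?api-version=6.0"
#         .format(organization, project, build_id),
#     auth=("", personal_access_token))
# for issue in extract_issues(resp.json()):
#     print(issue["type"], issue["task"], issue["message"])
```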
I am trying to create consumer-side pacts for a POST endpoint using the pact-python library, but it's failing with the error "Missing requests".
Here's the client code which makes the POST API call:
def create_user(request):
    return requests.post("http://localhost:1234/user", data=request).json()
Here's my test class which creates the consumer pacts:
class TestUserConsumer(unittest.TestCase):
    def test_user_creation(self):
        request = {
            "name": "Micky",
            "age": 0
        }
        response = {
            "id": 1232,
            "name": "Micky",
            "age": 0
        }
        pact = Consumer("user_client").has_pact_with(Provider("user_server"))
        pact.start_service()
        pact.with_request(
            method='post',
            path='/user',
            body=request
        ).will_respond_with(status=200, body=response)
        with pact:
            create_user(request)
        pact.verify()
        pact.stop_service()
The test fails with the following error:
line 268, in verify
assert resp.status_code == 200, resp.text
AssertionError: Actual interactions do not match expected interactions for mock MockService.
Missing requests:
POST /user
The create_user(request) call is getting executed, but the interactions are still not recorded on the pact mock server.
Note: the GET API pact creations are working; only the POSTs are failing.
I'd appreciate any help.
I figured out the problem. I was not converting my dictionary to JSON before making the request, so the request body was sent in the wrong format, and this caused a failure on the mock server while verifying the pact.
I also noticed that the logs didn't get generated initially. This was because my assertions ran before the server was stopped: since the assertions failed, the pact mock server never got stopped, and hence no logs were written. Once I stopped the server, the logs were generated, and that helped me identify the problem.
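Concretely, the fix is to send a JSON body rather than form-encoded data. A small helper makes the serialization explicit; passing json=request to requests.post does the same two things in one step. The commented client below is a hypothetical corrected version of create_user and requires the `requests` package plus the pact mock service running on port 1234.

```python
import json

def encode_json_body(payload):
    # The fix: serialize the dict and declare the JSON content type.
    # (Equivalently, pass json=payload to requests.post, which does both.)
    return json.dumps(payload), {"Content-Type": "application/json"}

# Hypothetical corrected client (requires the `requests` package):
# def create_user(request):
#     body, headers = encode_json_body(request)
#     return requests.post("http://localhost:1234/user",
#                          data=body, headers=headers).json()
```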
I am using Dialogflow ES, and once I got the webhook set up, I hadn't been having issues. But after a few months, I started getting a random error. It seems to be inconsistent: sometimes I get it for a specific web call, and other times it works fine. This is from the Raw API response:
"webhookStatus": {
  "code": 3,
  "message": "Webhook call failed. Error: [ResourceName error] Path '' does not match template 'projects/{project_id=*}/locations/{location_id=*}/agent/environments/{environment_id=*}/users/{user_id=*}/sessions/{session_id=*}/contexts/{context_id=*}'.."
}
The webhook is in GCP Functions in the same project. I have a simple "ping" function in the same agent that calls the webhook. That works properly and pings the function, records some notes in the function log (so I know the function is being called), and returns a response fine, so I know the webhook is connected and working for other intents in the same agent before and after I get the error above.
Other intents in the same agent work (and this one WAS working), but I now get this error. I also tried recreating the intent and get the same behavior.
The project is linked to a billing account and I have been getting charged for it, so I don't think it is an issue with being on a trial. The Dialogflow agent itself is on a "trial" plan, but the linked webhook function is billed.
Where can I find what this error means or where to look to resolve it?
After looking at this with fresh eyes, I found out what was happening.
The issue was a malformed output context: I was sometimes returning a bad output context (which explains why it sometimes worked and sometimes didn't). Specifically, I was returning the parameters directly in the output context without the context's 'name' or 'parameters' fields. Everything looked like it was working and I didn't get any other errors, but apparently, when Dialogflow receives a bad webhook response, it generates the unhelpful error above.
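A well-formed ES webhook response nests each output context under a fully qualified "name" (built from the session path Dialogflow sends in the request) with the values inside a "parameters" object. The sketch below shows that shape; "my-context" is a hypothetical context ID, and the rest of the field names follow the ES WebhookResponse format.

```python
def webhook_response(fulfillment_text, session, parameters):
    # `session` is the full session path from the incoming request,
    # e.g. "projects/<project>/agent/sessions/<session-id>".
    # Each output context needs a fully qualified "name" under that path
    # plus a "parameters" object; returning bare parameters at the top
    # level of the context is what produces the ResourceName error above.
    return {
        "fulfillmentText": fulfillment_text,
        "outputContexts": [{
            "name": "{}/contexts/my-context".format(session),  # hypothetical context ID
            "lifespanCount": 5,
            "parameters": parameters,
        }],
    }
```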
I have received the following error when trying to save a DSX Scheduled Job:
Job schedule entry could not be created. Status code: 500
Screenshot of the error message:
I've tried about six times over the last few hours and have consistently received the above error message.
Debugging via the browser network inspection tool I can see:
{
  "code": "CDSX.N.1001",
  "message": "The service is not responding, try again later.",
  "description": "",
  "httpMethod": "POST",
  "url": "https://batch-schedule-prod.spark.bluemix.net:12100/schedules",
  "qs": "",
  "body": "While attempting to get Bluemix Token from BlueIDToken, unable to retrieve AccessToken, failed with status 400 Bad Request, message = {\"error\":\"invalid_grant\",\"error_description\":\"A redirect_uri can only be used by implicit or authorization_code grant types.\"} Entity: {\"error\":\"invalid_grant\",\"error_description\":\"A redirect_uri can only be used by implicit or authorization_code grant types.\"}",
  "statusCode": 500,
  "duration": 666,
  "expectedStatusCode": 201
}
As per Charles' comment, the functionality is working OK now. I guess if this happens to another user at some point in the future, they should contact support.