We have been trying to use the Acumatica Test Framework, but unfortunately we cannot get our tests to run correctly.
We have followed the documentation step by step to set up the test. When we run the test, Firefox starts and the log-in page loads correctly. The username and password are entered automatically, together with the company. The login completes successfully but then results in an error.
The error is 'Timed out waiting for the WaitForCallbackToStart condition within the specified timeout: 500ms'
It seems that the test does not recognise that the log-in was successful.
I think I managed to identify the piece of code that checks whether log-in was successful:
"try\r\n{\r\n var win = window == window.top || !window.top.frames['main'] ? window : window.top.frames['main'];\r\n if (win.document.activePanel && win.document.activePanel.getInnerWindow()) win = win.document.activePanel.getInnerWindow();\r\n if (win.px_callback && (win.px_callback.waitCallback || win.px_callback.pendingCallbacks.length)) return true;\r\n else if (win.px_all) for(var item in win.px_all) if (win.px_all[item].callback) return true;\r\n return false;\r\n}\r\ncatch (e)\r\n{\r\n if (e.message.indexOf('denied') != -1 || e.message.indexOf('cross-origin') != -1) return true;\r\n else return false;\r\n}"
This is JavaScript code executed through Selenium to determine whether the page has loaded.
However, the above code returns false. The Test Framework calls this code periodically until it returns true (or the timeout elapses). In my case it never returns true, so the wait times out.
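For illustration, the polling loop can be sketched in Python with Selenium (the real framework is C#; the function name, poll interval, and the abbreviated condition script below are my assumptions, not the framework's actual implementation):

import time

# Abbreviated version of the injected condition; the full script is quoted above.
CALLBACK_CHECK_JS = "return !!(window.px_callback || window.px_all);"

def wait_for_callback_to_start(driver, timeout_ms=500, poll_ms=50):
    # Re-evaluate the script periodically until it returns true, then
    # raise once the timeout elapses, as the error message describes.
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if driver.execute_script(CALLBACK_CHECK_JS):
            return
        time.sleep(poll_ms / 1000.0)
    raise TimeoutError(
        "Timed out waiting for the WaitForCallbackToStart condition "
        "within the specified timeout: %dms" % timeout_ms)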
We have tried different versions of Acumatica and different machines, but it always results in the same error.
I have included a screenshot of the error below.
In Visual Studio, open the Exception Settings window, clear the "Break when this exception type is thrown" checkbox for this exception type, and continue execution of the test. This exception is handled by the Test SDK inside the LoginToDestinationSite function, so you don't need to handle it yourself.
I'm writing an app which sends an automated call via Amazon Connect. The app needs to retry with another destination number should the first one fail to pick up. The app is written in Python 3 and will be hosted in Lambda.
These are the resources I used:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/connect.html#Connect.Client.get_contact_attributes
https://docs.aws.amazon.com/connect/latest/APIReference/API_GetContactAttributes.html
The problem is that the call is kicked off asynchronously, so it is not immediately clear whether it has succeeded. To check the call, I invoke get_contact_attributes to look for a status or any attributes that could point to the outcome of the placed call.
response = client.start_outbound_voice_contact(
    ContactFlowId='XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX',
    DestinationPhoneNumber=event["DestinationPhoneNumber"],
    SourcePhoneNumber=event["OriginationPhoneNumber"],
    InstanceId="YYYYYYYY-YYYY-YYYY-YYYY-YYYYYYYYYYYY",
    Attributes={
        "message": f'{event["message"]}'
    }
)
contactid = response["ContactId"]

attr = client.get_contact_attributes(
    InstanceId='YYYYYYYY-YYYY-YYYY-YYYY-YYYYYYYYYYYY',
    InitialContactId=contactid
)
I expected it to return "connected_at" or something similar that I could use to identify the outcome of the call; however, it only returns the "custom" attributes I set myself.
This is the solution I found:
1) In the Contact Flow I added a "Set contact attributes" node that sets "status=1" right after the start. Basically, if a call enters the Contact Flow (i.e. the call is picked up), it is marked as successful.
Set Contact Attributes
2) Inside my Python code (Lambda) I check for that status to show up; if it doesn't within so many seconds, I cancel the call and try another number (a complete sketch follows the snippet below):
attr = client.get_contact_attributes(
    InstanceId=instanceid,
    InitialContactId=contactid
)

stop_call = client.stop_contact(
    ContactId=contactid,
    InstanceId=instanceid
)
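For completeness, here is a minimal sketch of that retry flow. It assumes the "status=1" attribute from step 1; the function name, parameter names, and timing constants are illustrative assumptions, not the exact Lambda code:

import time
import boto3

client = boto3.client("connect")

def call_with_retry(numbers, source_number, instance_id, flow_id, message,
                    wait_seconds=30, poll_seconds=2):
    # Try each destination number in turn until one call is answered.
    for number in numbers:
        contact_id = client.start_outbound_voice_contact(
            ContactFlowId=flow_id,
            InstanceId=instance_id,
            DestinationPhoneNumber=number,
            SourcePhoneNumber=source_number,
            Attributes={"message": message},
        )["ContactId"]

        # Poll for the "status" attribute set by the Set contact attributes
        # node; its presence means the call entered the Contact Flow.
        deadline = time.monotonic() + wait_seconds
        while time.monotonic() < deadline:
            attrs = client.get_contact_attributes(
                InstanceId=instance_id,
                InitialContactId=contact_id,
            )["Attributes"]
            if attrs.get("status") == "1":
                return contact_id  # answered; stop retrying
            time.sleep(poll_seconds)

        # No status within the window: stop this contact, try the next number.
        client.stop_contact(ContactId=contact_id, InstanceId=instance_id)
    return None  # no destination picked up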
I'm a newbie with Selenium. I have defined a batch process that handles several pages. The pages are all identical but contain different data, so I process them all with the same code. When I start the process it works fine, but from time to time I get the same error at the same point: when I try to get the data from the web page, there are two tables whose content I cannot retrieve. I don't understand it, because if I restart the process on the same page that just failed, it works fine! So it seems that Selenium does not always load the content of the web page completely.
I use Python 3 and Selenium with this code:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
import bs4

caps = DesiredCapabilities().CHROME
caps["pageLoadStrategy"] = "normal"
b = webdriver.Chrome(desired_capabilities=caps, executable_path="./chromedriver")
b.set_window_size(300, 300)
b.get(url)
html = b.page_source
self.setSoup(bs4.BeautifulSoup(html, 'html.parser'))
b.close()
How can I avoid this error, which only occurs from time to time?
Edit I:
I have verified that when the process works fine, this statement returns two tables:
tables = self.get_soup().findAll("table", class_="competitor-table comparative responsive")
But when the process goes wrong, this code returns 0 tables. As I said before, if I reprocess the web page that previously gave the error, it works fine and returns two tables instead of zero.
For this reason, I suppose that Selenium does not always return the full source of the page: for the same page, it returns zero tables when it fails and two tables when it works.
Edit II:
For example, right now I've got an error in this page:
http://www.fiba.basketball/euroleaguewomen/18-19/game/1912/Olympiacos-Perfumerias-Avenida#|tab=boxscore
The tables that I try to retrieve but don't get are two tables sharing the same CSS class. I don't post the content of the tables here because they are so big.
This is the code where I try to get the content of the tables:
def set_boxscore(self):
    tables = self.get_soup().findAll("table", class_="competitor-table comparative responsive")
    local = False
    print("Total tablas: {}".format(len(tables)))
    for t in tables:
        local = not local
        if local:
            self.home_team.set_stats(t.tfoot.find("tr", class_="team-totals"))
        else:
            self.away_team.set_stats(t.tfoot.find("tr", class_="team-totals"))
        rows = t.tbody.findAll("tr")
        for row in rows:
            time = row.find("td", class_="min").string
            if time.find(MESSAGES.MESSAGE_PLAYER_NOT_PLAY) == -1:
                if local:
                    player = PlayerEuropa(row)
                    self.home_players.append(player)
                else:
                    player = PlayerEuropa(row)
                    self.away_players.append(player)
In this code I print the total number of tables found, and as you can see, right now I get zero tables.
And now, if I restart the process, it will work correctly.
Edit III:
One more example of the process I have defined. These URLs were processed correctly:
http://www.fiba.basketball/eurocupwomen/18-19/game/2510/VBW-CEKK-Ceglèd-Rutronik-Stars-Keltern#|tab=boxscore
http://www.fiba.basketball/eurocupwomen/18-19/game/2510/Elfic-Fribourg-Tarbes-GB#|tab=boxscore
http://www.fiba.basketball/eurocupwomen/18-19/game/2510/Basket-Landes-BBC-Sint-Katelijne-Waver#|tab=boxscore
But when I tried to process this other URL, I got the error explained previously:
http://www.fiba.basketball/eurocupwomen/18-19/game/0111/Gorzow-Sparta-k-M-R--Vidnoje#|tab=boxscore
To render the web page I use Selenium, always at the beginning of the process. I get the content of the web page with this code:
def __init__(self, url):
    """Constructor"""
    caps = DesiredCapabilities().CHROME
    caps["pageLoadStrategy"] = "normal"
    b = webdriver.Chrome(desired_capabilities=caps, executable_path="./chromedriver")
    b.set_window_size(300, 300)
    b.get(url)
    html = b.page_source
    self.setSoup(bs4.BeautifulSoup(html, 'html.parser'))
    b.close()
After this code runs, I start to retrieve the information from the web page. For some reason, sometimes the page has not rendered completely: when I try to get some information, it is not found and I get the error explained previously.
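One way to guard against this (not from the original post; just a sketch of Selenium's explicit-wait pattern applied to the constructor above) is to block until the tables actually exist before reading page_source:

import bs4
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def fetch_soup(url, timeout=30):
    b = webdriver.Chrome(executable_path="./chromedriver")
    try:
        b.get(url)
        # b.get() only waits for the initial document; block here until the
        # JavaScript-rendered stats tables exist before grabbing the source.
        WebDriverWait(b, timeout).until(
            EC.presence_of_element_located(
                (By.CSS_SELECTOR, "table.competitor-table.comparative.responsive")))
        return bs4.BeautifulSoup(b.page_source, "html.parser")
    finally:
        b.quit()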
I have a step like this:
Then(/^I can see the Eligible Bugs list "([^"]*)"$/) do |bugs|
  bugs_list = bugs.split(", ")
  assert page.has_css?(".eligible-bugs", :visible => true)
  within(".eligible-bugs") do
    bugs_list.each do |bug|
      assert page.has_content?(bug)
    end
  end
end
But the step sometimes fails at the within(".eligible-bugs") do line with the error 'Unable to find css ".eligible-bugs"'.
I find that odd: the assertion just passed, which means the element was visible.
Why can't within find the CSS? How does this happen?
BTW, I have set my max wait time to 5.
Capybara.default_max_wait_time = 5
The only way that should happen is if the page is dynamically changing while you're running the test - at some point during your check for all the bugs' content, the page changes and the '.eligible-bugs' element goes away. The test and the browser run separately, so how/why it happens depends on what else your page is doing in the browser; it would also depend on what steps have come before this in the test.
Also, note that the element isn't necessarily disappearing between the has_css? check and the within statement first running. If it disappears at any point while the code inside the within block is running, the same error can be thrown as Capybara attempts to reload the '.eligible-bugs' element.
From the title of the question I assume the list you want to check is the result of a search or filtering action. If it is, does that action remove the existing '.eligible-bugs' element and then, after some time, replace it with a new one returned from an ajax request or something similar? If that is the case then, since you control the test data, you should wait for the correct results count to show, thereby ensuring any element replacements have completed, before checking for the text. How you do that depends on the exact makeup of the page, but if each eligible bug were a child of '.eligible-bugs' and had a class of '.eligible-bug', I would write your test something like:
Then(/^I can see the Eligible Bugs list "([^"]*)"$/) do |bugs|
  bugs_list = bugs.split(", ")
  assert_css(".eligible-bugs > .eligible-bug", count: bugs_list.size) # wait until the expected number of results has shown up
  within(".eligible-bugs") do
    bugs_list.each do |bug|
      assert_content(bug)
    end
  end
end
I have a simple angular / requirejs / node project that loads correctly when viewed from a browser. I'm trying to get e2e tests with karma set up.
I've copied all of the e2e configurations and directory structures from the angular-require-js seed into my own project. Unfortunately, the tests in my own project give bizarre (and ever-changing!) results. Here's the stripped-down test I'm trying to run:
describe('My Application', function() {
  beforeEach(function() {
    browser().navigateTo('/');
    sleep(0.5);
  });

  it('shows an "Ask a Question" button on the index page', function() {
    expect(element('a').text()).toBe('Ask a Question');
  });
});
Sometimes the test fails
Executed 1 of 1 (1 FAILED) (0.785 secs / 0.614 secs)
Firefox 22.0 (Mac) My Application shows an "Ask a Question" button on the index page FAILED
element 'a' text
http://localhost:9876/base/test/lib/angular/angular-scenario.js?1375035800000:25397: Selector a did not match any elements.
(but there ARE a elements on the page!)
Sometimes the test hangs
Executed 0 of 0! In these cases the test-runner browser does show that it's trying to run a test, but it never completes:
It just stays like this forever. My app IS displayed in the browser during this hang.
Without element('a') it always passes
The only way to get consistent results is to avoid element(). If I expect(true).toBe(true) then 1 out of 1 tests always pass.
How can I debug this?
I'm at a loss for how to move forward. The test browser is correctly displaying my app, with the relevant 'a' element and everything. The test runner itself seems to only sometimes recognize that it should be running something and NEVER finds the a element. Is there a way to step through the test running process? Is this a common problem that happens when [x] is misconfigured?
Thanks for any suggestions!
karma-e2e.conf.js
basePath = '../';

files = [
  'test/lib/angular/angular-scenario.js',
  ANGULAR_SCENARIO_ADAPTER,
  'test/e2e/**/*.js'
];

autoWatch = false;
browsers = ['Firefox'];
singleRun = true;

proxies = {
  '/': 'http://localhost:3000/'
};

urlRoot = "__karma__";

junitReporter = {
  outputFile: 'test_out/e2e.xml',
  suite: 'e2e'
};
How many anchor tags do you have on the page?
You may not be referencing the anchor you expect. Add an id attribute to the anchor and test again. If it is the only anchor tag on the page, try matching the text rather than expecting strict equality, i.e.:
expect(element('#anchor-tag-id').text()).toMatch(/Ask a Question/);
If you use Chrome, open the developer tools and inspect your a element to check the actual values; this may help a lot.
I have a similar situation to this question.
I have a custom sequential SharePoint workflow, developed in Visual Studio 2008. It is associated with an InfoPath form submitted to a form library and is configured to start automatically when an item is created.
It works sometimes. Sometimes it just fails to start.
Just like the question linked above, I checked in the debugger, and the issue is that the InfoPath fields published as columns in the library are empty when the workflow fires. (I access the fields with workflowProperties.Item["fieldName"].) But there appears to be a race condition, as those fields actually show up in the library view, and if I terminate the failed workflow and restart it manually, it works fine!
After a lot of head-scratching and testing, I've determined that the workflow will start successfully if the user is running any version of IE on Windows XP, but it fails if the same user submits the same form data from a Vista or Windows 7 client machine.
Does anyone have any idea why this is happening?
I have used another solution, which waits only until the InfoPath property is available (or 60 seconds at most):
public SPWorkflowActivationProperties workflowProperties =
    new SPWorkflowActivationProperties();

private void onOrderFormWorkflowActivated_Invoked(object sender, ExternalDataEventArgs e)
{
    SPListItem workflowItem;
    workflowItem = workflowProperties.List.GetItemById(workflowProperties.ItemId);

    int waited = 0;
    int maxWait = 60000; // Max wait time in ms

    while (workflowItem["fieldName"] == null && (waited < maxWait))
    {
        System.Threading.Thread.Sleep(1);
        waited++;
        workflowItem = workflowProperties.List.GetItemById(workflowProperties.ItemId);
    }

    // For testing: write the delay time into a Workflow History event
    SPWorkflow.CreateHistoryEvent(
        workflowProperties.Web,
        workflowProperties.WorkflowId,
        (int)SPWorkflowHistoryEventType.WorkflowComment,
        workflowProperties.OriginatorUser, TimeSpan.Zero,
        waited.ToString() + " ms", "Waiting time", "");
}
In the code above, workflowProperties.Item never receives the InfoPath property; workflowProperties.List.GetItemById(workflowProperties.ItemId) does, after some delay.
This occurs because Vista/7 saves InfoPath forms through WebDAV, while XP uses another protocol (sorry, I can't remember which at the moment). SharePoint catches the "ItemAdded" event before the file is actually uploaded (that is, the item is already created, but the file upload is still in progress).
As a workaround, you can add a delay activity that waits for 10 seconds as the first thing in your workflow (it will actually be longer than ten seconds due to the way workflows are scheduled in SPPS). This way the upload will already have finished when you read the item. To inform users about what's happening, you can add a "logToHistoryList" activity before the delay.