Frank Cucumber Test Case Hangs When Using a "When I wait" Step

I'm using frank-cucumber to test my iOS app and have run into some problems when my test is of the following form
When I wait to see "OpenButton"
If a UIView with the accessibility label "OpenButton" never shows up, then instead of timing out and reporting a failure once WAIT_TIMEOUT is hit, cucumber just hangs.
Since I don't see WAIT_TIMEOUT used anywhere in core_frank_steps.rb, I wonder whether that is why any test of the form "When I wait..." just hangs.
Note: core_frank_steps.rb can be found here

# Polls every 0.1s, returns true when the element is present
# @param selector [String] Frankly selector, e.g. view marked:''
# @param timeout [Int] seconds to wait
def wait_for_element(selector, timeout = 10)
  # the return value of the yield expression isn't propagated, so we capture it in a closure
  res = nil
  wait_until(:timeout => timeout, :message => "Waited for element #{selector} to exist") {
    res = element_exists(selector)
  }
  res
end
The above function helped us get around some of these wait scenarios.
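For completeness, here is a minimal sketch of a custom step definition built on that helper. The step wording and accessibility label are made up for illustration, and depending on the frank-cucumber version wait_until may itself raise on timeout rather than fall through to the check below.
When /^I wait up to (\d+) seconds to see "([^"]*)"$/ do |timeout, label|
  # wait_for_element is the helper above; raise so cucumber reports a failure instead of hanging
  unless wait_for_element("view marked:'#{label}'", timeout.to_i)
    raise "Never saw a view with accessibility label '#{label}'"
  end
end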

Related

SwitchToContext does not return to original thread

I'm working with an API that can only access its objects on the main thread, so I need to create a new thread to be used for my GUI and then swap back to the original thread for any lengthy calculations involving the API.
So far I have the following code:
[<EntryPoint; STAThread>]
let main _ =
    Debug.WriteLine($"[{Thread.CurrentThread.ManagedThreadId}] - Initial thread")
    let initCtx = SynchronizationContext.Current
    let uiThread = new Thread(fun () ->
        let guiCtx = SynchronizationContext.Current
        Debug.WriteLine($"[{Thread.CurrentThread.ManagedThreadId}] - New UI thread")
        async {
            do! Async.SwitchToContext initCtx
            Debug.WriteLine($"[{Thread.CurrentThread.ManagedThreadId}] - Back to initial thread")
            // Lengthy API calculation here
            do! Async.SwitchToContext guiCtx
            Debug.WriteLine($"[{Thread.CurrentThread.ManagedThreadId}] - Back to UI thread")
        } |> Async.RunSynchronously)
    uiThread.SetApartmentState(ApartmentState.STA)
    uiThread.Start()
    1
However, when I run this I get the following output:
[1] - Initial thread
[4] - New UI thread
[5] - Back to initial thread
[5] - Back to UI thread
So it doesn't seem to be switching contexts the way I would expect. How can I switch back to the original thread after creating a new thread this way?
I have tried calling SynchronizationContext.SetSynchronizationContext(new DispatcherSynchronizationContext(Dispatcher.CurrentDispatcher)) first, to ensure that the original thread has a valid SynchronizationContext, but that causes the program to exit at the Async.SwitchToContext lines without throwing any exception.
I have also tried using Async.StartImmediate instead of RunSynchronously with the same result.
If I try both of these at the same time then the program just freezes up at the Async.SwitchToContext lines instead of exiting out.

Understanding cats-effect `Cancelable`

I am trying to understand how cats-effect Cancelable works. I have the following minimal app, based on the documentation:
import java.util.concurrent.{Executors, ScheduledExecutorService}
import cats.effect._
import cats.implicits._
import scala.concurrent.duration._

object Main extends IOApp {

  def delayedTick(d: FiniteDuration)
                 (implicit sc: ScheduledExecutorService): IO[Unit] = {
    IO.cancelable { cb =>
      val r = new Runnable {
        def run() =
          cb(Right(()))
      }
      val f = sc.schedule(r, d.length, d.unit)
      // Returning the cancellation token needed to cancel
      // the scheduling and release resources early
      val mayInterruptIfRunning = false
      IO(f.cancel(mayInterruptIfRunning)).void
    }
  }

  override def run(args: List[String]): IO[ExitCode] = {
    val scheduledExecutorService =
      Executors.newSingleThreadScheduledExecutor()
    for {
      x <- delayedTick(1.second)(scheduledExecutorService)
      _ <- IO(println(s"$x"))
    } yield ExitCode.Success
  }
}
When I run this:
❯ sbt run
[info] Loading global plugins from /Users/ethan/.sbt/1.0/plugins
[info] Loading settings for project stackoverflow-build from plugins.sbt ...
[info] Loading project definition from /Users/ethan/IdeaProjects/stackoverflow/project
[info] Loading settings for project stackoverflow from build.sbt ...
[info] Set current project to cats-effect-tutorial (in build file:/Users/ethan/IdeaProjects/stackoverflow/)
[info] Compiling 1 Scala source to /Users/ethan/IdeaProjects/stackoverflow/target/scala-2.12/classes ...
[info] running (fork) Main
[info] ()
The program just hangs at this point. I have many questions:
Why does the program hang instead of terminating after 1 second?
Why do we set mayInterruptIfRunning = false? Isn't the whole point of cancellation to interrupt a running task?
Is this the recommended way to define the ScheduledExecutorService? I did not see examples in the docs.
This program waits 1 second, and then returns () (then unexpectedly hangs). What if I wanted to return something else? For example, let's say I wanted to return a string, the result of some long-running computation. How would I extract that value from IO.cancelable? The difficulty, it seems, is that IO.cancelable returns the cancelation operation, not the return value of the process to be cancelled.
Pardon the long post but this is my build.sbt:
name := "cats-effect-tutorial"
version := "1.0"
fork := true
scalaVersion := "2.12.8"
libraryDependencies += "org.typelevel" %% "cats-effect" % "1.3.0" withSources() withJavadoc()
scalacOptions ++= Seq(
  "-feature",
  "-deprecation",
  "-unchecked",
  "-language:postfixOps",
  "-language:higherKinds",
  "-Ypartial-unification")
You need to shut down the ScheduledExecutorService. Try this:
Resource.make(IO(Executors.newSingleThreadScheduledExecutor))(se => IO(se.shutdown())).use { se =>
  for {
    x <- delayedTick(5.second)(se)
    _ <- IO(println(s"$x"))
  } yield ExitCode.Success
}
I was able to find an answer to these questions although there are still some things that I don't understand.
Why does the program hang instead of terminating after 1 second?
For some reason, Executors.newSingleThreadScheduledExecutor() causes things to hang. To fix the problem, I had to use Executors.newSingleThreadScheduledExecutor(new Thread(_)). It appears that the only difference is that the first version is equivalent to Executors.newSingleThreadScheduledExecutor(Executors.defaultThreadFactory()), although nothing in the docs makes it clear why this is the case.
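For concreteness, here is the one-line difference, sketched in isolation (in Scala 2.12 the lambda is SAM-converted to a ThreadFactory):
import java.util.concurrent.{Executors, ScheduledExecutorService}

// Hangs at exit in my case: threads come from Executors.defaultThreadFactory()
val hanging: ScheduledExecutorService =
  Executors.newSingleThreadScheduledExecutor()

// Works in my case: the lambda acts as the ThreadFactory
val working: ScheduledExecutorService =
  Executors.newSingleThreadScheduledExecutor(new Thread(_))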
Why do we set mayInterruptIfRunning = false? Isn't the whole point of cancellation to interrupt a running task?
I have to admit that I do not understand this entirely. Again, the docs were not especially clarifying on this point. Switching the flag to true does not seem to change the behavior at all, at least in the case of Ctrl-c interrupts.
Is this the recommended way to define the ScheduledExecutorService? I did not see examples in the docs.
Clearly not. The way that I came up with was loosely inspired by this snippet from the cats effect source code.
This program waits 1 second, and then returns () (then unexpectedly hangs). What if I wanted to return something else? For example, let's say I wanted to return a string, the result of some long-running computation. How would I extract that value from IO.cancelable? The difficulty, it seems, is that IO.cancelable returns the cancelation operation, not the return value of the process to be cancelled.
The IO.cancelable { ... } block returns IO[A], and the callback function cb has type Either[Throwable, A] => Unit. Logically this suggests that whatever is fed into the cb function is what the IO.cancelable expression will return (wrapped in IO). So to return the string "hello" instead of (), we rewrite delayedTick:
def delayedTick(d: FiniteDuration)
               (implicit sc: ScheduledExecutorService): IO[String] = { // Note IO[String] instead of IO[Unit]
  IO.cancelable[String] { cb => // Note IO.cancelable[String] instead of IO.cancelable[Unit]
    val r = new Runnable {
      def run() =
        cb(Right("hello")) // Note "hello" instead of ()
    }
    val f: ScheduledFuture[_] = sc.schedule(r, d.length, d.unit)
    IO(f.cancel(true)).void // .void so the cancel token is an IO[Unit]
  }
}
You need to explicitly terminate the executor at the end. It is not managed by the Scala or Cats runtime, so it won't shut down by itself; that's why your app hangs instead of exiting immediately.
mayInterruptIfRunning = false lets a task that is already running finish rather than interrupting it. You can set it to true to interrupt it forcibly, but that is not recommended.
There are many ways to create a ScheduledExecutorService; it depends on your needs. For this case it doesn't matter, apart from the shutdown issue in question 1.
You can return anything from the cancelable IO by calling cb(Right("put your stuff here")); the only thing that prevents you from retrieving the A is the cancellation firing first. You won't get anything if you cancel it before it gets to that point. Try returning IO(f.cancel(mayInterruptIfRunning)).delayBy(FiniteDuration(2, TimeUnit.SECONDS)).void and you will get what you expected, because 2 seconds > 1 second, so your code has enough time to run before it is cancelled.

Why would a Python 3 recursive function return None

I have this function that calls itself again when it hits a rate limit. It should eventually succeed and return the working data. It works normally at first, then the rate limiting works as expected, and finally, when the data goes back to normal, I get:
TypeError: 'NoneType' object is not subscriptable
def grabPks(pageNum):
    # cloudflare blocks bots... use scraper library to get around this or build your own logic
    # to store and use a manually generated cloudflare session cookie... I don't care 😎
    req = scraper.get("sumurl.com/" + str(pageNum)).content
    if req == b'Rate Limit Exceeded':
        print("adjust the rate limiting because they're blocking us :(")
        manPenalty = napLength * 3
        print("manually sleeping for {} seconds".format(manPenalty))
        time.sleep(manPenalty)
        print("okay let's try again... NOW SERVING {}".format(pageNum))
        grabPks(pageNum)
    else:
        tree = html.fromstring(req)
        pk = tree.xpath("/path/small/text()")
        resCmpress = tree.xpath("path/a//text()")
        resXtend = tree.xpath("[path/td[2]/small/a//text()")
        balance = tree.xpath("path/font//text()")
        return pk, resCmpress, resXtend, balance
I've tried to move the return to outside of the else scope but then it throws:
UnboundLocalError: local variable 'pk' referenced before assignment
Your top-level grabPks doesn't return anything if it is rate limited.
Think about this:
Call grabPks()
You're rate limited so you go into the if statement and call grabPks() again.
This time it succeeds so grabPks() returns the value to the function above it.
The first call now falls out of the if statement, reaches the end of the function, and implicitly returns None.
Try return grabPks(pageNum) instead inside your if block.
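A minimal sketch of the same trap, stripped of the scraping details:
def outer(n):
    if n == 0:
        # BUG: the result of the recursive call is discarded, so this branch
        # implicitly returns None to whoever called outer(0)
        outer(1)
    else:
        return "data"

print(outer(1))  # -> data
print(outer(0))  # -> None; fixed by writing `return outer(1)` above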
Well okay... I needed to return grabPks(pageNum) to make it play nice:
def grabPks(pageNum):
    # cloudflare blocks bots... use scraper library to get around this or build your own logic
    # to store and use a manually generated cloudflare session cookie... I don't care 😎
    req = scraper.get("sumurl.com/" + str(pageNum)).content
    if req == b'Rate Limit Exceeded':
        print("adjust the rate limiting because they're blocking us :(")
        manPenalty = napLength * 3
        print("manually sleeping for {} seconds".format(manPenalty))
        time.sleep(manPenalty)
        print("okay let's try again... NOW SERVING {}".format(pageNum))
        return grabPks(pageNum)
    else:
        tree = html.fromstring(req)
        pk = tree.xpath("/path/small/text()")
        resCmpress = tree.xpath("path/a//text()")
        resXtend = tree.xpath("[path/td[2]/small/a//text()")
        balance = tree.xpath("path/font//text()")
        return pk, resCmpress, resXtend, balance

Firebase: Add/Update Firebase Using a node.js Script

I have arbitrary JSON that is sensibly laid out like this:
[
  {
    "id": 100,
    "name": "Buckeye, AZ",
    "status": "OPEN",
    "address": {
      "street": "416 S Watson RD",
      "city": "Buckeye"
      ...
    }
  }
]
I've written a node.js script like this as a proof of concept (the reason I'm using node is that the JS API seems better supported than REST or Ruby for this; I could be wrong):
http = require('http')
Firebase = require('firebase')

all_sites_url = "http://supercharge.info/service/supercharge/allSites"
firebase_url = "https://tesla-supercharger.firebaseio.com/"

http.get(all_sites_url, (res) ->
  body = ""
  res.on "data", (chunk) ->
    body += chunk
    return
  res.on "end", ->
    response = JSON.parse(body)
    all_sites = response
    send_to_firebase(response)
    return
  return
).on "error", (e) ->
  console.log "Got error: ", e
  return

send_to_firebase = (response) ->
  firebase_ref = new Firebase(firebase_url)
  for charger in response
    console.log charger
    new_child = firebase_ref.push()
    new_child.set {id: charger.id, data: charger}, (error) ->
      if error
        console.log "Data could not be saved #{error}"
      else
        console.log "Data saved successfully"
The result is a unique id generated by Firebase, which has a data child and an id child. The data child has the expected information like name, status, etc.
What I'd prefer is to generate a key-value pair. E.g., for an id of 100:
- 100
  - name
  - address
    - street
    - city
    - etc.
So my first question is how to accomplish this or if it is even sensible.
After the first time around, this data (call it the data from the external server) will already be there, and a mobile app will have added some fields that are not present in the server data. The next time I fetch data from the external server, I want to update things that have changed that the server would know about, like status. I don't want to tamper with things that only the mobile devices would know about, like remote_observations.
I know I'm seeming a bit dense here, but I'm trying to put together a sensible data model that will be updatable from that server using a CRON job and incrementally updatable from a bunch of mobile devices.
Any help is much appreciated.
UPDATE: I have found that this works for getting the structure I want:
send_to_firebase = (response) ->
  firebase_ref = new Firebase(firebase_url)
  for charger in response
    firebase_ref.child(charger.id).update charger, (error) ->
      if error
        console.log "Data could not be saved #{error}"
      else
        responses_pending += 1
        console.log "Data saved successfully : #{responses_pending} pending"
  firebase_ref.on 'value', ->
    console.log "value received rp=#{responses_pending}"
    process.exit() if (responses_pending -= 1) < 1
So the code I settled on is this:
http = require('http')
Firebase = require('firebase')

firebase_url = '/path/to/your/firebase'

# code to get JSON of the form:
# {
#   "id": 100,
#   "name": "Buckeye, AZ",
#   "status": "OPEN",
#   "address": {
#     "street": "416 S Watson RD",
#     "city": "Buckeye",
#     "state": "AZ",
#     "zip": "85326",
#     "country": "USA"
#   },
#   ... etc.
# }

# Asynchronous get of JSON hash from some server or other.
get_my_fine_JSON().on 'complete', (response) ->
  send_to_firebase(response)

send_to_firebase = (response) ->
  firebase_ref = new Firebase(firebase_url)
  length = response.length
  for charger in response
    firebase_ref.child(charger.id).update charger, (error) ->
      if error
        console.log "Data could not be saved #{error}"
      else
        console.log "Data saved successfully"
      process.exit() if (length -= 1) is 0
Discussion:
The idea was to have a Firebase structure like this:
- 100
  - address
    - street: "123 Main Street"
    - etc.
That's reason 1 why id is pulled up to be the primary key. Reason 2 is so that I can uniquely identify an object pulled off the external server as the "same" one in my Firebase and apply any updates necessary.
Epiphany 1: Update is more like upsert. If the key is there, whatever hash you supply replaces matching values. If it's not there, then Firebase happily adds it. Which is way cool because it covers both the push and patch cases.
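A tiny sketch of what I mean (the field values are illustrative):
# Assuming child 100 already holds fields added by the mobile apps, e.g. remote_observations,
# this update only touches the keys it names and leaves everything else alone.
firebase_ref = new Firebase(firebase_url)
firebase_ref.child(100).update {status: "CONSTRUCTION"}, (error) ->
  console.log if error then "update failed: #{error}" else "status upserted"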
Epiphany 2: This process will hang waiting for events if nothing tells it to stop. That's why the countdown index, length is decremented until the code has upserted (for lack of a better term) each item.
Observation 1: Doing this in node.js is super fast compared with REST using Python or Ruby. And this upsert stuff is wicked cool if I'm understanding it right.
Observation 2: There isn't a ton of wisdom out there as of this writing regarding writing node shell scripts to do this kind of stuff. Maybe it's a good idea, maybe a bad one. I don't know.
Observation 3: Because of the asynchronous nature of node and the Firebase Javascript API (both GOOD THINGs), terminating a process before the last bit is done can be tricky because your process has to hang on just long enough to complete its last request/response with Firebase. This is, as mentioned before, done in the completion handler of the update. Otherwise we wouldn't necessarily be complete when the process exited.
Caveat 1: Related to observation 2, this could be a bad idea, but I haven't been able to find resources that speak to the problem.
Caveat 2: This could be a horrid abuse or misunderstanding of the Firebase update API. I am reporting observed behavior in the limited case of my specific data. YMMV.
Caveat 3: I'm hoping the process lifetime is as I suggest it is in observation 3.
A note to the decaffeinated: The Javascript for this is so trivially different that it shouldn't be too tough to translate. Or go to js2coffee and paste the Coffeescript into the right pane to get real Javascript in the left pane that you can tune.

Watir implicit_wait doesn't seem to work

We are currently using watir-webdriver (0.6.2) with firefox to run acceptance tests.
Our tests take a long time to run, and often fail with timeout errors.
We wanted to decrease the timeout time, for them to fail faster.
We tried:
browser = Watir::Browser.new("firefox")
browser.driver.manage.timeouts.implicit_wait=3
However, we are still experiencing 30s timeouts.
We couldn't find any documentation or question regarding this issue. Does anyone know how to configure Watir waiting timeouts properly?
It depends exactly what you mean by 'timeout'. AFAIK there are three different definitions of timeout commonly discussed when talking about Watir-Webdriver:
How long does the browser wait for a page to load?
How long does Watir-Webdriver explicitly wait before considering an element 'not present' or 'not visible' when told to wait via the '.when_present' function
How long does Watir-Webdriver implicitly wait for an object to appear before considering an element 'not present' or 'not visible' (when not waiting explicitly via the call in #2)
#1: Page load
Justin Ko is right that you can set the page load timeout as described if your goal is to modify that, though it looks like the canonical way to do it is to set the client timeout before creating the browser and pass it to the browser on creation:
client = Selenium::WebDriver::Remote::Http::Default.new
client.timeout = 180 # seconds – default is 60
b = Watir::Browser.new :firefox, :http_client => client
- Alistair Scott, 'How do I change the page load Timeouts in Watir-Webdriver'
#2: Explicit timeout
But I think #p0deje is right in saying you are experiencing explicit timeouts, though it's not possible to say for sure without seeing your code. In the code below, the explicit declaration overrides the implicit one (I am unsure whether that's intentional):
b = Watir::Browser.new :firefox
b.driver.manage.timeouts.implicit_wait = 3
puts Time.now #=> 2013-11-14 16:24:12 +0000
begin
  b.link(:id => 'someIdThatIsNotThere').when_present.click
rescue => e
  puts e #=> timed out after 30 seconds, waiting for {:id=>"someIdThatIsNotThere", :tag_name=>"a"} to become present
end
puts Time.now #=> 2013-11-14 16:24:43 +0000
Watir-Webdriver will wait 30 seconds before failure by default thanks to 'when_present'. Alternatively, you can say 'when_present(10)' to alter the default and wait 10 seconds (Watir-Webdriver > Watir::Wait#when_present). I cannot divine any way to do this globally. Unless you find such a thing - and please tell me if you do - it must be done on each call. :(
Edit: Fellow answerer Justin Ko gave me the answer as to how to do what I described above.
Edit 2: #jarib added this to Watir, per #justinko in the linked answer: "Update: This monkey patch has been merged into watir-webdriver and so will no longer be needed in watir-webdriver v0.6.5. You will be able to set the timeout using: Watir.default_timeout = 90"
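For illustration, the per-call form looks like this (the element locator is made up):
# Wait up to 10 seconds for this particular element instead of the 30-second default
b.link(:id => 'someIdThatIsNotThere').when_present(10).click

# Or, in watir-webdriver >= 0.6.5, change the default globally:
Watir.default_timeout = 90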
#3: Implicit timeout
The code you provided sets the time Watir-Webdriver will wait for any element to become present without you explicitly saying so:
b = Watir::Browser.new :firefox
b.driver.manage.timeouts.implicit_wait = 3
puts Time.now #=> 2013-11-14 16:28:33 +0000
begin
  b.link(:id => 'someIdThatIsNotThere').when_present.click
rescue => e
  puts e #=> unable to locate element, using {:id=>"someIdThatIsNotThere", :tag_name=>"a"}
end
puts Time.now #=> 2013-11-14 16:28:39 +0000
The implicit_wait is the amount of time selenium-webdriver tries to find an element before timing out. The default is 0 seconds. By changing it to "3", you are actually increasing the amount of time that it will wait.
I am guessing that you actually want to change the timeout for waiting for the page to load (rather than for finding an element). This can be done with:
browser.driver.manage.timeouts.page_load = 3
For example, we can say to only wait 0 seconds when loading Google:
require 'watir-webdriver'
browser = Watir::Browser.new :firefox
browser.driver.manage.timeouts.page_load = 0
browser.goto 'www.google.ca'
#=> Timed out waiting for page load. (Selenium::WebDriver::Error::TimeOutError)
Update: Since Watir 6.5, the default timeout is configurable using
Watir.default_timeout = 3
We experienced the same issue and chose to override Watir methods involving timeouts, namely
Watir::Wait.until { ... }
Watir::Wait.while { ... }
object.when_present.set
object.wait_until_present
object.wait_while_present
Here is the code; you can put it in your spec_helper.rb if you are using RSpec:
# method wrapping technique adapted from https://stackoverflow.com/a/4471202/177665
def override_timeout(method_name, new_timeout = 3)
  if singleton_methods.include?(method_name)
    old_method = singleton_class.instance_method(method_name)
    define_singleton_method(method_name) do |timeout = new_timeout, *args, &block|
      old_method.bind(self).(timeout, *args, &block)
    end
  else
    old_method = instance_method(method_name)
    define_method(method_name) do |timeout = new_timeout, *args, &block|
      old_method.bind(self).(timeout, *args, &block)
    end
  end
end
# override default Watir timeout from 30 seconds to 3 seconds
module Watir
  module Wait
    override_timeout(:until)
    override_timeout(:while)
  end

  module EventuallyPresent
    override_timeout(:when_present, 5) # 5 secs here
    override_timeout(:wait_until_present)
    override_timeout(:wait_while_present)
  end
end
We used the answer from https://stackoverflow.com/a/4471202/177665 to get it working.
