I'm not sure what's going on here, but my code fails with exit code 15 and no other error messages, depending on what value I pass to time.sleep():
import time
from functools import partial
from threading import Thread
import docker
import docker.utils

Thread(target=partial(image_runner.run, create_biobox_args(app))).start()
time.sleep(1)
client = docker.Client(**docker.utils.kwargs_from_env(assert_hostname=False))
# find the ID of the first container running this image
container = next(c for c in client.containers() if c['Image'] == name)['Id']
# poll until the container has stopped
while client.inspect_container(container)["State"]["Status"] == "running":
    time.sleep(20)
I have a docker container started in another thread with Thread(...).start(). However, if I call time.sleep() with a value greater than 10, my code fails with exit code 15; with smaller values it works. Any idea what's going on here? I've been trying to debug this but haven't a clue.
Turns out my feature tests time out after 10 seconds, and this is what causes the problem (signal 15 is SIGTERM, presumably sent to the process when the test harness gives up). Adjusting the cucumber feature tests' default timeout fixes the problem.
I have a function that takes some time to complete, so I continuously update a status variable from inside it. A second function reads that status and returns it on the command line. The problem is that while the first function is running, it occupies the command line. How do I let anyone using my program check the status without halting execution of the first function? Here's an example of what I mean:
status_map = {}

def functionA():
    while log:
        status = minutes_completed / total
        upload_logs(log)
        set_status(id, status)

def set_status(id, status):
    status_map[id] = status

def get_status(id):
    return status_map[id]
How would I be able to call get_status if functionA is still running on my terminal?
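One way to get there is to run functionA in a background thread so the terminal stays free; here is a minimal sketch, assuming the status helpers above (the job id, duration, and sleeps are purely illustrative):
import threading
import time

status_map = {}

def set_status(id, status):
    status_map[id] = status

def get_status(id):
    return status_map.get(id)

def functionA(id, total):
    for minutes_completed in range(1, total + 1):
        time.sleep(1)  # stand-in for the real work
        set_status(id, minutes_completed / total)

# the long-running task goes to a daemon thread, not the main thread
threading.Thread(target=functionA, args=('job1', 60), daemon=True).start()

while get_status('job1') != 1.0:
    print(get_status('job1'))  # the prompt stays free while functionA runs
    time.sleep(5)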
I want to establish a connection with a b0 client for CoppeliaSim using the Python API. Unfortunately, this connection function does not have a timeout and will run indefinitely if it fails to connect.
To counter that, I tried moving the connection into a separate process (multiprocessing) and checking after a couple of seconds whether the process is still alive. If it is, I kill it and continue with the program.
This sort of works, in that it no longer blocks my program. However, the process does not terminate even when the connection is made successfully, so my check kills it even though the connection succeeded.
How can I fix this, and also write the b0client to the global variable?
import multiprocessing
import b0RemoteApi

def connection_function():
    global b0client
    b0client = b0RemoteApi.RemoteApiClient('b0RemoteApi_pythonClient', 'b0RemoteApi', 60)
    print('Success!')
    return 0

def establish_b0_connection(timeout):
    connection_process = multiprocessing.Process(target=connection_function)
    connection_process.start()
    # Wait for [timeout] seconds or until the process finishes
    connection_process.join(timeout=timeout)
    # If the process is still active
    if connection_process.is_alive():
        print('[INITIALIZATION OF B0 API CLIENT FAILED]')
        # Terminate - may not work if process is stuck for good
        connection_process.terminate()
        # OR Kill - will work for sure, no chance for process to finish nicely however
        # connection_process.kill()
        connection_process.join()
        print('[CONTINUING WITHOUT B0 API CLIENT]')
        return False
    else:
        return True

if __name__ == '__main__':
    b0client = None
    establish_b0_connection(timeout=5)
    # Continue with the code, with or without connection.
    # ...
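The reason the global never changes is that connection_function runs in a child process with its own memory: assigning b0client there does not touch the parent's b0client. A minimal sketch of one workaround, assuming the same b0RemoteApi import as above: the child reports success over a multiprocessing.Queue, and since the client object cannot be pickled back to the parent, any work that needs it stays in the child (if you need the client in the main process, a daemon thread is the simpler route):
import multiprocessing
from queue import Empty

import b0RemoteApi  # as in the question

def connection_function(result_queue):
    # this call may hang forever if the connection cannot be made
    client = b0RemoteApi.RemoteApiClient('b0RemoteApi_pythonClient', 'b0RemoteApi', 60)
    result_queue.put(True)  # tell the parent the connection succeeded
    # keep using `client` here; it cannot be sent back to the parent

def establish_b0_connection(timeout):
    result_queue = multiprocessing.Queue()
    connection_process = multiprocessing.Process(
        target=connection_function, args=(result_queue,), daemon=True)
    connection_process.start()
    try:
        return result_queue.get(timeout=timeout)  # True once connected
    except Empty:
        print('[INITIALIZATION OF B0 API CLIENT FAILED]')
        connection_process.terminate()  # give up on the stuck child
        connection_process.join()
        return False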
I am using CherryPy to talk to an authentication server. The script runs fine if all the inputted information is correct. But if the user makes a mistake typing their ID, the internal HTTP error screen fires OK, but the server keeps running, and nothing else in the script runs until the CherryPy engine is closed, so I have to kill the script manually. Is there some code I can put in the index along the lines of:
if timer > 10 and connections == 0:
    close cherrypy  # (I have a method for this already)
I'm mostly a data mangler, so I'm not used to web servers. Googling shows lots of hits for closing CherryPy when there are too many connections, but not for when there have been no connections for a specified (short) time. I realise the point of a web server is usually to hang around waiting for connections, so this may be an odd case. All the same, any help is welcome.
Interesting use case. You can use the CherryPy plugin infrastructure to do something like that; take a look at this ActivityMonitor plugin implementation. It shuts down the server if it is not handling anything and hasn't seen any request in a specified amount of time (in this case 10 seconds).
You may have to adjust the logic on how to shut it down, or do anything else, in the _verify method.
If you want to read a bit more about the publish/subscribe architecture, take a look at the CherryPy Docs.
import time
import threading

import cherrypy
from cherrypy.process.plugins import Monitor


class ActivityMonitor(Monitor):
    def __init__(self, bus, wait_time, monitor_time=None):
        """
        bus: cherrypy.engine
        wait_time: Seconds since last request that we consider to be active.
        monitor_time: Seconds that we'll wait before verifying the activity.
                      If it is not defined, wait half the `wait_time`.
        """
        if monitor_time is None:
            # if monitor time is not defined, then verify half
            # the wait time since the last request
            monitor_time = wait_time / 2
        super().__init__(
            bus, self._verify, monitor_time, self.__class__.__name__
        )
        # use a lock to make sure the thread that triggers the before_request
        # and after_request does not collide with the monitor method (_verify)
        self._active_request_lock = threading.Lock()
        self._active_requests = 0
        self._wait_time = wait_time
        self._last_request_ts = time.time()

    def _verify(self):
        # verify that we don't have any active requests and
        # shut down the server in case we haven't seen any activity
        # since self._last_request_ts + self._wait_time
        with self._active_request_lock:
            if (not self._active_requests and
                    self._last_request_ts + self._wait_time < time.time()):
                self.bus.exit()  # shutdown the engine

    def before_request(self):
        with self._active_request_lock:
            self._active_requests += 1

    def after_request(self):
        with self._active_request_lock:
            self._active_requests -= 1
            # update the last time a request was served
            self._last_request_ts = time.time()


class Root:
    @cherrypy.expose
    def index(self):
        return "Hello user: current time {:.0f}".format(time.time())


def main():
    # here is how to use the plugin:
    ActivityMonitor(cherrypy.engine, wait_time=10, monitor_time=5).subscribe()
    cherrypy.quickstart(Root())


if __name__ == '__main__':
    main()
I am working on a Python program using Flask, where I want to extract keys from a dictionary; these keys are in text format. I want to repeat this whole process at a specific interval and display the output in the local browser each time.
I have tried this using flask_apscheduler. The program runs and shows output, but only once; it does not repeat after the interval.
This is the Python program I tried:
from flask import Flask, request, jsonify
from flask_apscheduler import APScheduler
import json

app = Flask(__name__)

@app.route('/trend', methods=['POST', 'GET'])
def run_tasks():
    for i in range(0, 1):
        # getTrendingEntities is defined elsewhere in the application
        app.apscheduler.add_job(func=getTrendingEntities, trigger='cron',
                                args=[i], id='j' + str(i), second=5)
    return "Code run perfect"

@app.route('/loc', methods=['POST', 'GET'])
def getIntentAndSummary():
    if request.method == "POST":
        reqStr = request.data.decode("utf-8", "strict")
        reqStrArr = reqStr.split()
        reqStr = ' '.join(reqStrArr)
        text_1 = []
        requestBody = json.loads(reqStr)
        if requestBody.get('m') is not None:
            text_1.append(requestBody.get('m'))
        return jsonify(text_1)

if __name__ == "__main__":
    app.run(port=8000)
The problem is that you're calling add_job every time the /trend page is requested. The job should only be added once, as part of the initialization, before starting the scheduler (see below).
It would also make more sense to use the 'interval' trigger instead of 'cron', since you want your job to run every 5 seconds. Here's a simple working example:
from flask import Flask
from flask_apscheduler import APScheduler
import datetime

app = Flask(__name__)

# function executed by scheduled job
def my_job(text):
    print(text, str(datetime.datetime.now()))

if __name__ == "__main__":
    scheduler = APScheduler()
    scheduler.add_job(func=my_job, args=['job run'], trigger='interval', id='job', seconds=5)
    scheduler.start()
    app.run(port=8000)
Sample console output:
job run 2019-03-30 12:49:55.339020
job run 2019-03-30 12:50:00.339467
job run 2019-03-30 12:50:05.343154
job run 2019-03-30 12:50:10.343579
You can then modify the job attributes by calling scheduler.modify_job().
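For instance, a one-line sketch (the id 'job' matches the job added above; I'm assuming the attribute you want to change is the job's args):
# change what gets passed to my_job while the scheduler keeps running
scheduler.modify_job(id='job', args=['job run (updated)'])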
As for the second problem which is refreshing the client view every time the job runs, you can't do that directly from Flask. An ugly but simple way would be to add <meta http-equiv="refresh" content="1" > to the HTML page to instruct the browser to refresh it every second. A much better implementation would be to use SocketIO to send new data in real-time to the web client.
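A minimal sketch of the SocketIO variant, assuming the flask_socketio package is installed (the 'update' event name and payload are illustrative); the scheduled job pushes data instead of waiting for the browser to ask:
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

# schedule this with APScheduler exactly like my_job above
def my_job(text):
    # push fresh data to every connected client in real time
    socketio.emit('update', {'data': text})

if __name__ == '__main__':
    socketio.run(app, port=8000)
On the client side, a few lines of JavaScript subscribing to the 'update' event would re-render the page without a full refresh.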
I would recommend that you start a daemonized thread and import your application variable; then you can use with app.app_context() in order to log to your console.
It's a little bit more fiddly, but it allows the application's work to run on separate threads.
I use this method to fire off a bunch of HTTP requests concurrently; the alternative is to wait for each response before making a new one.
I'm sure you've realised that the thread will stay occupied if you run an infinitely running command.
Make sure to daemonize the thread, so that when you stop your web app it kills the thread gracefully at the same time.
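A minimal sketch of that pattern, assuming your Flask instance is importable from a module named app (the job body is illustrative):
import threading
import time

from app import app  # hypothetical module holding your Flask instance

def background_job():
    while True:  # an infinitely running task, as described above
        with app.app_context():
            app.logger.info('background job tick')
        time.sleep(5)

# daemon=True ties the thread's lifetime to the web app's process
threading.Thread(target=background_job, daemon=True).start()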
I'm writing a scenario that checks for a timeout exception. This is my Ruby code:
# Execute the constructed command, logging out each line
log.info "Executing '#{command.join(' ')}'"
begin
  timeout(config['deploy-timeout'].to_i) do
    execute_and_log command
  end
rescue Exception => e
  log.info 'Err, timeout, ouch'
  raise Timeout::Error
end
I'd like to check, in Gherkin/Cucumber, either that the output contains 'Err, timeout, ouch' or that an exception was raised.
Then(/^the output should be 'Err, timeout, ouch'$/) do
  puts 'Err, timeout, ouch'
end
How can I do that?
Gherkin isn't the place where the magic happens. The magic happens in the step definition files, where you use Ruby, Selenium, RSpec and other technologies (e.g. Capybara) to create the desired behaviors. So to rephrase your question: "How can I test a timeout exception given a cucumber, Ruby, RSpec and Selenium implementation?"
Selenium has the concept of implicit waits: the duration over which Selenium retries an operation before declaring failure. You can control implicit waits by setting the following in your env.rb:
# Set the amount of time the driver should wait when searching for elements
driver.manage.timeouts.implicit_wait = 20
# Sets the amount of time to wait for an asynchronous script to finish
# execution before throwing an error. If the timeout is negative, then the
# script will be allowed to run indefinitely.
driver.manage.timeouts.script_timeout = 20
# Sets the amount of time to wait for a page load to complete before throwing an error.
# If the timeout is negative, page loads can be indefinite.
driver.manage.timeouts.page_load = 20
The units are seconds. You will need to set implicit_wait higher than your 'Err, timeout, ouch' timeout. Think.
I believe that WebDriver throws Error::TimeOutError when a wait time is exceeded. Your code throws, what, Timeout::Error? So in the rescue sections:
Given(/^ .... $/) do
  begin
    ...
  rescue Error::TimeOutError => e
    @timeout_exception = "Error::TimeOutError"
  rescue Timeout::Error => f
    @timeout_exception = "Err, timeout, ouch"
  end
end
Then(/^the output should be '(.*)'$/) do |expectedException|
  expect(@timeout_exception).to eq(expectedException), "Err, timeout, ouch"
end
The above assumes that you are using rspec-expectations, i.e. RSpec 3.
First, you need to be able to execute the code under test in a way that guarantees the timeout. One option is to call the CLI and set up config files and whatnot, but you would be much better off if you can call just the relevant part of the code directly from your step definition.
Next, you want to be able to test the outcome of your 'When' steps in your 'Then' steps. Enclose the actual call in the 'When' step in a 'try' block and store the result, as well as any exception, somewhere the 'Then' step can do assertions on them.
Something along these lines (please excuse the syntax):
Given(/a configuration that will cause a timeout/) do
  sharedContext.config = createConfigThatWillCauseTimeout()
end

When(/a command is executed/) do
  begin
    sharedContext.result = executeCommand(sharedContext.config)
  rescue Exception => ex
    sharedContext.exception = ex
  end
end

Then(/there should have been a Timeout/) do
  logContent = loadLogContent(sharedContext.config)
  assert sharedContext.exception.is_a?(Timeout::Error)
  assert logContent.contains("Err, timeout, ouch")
end