Importing webbrowser takes 30+ seconds, which really slows down the startup of my program. I've tried setting my default browser to IE and to Chrome, but got the same result either way. I tried it on other machines too.
I'm running Python 3.6.4 (Windows 7 x64) with a fairly fast internet connection. I'm fairly new to Python programming as well.
My questions are:
What causes this slowdown? In YouTube videos I've watched, people import webbrowser and it seems to load instantaneously. What can I do about it?
I've desperately tried to "cheat" my way around this by putting the import inside a button's callback function so that it wouldn't affect the startup of the program (it didn't work; kind of stupid now that I think about it. Startup still took 30+ seconds).
Another desperate measure I'm planning is to run the import in a separate thread so it can load in the background while the program starts up. I haven't done multithreading yet, so I still need to learn it. Would this work, though?
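Something like this is what I have in mind (a rough sketch only; I haven't tested it, and the function name is just a placeholder I made up):

import threading

def _import_webbrowser():
    # Binds the module at global scope once the slow import finishes.
    global webbrowser
    import webbrowser

# Start the import in the background so the main program keeps starting up.
threading.Thread(target=_import_webbrowser, daemon=True).start()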
I don't know what other information I could share regarding this since I'm really lost here. Any advice would be much appreciated.
Edit: I made a simple script to time the import:
import timeit
start = timeit.default_timer()
import webbrowser
stop = timeit.default_timer()
print('Time: ', stop - start)
Output:
How are you so sure that the slowdown is due to the import? It could be due to slow loading in your Chrome browser; try clearing Chrome's cache. Is your Chrome browser up to the mark as far as speed is concerned? Also, please show me the code.
Related
I am trying to build a test which will fail upon slow / sluggish / janky performance of a web page or the elements within it.
By "slow" or "sluggish" I mean mostly the following:
Very delayed response when scrolling down the page
Delayed response when clicking an element
This sluggishness on a particular page happens when the system under test is scaled out.
One idea I had to tackle this was to set a timeout when explicitly waiting for some element to appear in the DOM, but frankly I doubt this will answer my need, because I do see all the elements in the DOM; it's just that interaction with the page is very slow, in both FF and Chrome.
My Stack is Python3, Selenium WebDriver, Pytest
Thanks in advance!
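For reference, the kind of timing check I have in mind looks roughly like this (a sketch only; the URL, the locator, and the 1-second threshold are arbitrary placeholders, not from my actual suite):

import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

# Time how long a single interaction takes and fail if it is too slow.
start = time.perf_counter()
driver.find_element(By.ID, "submit").click()  # placeholder locator
elapsed = time.perf_counter() - start
assert elapsed < 1.0, "Click took %.2fs - page feels sluggish" % elapsed

driver.quit()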
I am using the Spyder interface with a Python script that works through many time steps in succession. At a random point in my code, the process terminates and the console says "kernel died, restarting...". I tried running the same script in PyCharm and the process also terminates, seemingly at a random point, with an exit code which I assume means the same thing.
Does anyone have any tips on how to get rid of this problem? Even a workaround so I can get some work done would help. This is incredibly frustrating.
Note: I recently moved and got a new router and internet service, not sure if that might affect things.
I've noticed that Datalab is occasionally extremely slow (to the point where I believe it's just hanging).
import json
import pprint
import subprocess
import pyspark
For instance, even this trivial code block takes forever to run. If I keep refreshing the page and rerunning it, it sometimes works. What can cause this?
Resize your VM, or create a new one with more memory/CPUs. You can do this by stopping the VM and then editing it in Compute Engine.
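If you prefer to script the resize, something along these lines should work, assuming the gcloud CLI is installed and authorized (the instance name, zone, and machine type below are placeholders, not values from your setup):

import subprocess

instance, zone = "datalab-vm", "us-central1-a"  # placeholders for your VM

# The machine type can only be changed while the instance is stopped.
subprocess.run(["gcloud", "compute", "instances", "stop", instance,
                "--zone", zone], check=True)
subprocess.run(["gcloud", "compute", "instances", "set-machine-type", instance,
                "--machine-type", "n1-highmem-4", "--zone", zone], check=True)
subprocess.run(["gcloud", "compute", "instances", "start", instance,
                "--zone", zone], check=True)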
I have an experiment in which I present stimuli using PsychoPy / PyGaze and track eye movements with an EyeTribe eye tracker. In this experiment I update the size of two visual stimuli on each frame (at 60 Hz). I prepare each frame beforehand and afterwards loop through all of the screen objects and present them. Meanwhile, a continuous sound is playing. When I run this experiment in dummy mode (mouse movement is used as a simulation for gaze position), there are no timing issues for the visual presentation. However, when I run the experiment while performing eye tracking, the timing of the visual presentation is no longer accurate (higher variability in duration of frames).
I tried looking into the multithreading more, but in the pytribe script of PyGaze I can't find any evidence that one thread is waiting for an event coming from the eye-tracking thread. So I have no idea how to figure out what is causing the timing issues or how to solve them. (I hope I explained the problem specifically enough.)
It's worse than just needing a separate thread for eye tracking versus stimulus rendering. What you really need is a separate process, which avoids Python's Global Interpreter Lock (GIL). The GIL prevents different threads from executing Python code on different processors at the same time.
For improved temporal precision I would really recommend you switch from PyGaze to ioHub (which also has support for the EyeTribe, I believe). ioHub genuinely runs on a different core of the machine where possible, so that your stimuli and eye data can be processed independently in time, and it handles all the synchronization for you.
Adding to Jon's answer: Hanne also emailed about the problem, and it turns out she was running her experiments from Spyder. When run from the command prompt, there shouldn't be any timing issues. (Obviously, the GIL is still around, but in practice this doesn't seem to affect screen timing.)
To prevent any issues in the future, I've added a class that allows for running the EyeTribe in a parallel Process. See: https://github.com/esdalmaijer/PyTribe/blob/master/pytribe.py#L365
Example use:
if __name__ == "__main__":
    import time

    from pygaze.display import Display
    from pygaze.screen import Screen
    from pytribe import ParallelEyeTribe

    disp = Display()
    scr = Screen()
    scr.draw_fixation(fixtype='cross')
    tracker = ParallelEyeTribe()

    tracker.start_recording()
    # Present the fixation cross.
    disp.fill(scr)
    disp.show()
    tracker.log("Stimulus onset")
    time.sleep(10)
    # Clear the screen again (assuming offset means a blank display).
    disp.fill()
    disp.show()
    tracker.log("Stimulus offset")
    tracker.stop_recording()
    tracker.close()
    disp.close()
I am writing a program for school, and I have come across a problem. It is a web crawler, and sometimes it gets stuck on a URL for over 24 hours. I was wondering if there is a way to continue the while loop and move on to the next URL if one takes over a set amount of time. Thanks.
If you are using urllib in Python 3, you can use the timeout argument in urllib.request.urlopen(url, timeout=<t in secs>). That parameter applies to all blocking operations used internally by urllib.
But if you are using a different library, consult its documentation or mention it in the question.
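As an illustration, a crawl loop that skips a URL after a timeout could look roughly like this (a sketch, assuming Python 3; the URLs and the 10-second timeout are placeholders):

import socket
import urllib.error
import urllib.request

urls = ["https://example.com", "https://example.org"]  # placeholder URLs

for url in urls:
    try:
        # Any connect/read that blocks longer than 10 seconds raises an error.
        with urllib.request.urlopen(url, timeout=10) as response:
            data = response.read()
    except (urllib.error.URLError, socket.timeout) as err:
        print("Skipping %s: %s" % (url, err))
        continue  # move on to the next URL instead of hanging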