I'm trying to read a PNG file and print the NumPy matrix of the image to the terminal using OpenCV's imread function on the server, like this:
import cv2
from flask import Flask
import os

application = Flask(__name__)

@application.route('/readImage', methods=['POST'])
def handleHTTPPostRequest():
    imagePath = f'{os.getcwd()}/input.png'
    print('image path is', imagePath)
    print(cv2.__version__)
    im = cv2.imread(imagePath, cv2.IMREAD_COLOR)
    print(im)
    return 'success'
This gives the expected output on my local machine (Ubuntu 18.04) no matter how many times I execute it. I moved it to Elastic Beanstalk (CentOS) with the necessary setup. The request runs fine (proper logs, followed by 'success') the very first time I make a POST call.
But when I make the POST call a second time, it outputs only the first two logs (image path and cv2 version) and gets stuck there for a while. After some time, it shows this error:
End of script output before headers: application.py
I added one more line just before cv2.imread, just to make sure the file exists:
print('does the file exist', os.path.isfile(imagePath))
This returns True every time. I have restarted the server multiple times; it only works the very first time, and cv2.imread() is stuck after the first POST call. What am I missing?
When you print from a request handler, Flask tries to do something sensible, but print really isn't what you want to be doing, as it risks throwing the HTTP request/response bookkeeping off.
A fully supported way of getting diagnostic info out of a handler is to use the logging module. It requires a small bit of configuration: see http://flask.pocoo.org/docs/1.0/logging/
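A minimal sketch of that suggestion, using a plain module-level logger (the handler name, format string, and path are illustrative, not from the question's code):

```python
import logging

# Configure logging once at module level; messages go to stderr, not the
# HTTP response, so the request/response bookkeeping is left alone.
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(name)s: %(message)s")
logger = logging.getLogger("app")

def handle_request(image_path):
    logger.info("image path is %s", image_path)  # replaces the print() call
    return "success"
```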
To anyone facing this issue: I found a solution. Add this to your .ebextensions config file:
container_commands:
  AddGlobalWSGIGroupAccess:
    command: "if ! grep -q 'WSGIApplicationGroup %{GLOBAL}' ../wsgi.conf ; then echo 'WSGIApplicationGroup %{GLOBAL}' >> ../wsgi.conf; fi;"
Saikiran's final solution worked for me. I was getting this issue when I tried calling methods from the opencv-python library. I'm running Ubuntu 18.04 locally, and it works fine there. However, like in Saikiran's original post, when deployed to Elastic Beanstalk the first request works and the second one does not. For my EB environment, I'm using a Python 3.6-based Amazon Linux server.
Related
I have a Python Telegram bot and I have deployed it on Heroku. The problem is that my program creates pickled files while working. I have one database which saves the required data, plus pickles to save some nested classes that I have to use later at some stage. So these pickle files are an important part of the program. I am using the dill module for pickling.
I was able to save these files locally, but I can't when running on Heroku. I'll share the logs below. It is not even reaching the pickling part; it gives an error when opening the file itself.
import dill
import logging

save_test_logger = logging.getLogger('save_test')  # logger was used below but never defined

def saving_test(test_path, test_obj):
    try:
        save_test_logger.info('Saving test...')
        try:
            save_test_logger.info('opening file')
            test_file = open(test_path, 'wb')
        except Exception:
            save_test_logger.exception('Error opening file')
            return 0
        dill.dump(test_obj, test_file)
        save_test_logger.debug(f'file saved in {test_path}')
        test_file.close()
        return 1
    except Exception as exc:
        save_test_logger.exception('saving error')
        test_file.close()
        return exc
saving 859ab1303bcd4a65805e364a989ac8ca
2020-10-07T20:53:18.064670+00:00 app[web.1]: Could not open file ./test_objs/859ab1303bcd4a65805e364a989ac8ca.pkl
I have also added logging to my program, but now I am confused about where I can see the actual logs that are supposed to record these exceptions.
This is my first time using Heroku, and I am comparatively new to programming as well. So please help me identify the root cause of the problem.
I found the problem. Even though I had pushed everything, the directory test_objs wasn't there on Heroku. I added two more lines of code that use the os module to create the directory if it does not exist, and that solved the problem. I am not deleting the question so that, if someone gets stuck or confused in a similar situation, it might help them.
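A sketch of that fix (the directory name test_objs comes from the question's log output; exist_ok=True makes the call a no-op when the directory already exists):

```python
import os

test_dir = './test_objs'  # directory name taken from the question's log message
os.makedirs(test_dir, exist_ok=True)  # create it on the dyno if it is missing
```

With this in place, opening './test_objs/<name>.pkl' for writing no longer fails with "Could not open file".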
I am trying to write a simple script that, given an arbitrary URL, will return the title tag of that website. Because many of the URLs I want to resolve need JavaScript enabled, I need to use something like requests_html's render function. However, I have encountered an issue where the example URL below never terminates. I have tried the timeout argument of the render call, and it did not work. Can anyone help me figure out how to get this to time out properly, or suggest some other workaround to make sure it doesn't get stuck?
This is my current code that does not terminate (it gets stuck on the render call):
from requests_html import HTMLSession

session = HTMLSession()
r = session.get('http://shan-shui-inf.lingdong.works/')
# render with JS
r.html.render(sleep=1, keep_page=True)
# Also does not work: r.html.render(sleep=1, keep_page=True, timeout=3)
title = r.html.find('title', first=True).full_text
I have already tried solutions like "Timeout on a function call" and "Python timeout decorator", which, strangely enough, still did not time out.
NOTE: I am using Python 3.7.4 64-bit on Windows 10.
I would suggest putting r.session.close() at the end. This worked for me.
OK, I'm quite late here, but this is what I've done:
pip install -U pyppeteer
(pip installed version 0.2.6 for me.) Then it worked somehow.
(Unrelated:) If you want the Chromium browser to appear on the screen, you'll need to edit requests_html.py (somewhere in site-packages) at line 714, changing headless=True to headless=False.
So I have this GUI that I made with tkinter, and everything works well. It connects to servers and sends commands for both Linux and Windows. I went ahead and used pyinstaller to create a windowed GUI without a console, and when I try to use a specific function for sending Windows commands, it fails. If I create the GUI with a console that pops up before the GUI, it works like a charm. What I'm trying to figure out is how to get my GUI to work with the console invisible to the user.
The part of my code that has the issue revolves around subprocess. To spare you all from the 400+ lines of code I wrote, I'm providing the specific code that has issues. Here is the snippet:
def rcmd_in(server):
    import subprocess as sp
    for i in command_list:
        result = sp.run(['C:/"Path to executable"/rcmd.exe', '\\\\' + server, i],
                        universal_newlines=True, stdout=sp.PIPE, stderr=sp.STDOUT)
        print(result.stdout)
The argument server is passed from another function that calls rcmd_in, and command_list is a list created at the root of the code, accessible to all functions.
Now, I have done my due diligence. I scoured multiple searches and came up with an edit to my code that attempts to run it with the console invisible, using info from this link: recipe-subprocess. Here is what the edit looks like:
def rcmd_in(server):
    import subprocess as sp
    import os
    si = sp.STARTUPINFO()
    si.dwFlags |= sp.STARTF_USESHOWWINDOW
    for i in command_list:
        result = sp.run(['C:/"Path to executable"/rcmd.exe', '\\\\' + server, i],
                        universal_newlines=True, stdin=sp.PIPE, stdout=sp.PIPE,
                        stderr=sp.STDOUT, startupinfo=si, env=os.environ)
        print(result.stdout)
The problem I have now is that when it runs, an error of "Error:8 - Internal error -109" pops up. Let me add that I tried using the functions call(), Popen(), and others, but only run() seems to work.
I've reached a point where my brain hurts, and I could use some help. Any suggestions? As always, I am forever grateful for anyone's help. Thanks in advance!
I figured it out and it only took me 5 days! :D
It looks like the reason the function would fail comes down to how Windows handles stdin. I found a post that helped me edit my code to work with pyinstaller -w (--noconsole). Here is the updated code:
def rcmd_in(server):
    import subprocess as sp
    si = sp.STARTUPINFO()
    si.dwFlags |= sp.STARTF_USESHOWWINDOW
    for i in command_list:
        result = sp.Popen(['C:/"Path to executable"/rcmd.exe', '\\\\' + server, i],
                          universal_newlines=True, stdin=sp.PIPE, stdout=sp.PIPE,
                          stderr=sp.PIPE, startupinfo=si)
        print(result.stdout.read())
Note the change from run() to Popen(). The run() function will not work with the print statement at the end. Also, for those of you who are curious, the si variable I created prevents subprocess from opening a console window while the program is run from a GUI. I hope this proves useful to someone struggling with the same thing. Cheers!
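As a cross-platform sketch of the same idea (the helper name run_hidden is made up; the STARTUPINFO branch only takes effect on Windows, so the snippet also runs elsewhere):

```python
import subprocess as sp
import sys

def run_hidden(cmd):
    # On Windows, pass a STARTUPINFO with STARTF_USESHOWWINDOW so the child
    # process does not flash a console window; other platforms need no flags.
    kwargs = {}
    if sys.platform == 'win32':
        si = sp.STARTUPINFO()
        si.dwFlags |= sp.STARTF_USESHOWWINDOW
        kwargs['startupinfo'] = si
    return sp.run(cmd, capture_output=True, universal_newlines=True, **kwargs)

# Example: run the Python interpreter itself as the child command.
out = run_hidden([sys.executable, '-c', "print('ok')"]).stdout.strip()
```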
I have a Google Colab notebook with PyTorch code running in it.
At the beginning of the train function, I create, save and download word_to_ix and tag_to_ix dictionaries without a problem, using the following code:
from google.colab import files
torch.save(tag_to_ix, pos_dict_path)
files.download(pos_dict_path)
torch.save(word_to_ix, word_dict_path)
files.download(word_dict_path)
I train the model, and then try to download it with the code:
torch.save(model.state_dict(), model_path)
files.download(model_path)
Then I get a MessageError: TypeError: Failed to fetch.
Obviously, the problem is not with the third party cookies (as suggested here), because the first files are downloaded without a problem. (I actually also tried adding the link in my Allow section, but, surprise surprise, it made no difference.)
I was originally trying to save the model as is (which, to my understanding, saves it as a pickle), and I thought maybe Colab's files module doesn't handle downloading pickles well. But as you can see above, I'm now trying to save a dict object (which is also what word_to_ix and tag_to_ix are), and it's still not working.
Downloading the file manually with right-click isn't a solution, because sometimes I leave the code running while I do other things, and by the time I get back to it, the runtime has disconnected, and the files are gone.
Any suggestions?
I have Python 3.3 installed.
I use the example from their site:
import urllib.request
response = urllib.request.urlopen('http://python.org/')
html = response.read()
The only thing that happens when I run it is that I get this:
======RESTART=========
I know I am a rookie, but I figured the example from Python's own website should work. It doesn't. What am I doing wrong? Eventually I want to run the script from the website below, but I don't think the urllib code there will work as-is. Can someone tell me if the code will work with Python 3.3?
http://flowingdata.com/2007/07/09/grabbing-weather-underground-data-with-beautifulsoup/
I think I see what's probably going on. You're likely using IDLE, and when it starts a new run of a program, it prints the
======RESTART=========
line to tell you that a fresh program is starting. That means that all the variables currently defined are reset and/or deleted, as appropriate.
Since your program didn't print any output, you didn't see anything.
The two lines I suggested adding were just tests to figure out what was going on; they're not needed in general. [Unless the window itself is automatically closing, which it shouldn't.] But as a rule, if you want to see output, you'll have to print what you're interested in.
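A tiny illustration of that rule (the values are arbitrary): assignments alone produce nothing in IDLE; only a print makes a result appear after the RESTART banner.

```python
# These two lines run silently -- nothing appears after the RESTART banner.
x = 1 + 1
y = x * 2
# Adding a print is what makes the result visible.
print(x, y)
```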
Your example works for me. However, I suggest using requests instead of urllib2.
To simplify the example you linked to, it would look like:
from bs4 import BeautifulSoup
import requests

resp = requests.get("http://www.wunderground.com/history/airport/KBUF/2007/12/16/DailyHistory.html")
soup = BeautifulSoup(resp.text, "html.parser")  # an explicit parser avoids a bs4 warning