I have the following cron.yaml:
cron:
- description: "TEST_TEST_TEST"
- url: /cronBatchClean
- schedule: every 2 minutes
And then in app.yaml:
service: environ-flexible
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app
runtime_config:
python_version: 3
With this as main.py:
from flask import Flask, request
import sys
app = Flask(__name__)
@app.route('/cronBatchClean')
def cronBatchClean():
    print("CRON_CHECK", file=sys.stderr)
    return "CRON_CHECK"
When I type in the full URL, I receive "CRON_CHECK" on screen, but the cron job doesn't seem to be executing. Also, in the App Engine dashboard, when I click on Cron jobs there aren't any listed.
Any help in getting this to execute would be much appreciated,
Thanks :)
EDIT 1
I now have the cron task executing but I'm receiving a 404 error. When I type the full URL (that is - https://.appspot.com/cronBatchClean) the respective code executes.
I added a GET handler but I'm still not having any luck.
@app.route('/cronBatchClean', methods=['GET'])
def cronBatchClean():
    print("CRON_JOB_PRINT", file=sys.stderr)
    return "CRON_CHECK"
In your cron.yaml there are unnecessary "-" characters; each one starts a new list item, so the three fields end up as three separate entries (see the YAML syntax rules). The correct format for cron.yaml, per the Google Cloud documentation, is:
cron:
- description: "TEST_TEST_TEST"
url: /cronBatchClean
schedule: every 2 minutes
To deploy the cron job, use the gcloud command:
$ gcloud app deploy cron.yaml
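Once the job is deployed, you can also verify inside the handler that a request genuinely came from Cron: App Engine strips the X-Appengine-Cron header from external traffic and sets it to true on requests issued by the Cron service. A minimal, framework-agnostic sketch (the helper name is mine):

```python
def is_appengine_cron(headers):
    """Return True if the request headers mark an App Engine Cron request.

    App Engine removes X-Appengine-Cron from external requests and sets it
    to "true" on Cron-issued ones, so its presence is a reliable signal
    inside the app.
    """
    return headers.get("X-Appengine-Cron", "").lower() == "true"

# Hypothetical header dicts for illustration:
print(is_appengine_cron({"X-Appengine-Cron": "true"}))   # Cron request
print(is_appengine_cron({"User-Agent": "curl/7.68.0"}))  # external request
```

In a Flask handler you would pass `request.headers` to the helper and return a 403 for non-Cron callers.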
To solve this problem I changed the service name to default and redeployed. With the service named "environ-flexible", the scheduled cron task was still pointed at the default App Engine path, so when the task ran the path didn't match and a 404 error was raised. Deploying under the default service makes the paths line up.
In app.yaml
change:
service: environ-flexible
to:
service: default
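Alternatively, if you want to keep the non-default service name, cron.yaml lets you route the job to a specific service with a target field. A sketch, assuming the service is still named environ-flexible:

```yaml
cron:
- description: "TEST_TEST_TEST"
  url: /cronBatchClean
  schedule: every 2 minutes
  target: environ-flexible
```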
Related
I have a problem getting a cron job to correctly target a specific service on Google App Engine Standard for a Python app.
I can successfully create a cron job for a Python app on Google App Engine (Standard) with an app.yaml file and a cron.yaml file. The service is not named, so it runs as the default service, and the cron.yaml does not specify a target.
If I set the service name in the app.yaml file to service1,
the URL for the app changes from projectID.ew.r.appspot.com to service1-dot-projectID.ew.r.appspot.com.
Next, I specify the target in the cron job to service1 and redeploy the app.yaml and cron.yaml.
The cron job now fails with status 400 every time it runs.
From what I can see in the protoPayload logs, the host is not using the correct URL.
The cron job uses the URL service1.projectID.ew.r.appspot.com, according to the protoPayload "host" value in the log, and returns status 400.
Why does the cron job not use the service1-dot-projectID.ew.r.appspot.com URL?
What can I do to get the cron job to correctly target a specific service?
#app.yaml with working cron job
#project URL: projectID.ew.r.appspot.com
runtime: python38
handlers:
- url: /static
static_dir: static/
- url: /.*
script: auto
#cron.yaml with working cron job
cron:
- description: "working cron job"
url: /myjob/
schedule: every 2 minutes
Here are the versions of files for the broken cron job
#app.yaml with broken cron job
#project URL: service1-dot-projectID.ew.r.appspot.com
service: service1
runtime: python38
handlers:
- url: /static
static_dir: static/
- url: /.*
script: auto
#cron.yaml with broken cron job
cron:
- description: "working cron job"
url: /myjob/
schedule: every 2 minutes
target: service1
The error in the app engine log file has the protoPayLoad details.
host:"service1.projectID.ew.r.appspot.com"
I think it should be the following
host:"service1-dot-projectID.ew.r.appspot.com"
My project has multiple services deployed that will use cron jobs specific to each service.
I cannot just use the default service name.
I appreciate all the help.
It is only after posting a question that you find the answer.
This project is a Django project.
The Django settings file has a list of hosts defined in the ALLOWED_HOSTS variable.
service1-dot-projectID.ew.r.appspot.com was defined, but
service1.projectID.ew.r.appspot.com was not.
Once that host was added, the cron job worked perfectly, targeting the service as normal.
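Concretely, the fix amounts to listing both host forms in the Django settings file. A sketch, where projectID and service1 stand in for the real names:

```python
# settings.py fragment -- projectID / service1 are placeholders
ALLOWED_HOSTS = [
    "projectID.ew.r.appspot.com",               # default service URL
    "service1-dot-projectID.ew.r.appspot.com",  # browser-facing service URL
    "service1.projectID.ew.r.appspot.com",      # Host header used by the cron job
]
```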
I have been trying to deploy a basic app to Google App Engine (because Azure is an extortion) for the past few days. I have learned that gunicorn does not work on Windows systems and that the alternative is waitress. I read all the answers related to the subject here before I posted this question!
So I have been trying different setups and reading about it, and I still can't get it running. My field is data science, but deployment seems to be obligatory nowadays. If someone can help me out, it would be very appreciated.
app.py file
from flask import Flask, render_template, request
from waitress import serve

app = Flask(__name__)

@app.route('/')
def index():
    name = request.args.get("name")
    if name is None:
        name = "Reinhold"
    return render_template("index.html", name=name)

if __name__ == '__main__':
    # app.run(debug=True)
    serve(app, host='0.0.0.0', port=8080)
gcloud app deploy will look for the gunicorn entrypoint in the app.yaml file. I tried different setups there and ended up setting it to None, as Flask will look for an alternative, in my humble view. Though I still think it would be better to set up the waitress server there.
app.yaml file
runtime: python37
#entrypoint: None
entrypoint: waitress-serve --listen=*:8080 serve:app
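For reference, the "exec: None: not found" error further down suggests that an app.yaml with entrypoint: None was deployed at some point. A waitress entrypoint matching the app.py above would look roughly like this (a sketch; it assumes the Flask object is named app inside app.py, and that App Engine supplies $PORT at runtime):

```yaml
runtime: python37
# Serve the Flask object `app` from app.py; App Engine sets $PORT.
entrypoint: waitress-serve --port=$PORT app:app
```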
gcloud also will look for an appengine_config.py file where it will find the dependencies (I think):
from google.appengine.ext import vendor
vendor.add('venv\Lib')
The requirements.txt file will be the following:
astroid==2.3.3
autopep8==1.4.4
Click==7.0
colorama==0.4.3
dominate==2.4.0
Flask==1.1.1
Flask-Bootstrap==3.3.7.1
Flask-WTF==0.14.2
isort==4.3.21
itsdangerous==1.1.0
Jinja2==2.10.3
lazy-object-proxy==1.4.3
MarkupSafe==1.1.1
mccabe==0.6.1
pycodestyle==2.5.0
pylint==2.4.4
six==1.13.0
typed-ast==1.4.1
visitor==0.1.3
waitress==1.4.2
Werkzeug==0.16.0
wrapt==1.11.2
WTForms==2.2.1
In the Google console I could access the log view to see what was going wrong during the deployment, and this is what I got from the code I shared here:
{
insertId: "5e1e9b4500029d71f92c1db9"
labels: {…}
logName: "projects/bokehflaskgcloud/logs/stderr"
receiveTimestamp: "2020-01-15T04:55:33.288839846Z"
resource: {…}
textPayload: "/bin/sh: 1: exec: None: not found"
timestamp: "2020-01-15T04:55:33.171377Z"
}
If someone could help solve this, that would be great, because Google seems to be a good alternative for deploying some work. Azure and VS Code have good integration, so it isn't as hard to deploy there, but the cost after the trial is insane.
That is what I get once I try to deploy the application.
Error: Server Error
The server encountered an error and could not complete your request.
Please try again in 30 seconds.
You can easily run your Flask app using gunicorn:
runtime: python37
entrypoint: gunicorn -b :$PORT main:app
You need to add gunicorn to your requirements.txt.
Check this documentation on how to define application startup in Python 3.
Make sure that you run your app using Flask's run method, in case you want to test your app locally:
if __name__ == '__main__':
    app.run(host='127.0.0.1', port=8080, debug=True)
appengine_config.py is not used in Python 3. The Python 2 runtime uses this file to install client libraries and provide values for constants and "hook functions". The Python 3 runtime doesn't use this file.
In the app.py file there is no mention of the Flask library.
Please add the following import at line 2:
from flask import Flask, request, render_template
I am trying to setup a Celery application under Flask to accept API requests and then separate Celery workers to perform the long running tasks. My problem is that my Flask and everything else in my environment uses MongoDB so I do not want to setup a separate SQL db just for the Celery results. I cannot find any good examples of how to properly configure Celery with a MongoDB cluster as the backend.
Here are the settings I have tried to make it accept:
CELERY_RESULT_BACKEND = "mongodb"
CELERY_MONGODB_BACKEND_SETTINGS = {
    "host": "mongodb://mongodev:27017",
    "database": "celery",
    "taskmeta_collection": "celery_taskmeta",
}
No matter what I do, Celery seems to ignore the config settings and launches without any results backend. Does anyone have a working example using the latest version of Celery? The only other examples I can find are of v3 Celery setups, and those didn't work for me either, since I am using a Mongo replica cluster in production, which seems unsupported in that version.
[Edit] Adding more information on the complicated way I am setting the config to work with the rest of the application.
The config values are first passed as environment variables through a docker-compose file like this:
environment:
- PYTHONPATH=/usr/src/
- APP_SETTINGS=config.DevelopmentConfig
- FLASK_ENV=development
- CELERY_BROKER_URL=amqp://guest:guest@rabbit1:5672
- CELERY_BROKER_DEV=amqp://guest:guest@rabbit1:5672
- CELERY_RESULT_SERIALIZER=json
- CELERY_RESULT_BACKEND=mongodb
- CELERY_MONGODB_BACKEND_SETTINGS={"host":"mongodb://mongodev:27017","database":"celery","taskmeta_collection":"celery_taskmeta"}
Then, inside the config.py file they are loaded:
class DevelopmentConfig(BaseConfig):
    """Development configuration"""
    CELERY_BROKER_URL = os.getenv('CELERY_BROKER_DEV')
    CELERY_RESULT_SERIALIZER = os.getenv('CELERY_RESULT_SERIALIZER')
    CELERY_RESULT_BACKEND = os.getenv('CELERY_RESULT_BACKEND')
    CELERY_MONGODB_BACKEND_SETTINGS = ast.literal_eval(os.getenv('CELERY_MONGODB_BACKEND_SETTINGS'))
Then, when Celery is initiated, the config is loaded:
app = Celery('celeryworker', broker=os.getenv('CELERY_BROKER_URL'),
             include=['celeryworker.tasks'])
print('app initiated')
app.config_from_object(app_settings)
app.conf.update(accept_content=['json'])
print("CELERY_MONGODB_BACKEND_SETTINGS",
      os.getenv('CELERY_MONGODB_BACKEND_SETTINGS'))
print("celery config", app.conf)
When the application comes up, here is what I see with all my troubleshooting prints. I have redacted a lot of the config output, just to show that what I have here is passing through config.py to app.config but being ignored by Celery. You can see the value makes it into the celery.py file, and I am sure Celery does something with it, because before I added the ast.literal_eval in config.py, Celery would throw an error saying that the MongoDB backend settings needed to be a dict rather than a string. Unfortunately, now that it is being passed as a proper dict, Celery ignores it.
app_settings SGSDevOps.config.DevelopmentConfig
app initiated
CELERY_MONGODB_BACKEND_SETTINGS {"host":"mongodb://mongodev:27017","database":"celery","taskmeta_collection":"celery_taskmeta"}
celery config Settings(Settings({'BROKER_URL': 'amqp://guest:guest@rabbit1:5672', 'CELERY_INCLUDE': ['celeryworker.tasks'], 'CELERY_ACCEPT_CONTENT': ['json']}, 'BROKER_URL': 'amqp://guest:guest@rabbit1:5672', 'CELERY_MONGODB_BACKEND_SETTINGS': None, 'CELERY_RESULT_BACKEND': None}))
APP_SETTINGS config.DevelopmentConfig
app.config <Config {'ENV': 'development', 'CELERY_BROKER_URL': 'amqp://guest:guest#rabbit1:5672', 'CELERY_MONGODB_BACKEND_SETTINGS': {'host': 'mongodb://mongodev:27017', 'database': 'celery', 'taskmeta_collection': 'celery_taskmeta'}, 'CELERY_RESULT_BACKEND': 'mongodb', 'CELERY_RESULT_SERIALIZER': 'json', }>
-------------- celery#a5ea76b91f77 v4.2.1 (windowlicker)
---- **** -----
--- * *** * -- Linux-4.9.93-linuxkit-aufs-x86_64-with-debian-9.4 2018-10-29 17:25:27
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: celeryworker:0x7f28e828f668
- ** ---------- .> transport: amqp://guest:**@rabbit1:5672//
- ** ---------- .> results: mongodb://
- *** --- * --- .> concurrency: 2 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. celeryworker.tasks.longtime_add
I still do not know why the above config is not working, but I found a workaround: update the config after the app loads, using the new config value names:
app = Celery('celeryworker', broker=os.getenv('CELERY_BROKER_URL'),
             backend=os.getenv('CELERY_RESULT_BACKEND'),
             include=['SGSDevOps.celeryworker.tasks'])
print('app initiated')
app.config_from_object(app_settings)
app.conf.update(accept_content=['json'])
app.conf.update(mongodb_backend_settings=ast.literal_eval(os.getenv('CELERY_MONGODB_BACKEND_SETTINGS')))
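A likely reason the original config was ignored is that Celery 4 renamed its settings to lowercase keys (result_backend, mongodb_backend_settings, and so on), and mixing old uppercase names with the new-style configuration is a known source of silently dropped settings. A self-contained sketch of translating the uppercase environment variables from the docker-compose file above into new-style keys suitable for app.conf.update() (no Celery import, so it runs anywhere; the helper name and mapping are mine):

```python
import ast

# Old-style (uppercase) env names mapped to Celery 4's lowercase setting keys.
# The mapping covers only the settings used in this question.
OLD_TO_NEW = {
    "CELERY_RESULT_BACKEND": "result_backend",
    "CELERY_RESULT_SERIALIZER": "result_serializer",
    "CELERY_MONGODB_BACKEND_SETTINGS": "mongodb_backend_settings",
}

def celery_conf_from_env(environ):
    """Build a dict suitable for app.conf.update() from uppercase env vars."""
    conf = {}
    for old, new in OLD_TO_NEW.items():
        raw = environ.get(old)
        if raw is None:
            continue
        # The MongoDB settings arrive as a dict literal; parse it,
        # leave plain string values alone.
        conf[new] = ast.literal_eval(raw) if raw.startswith("{") else raw
    return conf

# Values taken from the docker-compose environment above:
env = {
    "CELERY_RESULT_BACKEND": "mongodb",
    "CELERY_MONGODB_BACKEND_SETTINGS":
        '{"host":"mongodb://mongodev:27017","database":"celery",'
        '"taskmeta_collection":"celery_taskmeta"}',
}
conf = celery_conf_from_env(env)
```

With a real app you would then call `app.conf.update(**conf)` once, instead of setting each key individually.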
I have created a Google Cloud app (with Node.js) that has one endpoint, with this app.yaml:
runtime: nodejs
env: flex
manual_scaling:
instances: 1
resources:
cpu: 1
memory_gb: 1
disk_size_gb: 10
And a cron job, with a cron.yaml that calls this endpoint like this:
cron:
- description: "Lead Data Update"
url: /cronjobPrivate/processLeads
schedule: every 1 minutes
I'm getting crazy, because if I try to access the URL of my cloud app, like
https://[my-project].appspot.com/cronjobPrivate/processLeads, the response is correct and the app is working.
But the cron job always says "Failed" and does not create any record in the logs.
I have read Google's documentation but I didn't find anything. Maybe I'm missing something between the cron jobs <-> app service... but I don't know what.
I am trying to run a cron job in Google App Engine which should run a Node.js script every 2 minutes, but I always get a 404 error in the log. I see the cron is running every 2 minutes, but it's not finding the script cronRun.js. Below is the relevant part of the code.
cron.yaml
cron:
- description: "daily summary job"
url: /task
schedule: every 2 minutes
app.yaml
runtime: nodejs
env: flex
api_version: 1
threadsafe: true
handlers:
- url: /task
script: cronRun.js
log:
"GET /task" 404
From this I see that I am not defining the path correctly.
Below is the file structure.
Have you tried making a request to the /task endpoint directly, without going through the Cron job?
The problem may be related to the application itself and not to the cron job, so I would recommend testing it by accessing your App Engine application at the /task endpoint directly.
If that does not work, the issue is definitely caused by the application and not by Cron.