I'm having trouble setting up a cron job for the Laravel 5 scheduler.
It seems like the dokku run <app> <command> command doesn't fully execute commands inside the Dokku container.
For example, if I log into my server and run dokku run <app> php artisan migrate --force, it migrates the database as expected.
But if I try to run a command like dokku run <app> php artisan schedule:run, I get the following response:
(touch /app/storage/framework/schedule-716d44b3c0c7f157011de8e9c5eca60e; /app/.heroku/php/bin/php artisan feed:import-last; rm /app/storage/framework/schedule-716d44b3c0c7f157011de8e9c5eca60e) > /dev/null 2>&1 &
But it doesn't execute the underlying actions.
The strange thing is that when I log into the Dokku container with dokku run <app> bash and run php artisan schedule:run, I get the same response, but then it does execute the underlying actions.
This means that the cron job * * * * * /bin/bash -c 'dokku run admin-feedshop php artisan schedule:run' won't do anything, since the underlying actions are never executed this way.
Does anyone know how I can get this working?
(I'm running dokku version 0.3.18)
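Update: my own guess at what's happening. The output above shows the scheduler wrapping the task in a backgrounded subshell ((...) > /dev/null 2>&1 &), so the one-off container created by dokku run probably exits as soon as schedule:run returns, killing the background job, while a dokku run <app> bash session keeps the container alive long enough for it to finish. Under that assumption, a workaround sketch would be to keep the container alive for the rest of the minute (the sleep duration is arbitrary):
* * * * * /bin/bash -c 'dokku run admin-feedshop bash -c "php artisan schedule:run; sleep 60"'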
I have created a cron job, but it only works when I run the command below manually:
php bin/magento cron:run
But the cron should run automatically. What could be the issue, and why isn't the cron running automatically in my case?
Could anyone please help and guide me on this?
Cron jobs in Magento are separate from the cron jobs run on the server. A cron job in Magento can't run without a cron job on the server.
If you want Magento's cron jobs to run automatically, you'll have to add a cron job on the server:
* * * * * bin/magento cron:run
I do not recommend copy-pasting the above as a cron job on your server; for example, it does not log any output.
If you want a more detailed explanation of how cron jobs work in Magento, I recommend the Magento DevDocs article below.
https://devdocs.magento.com/guides/v2.4/config-guide/cli/config-cli-subcommands-cron.html
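A variant that does log output might look like this sketch (the PHP binary and the Magento paths are assumptions; adjust them to your installation):
* * * * * /usr/bin/php /var/www/html/bin/magento cron:run >> /var/www/html/var/log/magento.cron.log 2>&1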
I have Node APIs that I run with docker-compose, hosted on EC2. Whenever I want to check the logs I type docker-compose logs, and it prints everything to the screen, but how can I save all the logs to a file automatically? What I mean is: when I deploy a new container to the server, it should start saving all its logs to a specific file, so I can go and check them later.
I can save docker-compose logs manually by executing this command:
docker-compose logs > logs.txt
You can try using a scheduled crontab entry.
For example:
0 1 * * * /bin/sh backup.sh
In your case, I guess that will be something like this:
0 1 * * * docker-compose logs > logs.txt
You can also read more about crontabs here.
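One caveat: cron doesn't start jobs in your project directory, and docker-compose needs to find your docker-compose.yml, so an entry along these lines is safer (the paths are assumptions; note that > overwrites the file on each run, while >> would append):
0 1 * * * cd /home/ubuntu/myproject && docker-compose logs --no-color > /home/ubuntu/myproject/logs.txt 2>&1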
I'm running my small node app on an Ubuntu server, I can use a simple Upstart script to automatically start the node server on server launch.
However, I'd like the node app to run at particular times of day -- what are my options for implementing this?
Is it best to do this within the Node app or from the Ubuntu environment?
What about cron?
$ crontab -e
# in the editor that opens:
0 */2 * * * ~/path/to/your/node/script.js > /tmp/out.log 2> /tmp/err.log
That will run your script.js every 2 hours, i.e. at 0:00 (midnight), 2:00, 4:00, and so on. It also logs to your /tmp directory, but that's entirely optional. Note that this invokes script.js directly, which only works if the file starts with a shebang line (such as #!/usr/bin/env node) and is executable.
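A variant that doesn't rely on the shebang calls the interpreter explicitly (the node path is an assumption; check yours with which node):
0 */2 * * * /usr/bin/node ~/path/to/your/node/script.js > /tmp/out.log 2> /tmp/err.log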
I have been using Docker for a couple of months now and am working on dockerizing various server images. One consistent issue is that many servers need to run cron jobs. There is a lot of discussion about that online (including on Stack Overflow), but I don't completely understand the mechanics of it.
Currently, I am using the host's cron and docker exec to run a script in each container. I created a convention for the script's name and location; all my containers have the same script. This avoids having the host's cron depend on the containers' contents.
Basically, once a minute, the host's cron does this:
for container in $(docker ps --format '{{.Names}}'); do
  docker exec "$container" /cronscript/minute-script   # no -it: cron has no TTY
done
That works, but makes the containers depend on the host.
What I would like to do is create a cron container that kicks off a script within each of the other containers - but I am not aware of an equivalent to "docker exec" that works from one container to another.
The specific situations I have right now are running a backup in a MySQL container, and running the cron jobs Moodle requires to be run every minute. Eventually, there will be additional things I need to do via cron. Moodle uses command-line PHP scripts.
What is the "proper" dockerized way to kick off a script from one container in another container?
Update: maybe it helps to mention my specific use cases, although there will be more as time goes on.
Currently, cron needs to do the following:
Perform a database dump from MySQL. I can do that with mysqldump over a TCP connection from a cron container (see the sketch after this list); the drawback here is that I can't limit the backup user to host 127.0.0.1. I might also be able to somehow finagle the MySQL socket into the cron container via a volume.
Perform regular maintenance on a Moodle installation. Moodle includes a PHP command-line script that runs all of the maintenance tasks. This is the biggie for me. I could probably run this script through a volume, but Moodle was not designed with that situation in mind, and I would not rule out race conditions. Also, I do not want my Moodle installation in a volume, because it makes updating the container much harder (remember that in Docker, volumes are not reinitialized when you update the container with a new image).
Future: perform routine maintenance on a number of other of my servers, such as cleaning out email queues, etc.
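For the MySQL dump, a crontab line in a dedicated cron container might look like this sketch (the mysql hostname, the backup user, and the MYSQL_BACKUP_PW variable are all assumptions):
0 2 * * * mysqldump -h mysql -u backup -p"$MYSQL_BACKUP_PW" --all-databases > /backups/all-databases.sql 2>> /var/log/backup-errors.log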
My solution is:
install cron inside the container
install your software
run cron as a daemon
run your software
Part of my Dockerfile:
FROM debian:jessie
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY .crontab /usr/src/app
# Set timezone
RUN echo "Europe/Warsaw" > /etc/timezone \
&& dpkg-reconfigure --frontend noninteractive tzdata
# Cron, mail
RUN set -x \
&& apt-get update \
&& apt-get install -y cron rsyslog mailutils --no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
CMD rsyslogd && env > /tmp/crontab && cat .crontab >> /tmp/crontab && crontab /tmp/crontab && cron -f
Description
Set the timezone, because cron needs it to run tasks at the proper times
Install the cron package, which provides the cron daemon
Install the rsyslog package to log cron task output
Install the mailutils package if you want to send e-mails from cron tasks
Run rsyslogd
Copy the environment variables to a temp file, because cron runs tasks with a minimal environment and your tasks may need access to the container's environment variables
Append your .crontab file (with your tasks) to the temp file
Install the root crontab from the temp file
Run the cron daemon in the foreground (cron -f), so it is the container's main process
I use this in my containers and it works very well.
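For illustration, a minimal .crontab file to COPY in might look like this (the script paths and the log file are assumptions):
* * * * * /usr/src/app/scripts/minute-task.sh >> /var/log/cron-tasks.log 2>&1
0 3 * * * /usr/src/app/scripts/nightly-backup.sh >> /var/log/cron-tasks.log 2>&1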
one-process-per-container
If you like this paradigm, then make one Dockerfile per cron task, e.g.:
Dockerfile - main program
Dockerfile_cron_task_1 - cron task 1
Dockerfile_cron_task_2 - cron task 2
and build all the images:
docker build -f Dockerfile_cron_task_1 ...
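Spelled out, the builds might look like this (the image tags are placeholders):
docker build -f Dockerfile -t myapp .
docker build -f Dockerfile_cron_task_1 -t myapp-cron-1 .
docker build -f Dockerfile_cron_task_2 -t myapp-cron-2 .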
I have a simple Meteor app that I'm running on an Amazon EC2 server. Everything is working great. I start it manually as my user by running meteor in the project directory.
However, what I would like is for this app to
Run on boot
Be immune to hangups
I tried running it via nohup meteor &, but when I try to log out of the EC2 instance, I get the "You have running jobs" message. Logging out anyway stops the app.
How can I get the app to start on startup and stay up (unless it crashes for some reason)?
Install forever and use a start script.
$ npm install -g forever
I have several scripts for managing my production environment - the start script looks something like:
#!/bin/bash
forever stopall
export MAIL_URL=...
export MONGO_URL=...
export MONGO_OPLOG_URL=...
export PORT=3000
export ROOT_URL=...
forever start /home/ubuntu/apps/myapp/bundle/main.js
exit 0
Conveniently, it will also append to a log file in ~/.forever, which will show any errors encountered while running your app. You can get the location of the log file and other stats about your app with:
$ forever list
To get your app to start on boot, you'd need to do something appropriate for your flavor of Linux. You can maybe just call the start script from /etc/rc.local. For Ubuntu, see this question.
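A minimal /etc/rc.local sketch (the username and script path are assumptions; rc.local runs as root at boot, so the script is run as the deploy user via su):
#!/bin/sh -e
su - ubuntu -c '/home/ubuntu/apps/myapp/start.sh'
exit 0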
Also note that you really should be bundling your app if you're using it in production. See this comparison for more details on the differences.
I am using Upstart on an Ubuntu server, which you should also be able to install easily on Amazon Linux.
This is roughly my /etc/init/myapp.conf:
start on (local-filesystems and net-device-up IFACE=eth0)
stop on shutdown
respawn
respawn limit 99 5
script
export HOME="/home/deploy"
export NODE_ENV="production"
export MONGO_URL="mongodb://localhost:27017/myappdb"
export ROOT_URL=http://localhost
export MAIL_URL=smtp://localhost:25
export METEOR_SETTINGS='{"somesetting":true}'
cd /var/www/myapp/bundle/
exec sudo -u deploy PORT=3000 /usr/bin/node main.js >> /var/log/node.log 2>&1
end script
I can then manually start and stop myapp like this:
sudo start myapp
sudo stop myapp
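Upstart also provides a status command if you want to check whether the job is running:
sudo status myapp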
I believe this package solves your problem: https://github.com/arunoda/meteor-up
which seems to use forever: https://github.com/nodejitsu/forever