Laravel PHP queue:work not working on Linux

I tried to use Supervisor and this is my config (screenshot omitted).
Status: (screenshot omitted)
In my jobs table: (screenshot omitted)
Also, --tries=3 is not working and my worker.log stays empty.

The jobs in your table are in the 'default' queue, but you've told your workers to only process jobs from the 'jobs' queue.
In the Supervisor config, either remove --queue=jobs entirely or change it to --queue=default.
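For reference, a minimal Supervisor program sketch along those lines (the paths, program name, user, and process count are assumptions; adjust them to your deployment):
[program:laravel-worker]
; hypothetical paths and app name - adjust to your deployment
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/your-app/artisan queue:work --queue=default --tries=3
autostart=true
autorestart=true
user=www-data
numprocs=2
redirect_stderr=true
stdout_logfile=/var/www/your-app/worker.log
After editing the config, run supervisorctl reread, supervisorctl update, and supervisorctl start laravel-worker:* so the workers pick up the new command.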

Related

How to get a specific node PID in Node-RED?

I am using Node-RED v2.2.2. I would like to restart a specific node of the flow after an error is triggered in it.
I have managed to restart the full flow by getting the Node-RED process ID, after modifying settings.js in my .node-red folder:
functionGlobalContext: {
    // os: require('os'),
    'pid': process.pid
},
I am able to get the general process PID from a Function node:
var General_pid = context.global.pid
And I can kill and restart the global process from an Exec node by sending General_pid in msg.payload, where comando.sh is:
#!/bin/bash
taskkill //PID $1 //F
sleep 4
node-red
But I am unable to do this with specific nodes inside the Node-RED flow.
Almost all the info I have found relies on the Status node to get a node-specific PID, but in my case this is the Status node structure (no PID in there):
I have also tried to get the PID based on status.source.id using:
RED.nodes.getNode(id);
But RED.nodes is undefined (although RED is defined, it only shows functions when printed).
Any idea how to get the node's PID so I can kill and restart it? I could do it from an Exec node, but an easier way would be even better.
You don't.
Nodes are not separate processes that can be restarted independently of Node-RED. (While some nodes may fork a new process, e.g. a python script, Node-RED has no access to this and it is all handled inside the node in question)
You have two choices:
You can trigger a restart of the deployed flow by making an HTTP call to the /flows Admin API with the Node-RED-Deployment-Type header set to reload. Assuming the node with the failure is well written, it should restart cleanly.
Restart all of Node-RED, as you are already doing.
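For the first choice, a minimal curl sketch, assuming the Admin API is reachable on localhost:1880 and adminAuth is not enabled (otherwise an Authorization header with an access token is also required):
# ask Node-RED to stop and restart the deployed flows without restarting the process;
# the request body is ignored for a reload deployment, so an empty JSON object is sent
curl -X POST http://localhost:1880/flows \
  -H "Content-Type: application/json" \
  -H "Node-RED-Deployment-Type: reload" \
  -d '{}'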

How to configure Spark SPARK_WORKER_OPTS for Jupyter notebooks

I use PySpark with Spark 2.4 in standalone mode on Linux for processing a lot of incoming data via Kafka, using a Jupyter notebook (currently for testing). I want to add these options to this notebook in order to prevent the /tmp/ directory from filling up with dozens of gigabytes after a few hours:
spark.worker.cleanup.enabled=true
spark.worker.cleanup.appDataTtl=120
But these conf locations do not work:
Spark's default configuration (spark/conf/spark-env.sh) seems not to be used by Jupyter notebooks at all:
SPARK_WORKER_OPTS="spark.worker.cleanup.enabled=true
spark.worker.cleanup.appDataTtl=120"
So I created a separate kernel configuration in ~/.local/share/jupyter/kernels/python3-spark1/kernel.json that I can select in JupyterHub, and that is really used for the RAM adjustments, which I can see in htop:
"env": {
"PYSPARK_SUBMIT_ARGS": "--master local[*]
--conf spark.worker.cleanup.enabled=true --conf=spark.worker.cleanup.appDataTtl=120 driver-memory 145g --executor-memory 50g pyspark-shell"
but the /tmp still fills with dozens of gigs.
I also tried the "magic" in a Jupyter cell, but it did not work either.
Do you know how to configure the Jupyter notebooks properly for these Spark adjustments?
SPARK_WORKER_OPTS takes configuration properties that apply only to the worker, in the form "-Dx=y":
export SPARK_WORKER_OPTS="$SPARK_WORKER_OPTS -Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.interval=60 -Dspark.worker.cleanup.appDataTtl=120"
If that does not work, you can try either of the options below.
Option 1: Update spark-defaults.conf
On the worker node, set the following configuration options in the spark/conf/spark-defaults.conf file (a sketch follows this list):
spark.worker.cleanup.enabled: Enables periodic cleanup of worker and application directories. Disabled by default; set it to true to enable it. Note that this only affects standalone mode, as YARN works differently.
spark.worker.cleanup.interval: The frequency, in seconds, that the worker cleans up old application work directories. The default is 30 minutes.
spark.worker.cleanup.appDataTtl: The number of seconds to retain application work directories on each worker. The default is 7 days.
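A sketch of the corresponding entries in spark-defaults.conf, using the cleanup values from the question (60-second interval, 120-second TTL):
# periodic cleanup of worker/application directories (standalone mode only)
spark.worker.cleanup.enabled     true
spark.worker.cleanup.interval    60
spark.worker.cleanup.appDataTtl  120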
Then stop and start the workers.
sbin/stop-worker.sh - Stops all worker instances on the machine the script is executed on.
sbin/start-worker.sh - Starts a worker instance on the machine the script is executed on.
Option 2: If you set up a Spark cluster using docker-compose, set the environment in the Docker Compose file:
spark-worker-x:
  image: spark-worker
  container_name: spark-worker-x
  environment:
    - SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.interval=60 -Dspark.worker.cleanup.appDataTtl=120"

Ansible: set up a cron job on one host

I'm deploying a two-host service that also needs to set up a cron job. This job should only run on one of the two machines (I don't care which). What's the easiest way to do so?
I know that the shell module in Ansible supports "run_once", but the cron module does not.
I could set up the cron job on both machines and then use "crontab -r" to remove all the jobs on one machine (provided no other jobs are needed there). This is dirty, but very easy.
Any better ideas?
I know that the shell module in Ansible supports "run_once", but the cron module does not.
Wrong. run_once is a property of a task, not of action modules.
Use the cron module and set run_once for the task (mind the indentation level), for example:
- cron:
    name: "check dirs"
    minute: "0"
    hour: "5,2"
    job: "ls -alh > /dev/null"
  run_once: true
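For context, a minimal sketch of how that task might sit in a play targeting both hosts (the group name is an assumption):
# hypothetical play; "service_hosts" is an assumed inventory group containing both machines
- hosts: service_hosts
  become: true
  tasks:
    - name: Set up the cron job on just one of the hosts
      cron:
        name: "check dirs"
        minute: "0"
        hour: "5,2"
        job: "ls -alh > /dev/null"
      run_once: true   # task-level keyword: runs on the first host of the batch only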

Correct way to restart/reload an application for a different release

I have the following folder structure:
current
releases
    2192091029019/
    1029012901920/
The latest release gets pushed to the current folder, and afterwards I start it with pm2 start. However, if I upload a new release with a different folder name and do pm2 reload from the new folder, it still tries to reference the original release from which the application was started. Is there a way to restart the application so it respects the new code?
I have the same problem with this release structure, but with supervisord + Rails instead of pm2 + Node.
In my case I need to completely restart supervisord on every deploy to fix that.
So in your case it may work like this:
pm2 stop
kill -SIGTERM {pm2_pid}
pm2 startup
It's a hackish but working solution.
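As a rough, hypothetical sketch of that sequence in a deploy script (the app entry point and the ~/.pm2/pm2.pid location of the daemon PID are assumptions; adjust to your setup):
#!/bin/bash
# stop the managed apps, terminate the pm2 daemon, then start again from the new release
pm2 stop all
kill -SIGTERM "$(cat ~/.pm2/pm2.pid)"   # assumed pm2 daemon PID file
cd /path/to/current                     # directory now pointing at the new release
pm2 start app.js                        # start from the new code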

How to run a nodejs script every second

I need to run my Node.js script every second, similar to PHP cron jobs. I have tried some Node.js cron libraries like https://github.com/ncb000gt/node-cron, but the issue is that the first run has to be manual, i.e. I have to run the file with the cron script manually the first time.
But with PHP cron jobs, they are run by the server, so if the Apache server is running the script starts automatically, and even if the script returns an error for one cycle, it runs again from the beginning on the next cycle.
So is there any way to achieve this in Node.js?
You have two options:
using Node as a daemon, with something like Supervisord to run your node-cron script. This alternative is wasteful of resources such as RAM because Node and Supervisord are running all the time.
using the system's crontab, you can run your script just like calling Node on the command line, such as * * * * * node /path/to/your/script.js. This alternative is highly efficient but lacks some control, like being able to log the output in case of an error, although you could just pipe the output to a file: node script.js > logfile. Note that cron's finest granularity is one minute, so for a true once-per-second run you still need a long-running process as in the first option.
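For the first option, a rough sketch of an every-second job with the cron package from the repo linked in the question (it accepts an optional seconds field in the pattern); the work inside the handler is a placeholder:
// every-second.js -- hypothetical example, meant to be kept alive by Supervisord or similar
var CronJob = require('cron').CronJob;

var job = new CronJob('* * * * * *', function () {
    // placeholder for the real work
    console.log('tick', new Date().toISOString());
});
job.start();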
