Perforce Checkpoint Notification

We have a job scheduled in crontab that creates a nightly checkpoint; however, we want to notify a DL or group as soon as a new checkpoint file, "p4_1.ckp.XXXX.gz", is added under "/p4/1/checkpoints".
I appreciate your help.
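One low-effort option is to append a mail step to the existing cron entry itself, right after the checkpoint command. If you would rather have a standing watcher, here is a minimal polling sketch in Python; the recipient list, sender address, and local SMTP relay are assumptions you would replace.

import glob
import os
import smtplib
import time
from email.message import EmailMessage

CKP_DIR = "/p4/1/checkpoints"
PATTERN = "p4_1.ckp.*.gz"
RECIPIENTS = ["p4-admins@example.com"]  # hypothetical DL address

def notify(path):
    # Send a short notification e-mail for one new checkpoint file.
    msg = EmailMessage()
    msg["Subject"] = "New Perforce checkpoint: " + os.path.basename(path)
    msg["From"] = "p4-monitor@example.com"  # hypothetical sender
    msg["To"] = ", ".join(RECIPIENTS)
    msg.set_content("Checkpoint file created: " + path)
    with smtplib.SMTP("localhost") as smtp:  # assumes a local MTA is running
        smtp.send_message(msg)

# Remember what is already there, then report anything new that appears.
seen = set(glob.glob(os.path.join(CKP_DIR, PATTERN)))
while True:
    current = set(glob.glob(os.path.join(CKP_DIR, PATTERN)))
    for new_file in sorted(current - seen):
        notify(new_file)
    seen = current
    time.sleep(60)  # poll once a minute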

Related

Trigger workflow job with Databricks Autoloader

I have a requirement to monitor an S3 bucket for (zip) files being placed. As soon as a file lands in the S3 bucket, the pipeline should start processing it. Currently I have a workflow job with multiple tasks that perform the processing. In the job parameters, I have configured the S3 bucket file path and am able to trigger the pipeline. But I need to automate the monitoring through Autoloader. I have set up Databricks Autoloader in another notebook and managed to get the list of files arriving at the S3 path by querying the checkpoint.
checkpoint_query = f"SELECT * FROM cloud_files_state('{checkpoint_path}') ORDER BY create_time DESC LIMIT 1"
But I want to integrate this notebook with my job, and I have no clue how to wire it into the pipeline job. Some pointers would be much appreciated.
You need to create a workflow job with the pipeline as the upstream task and your notebook as the downstream task. Currently there is no way to run custom notebooks within a DLT pipeline.
Check this for how to create a workflow: https://docs.databricks.com/workflows/jobs/jobs.html#job-create
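As a rough illustration of that task wiring, here is a minimal sketch using the Databricks Python SDK; the job name, pipeline ID, notebook path, and cluster ID are placeholders you would substitute with your own.

from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()  # picks up credentials from the environment

w.jobs.create(
    name="autoloader-pipeline-with-postprocessing",  # hypothetical name
    tasks=[
        # Upstream: the DLT pipeline that ingests files via Autoloader.
        jobs.Task(
            task_key="ingest",
            pipeline_task=jobs.PipelineTask(pipeline_id="<your-pipeline-id>"),
        ),
        # Downstream: the notebook that queries cloud_files_state; it only
        # runs after the pipeline task finishes.
        jobs.Task(
            task_key="postprocess",
            depends_on=[jobs.TaskDependency(task_key="ingest")],
            notebook_task=jobs.NotebookTask(notebook_path="/Users/me/check_checkpoint"),
            existing_cluster_id="<your-cluster-id>",
        ),
    ],
)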

Clear /spark_metadata directory from previous job every time a new streaming job is submitted

Let's say you want to replace an old Kafka Spark streaming job running in AWS ECS. While a new task definition is deploying, there will be two jobs pointing to the same spark_metadata folder until the deployment finishes.
Is it required to always clear the spark_metadata folder from the previous task execution?
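For context, here is a sketch of the kind of job being described, assuming a Kafka source and a file sink: the sink's output directory is where Spark writes its _spark_metadata commit log, and the checkpoint location is a separate path. All paths and broker addresses are placeholders; the Kafka source also requires the spark-sql-kafka package on the classpath.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-to-s3").getOrCreate()

# Read from Kafka (broker address and topic are hypothetical).
stream = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load())

# File sink: _spark_metadata is written under the output path, while the
# streaming state lives under checkpointLocation. Giving each deployment
# its own checkpoint path avoids two tasks fighting over the same state.
(stream.selectExpr("CAST(value AS STRING) AS value")
    .writeStream
    .format("parquet")
    .option("path", "s3://bucket/output/")
    .option("checkpointLocation", "s3://bucket/checkpoints/job-v2/")
    .start()
    .awaitTermination())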

Recover Slurm job submission script from an old job?

I accidentally removed a job submission script for a Slurm job in the terminal using the rm command. As far as I know, there is no (relatively easy) way of recovering that file anymore, and I hadn't saved it anywhere else. I have used that job submission script many times before, so there are a lot of finished Slurm job submissions that used it. Is it possible to recover the job script from an old finished job somehow?
If Slurm is configured with the ElasticSearch job completion plugin, you will find the submission script for every completed job in the ElasticSearch instance used in the setup.
Another option is to install sarchive.
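If the ElasticSearch plugin is in place, a query like the following sketch could pull the script back out. The index name "slurm" and the "script" field are assumptions about how the jobcomp/elasticsearch plugin indexes job records, so verify them against your setup; the endpoint and job ID are placeholders.

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# Look up one completed job by its Slurm job ID and print its batch script.
resp = es.search(index="slurm", query={"term": {"job_id": 123456}})
for hit in resp["hits"]["hits"]:
    # "script" is the assumed field holding the submitted batch script.
    print(hit["_source"].get("script", "<no script field>"))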

Spring XD job deployment error

I have observed this error multiple times when the XD shell / XD container quits unexpectedly while the job is still in the deployed state. When the XD container and XD shell are restarted, it does not allow us to recreate the job with the same name, yet the job does not appear in the job list either. As a workaround, we create the job with a new name. Is there a solution for this issue? I assumed this was happening only with XD on Windows, but it seems to happen on Unix boxes as well.
This happens mainly because the job status is also saved in the ZooKeeper namespace; we have been able to avoid creating jobs with new names by cleaning the ZooKeeper namespace where the job is located.
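To make that cleanup concrete, here is a minimal sketch using the kazoo ZooKeeper client. The /xd/... child paths are assumptions based on Spring XD's default ZooKeeper namespace, so inspect your ensemble (e.g. with zkCli.sh) and confirm the actual paths before deleting anything.

from kazoo.client import KazooClient

zk = KazooClient(hosts="zk-host:2181")  # hypothetical ensemble address
zk.start()

# Spring XD keeps job state under its ZooKeeper namespace (/xd by default);
# the exact child paths below are assumptions - verify them first.
for path in ("/xd/jobs/myjob", "/xd/deployments/jobs/myjob"):
    if zk.exists(path):
        zk.delete(path, recursive=True)

zk.stop()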

Cassandra - What precautions should I take while deleting the backup files?

As per the documentation at http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/operations/ops_backup_incremental_t.html:
As with snapshots, Cassandra does not automatically clear incremental backup files. DataStax recommends setting up a process to clear incremental backup hard-links each time a new snapshot is created.
So, is it safe to delete all the files in the backups directory immediately after invoking a snapshot?
How can I check whether the snapshot not only was invoked successfully but also completed successfully?
What if I end up deleting a backup hard-link that was created just after I invoked the snapshot, but before the moment I triggered the deletion of the backup files?
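One common way to sidestep that last race is to record a cutoff timestamp before invoking the snapshot and delete only backup hard-links older than the cutoff, as in the sketch below; the data directory layout is an assumption for a single keyspace/table, and nodetool must be on the PATH.

import os
import subprocess
import time

backups_dir = "/var/lib/cassandra/data/ks/table/backups"  # hypothetical layout

# Record the cutoff *before* taking the snapshot, then snapshot synchronously;
# nodetool exits non-zero on failure, which check=True turns into an exception,
# so the cleanup below only runs if the snapshot command itself succeeded.
cutoff = time.time()
subprocess.run(["nodetool", "snapshot", "ks"], check=True)

# Delete only hard-links created strictly before the cutoff, so anything
# flushed just after the snapshot call survives until the next cleanup pass.
for name in os.listdir(backups_dir):
    path = os.path.join(backups_dir, name)
    if os.path.getmtime(path) < cutoff:
        os.remove(path)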
