aws auto-stop-idle does not detect papermill - linux

I am using Papermill to parametrize Jupyter notebooks deployed on AWS SageMaker. I also used this lifecycle configuration, which automatically shuts the instance down if there are no running/idle notebooks. Unfortunately, it does not detect the Papermill process and shuts the instance down once the specified idle time is reached. What do I need to do to keep the SageMaker instance alive until Papermill completes?

You could edit the idleness detection script to account for papermill processes.
Alternatively, if you have asynchronous jobs that you can express as Python code, you could run them as SageMaker Processing jobs, which do not depend on your notebook instance being up.
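A minimal sketch of the first suggestion, assuming the lifecycle configuration's idle-detection script runs periodically on the instance and you can gate its shutdown decision; the pgrep check and the surrounding names are illustrative, not part of the stock script:

import subprocess

def papermill_is_running():
    # pgrep -f matches against the full command line; exit code 0 means
    # at least one papermill process was found.
    result = subprocess.run(["pgrep", "-f", "papermill"],
                            stdout=subprocess.DEVNULL)
    return result.returncode == 0

# In the idle-detection script, only shut down when the notebooks are idle
# AND no Papermill run is in progress, e.g.:
# if is_idle and not papermill_is_running():
#     stop_notebook_instance()   # hypothetical shutdown call in the script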

Related

Stop a DAG in Google Composer from the command line or from DAG Python code - composer-1.19.8-airflow-1.10.15

composer-1.19.8-airflow-1.10.15
I need to automatically stop a DAG run after a certain time if it is still running. Is it possible to stop a DAG from the command console or from our code? I saw that there is an API, but the version we use is deprecated or doesn't exist for Composer.
My process needs to stop at 23:50, but my DAG sometimes runs non-stop. How can I do this automatically?

Databricks Notebook Schedule

I have set up an ADB notebook to run on a schedule. Will the notebook run if the cluster is down? Right now the cluster is busy, so I'm unable to stop it and try it out. Will the notebook start the cluster and run, or will it wait for the cluster to be up?
If you're scheduling the notebook to run on an existing cluster, the cluster will be started if it's stopped. In practice, though, it's better to execute the notebook on a new job cluster - there is less chance of breaking things when you change a library version or something similar. If you need to speed up job execution, you may look into instance pools.
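As a rough illustration of the new-cluster approach, a scheduled job can be created via the Databricks Jobs REST API from Python; the workspace URL, token, runtime version, and node type below are placeholders, and the field names should be checked against the Jobs API version available in your workspace:

import requests

host = "https://<your-workspace>.azuredatabricks.net"   # placeholder
token = "<personal-access-token>"                       # placeholder

job_spec = {
    "name": "nightly-notebook",
    "tasks": [{
        "task_key": "run_notebook",
        "notebook_task": {"notebook_path": "/Users/me/my_notebook"},
        # A fresh job cluster per run; an instance_pool_id can be added here
        # to draw nodes from a pre-warmed pool and speed up startup.
        "new_cluster": {
            "spark_version": "<runtime-version>",
            "node_type_id": "<node-type>",
            "num_workers": 2,
        },
    }],
    "schedule": {
        "quartz_cron_expression": "0 0 2 * * ?",   # every day at 02:00
        "timezone_id": "UTC",
    },
}

resp = requests.post(
    f"{host}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {token}"},
    json=job_spec,
)
print(resp.json())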

Using Databricks for Twitter sentiment analysis - issue running the official tutorial

I am starting to use Databricks and tried to implement one of the official tutorials (https://learn.microsoft.com/en-gb/azure/azure-databricks/databricks-sentiment-analysis-cognitive-services) from the website. However, I ran into an issue - not even sure if I can call it an issue - when I run the second notebook (analysetweetsfromeventhub), all subsequent commands (2nd, 3rd, 4th ...) show as waiting to run but never actually run. See the picture. Any idea what this might be? Thanks.
After you cancel a running streaming cell in a notebook attached to a Databricks Runtime cluster, you cannot run any subsequent commands in the notebook. The commands are left in the “waiting to run” state, and you must clear the notebook’s state or detach and reattach the cluster before you can successfully run commands on the notebook.
Note that this issue occurs only when you cancel a single cell; it does not apply when you run all and cancel all cells.
In the meantime, you can do either of the following:
To remediate an affected notebook without restarting the cluster, go to the notebook's Clear menu and select Clear State.
If restarting the cluster is acceptable, you can solve the issue by turning off idle context tracking. Set the following Spark configuration value on the cluster:
spark.databricks.chauffeur.enableIdleContextTracking false
Then restart the cluster.

Apache Zeppelin Notebook Deployment

I'm new to ETL development with PySpark and I've been writing my scripts as paragraphs in Apache Zeppelin notebooks. I was curious what the typical flow is for a deployment process. How are you converting your code from a Zeppelin notebook into your ETL pipeline?
Thanks!
Well that heavily depends on the sort of ETL that you're doing.
If you want to keep the scripts in the notebooks and you just need to orchestrate their execution then you have a couple options:
Use Zeppelin's built-in scheduler
Use cron to launch your notebooks via curl commands and Zeppelin's REST API
But if you already have an up-and-running workflow management tool like Apache Airflow, you can add tasks that launch the aforementioned curl commands to trigger the notebooks (in Airflow, via BashOperator or PythonOperator). Keep in mind that you'll need some workarounds to run different notes sequentially.
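A rough sketch of such an Airflow task using BashOperator to fire the curl command; the import path is the Airflow 1.x one, and the Zeppelin server, port, and note ID are placeholders:

from datetime import datetime

from airflow import DAG
from airflow.operators.bash_operator import BashOperator   # Airflow 1.x path

with DAG(dag_id="run_zeppelin_note",
         start_date=datetime(2023, 1, 1),
         schedule_interval="@daily",
         catchup=False) as dag:

    run_note = BashOperator(
        task_id="run_note",
        # Asynchronously triggers the whole note via Zeppelin's REST API.
        bash_command=(
            "curl -X POST "
            "http://<zeppelin-server>:<zeppelin-port>/api/notebook/job/<noteId>"
        ),
    )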
One major tech company that's betting heavily on notebooks is Netflix (you can take a look at this), and they developed a set of tools to improve the efficiency of notebook-based ETL pipelines, such as Commuter and Papermill. They're more into Jupyter, so Zeppelin is not supported out of the box, but the core concepts should be the same when working with Zeppelin.
For more on Netflix's notebook-based pipelines, you can refer to this article shared on their tech blog.

Execute Zeppelin notebook non-interactively (not manually)

Is there a way to execute the Spark code in a Zeppelin notebook without having to do it interactively? I'm looking for something specific, or for someone to point me in the right direction. Alternatively, are there other ways to submit Spark code that currently lives in a Zeppelin notebook? The reason I can't use spark-submit is that there is no command-line access, for security reasons.
Zeppelin provides a REST API which, among other functions, can be used to run individual paragraphs, either synchronously
http://[zeppelin-server]:[zeppelin-port]/api/notebook/run/[noteId]/[paragraphId]
or asynchronously
http://[zeppelin-server]:[zeppelin-port]/api/notebook/job/[noteId]/[paragraphId]
as well as a whole notebook:
http://[zeppelin-server]:[zeppelin-port]/api/notebook/job/[noteId]
It is also possible to define cron jobs, both from the notebook itself and via the REST API.
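For illustration, a minimal sketch of calling these endpoints from Python with the requests library; the server address and IDs are placeholders, and depending on your Zeppelin setup you may also need to pass authentication:

import requests

zeppelin = "http://<zeppelin-server>:<zeppelin-port>"   # placeholder
note_id = "<noteId>"                                    # placeholder

# Asynchronously trigger the whole note; Zeppelin replies immediately and
# executes the paragraphs in the background.
resp = requests.post(f"{zeppelin}/api/notebook/job/{note_id}")
print(resp.status_code, resp.text)

# To run a single paragraph synchronously instead (placeholder paragraph ID):
# requests.post(f"{zeppelin}/api/notebook/run/{note_id}/<paragraphId>")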
