I am attempting to set up a cron job that runs on a 24-hour interval within my chaincode, using the following library for the scheduler: https://github.com/jasonlvhit/gocron. If I run this outside of Fabric in a standalone Go file, it works as it should, but on a deployed chaincode it does not set up a new scheduled task. Is there something in Fabric that prevents this library from working within chaincode? If so, can you recommend another solution that would work with Fabric?
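For context, chaincode only executes in response to transaction invocations, so a background scheduler started inside the chaincode container is unlikely to behave reliably. A common workaround is to run the scheduler in a client process outside the peer and have it invoke the chaincode on the interval. Below is a minimal sketch of that pattern (in Python for brevity, using only the standard library); the `invoke_chaincode` body is a hypothetical placeholder for a `peer chaincode invoke` call or an SDK client, not Fabric's actual API:

```python
import threading

def run_periodically(interval_seconds, task, stop_event):
    """Call `task` every `interval_seconds` until `stop_event` is set.
    The first call happens after one full interval has elapsed."""
    while not stop_event.wait(interval_seconds):
        task()

# Hypothetical placeholder: in a real setup this could shell out to the
# Fabric CLI (`peer chaincode invoke ...`) or use an SDK client instead.
def invoke_chaincode():
    print("invoking chaincode")

if __name__ == "__main__":
    stop = threading.Event()
    worker = threading.Thread(
        target=run_periodically,
        args=(24 * 60 * 60, invoke_chaincode, stop),  # 24-hour interval
        daemon=True,
    )
    worker.start()
```

The key design point is that the peer never hosts the timer: the ledger only changes when a transaction arrives, so the schedule has to live outside Fabric.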
composer-1.19.8-airflow-1.10.15
I need to automatically stop a DAG at a certain time if it is still running. Is it possible to stop a DAG from the command line or from our code? I saw that there was an API, but the version we use is deprecated or doesn't exist for Composer.
My process needs to stop at 23:50, but my DAG sometimes runs non-stop. How can I do this automatically?
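One option (a sketch, not the only approach) is to give the DAG a `dagrun_timeout` computed as the time remaining until the 23:50 cutoff; in Airflow 1.10.x this can cause a scheduled DagRun to be marked failed once it exceeds the timeout. The helper below uses only the standard library, and the commented-out DAG usage is illustrative only (`my_dag` and the schedule are assumptions):

```python
from datetime import datetime, time, timedelta

def timeout_until(cutoff: time, now: datetime) -> timedelta:
    """Time remaining from `now` until the next occurrence of `cutoff`."""
    candidate = datetime.combine(now.date(), cutoff)
    if candidate <= now:
        candidate += timedelta(days=1)  # cutoff already passed today
    return candidate - now

# Hypothetical usage inside a DAG definition (Airflow 1.10.x):
# dag = DAG(
#     "my_dag",
#     schedule_interval="0 8 * * *",
#     dagrun_timeout=timeout_until(time(23, 50), datetime.now()),
# )
```

Note the caveat: the timeout is computed when the DAG file is parsed, so it is an approximation of a fixed wall-clock cutoff rather than an exact one, and `dagrun_timeout` only applies to scheduled runs in this version.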
I am designing a Dataproc workflow template with multiple Spark jobs that run in sequence, one after the other. There could be scenarios where the workflow runs a few jobs successfully and fails for the others. Is there a way to rerun just the failed jobs once I have applied a workaround to fix the issues that caused them to fail in the first place? Please note that I am not looking for the retry mechanism of jobs. I want to re-run the workflow while avoiding the already successful jobs.
Dataproc Workflows do not support this use case.
Please take a look at Cloud Composer, an Apache Airflow-based orchestration service, which is more flexible and should be able to satisfy your use case.
I am developing a Kubernetes Helm chart for deploying a Python application. The application has to connect to a database.
I want to run the database scripts that create the DB, create a user, create tables, alter database columns, or run any other SQL script. I was thinking this could run as an initContainer, but that is not the recommended way, since it would run every time, even when there are no DB scripts to execute.
Below is the solution I am looking for:
Create a Kubernetes Job to run the scripts, which will connect to the Postgres DB and run the scripts from the files. Is there a way for a Kubernetes Job to connect to the Postgres service and run the SQL scripts?
Please suggest a good approach for running SQL scripts in Kubernetes that we can also monitor via the pod.
I would recommend simply using the 'postgresql' sub-chart along with your newly developed app Helm chart (check here how to use it, within the section called "Use of global variables").
It uses 'initContainers' rather than a Job to initialize, on startup, a user-defined schema/configuration of the database from a custom *.sql script.
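If a one-shot Job is still preferred over an initContainer, a minimal sketch of such a Job is below. All names here are assumptions: the image tag, the `postgres` Service name, the Secret, and the ConfigMap (assumed to hold the *.sql files) would need to match your deployment:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate                  # hypothetical name
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: psql
          image: postgres:13        # any image providing the psql client
          command: ["psql"]
          args: ["-h", "postgres", "-U", "app", "-d", "appdb", "-f", "/scripts/init.sql"]
          env:
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials   # assumed Secret
                  key: password
          volumeMounts:
            - name: sql-scripts
              mountPath: /scripts
      volumes:
        - name: sql-scripts
          configMap:
            name: sql-scripts        # assumed ConfigMap with the *.sql files
```

Because it is a Job, its pod can be monitored like any other pod (`kubectl logs job/db-migrate`), and it only runs when you apply it, which addresses the "runs every time" concern about initContainers.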
I am planning to host many jobs in Service Fabric as stateless services running async. The plan is to host them on multiple nodes and have them run in parallel with a queue mechanism. The only (possible) issue is that if I follow this design, multiple jobs running on many nodes at the same time and hitting the same database could cause database problems. In a typical on-prem application there would be a SQL queue, so SQL could read the messages and process them. But in this scenario, the Service Fabric nodes themselves instructing the database may cause slowness at the DB level.
Has anyone faced this issue, or deployed background async processes on all SF nodes running in parallel for data-centric work?
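Whether this becomes a problem depends mostly on how many concurrent connections and queries the database sees, and a common mitigation is to bound database concurrency with a gate (a semaphore) independently of how many workers drain the queue. The pattern is language-agnostic; here is a minimal sketch in Python, where `MAX_DB_CONCURRENCY` and the doubling stand-in for the real SQL call are purely illustrative:

```python
import threading
import queue

MAX_DB_CONCURRENCY = 4  # assumed cap; tune to what the database tolerates

db_gate = threading.Semaphore(MAX_DB_CONCURRENCY)
work = queue.Queue()

def process(item):
    """Placeholder for the real DB operation; gated so that at most
    MAX_DB_CONCURRENCY workers touch the database at once."""
    with db_gate:
        return item * 2   # stands in for the actual SQL call

def worker(results):
    while True:
        item = work.get()
        if item is None:           # sentinel: shut this worker down
            work.task_done()
            break
        results.append(process(item))
        work.task_done()

def run_jobs(items, n_workers=8):
    """Drain `items` with n_workers threads, DB access bounded by db_gate."""
    results = []
    threads = [threading.Thread(target=worker, args=(results,))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    for item in items:
        work.put(item)
    for _ in threads:
        work.put(None)
    work.join()
    for t in threads:
        t.join()
    return results
```

In Service Fabric the same idea would apply per node: many stateless workers can pull from the queue in parallel, while the gate keeps the number of in-flight database operations at a level the DB can absorb.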
I have been trying to create a hub and register nodes using selenium grid on Jenkins CI.
I have tried creating an "execute shell" build step that performs this process: first running the Selenium hub and then registering the nodes in further steps, but nothing worked. Done this way, it only runs the Selenium hub and is unable to register the nodes to it.
I have tried installing the Selenium Grid plugin for Jenkins, but nothing works.
Finally, I tried creating three different jobs: one to start the hub and two to register nodes to it.
Is there any way I can do this process in a single job? Or, if I run the first job that starts the hub, is there a way for the other two jobs to start automatically?
Starting the hub and registering the nodes on the Jenkins server is a one-time process which you can do from the terminal.
Or
In Jenkins execute shell section try below commands:
To start grid hub
java -jar selenium-server-standalone-2.53.0.jar -role hub -timeout 300000 &
# do not forget to add "&" at the end to run this process in the background
To register node
java -jar selenium-server-standalone-2.53.0.jar -role node -hub http://localhost:4444/grid/register &
I don't think you can run a Selenium Grid from Jenkins unless the grid is run in the foreground of a user session, so that there is a "space" to run the browsers in. It probably won't work if you run the grid as a background process. You didn't say whether you're using Linux or Windows, but in either case you'll have the same problem, I think.