Forgive me if I don't understand Elixir really well, as I am new to it...
I'm using quantum-elixir as a cron API to dynamically create cron jobs. When someone POSTs to a route, I save the cron job details into my Ecto repo and then simultaneously create a Quantum job with Quantum.add_job.
In development, when I close my server and restart it, I have to re-add all my cron jobs because they don't survive a restart. That got me thinking that if my application were to crash, I would lose all the cron jobs. (I'm thinking about scenarios where I host the app on Google Compute Engine and for whatever reason need to reset the compute instance, e.g. upgrades on the box, etc.)
So I was wondering what the appropriate way to restart my app is while keeping these cron jobs?
Right now I have the following:
worker(Task,[MyApp.RebootTask, :reboot, []], restart: :transient)
in the start function of my application module.
Is this the right approach? What other considerations do I need to factor in?
Any guidance is greatly appreciated
I query my DB and build a list with the job definition of every item:
%Quantum.Job{
  name: job_name,
  overlap: false,
  run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster},
  schedule: Crontab.CronExpression.Parser.parse!(schedule),
  task: task,
  state: :active,
  timezone: "Europe/Zurich"
}
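Concretely, the list can be built from the database roughly like this (a sketch; Alerts.Business.Alerts.all/0 and the row fields are assumptions of mine):

def get_all_alert_jobs_config do
  # each persisted row is assumed to carry a name, a cron string and an id
  for row <- Alerts.Business.Alerts.all() do
    %Quantum.Job{
      name: String.to_atom(row.name),
      overlap: false,
      run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster},
      schedule: Crontab.CronExpression.Parser.parse!(row.schedule),
      # illustrative MFA tuple: run the alert identified by the row
      task: {Alerts.Business.Alerts, :run, [row.id]},
      state: :active,
      timezone: "Europe/Zurich"
    }
  end
end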
To have the jobs started at application startup, I do something like this
defmodule Alerts.Scheduler do
  use Quantum.Scheduler, otp_app: :alerts
  require Logger

  @environment_blacklist [:test]

  def init(opts) do
    case Enum.member?(@environment_blacklist, Mix.env()) or IEx.started?() do
      true ->
        IO.inspect(opts)
        opts

      false ->
        delete_all_jobs()
        opts_with_jobs = get_startup_config(opts)
        opts_with_jobs |> IO.inspect()
        opts_with_jobs
    end
  end

  def get_startup_config(opts) do
    job_definition = Alerts.Business.Alerts.get_all_alert_jobs_config()
    (opts |> List.delete(List.keyfind(opts, :jobs, 0))) ++ [jobs: job_definition]
  end
end
In my application's start callback:

def start(_type, _args) do
  import Supervisor.Spec

  [
    Alerts.Repo,
    AlertsWeb.Endpoint |> supervisor([]),
    if(Mix.env() != :test, do: Alerts.Scheduler),
    Alerts.VersionSupervisor |> supervisor([])
  ]
  # drop the nil entry produced by the `if` in the test environment
  |> Enum.reject(&is_nil/1)
  |> Supervisor.start_link(strategy: :one_for_one, name: Alerts.Supervisor)
end
It doesn't look like Quantum persists dynamically added cron jobs; the more typical approach is to define your cron jobs (named or otherwise) in your config.exs.
Since you're already storing the job details with Ecto, it's just a matter of reading those details and re-adding the jobs when your application starts. Since you're already using Quantum, the following in config/config.exs ought to do the trick:
config :quantum, cron: [
  "@reboot": &MyApp.some_function_to_read_and_readd_my_cronjobs/0
]
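That function could look roughly like this (a sketch only; the MyApp.CronJob schema, its fields and MyApp.JobRunner are illustrative assumptions):

defmodule MyApp.CronJobs do
  # reads every persisted job and registers it with Quantum again
  def some_function_to_read_and_readd_my_cronjobs do
    for job <- MyApp.Repo.all(MyApp.CronJob) do
      # job.schedule is the cron string saved when the job was POSTed;
      # MyApp.JobRunner.run/1 stands in for whatever the job should execute
      Quantum.add_job(job.schedule, fn -> MyApp.JobRunner.run(job) end)
    end

    :ok
  end
end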
My application runs with Camunda 7.7. Until now, all the data was saved in the Camunda tables (ACT_XXX), and they have become big. So now I want to clean up the tables and configure Camunda so that the data is cleaned up after 14 days.
So far I have tried to set the TTL to 1 day (easier to test!):
List<ProcessDefinition> processDefinitions = processEngine.getRepositoryService()
        .createProcessDefinitionQuery()
        .deploymentId(deployment.getId())
        .list();

for (ProcessDefinition processDefinition : processDefinitions) {
    processEngine.getRepositoryService()
            .updateProcessDefinitionHistoryTimeToLive(processDefinition.getId(), 1);
}
and the cleanup window to the afternoon:
configuration.setHistoryCleanupBatchWindowStartTime("15:00");
configuration.setHistoryCleanupBatchWindowEndTime("16:00");
However, this does not work. Can someone help?
In my case, when running with Spring Boot, it just works by defining the following properties:
camunda:
  bpm:
    generic-properties:
      properties:
        historyCleanupBatchWindowStartTime: "00:01"
        historyCleanupBatchWindowEndTime: "23:59"
        historyCleanupStrategy: endTimeBased
Just to be sure, can you check the REMOVAL_TIME column on the objects you need to delete? It should be populated automatically by the engine if you set a TTL on the process definition.
P.S. I'm running 7.11.0.
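For example, something along these lines against the history table (a hedged check; the usual ACT_HI_* naming with trailing underscores is assumed):

-- REMOVAL_TIME_ should be populated for instances that finished after the TTL was set
SELECT ID_, PROC_DEF_ID_, END_TIME_, REMOVAL_TIME_
FROM ACT_HI_PROCINST
WHERE END_TIME_ IS NOT NULL
ORDER BY END_TIME_ DESC;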
There wasn't any queue named default in our Rails code, but it seems Sidekiq sets the queue for ActiveStorage::PurgeJob to default. That is why purge_later never worked.
[ActiveJob] Enqueued ActiveStorage::PurgeJob (Job ID: .. ) to Sidekiq(default) with arguments
Is there a way to have a queue name other than "default" here? I couldn't find documentation about it yet.
Setting the name of the Active Job queue used by Active Storage
You can change the queue used by Active Storage for its async jobs at the configuration level like this
config.active_storage.queue = :low_priority
To make this an application-wide change, put it into your application.rb. For environment-specific changes, put it into the relevant environment file under config/environments
See the documentation here:
https://guides.rubyonrails.org/configuring.html#configuring-active-storage
This did not work for me; instead, the following worked:
config.active_storage.queues = Hash.new(:default)
This is due to purge_job.rb looking up the queue name like so
queue_as { ActiveStorage.queues[:purge] }
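Given that lookup, another option (an untested sketch; the key names follow the lookup above) is to populate the queues hash directly:

# config/application.rb -- route the Active Storage jobs to a custom queue
config.active_storage.queues = { purge: :low_priority, analysis: :low_priority }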
For Rails 7.1, setting config.active_storage.queue doesn't impact the queue used by the PurgeJob.
This did the trick:
config.active_storage.queues.analysis = "my-queue"
config.active_storage.queues.purge = "my-queue"
I have a Python 3 script that attempts to reindex certain documents in an existing Elasticsearch index. I can't update the documents because I'm changing from an autogenerated id to an explicitly assigned id.
I'm currently attempting to do this by deleting existing documents using delete_by_query and then indexing once the delete is complete:
self.elasticsearch.delete_by_query(
    index='%s_*' % base_index_name,
    doc_type='type_a',
    conflicts='proceed',
    wait_for_completion=True,
    refresh=True,
    body={}
)
However, the index is massive, and so the delete can take several hours to finish. I'm currently getting a ReadTimeoutError, which is causing the script to crash:
WARNING:elasticsearch:Connection <Urllib3HttpConnection: X> has failed for 2 times in a row, putting on 120 second timeout.
WARNING:elasticsearch:POST X:9200/base_index_name_*/type_a/_delete_by_query?conflicts=proceed&wait_for_completion=true&refresh=true [status:N/A request:140.117s]
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='X', port=9200): Read timed out. (read timeout=140)
Is my approach correct? If so, how can I make my script wait long enough for the delete_by_query to complete? There are two timeout parameters that can be passed to delete_by_query - search_timeout and timeout - but search_timeout defaults to no timeout (which I think is what I want), and timeout doesn't seem to do what I want. Is there some other parameter I can pass to delete_by_query to make it wait as long as it takes for the delete to finish? Or do I need to make my script wait some other way?
Or is there some better way to do this using the ElasticSearch API?
You should set wait_for_completion to False. In that case you'll get the task details back and will be able to track the task's progress using the corresponding API: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete-by-query.html#docs-delete-by-query-task-api
Just to expand on Random's answer with code, for an ES/Python newbie like me:
from elasticsearch import Elasticsearch
ES = Elasticsearch(['http://localhost:9200'])
query = {'query': {'match_all': {}}}
# with wait_for_completion=False the call returns immediately with a task handle
response = ES.delete_by_query(index='index_name', doc_type='sample_doc',
                              wait_for_completion=False, body=query, ignore=[400, 404])
task_id = response['task']                     # '<node_id>:<task_number>'
response_task = ES.tasks.get(task_id=task_id)  # check if the task is completed
isCompleted = response_task["completed"]       # True once the task has finished
One can write a custom check in a while loop that polls at some interval until the task is completed.
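For instance, a minimal polling sketch (continuing from the snippet above; the 30-second interval is arbitrary):

import time

# poll the Tasks API until the delete-by-query task finishes
while True:
    response_task = ES.tasks.get(task_id=task_id)
    if response_task["completed"]:
        break
    time.sleep(30)  # wait before checking again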
I have used Python 3.x and Elasticsearch 6.x.
You can use the request_timeout global parameter. This overrides the connection's timeout setting, as mentioned here.
For example:
es.delete_by_query(index=<index_name>, body=<query>, request_timeout=300)
Or set it at the connection level, for example:
es = Elasticsearch(**(get_es_connection_parms()), timeout=60)
I have a bit of a structural dilemma in SoapUI. Tests can be run at the project, test suite, or test case level.
Currently we run a whole project at the project level: it displays a prompt box to select an endpoint (through a project-level setup script) and produces a project report (using the project-level teardown script).
However, the tester may not want to run a whole project and may only want to run a test suite or even a single test case, and disabling the suites or cases you don't want to run would be a hassle.
The problem I have is that if I start putting endpoint prompts at the suite or case level, every time we hit a suite or case it will ask for an endpoint. I am also thinking of not creating suite or test case reports, because when running many suites or cases one by one that is just overkill on reporting.
I like your thinking on this, but I was speaking with a professional colleague and here is what we're thinking:
Add the code below to every test suite and test case setup script that asks for an endpoint (this is the same code used in the project setup script for selecting the endpoint):
import com.eviware.soapui.support.*

def alert = com.eviware.soapui.support.UISupport
def urls = []

project.properties.each {
    if (it.value.name.startsWith("BASE_URL_")) {
        urls.push(it.value.name.replace("BASE_URL_", ""))
    }
}

def urlName = alert.prompt("Please select the environment URL", "Enter URL", urls)

if (urlName) {
    def url = project.getPropertyValue("BASE_URL_" + urlName)
    def urlBase = "BASE_URL_" + urlName
    project.setPropertyValue("BASE_URL", url)

    switch (urlBase) {
        case "BASE_URL_TEST":
            project.setPropertyValue("DOMAIN_NAME", "TEST");
            break;
        case "BASE_URL_STAGE":
            project.setPropertyValue("DOMAIN_NAME", "STAGE");
            break;
        default:
            project.setPropertyValue("DOMAIN_NAME", "NO DOMAIN");
            break;
    }
} else {
    log.warn 'haven\'t received user input'
    log.warn 'No base URL is selected or cancelled, try again'
    assert false
}
Now what we would add is the following (we may need to use properties, but again, see what you think is best):
If the test is run at the project level, it prompts for the endpoint through the project setup script but does not ask again through the test suite or test case setup scripts, so there is only a single endpoint selection.
If the test is run at the suite level, it prompts for the endpoint through the suite setup script but does not ask again through the test case setup script, so again there is only a single endpoint selection.
If the test is run at the test case level, it only runs that test case, so it is at the lowest level and asks for an endpoint for that test case.
We can't have setup scripts disabled at any level, because there may be other code in those setup scripts that needs to be executed; we just need a way to say, depending on the level, don't ask for the endpoint again at lower levels. One rough idea is sketched below.
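One possible way to get that single-selection behaviour (a rough, untested sketch) is to have the lower-level setup scripts skip the prompt whenever a higher-level script has already set the property:

// in the suite / test case setup scripts, wrap the endpoint-selection code above
if (!project.getPropertyValue("BASE_URL")) {
    // ... prompt for and set BASE_URL / DOMAIN_NAME exactly as in the script above ...
}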
This seems complicated to implement, but does anyone know the best way to do it, or have an even better idea than this theory?
Thanks
For a moment, let us assume you get it done for all levels (project, suite, and each case). Maybe you forgot about the step level ;-)
Are there any pros to your approach? For me, no.
Cons in your approach:
Each time a user executes a test (be it a project, a suite, or any test case), the engineer needs to select a value from the drop-down, which is unnecessary (and a little annoying) when testing against the same server as the previous test case.
Test execution requires manual intervention every time it is invoked.
A user interface is required, since a drop-down is used.
It becomes a roadblock or hurdle for end-to-end automation.
Test execution can't be done in headless mode, which is important if you need to use Continuous Integration tools.
Proposed approach:
If I had to do the above, I would do the following. It would be clean and simple, and none of the complications you mentioned in the long summary would arise.
It looks like the following project properties are defined with the addresses of the test servers:
BASE_URL_TEST
BASE_URL_STAGE
There is also another project property, BASE_URL, and all the above logic exists to let the user copy one of the above property values into it.
Now all the user has to do is change the value of the project property BASE_URL by hand, setting it to whichever of the values below is needed before proceeding with the tests:
${#Project#BASE_URL_TEST} or
${#Project#BASE_URL_STAGE}
NOTE that a property value can be referenced from another property by the use of Property Expansion, like above.
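For example (the request path is illustrative), the selected base URL can then be referenced anywhere, such as in a request endpoint:

${#Project#BASE_URL}/rest/service/path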
With the above, the user can set whatever is needed and change it only when the test server changes.
No setup script is required at any level any more; simply change the value of the property.
Properties are there to make life simple; they can be used in any number of places and make the project easy to maintain.
Most importantly, this overcomes the cons mentioned at the beginning.
It is general practice to design the tests in SoapUI and execute them in command-line mode with the SOAPUI_HOME/bin/testrunner.bat (or .sh) utility, and that is the way to achieve Continuous Integration.
That is why the use of properties helps achieve the above without any issues.
Even simpler:
Just have one project property, BASE_URL (remove the others); the user only has to edit the property value once with the test server name / IP address that is needed, say http://testjuniper. Isn't that dead simple?
And I believe the engineer would definitely know which server he or she is using to execute the tests.
Having said that, the user does not have to bother at all, irrespective of whether a project, suite, or test case is being executed, as long as testing is carried out against the same server / environment.
Once test execution against the TEST environment is finished and the engineer moves on to another environment, say STAGING, just change the BASE_URL property value accordingly.
I am working on Oracle 10gR2.
And here is my problem:
I have a procedure, let's call it *proc_parent* (inside a package), which is supposed to call another procedure, let's call it *user_creation*. I have to call *user_creation* inside a loop, which is reading some columns from a table, and these column values are passed as parameters to the *user_creation* procedure.
The code is like this:
FOR i IN (SELECT community_id,
                 password,
                 username
            FROM customer
           WHERE community_id IS NOT NULL
             AND created_by = 'SRC_GLOB')
LOOP
  user_creation(i.community_id, i.password, i.username);
END LOOP;
COMMIT;
The *user_creation* procedure invokes a web service for some business logic and then, based on the response, updates a table.
I need to find a way to use multi-threading here, so that I can run multiple instances of this procedure to speed things up. I know I can use *DBMS_SCHEDULER* and probably *DBMS_ALERT*, but I am not able to figure out how to use them inside a loop.
Can someone guide me in the right direction?
Thanks,
Ankur
What you can do is submit lots of jobs at the same time. See Example 28-2, Creating a Set of Lightweight Jobs in a Single Transaction, in the Oracle Scheduler documentation.
It fills a PL/SQL table with all the jobs you want to submit in one transaction, all at the same time. As soon as they are submitted (enabled) they start running, as many as the system can handle, or as many as are allowed by a resource manager plan.
The overhead of lightweight jobs is minimal.
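Condensed, the pattern from that example looks roughly like this (a sketch; USER_CREATION_PROG stands for an existing, enabled scheduler program, and lightweight jobs require 11g or later):

DECLARE
  newjob    sys.job;
  newjobarr sys.job_array;
BEGIN
  -- build an array of job definitions
  newjobarr := sys.job_array();
  newjobarr.extend(5);
  FOR i IN 1 .. 5 LOOP
    newjob := sys.job(job_name     => 'LWT_JOB_' || TO_CHAR(i),
                      job_style    => 'LIGHTWEIGHT',
                      job_template => 'USER_CREATION_PROG',
                      enabled      => TRUE);
    newjobarr(i) := newjob;
  END LOOP;
  -- submit every job in the array in a single transaction
  DBMS_SCHEDULER.CREATE_JOBS(newjobarr, 'TRANSACTIONAL');
END;
/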
I would like to close this question. DBMS_SCHEDULER as well as DBMS_JOB (though DBMS_SCHEDULER is preferred) can be used inside the loop to submit and execute jobs.
For instance, here is sample code using DBMS_JOB that can be invoked inside a loop:
...
FOR i IN (SELECT community_id,
                 password,
                 username
            FROM customer
           WHERE community_id IS NOT NULL
             AND created_by = 'SRC_GLOB')
LOOP
  -- embed the values as literals: the submitted block runs later, outside this
  -- loop, so the loop variable i is no longer in scope at execution time
  DBMS_JOB.SUBMIT(
    JOB  => jobnum,
    WHAT => 'BEGIN user_creation(''' || i.community_id || ''',''' ||
            i.password || ''',''' || i.username || '''); END;');
  COMMIT;
END LOOP;
Using a commit after SUBMIT will kick off the job (and hence the procedure) in parallel.
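For the DBMS_SCHEDULER route, a roughly equivalent sketch creates a one-off job per row (hedged; the quoting assumes the values can be passed as string literals):

BEGIN
  FOR i IN (SELECT community_id, password, username
              FROM customer
             WHERE community_id IS NOT NULL
               AND created_by = 'SRC_GLOB')
  LOOP
    DBMS_SCHEDULER.CREATE_JOB(
      job_name   => DBMS_SCHEDULER.GENERATE_JOB_NAME('USER_CREATION_'),
      job_type   => 'PLSQL_BLOCK',
      job_action => 'BEGIN user_creation(''' || i.community_id || ''',''' ||
                    i.password || ''',''' || i.username || '''); END;',
      enabled    => TRUE,   -- start as soon as the job is created
      auto_drop  => TRUE);  -- remove the job once it has completed
  END LOOP;
END;
/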