USE CASE: Create an AtomicSequence using the last saved id in the DB (not starting from zero) and generate ids following the last saved id in the DB.
First we check whether an AtomicSequence instance already exists; if not, we create the AtomicSequence from the last saved id (if the entry is in the DB).
In HazelcastAtomicSequenceManager, the getSequenceGenerator method is a two-step process.
Step 1: getHzInstance().getAtomicLong(key); // gets the atomic long, or creates a new one with initial value 0 if it is not present.
Step 2: this.sequence.compareAndSet(0, startVal); // sets the value only if the current value is still zero.
Now consider Thread 1: it checks, sees that the AtomicSequence for the given key is not present, and executes step 1, but has not yet executed step 2.
Thread 2 comes along, sees that the AtomicSequence exists (as step 1 was already executed by Thread 1), and goes ahead and increments it to 1, since the value is still zero (Thread 2 never executes step 2).
Now Thread 1 tries to execute step 2 but cannot, because the value has become 1 (or in any case something other than zero). So the AtomicSequence generates ids starting from 1 instead of from the last saved id, which is why our test case is failing.
Is there any way to fix this issue?
You need to get and try compareAndSet in a loop, until it succeeds:
long current;
do {
    current = atomicLong.get();                          // read the current value
} while (!atomicLong.compareAndSet(current, startVal)); // retry until the CAS wins
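For completeness, here is a rough sketch of how that retry loop might sit inside a getSequenceGenerator-style method, assuming Hazelcast 3.x (where getAtomicLong lives on HazelcastInstance); the current < startVal guard, which avoids moving an already-advanced counter backwards, is an addition beyond the loop above, and the class and key names are hypothetical:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IAtomicLong;

public class SequenceBootstrap {

    // Returns the cluster-wide counter for `key`, initialized to `startVal`
    // (the last id saved in the DB) even if another thread races us.
    static IAtomicLong getSequenceGenerator(HazelcastInstance hz, String key, long startVal) {
        IAtomicLong sequence = hz.getAtomicLong(key); // step 1: get or create with value 0
        long current = sequence.get();
        // Step 2, retried: only ever push the counter forward, never backwards.
        while (current < startVal && !sequence.compareAndSet(current, startVal)) {
            current = sequence.get();
        }
        return sequence;
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IAtomicLong seq = getSequenceGenerator(hz, "order-ids", 42L); // 42 = last saved id
        System.out.println(seq.incrementAndGet()); // 43, the id after the last saved one
        hz.shutdown();
    }
}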
Background
I have an intent that fetches some data from an API. The data contains an array, and I iterate over the first 10 entries of that array and read the results back to the user. However, the array is almost always bigger than 10 entries. I am using Lambda for my backend and NodeJS as my language.
Note that I am just starting out on Alexa and this is my first skill.
What I want to achieve is the following:
When the user triggers the intent and the first 10 entries have been read to them, Alexa should ask "Do you want to hear the next 10 entries?" or something similar. The user should be able to reply with either yes or no, and the skill should then read the next entries, i.e. access the array again.
I am struggling with the Alexa implementation of this dialog.
What I have tried so far: I stumbled across this post here, but I couldn't get it to work, and I didn't find any other examples.
Any help or further pointers are appreciated.
That tutorial gets the concept right, but glosses over a few things.
1: Add the yes and no intents to your model. They're "built-in" intents (AMAZON.YesIntent and AMAZON.NoIntent), but you still have to add them to the model (and rebuild it).
2: Add your new intent handlers to the list in the .addRequestHandlers(...) function call near the bottom of the base skill template. This step is often forgotten and is not mentioned in the tutorial.
3: Use const sessionAttributes = handlerInput.attributesManager.getSessionAttributes(); to get your stored session attributes object and assign it to a variable. Make changes to that object's properties, then save it with handlerInput.attributesManager.setSessionAttributes(sessionAttributes);
You can add any valid property name and the values can be a string, number, boolean, or object literal.
So assume your launch handler greets the customer and immediately reads the first 10 items, then asks if they'd like to hear 10 more. You might store sessionAttributes.num_heard = 10.
Both the YesIntent and LaunchIntent handlers should simply pass a num_heard value to a function that retrieves the next 10 items and feeds it back as a string for Alexa to speak.
You just increment sessionAttributes.num_heard by 10 each time that yes intent runs and then save it with handlerInput.attributesManager.setSessionAttributes(sessionAttributes).
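For illustration, a minimal sketch of what that YesIntent handler might look like with the ASK SDK v2 for Node.js (the getNextItems helper and the prompt wording are hypothetical; everything on handlerInput is the SDK's API):

const YesIntentHandler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'IntentRequest'
      && request.intent.name === 'AMAZON.YesIntent';
  },
  handle(handlerInput) {
    // Read the stored offset, speak the next chunk, then advance and save the offset.
    const sessionAttributes = handlerInput.attributesManager.getSessionAttributes();
    const numHeard = sessionAttributes.num_heard || 0;
    const speech = getNextItems(numHeard); // hypothetical: returns the next 10 items as a spoken string
    sessionAttributes.num_heard = numHeard + 10;
    handlerInput.attributesManager.setSessionAttributes(sessionAttributes);
    return handlerInput.responseBuilder
      .speak(speech + ' Do you want to hear the next 10 entries?')
      .reprompt('Do you want to hear the next 10 entries?')
      .getResponse();
  },
};

Remember point 2 above: the new handler still has to be registered, e.g. .addRequestHandlers(LaunchRequestHandler, YesIntentHandler, NoIntentHandler, ...).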
What you need to do is something called "Paging".
Let's imagine that you have a stock of data, where each page contains 10 entries.
page 1: 1-10, page 2: 11-20, page 3: 21-30 and so on.
When you fetch your data from the DB you can set those limits; in SQL this is implemented with LIMIT (in MySQL, LIMIT offset, count). But how do you get those values from the page index?
Well, a simple calculation can help you:
let page = 1   // your identifier or page index, managed by your client frontend
let chunk = 10 // entries per page
let _start = page * chunk - (chunk - 1)
let _end = _start + (chunk - 1)
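For page 2, for example, this gives _start = 2 * 10 - (10 - 1) = 11 and _end = 11 + 9 = 20, matching the 11-20 range above; in MySQL that page would be fetched with LIMIT 10, 10 (i.e. LIMIT _start - 1, chunk).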
Hope this helped you :)
I have a Python 3 script that attempts to reindex certain documents in an existing Elasticsearch index. I can't update the documents in place because I'm changing from an autogenerated id to an explicitly assigned id.
I'm currently attempting to do this by deleting existing documents using delete_by_query and then indexing once the delete is complete:
self.elasticsearch.delete_by_query(
    index='%s_*' % base_index_name,
    doc_type='type_a',
    conflicts='proceed',
    wait_for_completion=True,
    refresh=True,
    body={}
)
However, the index is massive, and so the delete can take several hours to finish. I'm currently getting a ReadTimeoutError, which is causing the script to crash:
WARNING:elasticsearch:Connection <Urllib3HttpConnection: X> has failed for 2 times in a row, putting on 120 second timeout.
WARNING:elasticsearch:POST X:9200/base_index_name_*/type_a/_delete_by_query?conflicts=proceed&wait_for_completion=true&refresh=true [status:N/A request:140.117s]
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='X', port=9200): Read timed out. (read timeout=140)
Is my approach correct? If so, how can I make my script wait long enough for the delete_by_query to complete? There are two timeout parameters that can be passed to delete_by_query - search_timeout and timeout - but search_timeout defaults to no timeout (which is, I think, what I want), and timeout doesn't seem to do what I want. Is there some other parameter I can pass to delete_by_query to make it wait as long as it takes for the delete to finish? Or do I need to make my script wait some other way?
Or is there some better way to do this using the ElasticSearch API?
You should set wait_for_completion to False. In that case you'll get back the task details and will be able to track the task's progress using the corresponding Task API: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete-by-query.html#docs-delete-by-query-task-api
Just to expand on Random's answer with code, for ES/Python newbies like me:
from elasticsearch import Elasticsearch

ES = Elasticsearch(['http://localhost:9200'])
query = {'query': {'match_all': {}}}
# With wait_for_completion=False the call returns immediately, with the task id under the 'task' key
response = ES.delete_by_query(index='index_name', doc_type='sample_doc', wait_for_completion=False, body=query, ignore=[400, 404])
task_id = response['task']
response_task = ES.tasks.get(task_id=task_id)  # check on the task
is_completed = response_task['completed']      # True once the task has finished
One can write a custom function that polls at some interval in a while loop until the task is completed.
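For example, a minimal polling sketch (the function name and the 10-second interval are arbitrary choices):

import time

def wait_for_task(es, task_id, poll_interval=10):
    """Poll the Tasks API until the given task reports completed=True."""
    while True:
        response_task = es.tasks.get(task_id=task_id)
        if response_task['completed']:
            return response_task
        time.sleep(poll_interval)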
I used Python 3.x and Elasticsearch 6.x.
You can use the request_timeout global param. This overrides the connection's timeout settings, as mentioned here.
For example -
es.delete_by_query(index=<index_name>, body=<query>, request_timeout=300)
Or set it at the connection level, for example:
es = Elasticsearch(**get_es_connection_parms(), timeout=60)
I was experimenting with the new Sitecore 7.2 functionality SwitchOnRebuildLuceneIndex.
Apparently, this functionality allows me to access my index in read-only mode while I am rebuilding the index.
Is there any way to have a fully operational index (not read-only) while I am rebuilding the index?
The test I am performing is the following:
1) Rebuild a custom index with 30k items (it takes 30 sec).
2) While the index is rebuilding: add a Sitecore item (via code).
3) While the index is rebuilding: access the custom index (via code) to get the count of items.
4) After the index has completed the rebuild: access the custom index (via code) to get the count of items.
In step 3 it returns the original item count: 30000.
In step 4 it returns the updated item count: 30001.
thanks for the help
Stelio
I do not believe that this is possible. Conceptually speaking, Sitecore is essentially software that makes databases more user-friendly and defines a structure that both technical and non-technical people can understand and follow. What you are describing goes against the concepts of ACID, database locks, and transactions. I have commented on your steps with more technical (database) annotations, inline, below:
Rebuild a custom index... - Places a lock on the items in the database and starts a transaction.
Meanwhile ...: add a Sitecore item... - A separate transaction runs against the items, though it does not affect the locked set that the transaction from step 1 is using.
Meanwhile ...: access the custom... - Another transaction runs after the transaction in step 2, thus including in the count all items (both the locked set and the newly added item).
After the index completed... - Transaction 1 is completed and the lock released; getting the count of items from the custom index returns a different count than a count taken outside the index (the latter is greater, as a new item was added).
As such, step 3 returns the new count of items and step 4 returns the original.
If you want to keep track of the changes that happened during the index rebuild, you could use the IntervalAsynchronousStrategy as your index update strategy.
<strategies hint="list:AddStrategy">
  <intervalAsyncMaster type="Sitecore.ContentSearch.Maintenance.Strategies.IntervalAsynchronousStrategy, Sitecore.ContentSearch">
    <param desc="database">master</param>
    <param desc="interval">00:00:05</param>
    <!-- whether full index rebuild should be triggered if the number of items in history engine exceeds ContentSearch.FullRebuildItemCountThreshold -->
    <CheckForThreshold>true</CheckForThreshold>
  </intervalAsyncMaster>
</strategies>
This reads through the History table and updates your index accordingly.
If you check the Sitecore implementation of this class, you can see that it handles the rebuilding case: if a rebuild is running, it does nothing and waits for the next time it is scheduled; once the rebuild has finished, it collects the entries from the History table and applies them to the index. See the Run method of the class:
...
if (IndexCustodian.IsIndexingPaused(this.index))
{
    CrawlingLog.Log.Debug(string.Format("[Index={0}] IntervalAsynchronousUpdateStrategy triggered but muted. Indexing is paused.", this.index.Name), null);
}
else if (IndexCustodian.IsRebuilding(this.index))
{
    CrawlingLog.Log.Debug(string.Format("[Index={0}] IntervalAsynchronousUpdateStrategy triggered but muted. Index is being built at the moment.", this.index.Name), null);
}
else
{
    CrawlingLog.Log.Debug(string.Format("[Index={0}] IntervalAsynchronousUpdateStrategy executing.", this.index.Name), null);
    HistoryEntry[] history = HistoryReader.GetHistory(this.Database, this.index.Summary.LastUpdated);
    ...
I would like to retrieve the scenario state in the "After" scenario hook. I noticed that the .failed? method does not consider pending steps as failed steps.
So how can I determine that a scenario did not execute completely, whether because it failed or because some steps were not implemented/defined?
You can use the status method. The default value of status is :skipped, a failed step is :failed, and a passed step is :passed. So you can write something like this:
do_something if step.status != :passed
Also, if you use !step.passed? it does the same thing because it only checks for the :passed status.
http://cukes.info/api/cucumber/ruby/yardoc/Cucumber/Ast/Scenario.html#failed%3F-instance_method
On that subject, you can also take a look at this post about demoing your feature specs to your customers: http://multifaceted.io/2013/demo-feature-tests/
LiohAu, you can use the status method on the scenario itself rather than on individual steps. Try this: in hooks, add
After do |scenario|
  p scenario.status
end
This will give the statuses as follows:
If any step is not implemented/defined, it gives you :undefined.
If the scenario fails (when all steps are defined): :failed.
If the scenario passes: :passed.
Using the same hook, it gives you the status for a scenario outline, but per example row (since each example row is an individual scenario). So if you want the result of an entire outline, you'll need to capture the result for all example rows and compute the final result accordingly, as sketched below.
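A rough sketch of that aggregation (the $outline_results global and the use of scenario.name as the grouping key are assumptions; depending on your Cucumber version the outline's rows may expose their name differently):

# Collect each example row's status under its scenario name.
$outline_results = Hash.new { |hash, key| hash[key] = [] }

After do |scenario|
  $outline_results[scenario.name] << scenario.status
end

at_exit do
  $outline_results.each do |name, statuses|
    # The outline as a whole passes only if every example row passed.
    overall = statuses.all? { |status| status == :passed } ? :passed : :failed
    puts "#{name}: #{overall}"
  end
end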
Hope this helps.
I have a delete trigger on table A that deletes from table B. I am testing what happens when the trigger fails, and for that purpose I renamed table B so that the trigger cannot find it.
These are my normal steps:
begin transaction
delete from C
delete from A -- this errors for the reason mentioned
-- At this point the transaction is automatically rolled back.
However if I do these steps:
begin transaction
delete from C
delete from B -- this errors for the reason mentioned
-- At this point the transaction is not rolled back, and I can still commit it.
How come in the first scenario the transaction is rolled back? Shouldn't it be up to the application to call either rollback or commit?
The only difference is a trigger failing vs. a statement failing for the same reason; I would expect the behavior to be identical.
edit to add example:
create table A (a int primary key)
create table B (a int primary key)
create table C (a int primary key)
create trigger Atrig on A for delete as delete B from B, deleted where B.a=deleted.a
insert into A values(1)
insert into A values(2)
insert into B values(2)
insert into B values(3)
insert into C values(1)
insert into C values(2)
insert into C values(3)
Now rename table B to B2 (I used the UI to rename it, so I don't have the SQL command for it):
begin transaction
delete C where a=3
delete A where a = 2
The above returns this error and rolls back the transaction:
System.Data.OleDb.OleDbException (0x80040E37): [42000]
[Message Class: 16]
[Message State: 1]
[Transaction State: 0]
[Server Name: sybase_15_5_devrs1]
[Procedure Name: Atrig]
[Line Number: 1]
[Native Code: 208]
[ASEOLEDB]B not found. Specify owner.objectname or use sp_help to check whether the object exists (sp_help may produce lots of output).
However if I do this:
begin transaction
delete C where a=3
delete B where a = 2
The above returns an error, but the transaction is not rolled back, and I can issue 'commit transaction':
System.Data.OleDb.OleDbException (0x80040E37): [42000]
[Message Class: 16]
[Message State: 1]
[Transaction State: 0]
[Server Name: sybase_15_5_devrs1]
[Native Code: 208]
[ASEOLEDB]B not found. Specify owner.objectname or use sp_help to check whether the object exists (sp_help may produce lots of output).
I'm thinking the behavior has something to do with this topic.
In the table "Rollbacks caused by errors in data modification", it says:
Context: Transaction only
Behavior: Current command is aborted. Previous commands are not rolled back, and subsequent commands are executed.
Context: Trigger in a transaction
Behavior: Trigger completes, but trigger effects are rolled back.
All data modifications since the start of the transaction are rolled back. If a transaction spans multiple batches, the rollback affects all of those batches.
Any remaining commands in the batch are not executed. Processing resumes at the next batch.
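Given that behavior, if the application needs to end up in the same state in both cases, one option is to inspect the transaction state after the failing statement. A rough sketch, assuming Sybase ASE's @@transtate global (the value meanings in the comments are taken from the ASE docs; verify them for your version), run as a separate batch since a trigger failure aborts the rest of its own batch:

-- run this as the next batch after the failing delete:
if @@transtate = 2       -- 2 = previous statement aborted, transaction still in progress
    rollback transaction -- roll back explicitly so both failure modes end the same way
else if @@transtate = 3  -- 3 = transaction aborted and already rolled back
    print 'transaction was already rolled back by the trigger failure'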
After reading the same manual that you specified in the answer, I tested this morning in my Sybase environment, and that's what happened.