How to find out the rawKey value for the delete key - livecode

How can I find out the rawKey value for the delete key? If I use rawKeyDown then I can't use any other key. Is there any alternative method to "on rawKeyDown theKey"?

If you capture the rawKey value in a rawKeyDown handler, you must pass the message if you want any of the other key presses to be handled by the engine. This is the case with any of the key messages you want to handle (e.g., rawKeyUp, keyDown, keyUp); if you do not pass the message, its normal functionality will be preempted.
on rawKeyDown pCode
   if pCode is 65535 then
      # do what you want to do to handle the delete key here
   else
      pass rawKeyDown
   end if
end rawKeyDown

Related

Way to get messages from previous flows in a linear chain

I recently had a scenario like below:
Flow_A ------> Flow_B ------> Flow_C ------> Flow_D
Where
Flow_A is the initiator and should pass messageA.
Flow_B should pass messageA+messageB.
Flow_C should pass messageA+messageB+messageC
Flow_D should pass messageA+messageB+messageC+messageD.
So, I was thinking of enhancing the headers with the old message and passing it on to the next flow, but that would become very bulky by the end.
Should I store the message somewhere and then pass the messageId in the header, so that the next flow can get the old message with the messageId?
What should be the best way to achieve this?
See Claim Check pattern: https://docs.spring.io/spring-integration/docs/current/reference/html/message-transformation.html#claim-check
1. You store a message using ClaimCheckInTransformer and get its id as the output payload.
2. You move this id into a header and produce the next message.
3. Repeat steps 1 and 2 for this second message to be ready for the third one.
4. And so on to prepare the environment for the fourth message.
To restore those messages you need to repeat the procedure in the opposite direction: get the id header from the message into the payload, remove that header, and call ClaimCheckOutTransformer to restore the stored message. I say "remove the header" so that the stack can be restored properly: the ClaimCheckOutTransformer has logic like this:
AbstractIntegrationMessageBuilder<?> responseBuilder = getMessageBuilderFactory().fromMessage(retrievedMessage);
// headers on the 'current' message take precedence
responseBuilder.copyHeaders(message.getHeaders());
So, without removing that header, the same message id is going to be carried into the next step and you will end up in a loop - a StackOverflowError.
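To make the check-in and restore steps concrete, here is a minimal standalone sketch (not the exact flow wiring), assuming a SimpleMessageStore; the header name previousMessageId is purely illustrative:
import java.util.UUID;

import org.springframework.integration.store.SimpleMessageStore;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.integration.transformer.ClaimCheckInTransformer;
import org.springframework.integration.transformer.ClaimCheckOutTransformer;
import org.springframework.messaging.Message;

public class ClaimCheckSketch {

    public static void main(String[] args) {
        SimpleMessageStore store = new SimpleMessageStore();
        ClaimCheckInTransformer checkIn = new ClaimCheckInTransformer(store);
        ClaimCheckOutTransformer checkOut = new ClaimCheckOutTransformer(store);

        // step 1: check messageA in; the resulting payload is its store id
        Message<String> messageA = MessageBuilder.withPayload("payload-A").build();
        UUID idA = (UUID) checkIn.transform(messageA).getPayload();

        // step 2: carry that id on the next message ("previousMessageId" is an illustrative header name)
        Message<String> messageB = MessageBuilder.withPayload("payload-B")
                .setHeader("previousMessageId", idA)
                .build();
        System.out.println(messageB.getHeaders());

        // restore: move the id back into a payload (no stale claim-check header) and check messageA out
        Message<?> restoreRequest = MessageBuilder.withPayload(idA).build();
        Message<?> restoredA = checkOut.transform(restoreRequest);
        System.out.println(restoredA.getPayload()); // "payload-A"
    }
}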
Another option is to store the messages manually somewhere, e.g. a MetadataStore, and collect their ids in a list as the payload. This way you don't need extra logic to deal with headers; everything is in a list in your payload, and you can consult the store at any time for any id in that list!

How to clean up the JdbcMetadataStore?

Initially our flow of communicating with Google Pub/Sub was as follows:
1. Application accepts the message
2. Checks that it doesn't exist in the idempotency store
3.1 If it doesn't exist - put it into the idempotency store (the key is the value of a unique header, the value is the current timestamp)
3.2 If it exists - just ignore this message
4. When processing is finished - send an acknowledge
5. In the acknowledge success callback - remove this message from the metadata store
Point 5 is wrong because theoretically we can get a duplicated message even after the message has been processed. Moreover, we found out that sometimes a message might not be removed even though the successful callback was invoked (Message is received from Google Pub/Sub subscription again and again after acknowledge [Heisenbug]). So we decided to update the value after the message is processed and replace the timestamp with a "FINISHED" string.
But sooner or later this table will become overcrowded, so we have to clean up messages in the MetadataStore. We can remove messages which are processed and were processed more than 1 day ago.
As was mentioned in the comments of https://stackoverflow.com/a/51845202/2674303, I can add an additional column to the metadataStore table where I could mark whether a message is processed. That is not a problem at all. But how can I use this flag in my cleaner? MetadataStore has only a key and a value.
In the acknowledge success callback - remove this message from the metadata store
I don't see a reason for this step at all.
Since you say that you store a timestamp in the value, you can analyze this table from time to time and remove definitely old entries.
In one of my projects we have a daily job in the DB to archive a table for better main-process performance - simply because we don't need the old data any more. For this we check a timestamp in the row to determine whether it should go into the archive or not. I wouldn't remove data immediately after processing, just because there is a chance of redelivery from the external system.
On the other hand, for better performance I would add an extra indexed column of timestamp type to that metadata table and populate its value via a trigger on each update or insert. Well, the MetadataStore just inserts an entry from the MetadataStoreSelector:
return this.metadataStore.putIfAbsent(key, value) == null;
So you need an on-insert trigger to populate that date column. This way you will know at the end of the day whether you need to remove an entry or not.
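For the cleanup itself, a daily scheduled job that deletes old FINISHED entries is enough. A rough sketch, assuming the default JdbcMetadataStore table name INT_METADATA_STORE and an extra CREATED_AT column populated by the on-insert trigger described above (both the column and this bean are assumptions, not part of the stock schema):
import java.sql.Timestamp;
import java.time.Duration;
import java.time.Instant;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// requires @EnableScheduling somewhere in the configuration
@Component
public class MetadataStoreCleaner {

    private final JdbcTemplate jdbcTemplate;

    public MetadataStoreCleaner(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // once a day, drop entries that were marked FINISHED more than a day ago
    @Scheduled(cron = "0 0 3 * * *")
    public void cleanup() {
        jdbcTemplate.update(
                "DELETE FROM INT_METADATA_STORE "
                        + "WHERE METADATA_VALUE = 'FINISHED' "
                        + "AND CREATED_AT < ?",
                Timestamp.from(Instant.now().minus(Duration.ofDays(1))));
    }
}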

Cucumber "OR" clause?

Is it possible to specify some kind of "OR" (alternative) clause in Cucumber?
I.e. if I have two valid responses to some event I would like my test to pass if either of them happens.
Something like that:
"When I press a button"
"Then I should see the text 'Boo'"
"Or I should see the text 'Foo'"
My particular scenario is a login screen. When I try to log in with some random password, I should see an error message "invalid password" if the server is working or a message "network error" if it is not.
You can't really define OR functionality using Gherkin, but you can pass in a list and check that one of the values in the list matches what was returned.
Define list:
Then the greeting service response will contain one of the following messages
|Hello how are you doing?|
|Welcome to the front door!|
|How has your day been?|
|Come right on in!|
Check list:
#Then("the get messages service response will contain one of the following messages")
public void text_matching_one_of_the_following(List<String> greetingMessages){
boolean success = false;
for(String message : greetingMessages){
assertTrue(textMatchesResponse(message));
}
}
OR is not supported. You can use Given, When, Then, And and But. Please refer to http://docs.behat.org/en/v2.5/guides/1.gherkin.html
But perhaps you could make use of the But keyword to achieve what you are looking for.

LibGDX key pressed/held

How can I distinguish between a key being pressed once and being held? Using this code:
Gdx.input.isKeyPressed(Keys.D)
returns true every frame while the button is held, but how do I get only a 'one time' press?
Simply use: Gdx.input.isKeyJustPressed(int key)
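For context, a minimal self-contained sketch of the difference between the two calls (the counters and log line are just for illustration):
import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.Input;

public class KeyDemo extends ApplicationAdapter {

    private int heldFrames;
    private int presses;

    @Override
    public void render() {
        if (Gdx.input.isKeyPressed(Input.Keys.D)) {
            heldFrames++;   // increments every frame while D is held down
        }
        if (Gdx.input.isKeyJustPressed(Input.Keys.D)) {
            presses++;      // increments only once per physical key press
            Gdx.app.log("KeyDemo", "held frames: " + heldFrames + ", presses: " + presses);
        }
    }
}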

Start key's token sorts after end token

I am trying to run some processes in Cassandra, using Cassandra 1.2.6 and ColumnFamilyInputFormat. I am getting the stack trace below.
I tried switching between RandomPartitioner and Murmur3Partitioner (I re-created the keyspaces from the beginning in both cases), but the problem persists.
How can I figure out why this is happening?
java.lang.RuntimeException: InvalidRequestException(why:Start key's token sorts after end token)
at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$WideRowIterator.maybeInit(ColumnFamilyRecordReader.java:453)
at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$WideRowIterator.computeNext(ColumnFamilyRecordReader.java:459)
at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$WideRowIterator.computeNext(ColumnFamilyRecordReader.java:406)
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at org.apache.cassandra.hadoop.ColumnFamilyRecordReader.getProgress(ColumnFamilyRecordReader.java:103)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.getProgress(MapTask.java:514)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:539)
at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
Caused by: InvalidRequestException(why:Start key's token sorts after end token)
at org.apache.cassandra.thrift.Cassandra$get_paged_slice_result.read(Cassandra.java:14168)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_get_paged_slice(Cassandra.java:769)
at org.apache.cassandra.thrift.Cassandra$Client.get_paged_slice(Cassandra.java:753)
at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$WideRowIterator.maybeInit(ColumnFamilyRecordReader.java:438)
... 12 more
The underlying call you are using is get_range_slices, which returns a range of rows. You can either input a start and end token, or a start and end key. It looks like you are using a start and end key.
The problem with this is that, with the RandomPartitioner (or Murmur3Partitioner), keys are stored in token order. The token is obtained by MD5-hashing (Murmur-hashing) the key, so token order is in general different from key order. You can therefore only make a get_range_slices request where the end token is greater than the start token. If you specify a key range, your request will fail if hash(start) > hash(end), which can happen even if start < end.
I don't know exactly what you're trying to do, but you probably want to use the token range. Or, if you are paging through results, set the end key to blank and use the last key returned as the next start key.
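To see why a key range can be invalid even when the keys themselves look ordered, here is a small illustration of (roughly) how RandomPartitioner derives tokens from keys via MD5; the key values are arbitrary:
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class TokenOrderDemo {

    // roughly what RandomPartitioner does: token = abs(MD5(key)) as a BigInteger
    static BigInteger token(String key) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(key.getBytes(StandardCharsets.UTF_8));
        return new BigInteger(digest).abs();
    }

    public static void main(String[] args) throws Exception {
        String start = "key1";
        String end = "key2";   // start < end as keys
        System.out.println("token(start) = " + token(start));
        System.out.println("token(end)   = " + token(end));
        // depending on the keys, token(start) can sort after token(end),
        // which is exactly the "Start key's token sorts after end token" error
    }
}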
