pt-table-checksum doesn't work with multi-source replication (MariaDB)

pt-table-checksum gets stuck in a never-ending loop with multi-source (channel-based) replication.
The multi-source replication in my environment:
n1 -> n2 (n1 configured as a named replication channel)
n2 -> n3
n1 -> n3 (n3 replicates from both the n1 and n2 channels)
Without channel-based replication, pt-table-checksum works fine (traditional/default replication runs without any issues).
Once I enabled channel-based replication (n1 -> n3, keeping only the n1 channel and removing the n2 channel), pt-table-checksum no longer goes through:
Replica n3-VirtualBox is stopped. Waiting.
Replica n3-VirtualBox is stopped. Waiting.
From the general log on n1:
Query SELECT 'pt-table-checksum keepalive'
Query SELECT 'pt-table-checksum keepalive'

I resolved the problem with the following method:
On the master DB server:
(i) To be on the safe side, I made a copy of the original /bin/pt-table-checksum:
cp /bin/pt-table-checksum /bin/pt-table-checksumorg
(ii) Open the /bin/pt-table-checksum file
vi /bin/pt-table-checksum
(iii) Go to line 8590:
Press Esc, type 8590, then press Shift+G
(iv) Replace the line
my @lagged_slaves = map { {cxn=>$_, lag=>undef} } @$slaves;
with
my @lagged_slaves = ();
The program then immediately works and returns the expected results.

I have the same problem. I also get an infinite loop of "Replica dbslave is stopped." However, if I interrupt the output by pressing Ctrl+C, I get the inconsistent database and table name. I verified it and found the inconsistency on the slave. Then I used pt


Excel removes my query connection on its own and gives me several error messages

I know that this is a really long post, but I'm not sure which part of my process is making my file crash, so I tried to detail everything I did on the way to the error messages.
First of all, I created a query in Kusto. It looks something like the following, but is really 160 lines of code; this is just a summarized version to show my working process.
First, in Session_Id_List I create a list of all distinct session IDs from the past day.
Then, in treatment_alarms1 I count the number of alarms for each type of alarm that was active during each session.
Then, in treatment_alarms2 I create a list which might look something like this:
1x Alarm_Type_Number1
30x Alarm_Type_Number2
7x Alarm_Type_Number3
and so on for each treatment, so I have a list of all alarms that were active for that treatment.
Lastly, I create a left outer join of Session_Id_List and treatment_alarms2. This means I will be shown all of the treatment IDs, even the ones that did not have any active alarms.
let _StartTime = ago(1d);
let _EndTime = ago(0d);
let Session_Id_List = Database1
| where StartTime >= _StartTime and StartTime <= _EndTime
| summarize by SessionId, SerialNumber, StartTime
| distinct SessionId, StartTime, SerialNumber;
let treatment_alarms1 = Database1
| where StartTime >= _StartTime and StartTime <= _EndTime and TranslatedData_Status == "ALARM_ACTIVE"
| summarize number_alarms = count() by TranslatedData_Value, SessionId
| project final_Value = strcat(number_alarms, "x ", TranslatedData_Value), SessionId;
let treatment_alarms2 = Database1
| where StartTime >= _StartTime and StartTime <= _EndTime and TranslatedData_Status == "ALARM_ACTIVE"
| join kind=inner treatment_alarms1 on SessionId
| summarize list_of_alarms_all = make_set(final_Value) by SessionId
| project SessionId, list_of_alarms_all;
let final_join = Session_Id_List
| join kind=leftouter treatment_alarms2 on SessionId;
final_join
| project SessionId, list_of_alarms_all
Then I put this query into Excel using the following method:
I go to Tools -> Query to Power BI on Kusto Explorer
I go to Data -> Get Data -> From Other Sources -> Blank Query
I go to advanced editor
I copy and paste my query and press "Done" at the bottom
At this point, the preview of my data shows "List" in the list_of_alarms_all column rather than the actual values of the list.
To fix this issue I first press the arrows on the header of the column
I press on "Extract Values"
I select Custom -> Concatenate using special characters -> Line Feed -> Press OK
That works fine for all of the IDs that do have alarms: it shows them as a list and tells me how many there are. The issue is with the IDs that did not have any treatments, where I get "Error" in the Excel preview. Once I press "Close & Load", the data is put on the worksheet and it looks fine; the "Error" values are all gone, and instead I get empty cells where they would be.
The problem now starts when I close the file and try to open it again.
First I get this message. So I press yes to try and enter the file.
Then I get this other message. The problem with this message is that it says I have the file open, when that is not true. I even tried restarting my laptop and opening the file again, and I would still get the message when in reality I don't have the file open.
Then I get this message, telling me that the connection to the query was removed.
So my problems here are that 1) I can't edit the file anymore unless I make a copy, because I keep getting the message saying that I already have the file opened and it is locked for editing, and 2) I would like to refresh this query with VBA, maybe once a week from now on, but I can't, because when I save the file the connection to the query is deleted by Excel itself.
I'm not sure why this is happening; I'm guessing it's because of the "Error" values I get in the empty cells when I try to extract the values from the lists. If anybody has any information on how I can fix this so I don't get these error messages, please let me know.
I was not able to reproduce your issue; however, there are some things you might want to try.
Within ADX, you could wrap your query in a function, so you won't have to copy a large piece of code into Excel.
You could deal with null values (which are what give you the Error values) already in your query. Note the use of coalesce.
// I used datatable to mimic your query results
.create-or-alter function SessionAlarms()
{
datatable (SessionId:int,list_of_alarms_all:dynamic)
[
1, dynamic([10,20,30])
,2, dynamic([])
,3, dynamic(null)
]
| extend list_of_alarms_all = coalesce(list_of_alarms_all, dynamic([]))
}
You can use the Power Query ADX connector and copy your query/function as is.
If you haven't dealt with null values in your KQL, you can take care of the errors in Excel by using Replace Errors.
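In the Power Query editor, the Replace Errors step generates an M expression along the lines of the following (a sketch; the previous step name and the replacement value are illustrative, adjust them to your own query):

```m
= Table.ReplaceErrorValues(#"Extracted Values", {{"list_of_alarms_all", null}})
```

Replacing the errors with null gives you the same empty cells you currently see after Close & Load, but without the error states that seem to be corrupting the connection on save.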

DAG source returns false from emitFromTraverser, and processor waits for all elements loaded by the source before it starts processing

USECASE
HazelcastJet version 0.6.1
Hazelcast version 3.10.2
Given this (simplified) version of a DAG:
VERTICES
S1
Source that emits 5 items of type A (read from DB with partitioning)
Local parallelism = 1
S2
Source that emits 150K items of type B (Iterator that read from DB in batch of 100 with partitioning)
Local parallelism = 1
AD
Processor that adapts types A->A1 and B->B1 and emits one by one
FA
Processors.filterP that accepts only items of type A1 and emits one by one
FB
Processors.filterP that accepts only items of type B1 and emits one by one
CL
Processor that first accumulates all items of type A1; then, when it receives an item of type B1, it enriches it with some data taken from the proper A1 and emits it, one by one.
WR
Sink that writes B1
Local parallelism = 1
NOTE:
Just to give meaning to the filter processors: the DAG has other sources that flow into the same adapter AD and then continue along other paths through filter processors.
EDGES
S1 --> AD
S2 --> AD
AD --> FA (from ordinal 0)
AD --> FB (from ordinal 1)
FA --> CL (to ordinal 0 with priority 0 distributed and broadcast)
FB --> CL (to ordinal 1 with priority 1)
CL --> WR
PROBLEM
If source S2 has "few" items to load (e.g. 15K), emitFromTraverser never returns false.
If source S2 has "many" items to load (e.g. 150K), emitFromTraverser returns false after:
All A1 items have been processed by CL
About 30% of the B1 items have already been transmitted to CL, but none of them have been processed by CL (DiagnosticProcessor logs that elements are sent to CL but not processed)
S2 code for reference:
@Override
protected void init(Context context) throws Exception {
    super.init(context);
    this.iterator = new BQueryIterator(querySupplier, batchSize);
    this.traverser = Traversers.traverseIterator(this.iterator);
}

@Override
public boolean complete() {
    return emitFromTraverser(this.traverser);
}
QUESTION
Is it correct that CL doesn't process items until the source ends?
Is the usage of priority + distributed + broadcast correct on CL Vertex?
UPDATE
It seems that completeEdge on CL edge 1 is never called.
Can someone tell me why?
Thanks!
You suffer from a deadlock caused by priority. Your DAG branches from AD and then rejoins in CL, but with a priority.
AD --+---- FA ----+-- CL
      \          /
       +-- FB --+
Setting a priority means that no item from the lower-priority edge is processed before all items from the higher-priority edge have been processed. AD will eventually get blocked by backpressure from the lower-priority path, which CL is not consuming. So AD is blocked because it can't emit to the lower-priority edge, and CL is blocked because it is still waiting for items from the higher-priority edge, resulting in a deadlock.
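The interaction of backpressure and priority can be sketched with a toy simulation (Python, purely illustrative; the queue capacity and the one-queue-per-edge model are simplifications I made up, not Jet's actual internals). AD emits to a bounded high-priority and a bounded low-priority queue, and CL refuses to drain the low-priority queue until the high-priority edge is complete:

```python
from collections import deque

QUEUE_CAPACITY = 4  # bounded "inbox" between vertices

def run(items_a, items_b, steps=1000):
    """Toy model: AD emits A items to a high-priority queue and B items
    to a low-priority queue; CL drains the low-priority queue only after
    the high-priority edge is complete (i.e. AD has emitted everything)."""
    high, low = deque(), deque()
    pending = [("A", x) for x in items_a] + [("B", x) for x in items_b]
    ad_done = False
    processed = []
    for _ in range(steps):
        # AD: emit the next item if its target queue has room.
        if pending:
            kind, item = pending[0]
            queue = high if kind == "A" else low
            if len(queue) < QUEUE_CAPACITY:  # backpressure: blocked when full
                queue.append(item)
                pending.pop(0)
        else:
            ad_done = True
        # CL: honour the edge priority.
        if high:
            processed.append(high.popleft())
        elif ad_done and low:  # low-priority edge only after high completes
            processed.append(low.popleft())
        if ad_done and not high and not low:
            return processed, True   # everything processed
    return processed, False          # stuck: deadlock
```

Under these assumptions, a run completes only when the low-priority items fit into the bounded queue; once they overflow it, AD can no longer emit, the high-priority edge never completes, and CL never starts draining the low-priority edge, which is the mechanism described above.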
In your case, you can resolve it by making 2 AD vertices, each processing items from one of the sources:
S1 --- AD1 ----+--- CL
              /
S2 --- AD2 --+
After a while I understood the problem...
The CL processor cannot know when all the A1 items have been processed, because all items come from the AD processor.
So it needs to wait for everything coming from AD before starting to process the B1 items.
I'm not sure, but probably after a lot of B items have been loaded, all the inbox buffers in the DAG become full and can't accept any more B items from S2, while at the same time CL cannot process the B1 items to make progress: that's the deadlock.
Would the DAG be able to detect this?
I don't know Jet that deeply, but it would be nice to have such a warning.
Maybe there is some logging to enable?
I hope someone can confirm my answer and suggest how to avoid and detect these problems.

How can I search one table for a value from another table?

This is for a simple script I'm writing for an old-school MUD game, for those who know what that is.
Basically, I am searching through one table (which I'm reading from GMCP) and trying to see whether any of the values in that table match any of the values in another table, in which I store the values I'm looking for.
I've successfully managed to do it with single values by using a "for" loop to grab the value from GMCP and store it as a variable, and then using another "for" loop to search the other table to see if any of the values there match the variable.
The trouble is, it only works for a single value and misses all the others if there is more than one value I need to check in that table.
The code I have is as follows:
for _, v in pairs(gmcptable) do
  checkvalue = v
end

for _, v in pairs(mytable) do
  if v == checkvalue then
    echo("yay")
  else
    echo("no!")
  end
end
Again, this works fine for GMCP tables with one value, but fails with more. I tried doing this too:
for _, v in pairs(gmcptable) do
  checkvalue = v
  for _, v in pairs(mytable) do
    if v == checkvalue then
      echo("yay")
    else
      echo("no!")
    end
  end
end
My hope was that it would set the variable, run the second for loop to check the variable, and then repeat for the next value in the GMCP table, since the second loop was inside the first, but that didn't work either. I also tried making my own function to add to the mix and simplify things:
function searchtable(table, element)
  for _, v in pairs(table) do
    if v == element then
      return true
    else
      return false
    end
  end
end

for _, v in pairs(gmcptable) do
  if searchtable(mytable, v) == true then
    echo("yay")
  else
    echo("no!")
  end
end
That was a bust also... I'm sure I'm just overlooking something or showing what an amateur I am, but I've googled loads and tried everything I can think of. I'm just self-taught and only recently started understanding how tables and for loops even work. Hopefully someone out there can get back to me with something that works soonish!
UPDATE!
@Piglet Okay, so gmcptable was actually me trying to simplify the question for those who could answer the coding part. gmcptable is really a long list of tables received by my client over the connection from the game's server, so in actuality I have three tables I'm parsing data from: "gmcp.Char.Items.List.items", "gmcp.Char.Items.Add" and "gmcp.Char.Items.Remove".
gmcp.Char.Items.List.items is the list of everything in the room I'm in within the game. gmcp.Char.Items.Add is sent each time anything enters the room (aside from other players), and gmcp.Char.Items.Remove is the same, but for when anything leaves the room. I'm trying to use this information to create a targeting table that will automatically add desired targets to my targeting queue and remove them when they are not in the room.
The room list (gmcp.Char.Items.List) is updated only when I enter or exit the room, and possibly when I look; for now I'm assuming it doesn't update when I look, because that will be a whole other problem to solve later.
I currently have a simple script in what my client IDs as a trigger. It is set to fire once when I log into the game; the script defines the tables holding the values I cross-reference the GMCP tables with to figure out whether something should be added to my target table, and it also defines the target table as empty, which is meant to ensure that both tables exist and are defined for the duration of the session.
I then added three separate scripts that parse the three GMCP tables and figure out whether an item is on my desired-targets list and, if so, add it (or, in the case of the remove table, check whether it is currently in the targets table and, if so, remove it). Below are the current scripts I'm using (which have changed several times since yesterday and might change again before I get a look at any future replies). I will also include what the GMCP tables in question look like, plus any error or debug details I'm currently seeing from my client.
log on trigger
match on > ^Password correct\. in perl regex
bashtargets = {}
bashlist = {
"a baby rat",
"a young rat",
"a rat",
"an old rat",
"a black rat"
}
(the above trigger appears to be working properly and I can print the tables accurately)
script in the room
event handlers > gmcp.Char.Items.List
for _, v in pairs(gmcp.Char.Items.List.items) do
  bashname = v.name
  bashid = v.id
  for _, v in pairs(bashlist) do
    if v == bashname then
      table.insert(bashtargets, bashid)
    end
  end
end
script addcheck
event handlers "gmcp.Char.Items.Add"
for _, v in pairs(gmcp.Char.Items.Add) do
  addname = v.name
  addid = v.id
  for _, v in pairs(bashlist) do
    if v == addname then
      table.insert(bashtargets, addid)
    end
  end
end
script removecheck
event handlers "gmcp.Char.Items.Remove"
for _, v in pairs(gmcp.Char.Items.Remove) do
  delid = v.id
  for _, v in pairs(bashtargets) do
    if v == delid then
      table.remove(bashtargets, delid)
    end
  end
end
gmcp table "gmcp.Char.Items"
{
  Remove = {
    location = "room",
    item = {
      id = "150558",
      name = "a filthy gutter mutt",
      attrib = "m"
    }
  },
  Add = {
    location = "room",
    item = {
      id = "150558",
      name = "a filthy gutter mutt",
      attrib = "m"
    }
  },
  List = {
    location = "room",
    items = {
      {
        id = "59689",
        name = "a statue of a decaying gargoyle",
        icon = "profile"
      },
      {
        id = "84988",
        name = "a gas lamp"
      },
      {
        id = "101594",
        attrib = "t",
        name = "a monolith sigil",
        icon = "rune"
      },
      {
        id = "196286",
        name = "a wooden sign"
      },
      {
        id = "166410",
        name = "Lain, the Lamplighter",
        attrib = "m"
      }
    }
  }
}
I have parsed the information successfully several times, so I've got the right tables and syntax and whatnot where GMCP is concerned.
Using this, I have also managed to get it to half work: currently the setup seems to capture a single target at a time even if there are dozens, and it sometimes oddly adds the same target 3-5 times; I haven't been able to figure out why yet.
These two error messages have been output by my client repeatedly; I have no idea what to do about them or how to fix them. "left the room" and "entered the room" are the names currently assigned to the scripts for adding and removing data from the tables in my client.
[ERROR:] object:<event handler function> function:<left the room>
<Lua error:[string "return left the room"]:1: '<eof>' expected near 'the'>
[ERROR:] object:<event handler function> function:<entered the room>
<Lua error:[string "return entered the room"]:1: '<eof>' expected near 'the'>
I have no idea what '<eof>' means, though, or why it's expected near 'the'; it's all got my head pounding.
I can see through my client's debug feature that all the handlers are being sent by the server, so it's not the GMCP. I'm not actually seeing any bugs in the debug feature (which, by the way, is separate from the error feature that keeps putting out the two errors I mentioned).
Anyway, that's my update. Hopefully that gives people a better handle on what I'm doing wrong, so I can get this figured out and learn something new.
Thanks again in advance, and extra thanks to you, @Piglet: I definitely learned something new from your answer and thought it was very helpful.
In your first attempt you have two separate loops. You overwrite checkvalue for every element in gmcptable, so once you enter your second loop, checkvalue holds the value assigned last in your first loop. You therefore only have one checkvalue, and you only run across your table once, as the second loop runs only once.
for _, v in pairs(gmcptable) do
  checkvalue = v
end

for _, v in pairs(mytable) do
  if v == checkvalue then
    echo("yay")
  else
    echo("no!")
  end
end
Your second attempt should work, if I understood your problem correctly.
You iterate over every element of gmcptable and compare it to every element in mytable, so whenever gmcptable contains a value that is also in mytable, you should get a "yay".
for _, v in pairs(gmcptable) do
  checkvalue = v
  for _, v in pairs(mytable) do
    if v == checkvalue then
      echo("yay")
    else
      echo("no!")
    end
  end
end
One remark on your third attempt with a function: you should not name a parameter table, as you will then have no access to the global table functions inside your function. A call to table.sort, for example, would result in an error, because it would index your local parameter table instead.
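A further remark: the searchtable function also returns false as soon as the first element fails to match, so it only ever inspects one entry of the table. A version that only reports failure after checking every entry (a sketch; the parameter is renamed to t to avoid the shadowing issue above) looks like this:

```lua
function searchtable(t, element)
  for _, v in pairs(t) do
    if v == element then
      return true
    end
  end
  -- Only report failure once every entry has been checked.
  return false
end
```

With this version, the loop over gmcptable from your third attempt works as intended.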
Okay, so I tinkered with it a bit more tonight and kept getting the same error. Eventually I googled the error and came to understand that it was an end-of-file error. The client I use takes the name of a script as the name for a function if you don't declare one in the actual script, or at least that is my understanding of it, so I hadn't bothered defining the function. When I added an extra end to the script, the client threw another error because the function wasn't defined; so I guess the client adds its own definition to the function but doesn't add its own end, which created the mix-up. Long story short: the solution was to define each of the scripts as a function and add another end to close the function, and that fixed the problem.
Thanks again to @Piglet for his answer; it was very helpful and dead accurate too! Thanks, mate!

Meteor connection count

What would be the best way to record a live count of connections using the Meteor framework? I have a requirement to share the number of users online live, and have resorted to creating a collection and just replacing a record on initialization for each user, but the count seems to reset. What I have so far is below. Thanks in advance.
Counts = new Meteor.Collection "counts"

if Meteor.is_client
  if Counts.findOne()
    new_count = Counts.findOne().count + 1
    Counts.remove {}
    Counts.insert count: new_count

  Template.visitors.count = ->
    Counts.findOne().count

if Meteor.is_server
  reset_data = ->
    Counts.remove {}
    Counts.insert count: 0

  Meteor.startup ->
    reset_data() if Counts.find().count() is 0
You have a race condition when you rely on "get count value, remove from collection, insert new count": two clients can read the same value X at the same time. It's not the way to go.
Instead, make each client insert "itself" into a collection, with a unique id and the "time" it was inserted. Use a Meteor.Method to implement a heartbeat that refreshes this "time".
Clients with too old a time can then be deleted from the collection; use a timer on the server to remove idle clients.
You can check some of this here:
https://github.com/francisbyrne/hangwithme/blob/master/server/game.js
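The heartbeat-and-prune logic can be sketched in plain JavaScript, independent of Meteor's collection API (the timeout value and all names here are illustrative, not from any framework):

```javascript
// Each connected client is tracked with a unique id and the time of its
// last heartbeat; a server-side timer periodically drops stale clients.
const HEARTBEAT_TIMEOUT_MS = 30 * 1000;

function heartbeat(connections, clientId, now) {
  // Called from the client's periodic heartbeat method: refresh the
  // client's timestamp, or register the client if it is new.
  const existing = connections.find((c) => c.clientId === clientId);
  if (existing) {
    existing.lastSeen = now;
  } else {
    connections.push({ clientId, lastSeen: now });
  }
}

function pruneIdle(connections, now) {
  // Keep only clients seen within the timeout window; the live count
  // is simply the length of the remaining array.
  return connections.filter((c) => now - c.lastSeen < HEARTBEAT_TIMEOUT_MS);
}
```

In a real Meteor app, the array would be a server-side collection, heartbeat would be a Meteor.Method the client calls on an interval, and pruneIdle would run from a server timer; the published count is then just the number of surviving documents.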

Getting into deadlock while using rowcount delete query

Set rowcount 50000

declare @i int
select @i = 1

WHILE (@i > 0)
BEGIN
    DELETE table1
    FROM table1 (index index1)
    WHERE HIST_Timestamp < '2011/11/26'

    select @i = @@rowcount
END
The query sometimes encounters a deadlock and terminates. I am not able to figure out what is going wrong. Please help me!
A deadlock occurs when transaction A locks a record and then has to wait for transaction B to unlock a record, while transaction B is waiting on a record already locked by transaction A.
If you really want to know why the deadlock is happening, you can do it with this command:
sp_configure "print deadlock information", 1
Creating a useful index for the query allows the delete statement to use page or row locks, improving concurrent access to the table. If creating an index for the delete transaction is not possible, you can perform the operation in a cursor, with frequent commit transaction statements to reduce the number of page locks.
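A batched variant that commits each chunk, so that locks are released between batches, might look like this (a sketch in Sybase ASE syntax, reusing the table, index hint and cutoff date from the question):

```sql
set rowcount 50000

declare @i int
select @i = 1

while (@i > 0)
begin
    begin transaction

    delete table1
    from table1 (index index1)
    where HIST_Timestamp < '2011/11/26'

    select @i = @@rowcount

    -- Committing per batch releases the page/row locks held so far,
    -- giving concurrent transactions a chance to proceed.
    commit transaction
end

-- Restore the default row limit when done.
set rowcount 0
```

Shorter lock hold times reduce the window in which two transactions can each be waiting on locks the other holds.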
