Performance Monitor Trigger Another Program - memory-leaks

I am trying to use the Windows Server 2008 performance counters to monitor a long-running process.
I am able to set up a Data Collector >> \Process\Private Bytes to collect the performance data (memory usage). I was wondering whether I can set a threshold on the performance counter and use it to trigger one of my local programs. In case the long-running program consumes too much memory on the server, it will trigger a Windows script to shut it down.
I realize this is a lame way of dealing with a memory-leak problem, but it is the only feasible solution for now.

1) In Windows Performance Monitor
1.1) Data Collector Sets >> Create a new user defined Data Collector Set (Choose option Created Manually (Advanced))
1.2) What type of data do you want to include ? >> Performance Counter Alert
1.3) Add a performance counter from the list (in this case, process >> private bytes >> choose a currently running process called xxxx)
1.4) Click OK to create the performance monitor
When the condition of this performance counter has been met, it will write an event log entry (in my case, event ID 2031).
1.5) Attach a Task to this event; in my case, I chose to run a program when event ID 2031 is logged
The following article helped me:
Perfmon counters to check memory leak
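As a sketch of the cleanup script such a task could launch (the PID argument and the shut_down name are my own assumptions, not anything Performance Monitor provides):

```python
import os
import signal
import sys

# Hypothetical cleanup script for the attached task to run when event 2031
# is logged. It assumes the leaking process's PID is passed as the first
# command-line argument; on Windows you could instead resolve the process
# by name (e.g. with taskkill /IM) before terminating it.
def shut_down(pid: int) -> None:
    """Ask the process to terminate."""
    os.kill(pid, signal.SIGTERM)

if __name__ == "__main__" and len(sys.argv) > 1:
    shut_down(int(sys.argv[1]))
```

The scheduled task would invoke this script with the runaway process's PID; how you obtain that PID (event data, tasklist, etc.) depends on your setup.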

Related

JMeter Thread User Limit - execute 5000 threads concurrently

I tried to test the maximum number of VUs my PC can handle. So far I have run 5000 VUs in GUI mode and it works fine, so I am wondering whether this is really the correct way to do it. Here's my laptop specification:
16GB DDR4, 512GB SSD, i5 7th gen
For now I have tested with 6 HTTP requests; the thread group information is below:
Can anyone verify whether I'm testing the limit the correct way? And why, while the test is running, is the number of users shown at the top right still 47, yet at the end of the test the summary report listener shows it finished with 5000 VUs? (Please refer to the JMeter logs in the first screenshot.)
You have only 1 loop and a 120-second ramp-up in the Thread Group, so JMeter starts roughly 41 users each second. Once started, each user executes the Samplers from top to bottom, and when the last sampler is done the thread is shut down.
So you're running into a situation where some threads have already finished their job and ended while others haven't been started yet. If you want to reach 5000 concurrent users, you need to tick Infinite next to Loop Count and specify a Thread Lifetime greater than your ramp-up time.
More information: JMeter Test Results: Why the Actual Users Number is Lower than Expected
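The arithmetic behind the ~47 figure can be sketched as follows (the ~1.1 s per-iteration time is an assumption for illustration; Python is used only to show the reasoning):

```python
def peak_concurrency(threads: int, ramp_up_s: float, iteration_s: float) -> float:
    """With Loop Count = 1, users start at threads/ramp_up_s per second and
    each lives for only one iteration, so arrivals and departures balance
    out at roughly start_rate * iteration_s concurrent users."""
    start_rate = threads / ramp_up_s          # users started per second
    return min(threads, start_rate * iteration_s)

# 5000 users over a 120 s ramp-up, assuming ~1.1 s per iteration:
# roughly the ~47 concurrent users seen in the JMeter UI.
print(round(peak_concurrency(5000, 120, 1.1)))
```

This is why ticking Infinite and setting a Thread Lifetime longer than the ramp-up is needed to actually hold 5000 users concurrently.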
According to JMeter Best Practices you should:
Run your test in command-line non-GUI mode
Disable or delete all the Listeners, as they don't add any value during the run and just consume resources
Once your non-GUI test execution is finished you can either open the .jtl results file using a Listener of your choice or generate HTML Reporting Dashboard out of it and analyze the results

Azure SSAS tabular full process: memory won't release

Can anyone tell me if I did anything wrong in my design? It seems that, for some reason, the memory used for processing the tabular models is not released correctly.
The whole story is:
I created a SSIS package to loop all Azure SSAS tabular models, and then process them by pre-defined types, e.g. full, dataOnly + recalculate, clear + full, etc. If any error occurs, the package will log the error and process next model.
In my test, I processed 40+ models in full process mode; most of them are very small except one, which could be around 400 MB. Some of them generated errors, as I expected. But then I noticed that the memory usage of the SSAS instance increased dramatically.
I tried to run DMV queries to inspect sessions and shrinkable memory allocations, but couldn't find any clues. In the end I had to restart the instance to release the memory. As demonstrated, after the service restart, memory usage dropped to just 5 GB.
To my understanding, when I select full process mode, the memory used for processing should be released afterwards. But in the demonstration below, the memory was not released correctly.
Has anyone encountered this problem before? Is it related to the loop + full process, and if so, what is the correct way to process multiple tabular models?
Thanks in advance.

Perl threads to execute a sybase stored proc parallel

I have written a Sybase stored procedure to move data for a given ID from certain tables (~50) in the primary DB to an archive DB. Since it's taking a very long time to archive, I am thinking of executing the same stored procedure in parallel, with a unique input ID for each call.
I manually ran the stored proc twice at the same time with different inputs and it seems to work. Now I want to use Perl threads (maximum 4 threads), with each thread executing the same procedure with a different input.
Please advise if this is the recommended way, or if there is any other, more efficient way to achieve this. If the experts' choice is threads, any pointers or examples would be helpful.
What you do in Perl does not really matter here: what matters is what happens on the side of the Sybase server. Assuming each client task creates its own connection to the database, it's all fine, and how the client achieves this makes no difference to the Sybase server. But do not use a model where the different client tasks try to share the same client-server connection, as those calls will never run in parallel.
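The one-connection-per-task pattern described above can be sketched like this (shown in Python for illustration, since the question's Perl code isn't given; archive_proc and conn_factory are placeholders for the real stored procedure and whatever opens a fresh Sybase connection, e.g. a new DBD::Sybase handle in the Perl version):

```python
from concurrent.futures import ThreadPoolExecutor

def archive_one(conn_factory, archive_id):
    # Each worker opens its OWN connection; connections are never shared.
    conn = conn_factory()
    try:
        return conn.execute("exec archive_proc ?", (archive_id,))
    finally:
        conn.close()

def archive_all(conn_factory, ids, max_workers=4):
    # At most 4 procs run in parallel, mirroring the planned 4 threads.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda i: archive_one(conn_factory, i), ids))
```

The key design point is that parallelism comes from independent server-side sessions, not from the client threading model itself.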
No 'answer' per se, but some questions/comments:
Can you quantify taking a very long time to archive? Assuming your archive process consists of a mix of insert/select and delete operations, do query plans and MDA data show fast, efficient operations? If you're seeing table scans, sort merges, deferred inserts/deletes, etc ... then it may be worth the effort to address said performance issues.
Can you expand on the comment that running two stored proc invocations at the same time seems to work? Again, any sign of performance issues for the individual proc calls? Any sign of contention (eg, blocking) between the two proc calls? If the archival proc isn't designed properly for parallel/concurrent operations (eg, eliminate blocking), then you may not be gaining much by running multiple procs in parallel.
How many engines does your dataserver have, and are you planning on running your archive process during a period of moderate-to-heavy user activity? If the current archive process runs at/near 100% CPU utilization on a single dataserver engine, then spawning 4 copies of the same process could see your archive process tying up 4 dataserver engines with heavy CPU utilization ... and if your dataserver doesn't have many engines ... combined with moderate-to-heavy user activity at the same time ... you could end up invoking the wrath of your DBA(s) and users. The net result is that you may need to make sure your archive process does not hog the dataserver.
One other item to consider, and this may require input from the DBAs ... if you're replicating out of either database (source or archive), increasing the volume of transactions per a given time period could have a negative effect on replication throughput (ie, an increase in replication latency); if replication latency needs to be kept at a minimum, then you may want to rethink your entire archive process from the point of view of spreading out transactional activity enough so as to not have an effect on replication latency (eg, single-threaded archive process that does a few insert/select/delete operations, sleeps a bit, then does another batch, then sleeps, ...).
It's been my experience that archive processes are not considered high-priority operations (assuming they're run on a regular basis, and before the source db fills up); this in turn means the archive process is usually designed so that it's efficient while at the same time putting a (relatively) light load on the dataserver (think: running as a trickle in the background) ... ymmv ...

How does Erlang sleep (at night?)

I want to run a small clean up process every few hours on an Erlang server.
I know of the timer module. I saw an example in a tutorial that used chained timer:sleep commands to wait for an event that would occur multiple days later, which I found strange. I understand that Erlang processes are unique compared to those in other languages, but the idea of a process/thread sleeping for days, weeks, or even months at a time seemed odd.
So I set out to find out the details of what sleeping actually does. The closest I found was a blog post mentioning that sleep is implemented with a receive timeout, but that still left the question:
What do these sleep/sleep-like functions actually do?
Is my process taking up resources as it sleeps? Would thousands of sleeping processes use as many resources as, say, thousands of processes servicing a recursive call that did nothing? Is there any performance penalty from repeatedly sleeping within processes, or from sleeping for long periods of time? Is the VM constantly expending resources to check whether the conditions to end a process's sleep are met?
And as a side note, I'd appreciate if someone could comment on if there is a better way than sleeping to pause for hours or days at a time?
That is the karma of any Erlang process: it waits or dies :o)
When a process is spawned, it starts executing until its last execution line, then dies, returning the last evaluation.
To keep a process alive, there is no other solution than to loop recursively in a never-ending succession of calls.
Of course, there are several conditions that make it stop or sleep:
- end of the loop: the process received a message telling it to stop the recursion
- a receive block: the process will wait until a message matching one entry in the receive block is posted in its message queue
- the VM scheduler stops it temporarily to give other processes access to the CPU
In the last two cases, execution will restart under the responsibility of the VM scheduler.
While waiting, a process uses no CPU time, but it keeps the exact same memory layout it had when it started waiting. Erlang/OTP offers a means to reduce this memory footprint to a minimum using the hibernate option (see the documentation of gen_server or gen_fsm, but in my opinion it is for advanced usage only).
A simple way to create a "signal" that fires a process at a regular (or almost regular) interval is indeed to use a receive block with a timeout (the timeout value is limited to 4294967295 ms), for example:
on_tick_sec(Module,Function,Arglist,Period) ->
    on_tick(Module,Function,Arglist,1000,Period,0).

on_tick_mn(Module,Function,Arglist,Period) ->
    on_tick(Module,Function,Arglist,60000,Period,0).

on_tick_hr(Module,Function,Arglist,Period) ->
    on_tick(Module,Function,Arglist,60000,Period*60,0).

on_tick(Module,Function,Arglist,TimeBase,Period,Period) ->
    apply(Module,Function,Arglist),
    on_tick(Module,Function,Arglist,TimeBase,Period,0);
on_tick(Module,Function,Arglist,TimeBase,Period,CountTimeBase) ->
    receive
        stop -> stopped
    after TimeBase ->
        on_tick(Module,Function,Arglist,TimeBase,Period,CountTimeBase+1)
    end.
and usage:
1> Pid = spawn(util,on_tick_sec,[io,format,["hello~n"],5]).
<0.40.0>
hello
hello
hello
hello
2> Pid ! stop.
stop
3>
[edit]
The timer module is a standard gen_server running in a separate process. All the functions in the timer module are public interfaces that perform a hidden gen_server:call or gen_server:cast to the timer server. This is a common way to hide the internals of a server and allow further evolution without impact on existing applications.
Internally, the server uses an ETS table to store all the actions it has to perform, along with each timer reference, and it uses its own mechanism to be woken when needed (in the end, the VM must take care of this).
So you can hibernate a process without any effect on the timer server's behavior. The hibernation mechanism is tricky; see the documentation at the erlang:hibernate/3 definition. You will see that you have to "rebuild" the context yourself, since everything is removed from the process context, and a tuple {Module,Function,Arguments} is stored by the system to restart your process when needed.
It also costs some time in garbage collection and process restart.
That is why I said it is really an advanced feature that needs a good reason to be used.
There is also erlang:hibernate/3 that puts a process in "deep sleep", minimizing memory usage for it.

SQLite issue - DB is locked workaround

I have 2 processes that connect to the same DB.
The first one is used to read from the DB and the second is used to write to the DB.
The first process sends write procedures to the second process for execution, via a message queue on Linux.
Every SQL statement goes through the prepare, step, finalize routine, where prepare and step are retried in a loop up to 10000 times until they succeed (I did this to overcome 'DB is locked' issues).
To add a table I do the following:
The first process sends a request via the msg-q to the second process to add a table and insert garbage into its rows, with journal_mode=OFF.
Then the first process checks for the existence of the table so it can continue with its algorithm. (It checks in a loop, with a usleep command between iterations.)
The problem is that the second process gets stuck in the step executing 'PRAGMA journal_mode=OFF;' because it says the DB is locked (here, too, I use a loop of 10000 iterations with usleep to check 10000 times for the DB to be free, as mentioned before).
When I add the closing of the connection to the first process's 'check for existing table' loop, the second process is OK. But now, when I add tables and values, I sometimes get 'callback requested query abort' in the step statement.
Any idea what is happening here?
Use WAL mode. It allows one writer and any number of readers without any problems. You don't need to check for the locked state and do retries, etc.
WAL limitation: the DB has to be on a local drive.
Performance: large transactions (1000s of inserts or similar) are slower than with the classic rollback journal, but apart from that the speed is very similar, sometimes even better. Perceived performance (the UI waiting for a DB write to finish) improves dramatically.
WAL is a newer technology, but it is already used in Firefox and on Android/iOS phones, etc. I did tests with 2 threads running at full speed - one writing and the other one reading - and did not encounter a single problem.
You may be able to simplify your app when adopting the WAL mode.
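A minimal sketch of that setup using Python's built-in sqlite3 module (the file name and table are arbitrary): each process gets its own connection, and WAL mode, once set, persists in the database file, so readers are not blocked by the writer and no retry loops are needed.

```python
import os
import sqlite3
import tempfile

# Arbitrary database path for the sketch.
path = os.path.join(tempfile.mkdtemp(), "app.db")

writer = sqlite3.connect(path)
writer.execute("PRAGMA journal_mode=WAL")   # WAL persists in the DB file
writer.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
writer.execute("INSERT INTO t (v) VALUES ('hello')")
writer.commit()

# A second connection (standing in for your reader process) sees WAL mode
# too and can read while the writer works.
reader = sqlite3.connect(path)
print(reader.execute("PRAGMA journal_mode").fetchone()[0])  # -> wal
print(reader.execute("SELECT v FROM t").fetchone()[0])      # -> hello
```

In the C API the equivalent is simply executing "PRAGMA journal_mode=WAL" once on the writer's connection after opening the database.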
