Working with time ranges - Blue Prism

Suppose an employee's scheduled working hours are 9:30 AM to 5:30 PM, and they punch in at 9:25 AM or 9:45 AM. I'm trying to find out if there is logic, code, or any other kind of help for this problem: I want the bot to identify whether a punch-in or punch-out time is within +/-30 minutes of the scheduled time. Is there a VBO for time manipulation, or a way to hold time ranges in a Collection or Data Item?

Why not just use a Decision stage with basic arithmetic to check the lower and upper bounds of your distance from the expected time?
Load the expected punch-in time and the actual punch-in time into Time-typed Data Items, then set a third, TimeSpan-typed Data Item to the maximum deviation you're willing to accept between the scheduled and actual times.
Since Blue Prism's internal expression evaluation engine doesn't support absolute value, you'll have to check that both permutations of the difference fall within the bound of the Acceptable Deviation. Thus, your Decision would look something like the following:
[Scheduled Start Time] - [Punch-in Time] < [Acceptable Deviation] AND [Punch-in Time] - [Scheduled Start Time] < [Acceptable Deviation]
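If you want to sanity-check the logic outside Blue Prism, here is a minimal Python sketch of the same two-sided comparison (the dates and times are hypothetical):

from datetime import datetime, timedelta

scheduled_start = datetime(2024, 1, 1, 9, 30)  # hypothetical scheduled punch-in
punch_in = datetime(2024, 1, 1, 9, 25)         # hypothetical actual punch-in
acceptable_deviation = timedelta(minutes=30)

# Checking both orderings avoids needing abs(), mirroring the Decision stage
within_range = (scheduled_start - punch_in < acceptable_deviation
                and punch_in - scheduled_start < acceptable_deviation)
print(within_range)  # True: 9:25 is only 5 minutes early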


How to speed up a CLOCK_MONOTONIC list of timestamps

I have a list of SECONDS.MICROSECONDS CLOCK_MONOTONIC timestamps like those below:
5795.944152
5795.952708
5795.952708
5795.960820
5795.960820
5795.969092
5795.969092
5795.977502
5795.977502
5795.986061
5795.986061
5795.994075
5795.994075
5796.002382
5796.002382
5796.010860
5796.010860
5796.019241
5796.019241
5796.027452
5796.027452
5796.035709
5796.035709
5796.044158
5796.044158
5796.052453
5796.052453
5796.060785
5796.060785
5796.069053
They each represent a particular action to be made.
What I need to do, preferably in Python (though the programming language doesn't really matter), is to speed up the actions: something like applying a 2x, 3x, etc. speed-up to this list. The values need to decrease so that they match an Nx speed increase.
I thought of dividing each timestamp by the desired speed factor, but it seems it doesn't work that way.
As described and suggested by @RobertDodier, I managed to find a quick and simple solution to my issue:
speed = 2
t0 = timestamps[0]  # the first timestamp is the fixed reference point
speedtimestamps = [t0 + (t - t0) / speed for t in timestamps]
Just make sure to remove the first line, which contains the t0 timestamp.
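For example, with speed = 2 and the first few distinct timestamps above, the gaps shrink by half (a quick check in plain Python):

timestamps = [5795.944152, 5795.952708, 5795.960820]
speed = 2
t0 = timestamps[0]
print([t0 + (t - t0) / speed for t in timestamps])
# roughly [5795.944152, 5795.948430, 5795.952486]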

Linux function profiler output

I would like to profile a code flow in the kernel in order to understand where the bottleneck is. I found that the function profiler does just that for me:
https://lwn.net/Articles/370423/
Unfortunately the output I see doesn't make sense to me.
From the link above, the output of the function profiler is:
Function Hit Time Avg
-------- --- ---- ---
schedule 22943 1994458706 us 86931.03 us
Where "Time" is the total time spend inside this function during the run. So if I have function_A that calls function_B, if I understood the output correctly, the "Time" measured for function_A includes the duration of function_B as well.
When I actualy run this on my pc I see another new column displayed for the output:
Function Hit Time Avg s^2
-------- --- ---- --- ---
__do_page_fault 3077 477270.5us 155.109 us 148746.9us
(more functions..)
What does s^2 stand for? It can't be the standard deviation, because it's higher than the average...
I measured the total duration of this code flow from user space and got 400 ms. When I summed up the s^2 column, it came close to 400 ms, which makes me think it is perhaps the "pure" time spent in __do_page_fault, not including the duration of the nested functions.
Is this correct? I couldn't find any documentation for the s^2 column, so I'm hesitant about my conclusions.
Thank you!
You can see the code that calculates the s^2 column here. It seems this is the variance (the standard deviation squared). If you take the square root of the number in your example, you get about 385 us, which is much closer in magnitude to the average.
The standard deviation is still greater than the mean, but that is fine: for a heavily skewed distribution (many fast page faults and a few very slow ones), the standard deviation can legitimately exceed the mean.
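A quick check of that reading, using the numbers taken directly from the sample output above:

import math

avg = 155.109   # Avg column, in us
s2 = 148746.9   # s^2 column, in us^2 (the variance, if this reading is right)
print(math.sqrt(s2))  # ~385.7 us -- the standard deviation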

How to work with the COUNTER in Nagios or RRD?

I have the following problem:
I want to keep statistics for data that needs to be constantly increasing, for example the number of visits to a link. After some time the visit count is reset and starts again from the beginning, but I want the graph to keep increasing continuously. For this purpose I use a site that does the graphing; it can be configured with COUNTER, GAUGE, AVERAGE, and so on, and I want to use COUNTER. The system is built on Nagios.
My question is how to use this COUNTER. I assume it is the same as the RRD one, but I ran into some strange behaviour when creating such a COUNTER.
I submit the value 1, then 2, and expect the chart to show 3, but when I do this it doesn't work. After a reset, for example, I submit 1 again and expect the total to become 4.
Can anyone who has dealt with this tell me briefly how this COUNTER works?
I have seen COUNTER used for traffic on routers and the like, but I want to apply it to a regular graph that just increases.
The RRD data type COUNTER will convert the input data into a rate, by taking the difference between this sample and the last sample, and dividing by the time interval (note that data normalisation also takes place and this is dependent on the Interval setting of the RRD)
Thus, updating with a constantly increasing count will result in a rate-of-change value to be graphed.
If you want to see your graph actually constantly increasing, i.e. showing the actual count of packets transferred (for example) rather than the rate of transfer, you would need to use type GAUGE, which assumes any rate conversion has already been done.
If you want to submit the rate values (e.g. 2 in the last minute), but display the overall constantly increasing total (in other words, the inverse of how the COUNTER data type works), then you would need to store the values as GAUGE, and use a CDEF in your RRDgraph command of the form CDEF:x=y,PREV,+ to obtain the running total. Of course, you would only have this relative to the start of the graph time window; maybe a separate call would let you determine what base value to use.
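To make the difference concrete, here is a minimal Python sketch of the rate conversion that the COUNTER type performs (normalisation and counter wraparound are ignored; the sample data is made up):

# Each sample is (timestamp_in_seconds, counter_value)
samples = [(0, 100), (300, 130), (600, 190)]

# COUNTER: difference between consecutive samples, divided by elapsed time
for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
    rate = (v1 - v0) / (t1 - t0)
    print(f"rate at t={t1}: {rate:.2f}/sec")  # 0.10/sec, then 0.20/sec

# GAUGE would instead store 100, 130, 190 as-is: an ever-growing line.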
As you use Nagios, you may like to investigate Nagios add-ons such as pnp4nagios which will handle much of the graphing for you.

Explain the difference in how MRTG measures incoming data

Everyone knows that MRTG needs at least one value to be passed on its input.
Among its per-target options, MRTG has 'gauge', 'absolute', and a default (no option) behaviour for 'what to do with incoming data', or how to count it.
Let's look at an elementary yet popular example:
We pass cumulative data from network interface statistics of 'how many packets were received by the interface'.
We take it from /proc/net/dev or look at the ifconfig output for a certain network interface. The number of received bytes increases every time; it's cumulative.
So, as I imagine it, there are two types of possible statistics:
1. How fast the value changes over the time interval. In other words, activity.
2. A simple, as-is growing graph that just draws every new value per minute (or any other time interval).
The first graph will be jumpy (activity). The second will just grow over time.
I have read rrdtool's and MRTG's docs twice and can't understand which of the options mentioned above counts what.
I suppose (I am not sure) that 'gauge' draws values as-is, without any differentiation calculations (good for measuring how much memory or CPU is used every 5 minutes), and that the default and 'absolute' behaviours try to calculate the rate between neighbouring measurements, but what's the difference between the last two?
Can you explain in a simple manner which behaviour stands behind which of the three possible options?
Thanks in advance.
MRTG assumes that everything is being measured as a rate (even if it isn't a rate).
Type 'gauge' assumes that you have already calculated the rate; thus, the provided value is stored as-is (after Data Normalisation). This is appropriate for things like CPU usage.
Type 'absolute' assumes the value passed is the count since the last update. Thus, the value is divided by the number of seconds since the last update to get a rate in thingies per second. This is rarely used, and only for certain unusual data sources that reset their value on being read - eg, a script that counts the number of lines in a log file, then truncates the log file.
Type 'counter' (the default) assumes the value passed is a constantly growing count, which may wrap around at 32 or 64 bits. The difference between the value and its previous value is divided by the number of seconds since the last update to get a rate in thingies per second. If it sees the value decrease, it will assume a counter wraparound at the 32- or 64-bit boundary. This is appropriate for something like network traffic counters, which is why it is the default behaviour (MRTG was originally written for network traffic graphs).
Type 'derive' is like 'counter', but will allow the counter to decrease (resulting in a negative rate). This is not possible directly in MRTG but you can manually create the necessary RRD if you want.
All types subsequently perform Data Normalisation to adjust the timestamp to a multiple of the Interval. This will be more noticeable for Gauge types where the value is small than for counter types where the value is large.
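A minimal Python sketch of how the three types treat incoming values (normalisation and wraparound are ignored; the numbers are invented):

interval = 300  # seconds between updates

def gauge(value, prev):
    # 'gauge': the value is already a rate; store it as-is
    return value

def absolute(value, prev):
    # 'absolute': the value is the count since the last update
    return value / interval

def counter(value, prev):
    # 'counter' (default): the value is an ever-growing total
    return (value - prev) / interval

print(counter(1_030_000, 1_000_000))  # 100.0 -- bytes/sec from raw counters
print(absolute(30_000, None))         # 100.0 -- same rate from a reset-on-read count
print(gauge(100.0, None))             # 100.0 -- already a rate, stored unchanged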
For information on this, see Alex van der Bogaerdt's excellent tutorial

Processing of sensor data

I am working on a system with laser trip detectors (if something breaks the laser path, I get a 1 on the output of the laser receiver).
I have many of these trip detectors and I want to detect if one is malfunctioning, but I do not know how to go about doing this. The lasers should not trip all that often, maybe a few times a day.
A typical case would be the laser getting tripped for 0.5-2 seconds, or brief intermittent tripping for a short period, possibly again after that (within 2-10 seconds)...
Is there a good statistical methodology for checking whether a sensor is malfunctioning?
You could just create a "profile" for each sensor that includes the mean/min/max of how often it is tripped, how long it stays tripped, how long the time is between one trip and the next, and so on, for example using the data from some period such as the last week or month.
Then you can compare the current state of a sensor to its profile; when the deviation is "big enough", you can assume an exceptional situation, perhaps a malfunction. The hardest part is tuning the threshold for the deviation from the profile which, when exceeded, triggers your "malfunction handling".
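As a rough sketch of that profiling idea in Python (the z-score test and the threshold of 3 are my assumptions, not something prescribed by the answer):

from statistics import mean, stdev

def build_profile(trip_durations):
    # Profile a sensor from historical trip durations, in seconds
    return {"mean": mean(trip_durations), "stdev": stdev(trip_durations)}

def looks_abnormal(profile, duration, threshold=3.0):
    # Flag a trip whose duration deviates too far from the profile
    if profile["stdev"] == 0:
        return duration != profile["mean"]
    z = abs(duration - profile["mean"]) / profile["stdev"]
    return z > threshold

history = [0.7, 1.2, 0.9, 1.8, 0.6, 1.1]  # hypothetical last month's trips
profile = build_profile(history)
print(looks_abnormal(profile, 1.4))   # False: within the normal range
print(looks_abnormal(profile, 30.0))  # True: possibly stuck or blocked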
