How to implement simple traffic shaping in Go - multithreading

As an exercise, I am trying to implement a simple traffic shaper in Go.
The API is:
push(int): puts an int in the shaper
out(): outputs 1 or more int(s) from the shaper.
push is called by clients, and its call rate can't be controlled.
out is called roughly every 1 ms and can output one or more ints. It tries to maintain a constant output rate of r ints per second, but it can output more if the internal buffer of the shaper is in danger of filling up. However, the output should be as uniform as possible. For example:
Out: 1 1 2 2 2 1 is better than
Out: 1 1 5 1 1
since the second example is bursty (there's an output of 5 ints).
I have an idea of how to do this using the leaky bucket algorithm.
My question:
How to implement in Go that output is called semi-regularly roughly at 1ms ticks?

Use the standard library's time.Ticker with a 1 ms period, and drain the shaper's buffer on every tick.
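A minimal sketch of how this could look, combining a 1 ms time.Ticker with a leaky-bucket-style credit counter. The Shaper type, its fields, and the credit handling here are illustrative, not a reference implementation:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Shaper is an illustrative leaky-bucket shaper: Push adds items at any
// rate, and a 1 ms ticker drains them at roughly `rate` items per second.
type Shaper struct {
	mu     sync.Mutex
	buf    []int
	rate   float64 // target items per second
	credit float64 // fractional items accumulated per tick
}

func NewShaper(rate float64) *Shaper {
	return &Shaper{rate: rate}
}

// Push may be called by any goroutine at any rate.
func (s *Shaper) Push(v int) {
	s.mu.Lock()
	s.buf = append(s.buf, v)
	s.mu.Unlock()
}

// Run drains the buffer on every tick until stop is closed, sending
// drained items to out. Each 1 ms tick earns rate/1000 "credits";
// each whole credit releases one item, which keeps the output smooth
// rather than bursty. out is assumed to be buffered, since the send
// happens while the lock is held.
func (s *Shaper) Run(out chan<- int, stop <-chan struct{}) {
	t := time.NewTicker(time.Millisecond)
	defer t.Stop()
	for {
		select {
		case <-t.C:
			s.mu.Lock()
			s.credit += s.rate / 1000
			for s.credit >= 1 && len(s.buf) > 0 {
				out <- s.buf[0]
				s.buf = s.buf[1:]
				s.credit--
			}
			s.mu.Unlock()
		case <-stop:
			return
		}
	}
}

func main() {
	s := NewShaper(1000) // 1000 items/s => about one item per 1 ms tick
	out := make(chan int, 64)
	stop := make(chan struct{})
	go s.Run(out, stop)
	for i := 1; i <= 5; i++ {
		s.Push(i)
	}
	for i := 1; i <= 5; i++ {
		fmt.Println(<-out) // items emerge in FIFO order, roughly one per tick
	}
	close(stop)
}
```

To output more than one int when the buffer is in danger of filling, you could additionally force extra releases whenever len(s.buf) exceeds a high-water mark.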

Related

How to scale data in Python

I am currently working on a turbidity meter that returns a voltage from 0 to 5.0 (this number changes depending on how turbid the water is, so a lower voltage reading indicates more turbid water).
What I am trying to do is take the voltage reading and convert it to a reading that expresses the turbidity of the water (so a voltage reading of 4.8 would equal 0, and a voltage reading of 1.2 would equal 4000).
I have written some code using MicroLogix, and I know that it has a block that takes the incoming reading and scales the outgoing reading between a Min and Max number that you supply (for example, in MicroLogix I can take a 4-20mA signal in and scale the output to mean a water level in a tank, where 4mA = 0ft and 20mA = 12ft).
Is there scaling code for Python, or how do I go about doing this? Thanks!
You could do something like this:
def turbidity(voltage):
    return 4000 - (voltage - 1.2) / (4.8 - 1.2) * 4000

print(turbidity(1.2), turbidity(2.4), turbidity(4.8))
Which would print out:
4000.0 2666.6666666666665 0.0
If you want integers instead, add an int() call like so:
4000 - int((voltage-1.2)/(4.8-1.2)*4000)
Good luck with your turbidity!

Creating audio level meter - signal normalization

I have a program that tracks an audio signal in real time. For every processed sample I can read its value, which lies in the range <-1, 1>.
I would like to create (and later display) an audio level meter. From what I understand, I need to keep converting the audio signal on each channel to dB in real time, and then display the dB values for each channel in some graphical form of bars.
I am a bit lost on how to do this, though it should be a simple matter. Would just normalizing from <-1, 1> to <0, 1> (like [n-sample + 1]/2) and then calculating 20*log10 of each upcoming sample do it?
You can't plot the signal directly, as it is constantly varying between positive and negative values.
Therefore you need to average out the strength of the signal every so many samples.
Say you're sampling at 44.1kHz, perhaps you might choose 4410 samples so you're updating your display 10 times per second.
So you calculate the RMS of your 4410 samples - see http://en.wikipedia.org/wiki/Root_mean_square
The RMS value is always positive.
You can then convert this to dB:
dBV = 20 * log10(Vrms)
This assumes that your maximum signal -1 to +1 corresponds to -1 to +1 volt. You will need to do further adjustments if not.
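The recipe above (RMS over a block, then 20*log10) can be sketched in Go. The block size of 4410 matches the 10-updates-per-second example, and the ±0.5 square-wave test signal is just illustrative:

```go
package main

import (
	"fmt"
	"math"
)

// rmsToDB converts a block of samples in [-1, 1] to a dB level:
// first the RMS of the block, then 20*log10(rms). A full-scale
// signal (all +/-1) gives 0 dB; quieter signals are negative.
func rmsToDB(samples []float64) float64 {
	var sumSq float64
	for _, s := range samples {
		sumSq += s * s
	}
	rms := math.Sqrt(sumSq / float64(len(samples)))
	return 20 * math.Log10(rms)
}

func main() {
	// 4410 samples at 44.1 kHz = one 100 ms meter update.
	block := make([]float64, 4410)
	for i := range block {
		if i%2 == 0 {
			block[i] = 0.5
		} else {
			block[i] = -0.5
		}
	}
	// A +/-0.5 square wave has RMS 0.5, i.e. about -6.02 dB.
	fmt.Printf("%.2f dB\n", rmsToDB(block))
}
```

In a real meter you would call rmsToDB once per block per channel and feed the result to the bar display; note that rmsToDB returns negative infinity for an all-zero block, which you would clamp to the bottom of the meter.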

How to work with the COUNTER in Nagios or RRD?

I have the following problem:
I want to graph statistics for a value that should be constantly increasing, for example the number of visits to a link. After some time the visit counter is restarted and begins again from zero, but I want the graph to keep increasing continuously. The system I use for this supports the data types COUNTER, GAUGE, AVERAGE, and so on, and I want to use COUNTER. The system is built on Nagios.
My question is how to use this COUNTER. I assume it works the same as in RRD, but I ran into some strange behaviour when creating such a COUNTER:
I submit the value 1, then 2, and I expect the chart to show 3, but that is not what happens. And after a restart, submitting 1 again should make it 4.
Could anyone who has dealt with these things tell me briefly how this COUNTER works?
I have seen that COUNTER is used for traffic on routers, etc., but I want to use it for an ordinary graph that just keeps increasing.
The RRD data type COUNTER converts the input data into a rate, by taking the difference between this sample and the last sample and dividing by the time interval (note that data normalisation also takes place, and this depends on the Interval setting of the RRD).
Thus, updating with a constantly increasing count will result in a rate-of-change value being graphed.
If you want to see your graph actually constantly increasing, i.e. showing the actual count of packets transferred (for example) rather than the rate of transfer, you would need to use type GAUGE, which assumes any rate conversion has already been done.
If you want to submit the rate values (e.g. 2 in the last minute) but display the overall constantly increasing total (in other words, the inverse of how the COUNTER data type works), then you would need to store the values as GAUGE and use a CDEF in your RRDgraph command of the form CDEF:x=y,PREV,+ to obtain the ongoing total. Of course, you would only have this relative to the start of the graph time window; a separate call might let you determine what base value to use.
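As a sketch of that CDEF trick in an rrdtool graph command (the file name, DS name, and colour are made up; note that PREV is unknown on the very first sample, so it is usually guarded with UN so the total starts from 0):

```
rrdtool graph total.png \
  DEF:y=visits.rrd:hits:AVERAGE \
  CDEF:x=y,PREV,UN,0,PREV,IF,+ \
  LINE2:x#0000ff:"running total"
```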
As you use Nagios, you may like to investigate Nagios add-ons such as pnp4nagios which will handle much of the graphing for you.

Minimal time to compute the minimal value

I was asked the following question: what is the minimal time needed to compute the minimum of an unsorted array of 32 integers, given 8 cores and comparisons that each take 1 minute? My solution is 6 minutes, assuming each core operates independently. Divide the array into 8 portions of 4 integers each; the 8 cores concurrently compute the local min of each portion, which takes 3 minutes (3 comparisons per portion). Then 4 cores compute the local mins of those 8 local mins in 1 minute, 2 cores reduce the remaining 4 mins in 1 minute, and 1 core computes the global min of the last 2 mins in 1 minute. Therefore, the total is 6 minutes. However, this didn't seem to be the answer the interviewer was looking for. So what do you think about it? Thank you
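For concreteness, the 6-minute scheme can be sketched in Go, with goroutines standing in for the 8 cores (the function name is made up, and the 1-minute comparison cost is of course not simulated):

```go
package main

import (
	"fmt"
	"sync"
)

// minOf32 mirrors the 6-minute scheme: 8 workers each take a portion
// of 4 and find its local minimum (3 sequential comparisons each),
// then the 8 local minima are reduced pairwise: 8 -> 4 -> 2 -> 1.
func minOf32(xs [32]int) int {
	const workers = 8
	locals := make([]int, workers)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			part := xs[w*4 : w*4+4]
			m := part[0]
			for _, v := range part[1:] { // 3 comparisons per worker
				if v < m {
					m = v
				}
			}
			locals[w] = m
		}(w)
	}
	wg.Wait()
	// Pairwise reduction of the 8 local minima: 3 more rounds,
	// one "minute" each in the interview model.
	for n := workers; n > 1; n /= 2 {
		for i := 0; i < n/2; i++ {
			if locals[2*i+1] < locals[2*i] {
				locals[i] = locals[2*i+1]
			} else {
				locals[i] = locals[2*i]
			}
		}
	}
	return locals[0]
}

func main() {
	var xs [32]int
	for i := range xs {
		xs[i] = (i*17)%31 + 1 // arbitrary test values; the minimum is 1
	}
	fmt.Println(minOf32(xs)) // prints 1
}
```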
If you assume that the program is CPU-bound, which is fairly ridiculous, but seems to be where you were going with your analysis, then you need to decide how to divide the work to gain something by multithreading.
8 pieces of 4 integers each seems arbitrary. Interviewers usually like to see a thought process. Being mathematically general, let us compute total orderings over subsets of the problem. How hard is it to compute a total ordering, and what is the payoff?
Total ordering of N items, picking arbitrarily when two items are equal, requires N*(N-1)/2 comparisons and eliminates (N-1) items. Let's make a table.
N = 2: 1 comparison, 1 elimination.
N = 3: 3 comparisons, 2 eliminations.
N = 4: 6 comparisons, 3 eliminations.
Clearly it's most efficient to work with pairs (N = 2), but the other operations are useful if resources would otherwise be idle.
Minute 1-3: Eliminate 24 candidates using operations with N = 2, 8 at a time.
Minute 4: Now there are 8 candidates. Keeping N = 2 would leave 4 cores idle. Setting N = 3 uses 2 more cores per operation and yields 1 more elimination. So do two operations with N = 3 and one with N = 2, eliminating 2+2+1 = 5 candidates. Or use 6 cores on one group with N = 4 and the other 2 cores on two pairs with N = 2, eliminating 3+1+1 = 5. The result is the same.
Minute 5: Only 3 candidates remain, so set N = 3 for the last round.
If you keep the CPUs busy, it takes 5 minutes using a mix of two higher-level abstractions. More energy is spent because this isn't the most efficient way to solve the problem, but it is faster.
I'm going to assume that comparing two "integers" is a black box that takes 1 minute to complete, but we can cache those comparisons and only do any particular comparison once.
There's not much you can do until you're down to 8 candidates (3 minutes). But you don't want to leave cores sitting idle if you can help it. Let's say that the candidates are numbered 1 through 8. Then in minute 4 you can compare:
1v2 3v4 5v6 7v8 AND 1v5 2v6 3v7 4v8
If we're lucky, this eliminates 6 candidates, and we can use minute 5 to pick the winner.
If we're not lucky, this leaves 4 candidates (for example, 1, 3, 6, and 8), and that step didn't gain us anything over the original approach. In minute 5, we need to throw everything at it (to beat the original approach). But there are 8 cores, and C(4,2) = 6 possible pairings. So we can make every possible comparison (and leave 2 cores idle), and get our winner in 5 minutes.
Those are really big integers, too big to fit into CPU cache, so multithreading doesn't really help you — this problem is I/O bound. (I suppose it depends on the specifics of the I/O bottleneck, but let's not pick nits.)
Since you need exactly N-1 comparisons, the answer is 31.

FIO Flexible IO tester for repetitive data access patterns

I am currently working on a project and I need to test my prototype with repetitive data access patterns. I came across fio, the flexible I/O tester for Linux.
Fio has many options, and I want it to produce a workload that accesses the same blocks of a file, the same number of times, over and over again. I also need those accesses to not be equal among the blocks. For instance, say fio creates a file named "test.txt"
and this file is divided into 10 blocks. I need the workload to read a specific subset of these blocks, with a different number of IOs each, over and over again. Say it chooses to access blocks 3, 7 and 9; then I want it to access these in a specific order, a specific number of times each, over and over again. If the workload can be described by N passes, I want it to be something like this:
1st pass: read block 3 10 times, read block 7 5 times, read block 9 2 times.
2nd pass: read block 3 10 times, read block 7 5 times, read block 9 2 times.
...
Nth pass: read block 3 10 times, read block 7 5 times, read block 9 2 times.
Question 1: Can the above workload be produced with Fio? If yes, How?
Question 2: Is there a mailing list, forum, website, community for Fio users?
Thank you,
Nick
You can follow the mailing list at http://www.spinics.net/lists/fio/index.html.
The HOWTO at http://www.bluestop.org/fio/HOWTO.txt will also help.
This is actually quite a tricky thing to do. The closest you'll get with plain parameters is one of the non-uniform random distributions (see random_distribution in the HOWTO), but that only lets you say "re-read blocks A, B, C more often than blocks X, Y, Z"; you won't be able to control the exact counts.
An alternative is to write an iolog that can be replayed that has the exact sequence you're looking for (see Trace file format v2 in the HOWTO).
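For illustration, a version-2 iolog for one pass might start like the following, assuming a 4 KiB block size over a file at /tmp/test.txt (so block 3 starts at offset 12288, block 7 at 28672, block 9 at 36864; the path and block size are assumptions, not anything fio requires):

```
fio version 2 iolog
/tmp/test.txt add
/tmp/test.txt open
/tmp/test.txt read 12288 4096
/tmp/test.txt read 12288 4096
/tmp/test.txt read 28672 4096
/tmp/test.txt read 36864 4096
/tmp/test.txt close
```

Duplicate each read line until a block has exactly the count you want, repeat the whole body once per pass, and replay the trace with something like fio --name=replay --read_iolog=trace.log.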
