Algorithm for random numbers generation - node.js

I don't know which algorithm I should use. If it matters, the platform is Node.js.
Volume = 100; Range = 5...15; N = 10.
I need to generate N numbers from Range that sum to Volume. What is the best way to do it?

Here's a quick, naive implementation.
There might be a numerical trick that allows making this faster, but this works. :)
Note that it's of course possible to pass in a combination of parameters that makes this function never complete, but for total = 100, min = 5, max = 15, n = 10 it does find solutions, e.g.
[ 14, 5, 11, 6, 10, 13, 9, 13, 14, 5 ]
[ 15, 13, 14, 10, 9, 8, 10, 8, 6, 7 ]
[ 8, 11, 11, 9, 12, 15, 5, 9, 6, 14 ]
[ 9, 12, 11, 6, 14, 12, 10, 7, 11, 8 ]
[ 9, 8, 9, 10, 11, 10, 12, 7, 14, 10 ]
[ 9, 9, 5, 14, 10, 13, 11, 13, 9, 7 ]
function generateNumbers(total, min, max, n) {
    while (true) {
        const numbers = [];
        let sum = 0;
        while (true) {
            let num;
            if (numbers.length === n - 1) {
                // Fill in the last number with the remainder to satisfy the problem
                num = total - sum;
                if (num < min || num === 0 || num > max) {
                    // Out of bounds, this solution is invalid
                    break; // Try again.
                }
            } else {
                num = Math.floor(min + Math.random() * (max + 1 - min));
            }
            numbers.push(num);
            sum += num;
            if (numbers.length === n && sum === total) {
                // Solution found.
                return numbers;
            }
            if (sum > total) {
                // Overshot, retry.
                break;
            }
        }
    }
}
console.log(generateNumbers(100, 5, 15, 10));
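As for a faster trick: one possibility is to start every number at min and hand out the remaining total - n * min one unit at a time to random slots that are still below max. This always terminates whenever n * min <= total <= n * max, though the distribution of results differs somewhat from the retry version. A rough sketch:

function generateNumbersDirect(total, min, max, n) {
    if (total < n * min || total > n * max) {
        throw new Error("No solution exists for these parameters");
    }
    // Start everything at the minimum, then distribute the leftover volume.
    const numbers = new Array(n).fill(min);
    let remaining = total - n * min;
    while (remaining > 0) {
        const i = Math.floor(Math.random() * n);
        if (numbers[i] < max) {
            numbers[i]++;
            remaining--;
        }
    }
    return numbers;
}
console.log(generateNumbersDirect(100, 5, 15, 10));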

Related

Range function in M?

Is it possible to create a numerical range in M? For example something like:
let
    x = range(1,10)   // {1,2,3,4,5,6,7,8,9,10}, from 1 to 10, increment by 1
    x = range(1,10,2) // {1,3,5,7,9}, from 1 to 10, increment by 2
For simple scenarios, a..b might be appropriate. Some examples:
let
    firstList = {1..10}, // {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
    secondList = {1, 5, 12, 14..17, 18..20}, // {1, 5, 12, 14, 15, 16, 17, 18, 19, 20}
    thirdList = {Character.ToNumber("a")..100, 110..112}, // {97, 98, 99, 100, 110, 111, 112}
    fourthList = {95..(10 * 10)} // {95, 96, 97, 98, 99, 100}
in
    fourthList
Otherwise, maybe try a custom function which internally uses List.Generate:
let
    range = (inclusiveStart as number, inclusiveEnd as number, optional step as number) as list =>
        let
            interval = if step = null then 1 else step,
            conditionFunc =
                if (interval > 0) then each _ <= inclusiveEnd
                else if (interval < 0) then each _ >= inclusiveEnd
                else each false,
            generated = List.Generate(() => inclusiveStart, conditionFunc, each _ + interval)
        in generated,
    firstList = range(1, 10), // {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
    secondList = range(1, 10, 2), // {1, 3, 5, 7, 9}
    thirdList = range(1, 10, -1), // {} due to the combination of negative "step", but "inclusiveEnd" > "inclusiveStart"
    fourthList = range(10, 1, 0), // {} current behaviour is to return an empty list if 0 is passed as "step" argument
    fifthList = range(10, 1, -1), // {10, 9, 8, 7, 6, 5, 4, 3, 2, 1}
    sixthList = range(10, 1, 1) // {} due to the combination of positive "step", but "inclusiveEnd" < "inclusiveStart"
in
    sixthList
The above range function should be able to generate both ascending and descending sequences (see examples in code).
Alternatively, you could create a custom function which uses List.Numbers internally by first calculating what the count argument needs to be. But I chose to go with List.Generate.
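A rough sketch of that List.Numbers variant (illustrative only; the count argument is derived from the start, end and step):

let
    rangeViaNumbers = (inclusiveStart as number, inclusiveEnd as number, optional step as number) as list =>
        let
            interval = if step = null then 1 else step,
            // derive how many items List.Numbers needs to produce; clamp at 0 for empty ranges
            count = if interval = 0 then 0
                    else List.Max({0, Number.RoundDown((inclusiveEnd - inclusiveStart) / interval) + 1}),
            generated = List.Numbers(inclusiveStart, count, interval)
        in generated,
    ascending = rangeViaNumbers(1, 10, 2), // {1, 3, 5, 7, 9}
    descending = rangeViaNumbers(10, 1, -1) // {10, 9, 8, 7, 6, 5, 4, 3, 2, 1}
in
    descending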

How to find the numbers larger than a selected number in a series, in ascending order, in NumPy?

I have a series of randomly scrambled numbers. I want to pick a number (say X), and then find and write the numbers larger than X in ascending order. I'm using Python and NumPy.
EXAMPLE:
Series of random numbers:
4, 8, 5, 9, 3, 11, 17, 19, 9, 15, 16
X=4, Then:
4, 8, 9, 11, 17, 19
X=8, Then:
8, 9, 11, 17, 19
X=3, Then:
3, 11, 17, 19
Please note that when we pick X, we want X itself to be at the beginning of the ascending series, meaning that the count should start from X.
Also note that we don't want to reorder the numbers by position; there is no position change, only reading and writing the numbers in ascending order. Subsequent numbers in the sequence that are smaller than X should be ignored. Thank you.
EDIT:
def get_elements(get_from, get_by):
    return [(get_from[i], i) for i in range(len(get_from)) if get_by[i] == 0]

def ordered_position():
    ordered_lst = [0] * len(data_arr)
    new_val = 1
    while True:
        print(new_val)
        ge = get_elements(data_arr, ordered_lst)
        if new_val >= len(data_arr) or not ge: break
        first_val, idx_fist_val = ge.pop(0)
        ordered_lst[idx_fist_val] = (first_val, new_val)
        for item, idx in ge:
            if data_arr[idx] >= first_val:
                ordered_lst[idx] = (first_val, new_val)
                first_val = item
        new_val += 1
    return ordered_lst
You can use np.maximum.accumulate like so:
import numpy as np

a = np.array([4, 8, 5, 9, 3, 11, 17, 19, 9, 15, 16])
X = 4
# running maximum, starting from the first occurrence of X
withreps = np.maximum.accumulate(a[np.argmax(a == X):])
# keep only the positions where the running maximum actually increases
result = withreps[np.where(np.diff(withreps, prepend=withreps[0] - 1))]
result
# array([ 4,  8,  9, 11, 17, 19])
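For example, picking X = 3 from the question's series with the same approach:

X = 3
withreps = np.maximum.accumulate(a[np.argmax(a == X):])
withreps[np.where(np.diff(withreps, prepend=withreps[0] - 1))]
# array([ 3, 11, 17, 19])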

How would I add numbers between 1 and 20 to a list using a for loop without using range()

Is there a replacement for range that can be used in a for loop to add the numbers between 1 and 20 to a list?
A simple while loop:
i = 1
l = []
while i <= 20:
    l.append(i)
    i = i + 1
Outputs
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
You can use np.linspace(1, 20, num=20)
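Note that np.linspace returns a NumPy array of floats; a small sketch of turning it into a plain list of ints:

import numpy as np

# 20 evenly spaced values from 1 to 20 inclusive, cast back to Python ints
nums = np.linspace(1, 20, num=20).astype(int).tolist()
print(nums)  # [1, 2, 3, ..., 20]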
If we are to avoid using a builtin as crucial as range, we might as well have some fun doing so.
l = []
while len(l) < 20:
    l.append(len(l) + 1)
print(l)
Output
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]

Selection Sort using Groovy

Here's my code for performing a selection sort using Groovy:
class SelectionSorting {
    void sorting() {
        def sortmethod = {
            List data = [ 1, 5, 2, 3, 7, 4, 6, 8, 9, ]
            def n = data.size()
            println "Before sort : " + data
            for(def i in 0..n) {
                def position=i
                for(def j in i+1..n) {
                    if(data[position] > data[i])
                        position=i
                }
                if(position!=i) {
                    swap(data[i],data[position])
                }
            }
            println "After sort : " + data
        }
        sortmethod()
    }
}
SelectionSorting s = new SelectionSorting()
s.sorting()
However, the output I see is still an unsorted array:
Before sort : [1, 5, 2, 3, 7, 4, 6, 8, 9]
After sort : [1, 5, 2, 3, 7, 4, 6, 8, 9]
I am very new to Groovy. I'm supposed to insert the logic within a closure only. I'm not sure what I need to change in the closure I have created in my code above. Please help.
You are using the wrong index when calculating the position of the minimum value; you should use j instead of i (added println to show iterations):
def selectionSort = { data ->
    int n = data.size()
    for (int i = 0; i < n - 1; i++) {
        // Find the index (position) of the minimum value
        position = i
        for (int j = i + 1; j < n; j++) {
            if (data[j] < data[position]) {
                position = j
            }
        }
        // Swap
        if (position != i) {
            temp = data[position]
            data[position] = data[i]
            data[i] = temp
        }
        println data
    }
    println "result: $data"
}
So that selectionSort([1,5,2,4,3,8,7,9]) yields:
[1, 5, 2, 4, 3, 8, 7, 9]
[1, 2, 5, 4, 3, 8, 7, 9]
[1, 2, 3, 4, 5, 8, 7, 9]
[1, 2, 3, 4, 5, 8, 7, 9]
[1, 2, 3, 4, 5, 8, 7, 9]
[1, 2, 3, 4, 5, 7, 8, 9]
[1, 2, 3, 4, 5, 7, 8, 9]
result: [1, 2, 3, 4, 5, 7, 8, 9]
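If the logic has to stay inside a closure within the original class, as the question requires, the fixed closure can be wired back in roughly like this (same algorithm as above, just wrapped in the question's class structure):

class SelectionSorting {
    void sorting() {
        def selectionSort = { data ->
            int n = data.size()
            for (int i = 0; i < n - 1; i++) {
                // find the index of the minimum value in the unsorted tail
                def position = i
                for (int j = i + 1; j < n; j++) {
                    if (data[j] < data[position]) {
                        position = j
                    }
                }
                // swap it into place
                if (position != i) {
                    def temp = data[position]
                    data[position] = data[i]
                    data[i] = temp
                }
            }
        }
        List data = [1, 5, 2, 3, 7, 4, 6, 8, 9]
        println "Before sort : " + data
        selectionSort(data)
        println "After sort : " + data
    }
}
new SelectionSorting().sorting()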

OS X GCD multi-thread concurrency uses more CPU but executes slower than single thread

I have a method which does a series of calculations which take quite a bit of time to complete. The objects that this method does computations on are generated at runtime and can range from a few to a few thousand. Obviously it would be better if I could run these computations across several threads concurrently, but when I try that, my program uses more CPU yet takes longer than running them one-by-one. Any ideas why?
let itemsPerThread = (dataArray.count / 4) + 1
for var i = 0; i < dataArray.count; i += itemsPerThread
{
    let name = "ComputationQueue\(i)".bridgeToObjectiveC().cString()
    let compQueue = dispatch_queue_create(name, DISPATCH_QUEUE_CONCURRENT)
    dispatch_async(compQueue,
    {
        let itemCount = i + itemsPerThread < dataArray.count ? itemsPerThread : dataArray.count - i - 1
        let subArray = dataArray.bridgeToObjectiveC().subarrayWithRange(NSMakeRange(i, dataCount)) as MyItem[]
        self.reallyLongComputation(subArray, increment: increment, outputIndex: self.runningThreads-1)
    })
    NSThread.sleepForTimeInterval(1)
}
Alternatively:
If I run the same thing with a single dispatch_async call on the whole dataArray rather than the subarrays, it completes much faster while using less CPU.
What you want to do (this is my guess) should look like this:
//
// main.swift
// test
//
// Created by user3441734 on 12/11/15.
// Copyright © 2015 user3441734. All rights reserved.
//
import Foundation
let computationGroup = dispatch_group_create()
var arr: Array<Int> = []
for i in 0..<48 {
    arr.append(i)
}
print("arr \(arr)")

func job(inout arr: Array<Int>, workers: Int) {
    let count = arr.count
    let chunk = count / workers
    guard chunk * workers == count else {
        print("array.count divided by workers must be an integer !!!")
        return
    }
    let compQueue = dispatch_queue_create("test", DISPATCH_QUEUE_CONCURRENT)
    let syncQueue = dispatch_queue_create("aupdate", DISPATCH_QUEUE_SERIAL)
    for var i = 0; i < count; i += chunk
    {
        let j = i
        var tarr = arr[j..<j+chunk]
        dispatch_group_enter(computationGroup)
        dispatch_async(compQueue) { () -> Void in
            for k in j..<j+chunk {
                // long time computation
                var z = 100000000
                repeat {
                    z--
                } while z > 0
                // update with chunk
                tarr[k] = j
            }
            dispatch_async(syncQueue, { () -> Void in
                for k in j..<j+chunk {
                    arr[k] = tarr[k]
                }
                dispatch_group_leave(computationGroup)
            })
        }
    }
    dispatch_group_wait(computationGroup, DISPATCH_TIME_FOREVER)
}

var stamp: Double {
    return NSDate.timeIntervalSinceReferenceDate()
}
print("running on dual core ...\n")
var start = stamp
job(&arr, workers: 1)
print("job done by 1 worker in \(stamp-start) seconds")
print("arr \(arr)\n")
start = stamp
job(&arr, workers: 2)
print("job done by 2 workers in \(stamp-start) seconds")
print("arr \(arr)\n")
start = stamp
job(&arr, workers: 4)
print("job done by 4 workers in \(stamp-start) seconds")
print("arr \(arr)\n")
start = stamp
job(&arr, workers: 6)
print("job done by 6 workers in \(stamp-start) seconds")
print("arr \(arr)\n")
with results
arr [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]
running on dual core ...
job done by 1 worker in 5.16312199831009 seconds
arr [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
job done by 2 workers in 2.49235796928406 seconds
arr [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24]
job done by 4 workers in 3.18479603528976 seconds
arr [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36]
job done by 6 workers in 2.51704299449921 seconds
arr [0, 0, 0, 0, 0, 0, 0, 0, 8, 8, 8, 8, 8, 8, 8, 8, 16, 16, 16, 16, 16, 16, 16, 16, 24, 24, 24, 24, 24, 24, 24, 24, 32, 32, 32, 32, 32, 32, 32, 32, 40, 40, 40, 40, 40, 40, 40, 40]
Program ended with exit code: 0
You can use the following pattern for distributing a job between any number of workers (the number of workers that gives you the best performance depends on the worker definition and on the resources available in your environment). Generally, for any kind of long-running calculation (transformation) you can expect some performance gain; in a dual-core environment, up to 50%. If your worker uses highly optimized functions that already use more cores 'by default', the performance gain can be close to nothing :-)
// generic implementation
// 1) job distributes data between workers as fairly as possible
// 2) workers do their tasks in parallel
// 3) the order of the resulting array reflects the input array
// 4) there is no requirement for the worker block to return
//    the same type as the result of your 'calculation'
func job<T,U>(arr: [T], workers: Int, worker: T->U)->[U] {
    guard workers > 0 else { return [U]() }
    var res: Dictionary<Int,[U]> = [:]
    let workersQueue = dispatch_queue_create("workers", DISPATCH_QUEUE_CONCURRENT)
    let syncQueue = dispatch_queue_create("sync", DISPATCH_QUEUE_SERIAL)
    let group = dispatch_group_create()
    var j = min(workers, arr.count)
    // i = (chunk start, chunk end, items not yet assigned)
    var i = (0, 0, arr.count)
    var chunk: ArraySlice<T> = []
    repeat {
        let a = (i.1, i.1 + i.2 / j, i.2 - i.2 / j)
        i = a
        chunk = arr[i.0..<i.1]
        dispatch_group_async(group, workersQueue) { [i, chunk] in
            let arrs = chunk.map{ worker($0) }
            dispatch_sync(syncQueue) { [i, arrs] in
                res[i.0] = arrs
            }
        }
        j--
    } while j != 0
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER)
    let idx = res.keys.sort()
    var results = [U]()
    idx.forEach { (idx) -> () in
        results.appendContentsOf(res[idx]!)
    }
    return results
}
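A possible usage of the generic job above (Swift 2 syntax, matching the answer's code; the squaring worker is just an illustration):

// distribute a simple squaring worker over 4 workers; order is preserved
let input = Array(0..<16)
let squares = job(input, workers: 4) { (x: Int) -> Int in
    return x * x
}
print(squares)
// [0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225]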
You need to:
1. Get rid of the 1 second sleep. This artificially reduces the degree of parallel execution, because you're waiting before starting the next thread. You are starting 4 threads, so you are artificially delaying the start (and potentially the finish) of the final thread by 3 seconds.
2. Use a single concurrent queue, not one per dispatch block. A concurrent queue will start blocks in the order in which they are dispatched, but does not wait for one block to finish before starting the next one, i.e. it will run blocks in parallel (a minimal sketch follows this list).
3. NSArray is a thread-safe class. I presume that it uses a multiple-reader/single-writer lock internally, which means there is probably no advantage to be gained from creating a set of subarrays. You are, however, incurring the overhead of creating each subArray.
4. Multiple threads running on different cores cannot talk to the same cache line at the same time. The typical cache line size is 64 bytes, which seems unlikely to cause a problem here.
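A minimal sketch of the question's loop rewritten along those lines (Swift 2-era GCD; dataArray, increment and reallyLongComputation are the question's own names, and the slicing/outputIndex details are illustrative):

// one shared concurrent queue plus a dispatch group, no sleep between dispatches
let compQueue = dispatch_queue_create("ComputationQueue", DISPATCH_QUEUE_CONCURRENT)
let group = dispatch_group_create()
let itemsPerThread = (dataArray.count / 4) + 1

for var i = 0; i < dataArray.count; i += itemsPerThread {
    let start = i
    let length = min(itemsPerThread, dataArray.count - start)
    dispatch_group_async(group, compQueue) {
        let subArray = Array(dataArray[start..<start + length])
        self.reallyLongComputation(subArray, increment: increment, outputIndex: start / itemsPerThread)
    }
}
dispatch_group_wait(group, DISPATCH_TIME_FOREVER)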
