Subset Leetcode, size of a List - subset

I am really curious about one thing. In the code below, where I create a nested for loop, I pass as the limit
subsetArr.size()
but when I run it, it reports that the memory limit was exceeded. If I capture the size just before the for loop,
int size = subsetArr.size();
and then pass the limit as
i < size;
it works fine. What can be the cause?
class Solution {
    public List<List<Integer>> subsets(int[] nums) {
        List<List<Integer>> subsetArr = new ArrayList<>();
        subsetArr.add(new ArrayList<>());
        for (int num : nums) {
            for (int i = 0; i < subsetArr.size(); i++) {
                List<Integer> takenList = new ArrayList<>(subsetArr.get(i));
                takenList.add(num);
                subsetArr.add(takenList);
            }
        }
        return subsetArr;
    }
}

Look at what this loop does:
for (int i = 0; i < subsetArr.size(); i++) {
    List<Integer> takenList = new ArrayList<>(subsetArr.get(i));
    takenList.add(num);
    subsetArr.add(takenList); // <-- here
}
Each iteration adds to the collection. So in the next iteration, subsetArr.size() will be larger. Thus you have a loop which indefinitely increases the size of the collection until it runs out of resources.
Contrast that to when you store the value:
int size = subsetArr.size();
In this case, while subsetArr.size() may change, size won't unless you update it. So as long as you don't update size, you have a finite loop.
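For illustration, here is a minimal sketch of the fixed method, with the size captured before the inner loop so that only the subsets that existed at the start of each pass get extended:

import java.util.ArrayList;
import java.util.List;

class Solution {
    public List<List<Integer>> subsets(int[] nums) {
        List<List<Integer>> subsetArr = new ArrayList<>();
        subsetArr.add(new ArrayList<>());
        for (int num : nums) {
            // Snapshot the size: the bound is now fixed for this pass,
            // so the lists appended below are not revisited.
            int size = subsetArr.size();
            for (int i = 0; i < size; i++) {
                List<Integer> takenList = new ArrayList<>(subsetArr.get(i));
                takenList.add(num);
                subsetArr.add(takenList);
            }
        }
        return subsetArr;
    }
}

With the bound fixed per pass, each of the n input numbers doubles the collection exactly once, so the method terminates with 2^n subsets.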

Related

Generate 50 random numbers and store them into an array C++

This is what I have of the function so far. It is only the beginning of the problem: the assignment asks to generate the random numbers in a 10-by-5 grid for the output, and after that to sort them by size, but I am just trying to get this first part down.
/* Populate the array with 50 randomly generated integer values
 * in the range 1-50. */
void populateArray(int ar[], const int n) {
    int n;
    for (int i = 1; i <= length - 1; i++) {
        for (int i = 1; i <= ARRAY_SIZE; i++) {
            i = rand() % 10 + 1;
            ar[n]++;
        }
    }
}
First of all, we want to use std::array. It has some nice properties, one of which is that it doesn't decay to a pointer; another is that it knows its size. In this case we are going to use templates to make populateArray a generic enough algorithm.
template<std::size_t N>
void populateArray(std::array<int, N>& array) { ... }
Then, we would like to remove all "raw" for loops; std::generate_n in combination with some random generator seems like a good option.
For the number generation we can use <random>, specifically std::uniform_int_distribution. For that we need to get a generator up and running:
std::random_device device;
std::mt19937 generator(device());
std::uniform_int_distribution<> dist(1, N);
and use it in our std::generate_n algorithm:
std::generate_n(array.begin(), N, [&dist, &generator]() {
    return dist(generator);
});
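Putting the pieces together, here is a minimal self-contained sketch; the main function and the printing loop are illustrative additions, not part of the original answer:

#include <algorithm>
#include <array>
#include <iostream>
#include <random>

template <std::size_t N>
void populateArray(std::array<int, N>& array) {
    // One generator seeded from the system's random device.
    std::random_device device;
    std::mt19937 generator(device());
    std::uniform_int_distribution<> dist(1, N);

    // Fill all N slots without a raw loop.
    std::generate_n(array.begin(), N, [&dist, &generator]() {
        return dist(generator);
    });
}

int main() {
    std::array<int, 50> values{};
    populateArray(values);
    for (int v : values) std::cout << v << ' ';
    std::cout << '\n';
}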

ArgumentException while reading using StreamReader.ReadBlock

I am trying to calculate a row count from a large file based on the presence of a certain marker, and I would like to use StreamReader and ReadBlock. Below is my code.
protected virtual long CalculateRowCount(FileStream inStream, int bufferSize)
{
    long rowCount = 0;
    String line;
    inStream.Position = 0;
    TextReader reader = new StreamReader(inStream);
    char[] block = new char[4096];
    const int blockSize = 4096;
    int indexer = 0;
    int charsRead = 0;
    long numberOfLines = 0;
    int count = 1;
    do
    {
        charsRead = reader.ReadBlock(block, indexer, block.Length * count);
        indexer += blockSize;
        numberOfLines = numberOfLines + string.Join("", block).Split(new string[] { "&ENDE" }, StringSplitOptions.None).Length;
        count++;
    } while (charsRead == block.Length); // charsRead != 0
    reader.Close();
    fileRowCount = rowCount;
    return rowCount;
}
But I get this error:
Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection.
I am not sure what is wrong... Can you help? Thanks in advance!
For one, read the StreamReader.ReadBlock() documentation carefully (http://msdn.microsoft.com/en-us/library/system.io.streamreader.readblock.aspx) and compare it with what you're doing:
The 2nd argument (indexer) should be within the range of the block you're passing in, but you're passing something that will exceed it after one iteration. Since it looks like you want to reuse the memory block, pass 0 here.
The 3rd argument (count) indicates how many characters to read into your memory block; passing something larger than the block size might not work (it depends on the implementation).
ReadBlock() returns the number of characters actually read, but you increment indexer as if it will always return exactly the size of the block (most of the time, it won't).
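As an illustration, here is a sketch of the method with those three points fixed. It assumes the same containing class and usings as the original, and that the goal is simply to count "&ENDE" markers; note that a marker split across two blocks would still be missed by this simplified version:

protected virtual long CalculateRowCount(FileStream inStream, int bufferSize)
{
    inStream.Position = 0;
    TextReader reader = new StreamReader(inStream);
    char[] block = new char[bufferSize];
    long numberOfLines = 0;
    int charsRead;
    do
    {
        // Write at index 0 each time (the block is being reused) and
        // never request more characters than the block can hold.
        charsRead = reader.ReadBlock(block, 0, block.Length);
        // Only the first charsRead characters are valid; Split yields one
        // more part than there are separators, hence the - 1.
        numberOfLines += new string(block, 0, charsRead)
            .Split(new string[] { "&ENDE" }, StringSplitOptions.None).Length - 1;
    } while (charsRead > 0);
    reader.Close();
    return numberOfLines;
}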

Weird memory usage on Node.js

This simple code stores 1 million strings (100 chars each) in an array.
function makestring(len) {
    var s = '';
    while (len--) s = s + '1';
    return s;
}

var s = '';
var arr = [];
for (var i = 0; i < 1000000; i++) {
    s = makestring(100);
    arr.push(s);
    if (i % 1000 == 0) console.log(i + ' - ' + s);
}
When I run it, I get this error:
(...)
408000 - 1111111111111111111 (...)
409000 - 1111111111111111111 (...)
FATAL ERROR: JS Allocation failed - process out of memory
That's strange: 1 million * 100 chars is just 100 megabytes.
But if I move the s = makestring(100); outside the loop...
var s = makestring(100);
var arr = [];
for (var i = 0; i < 1000000; i++) {
    arr.push(s);
    if (i % 1000 == 0) {
        console.log(i + ' - ' + s);
    }
}
This executes without errors!
Why? How can I store 1 million objects in Node?
The moment you move the string generation outside the loop, you basically create just one string and push it into the array a million times.
Inside the array, however, only pointers (references) to the original string are stored, which is far less memory-consuming than storing the string a million times.
Your first example builds 1000000 distinct strings.
In your second example, you're taking the same string object and adding it to your array 1000000 times. (It's not copying the string; each entry of the array points to the same object.)
V8 does a lot of things to optimize string use. For example, string concatenation is less expensive (in most cases) than you think. Rather than building a whole new string, it will typically opt to connect them in a linked-list fashion under the covers. In the first example, each of the million strings is itself built from 100 such concatenations, so what ends up stored can be a chain of internal nodes rather than a flat 100-character buffer, which multiplies the per-string overhead.
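If that concatenation overhead is indeed the culprit, one workaround (an assumption on my part, not from the answers above) is to build each string flat in a single allocation, for example with Array.prototype.join:

// Builds the string in one allocation instead of 100 incremental
// concatenations: new Array(len + 1).join('1') yields len '1' characters.
function makestring(len) {
    return new Array(len + 1).join('1');
}

var arr = [];
for (var i = 0; i < 1000000; i++) {
    arr.push(makestring(100));
    if (i % 1000 == 0) console.log(i + ' - ' + arr[i]);
}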

How do I create a multidimensional array of objects in C#

I am trying to make a script that dynamically generates world chunks by making a height map and then filling out the terrain blocks from there. My problem is creating a two-dimensional array of objects.
public class Chunk
{
    public Block[,] blocks;

    void Generate()
    {
        // code that makes a height map as a 2-dimensional array: heightmap[x, y] = z
        // convert heightmap to blocks
        for (int hmX = 0; hmX < size; hmX++)
        {
            for (int hmY = 0; hmY < size; hmY++)
            {
                blocks[hmX, hmY] = new Block(hmX, hmY, heightmap.Heights[hmX, hmY], 1);
            }
        }
    }
}
This is giving me the error:
NullReferenceException was unhandled, Object reference not set to an
instance of an object.
You just need to create the array with new before the loop:
Block[,] blocks = new Block[size, size];
Or rather, since blocks is already declared as a field, assign to it inside the Generate function (all else the same):
blocks = new Block[size, size];
Otherwise you'd be shadowing the original blocks field with a new local variable.
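A minimal sketch of the corrected class; size, heightmap, Heightmap, and the Block constructor are assumptions carried over from the question:

public class Chunk
{
    public Block[,] blocks;
    int size;            // assumed field, from the question's context
    Heightmap heightmap; // assumed field, from the question's context

    void Generate()
    {
        // Allocate the array before filling it; until this line runs,
        // the blocks field is null, which is what caused the exception.
        blocks = new Block[size, size];

        for (int hmX = 0; hmX < size; hmX++)
        {
            for (int hmY = 0; hmY < size; hmY++)
            {
                blocks[hmX, hmY] = new Block(hmX, hmY, heightmap.Heights[hmX, hmY], 1);
            }
        }
    }
}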

Convert For loop into Parallel.For loop

public void DoSomething(byte[] array, byte[] array2, int start, int counter)
{
    int length = array.Length;
    int index = 0;
    while (count >= needleLen)
    {
        index = Array.IndexOf(array, array2[0], start, count - length + 1);
        int i = 0;
        int p = 0;
        for (i = 0, p = index; i < length; i++, p++)
        {
            if (array[p] != array2[i])
            {
                break;
            }
        }
Given that your for loop appears to have a loop body dependent on ordering, it's most likely not a candidate for parallelization.
However, you aren't showing the "work" involved here, so it's difficult to tell what it's doing. Since the loop relies on both i and p, and it appears that they vary independently, it's unlikely to be rewritten using a simple Parallel.For without reworking or rethinking your algorithm.
For a loop body to be a good candidate for parallelization, it typically needs to be order-independent and have no ordering constraints. The fact that you're basing your loop on two independent variables suggests these requirements are not met by this algorithm.
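For contrast, here is a sketch of a loop that is a good Parallel.For candidate (an illustrative example, not a rewrite of the code above): each iteration touches only its own index, so ordering does not matter.

using System;
using System.Threading.Tasks;

class Example
{
    static void Main()
    {
        double[] input = new double[1000000];
        double[] output = new double[input.Length];

        // Safe to parallelize: every iteration reads and writes only
        // its own slot and depends on no other iteration.
        Parallel.For(0, input.Length, i =>
        {
            output[i] = Math.Sqrt(input[i]);
        });
    }
}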
