I'm using a point quadtree, and I need to know how I can count the number of quadrants generated after inserting the points into the quadtree.
Thanks.
Either increment a counter in the insert method of the quadtree, or count afterwards by traversing the quadtree.
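For example, a minimal sketch in Python showing both options (the node layout and names are illustrative; adapt them to your own structure). It assumes each inserted point becomes a node and each node subdivides its region into four quadrants, so the quadrant count is simply four times the node count:

class Node:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.children = [None, None, None, None]  # NW, NE, SW, SE

class PointQuadtree:
    def __init__(self):
        self.root = None
        self.node_count = 0          # option 1: maintain a counter during insert

    def insert(self, x, y):
        self.node_count += 1
        if self.root is None:
            self.root = Node(x, y)
            return
        cur = self.root
        while True:
            # pick the quadrant of (x, y) relative to the current node's point
            q = (0 if y >= cur.y else 2) + (1 if x >= cur.x else 0)
            if cur.children[q] is None:
                cur.children[q] = Node(x, y)
                return
            cur = cur.children[q]

    def count_nodes(self):           # option 2: count afterwards by traversal
        def rec(n):
            if n is None:
                return 0
            return 1 + sum(rec(c) for c in n.children)
        return rec(self.root)

qt = PointQuadtree()
for p in [(50, 50), (10, 80), (70, 20), (60, 60)]:
    qt.insert(*p)
print(qt.node_count, qt.count_nodes())   # both print 4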
When reading about methods of textual analysis, some approaches eliminate documents with the "10% lowest density score", that is, documents that are relatively long compared to the occurrence of a certain keyword. How can I achieve a similar result in quanteda?
I've created a corpus using a query of the words "refugee" and "asylum seeker". Now I would like to remove all documents where the count frequency of refugee|asylum_seeker is below 3. However, I imagine it is also possible to use the relative frequency if document length is to be taken into account.
Could someone help me? The solution in my head looks like this, however I don't know how to implement it.
For count frequency: Add counts of occurrences of refugee|asylum_seeker per document and remove documents with an added count below 3.
For relative frequency: Inspect the overall average relative frequency of both words, refugee and asylum_seeker, then calculate the per-document relative frequencies of the features and apply a function to remove all documents where the relative frequency of both features is below X.
Create a dfm from your tokenised corpus, using dfmat <- dfm(your_tokens).
Then drop the documents with dfm_subset() (note that dfm_remove() removes features, i.e. columns, not documents):
dfm_subset(dfmat,
           as.logical(dfmat[, "refugee"] >= 3 &
                      dfmat[, "asylum_seeker"] >= 3))
This keeps only the documents in which both features occur at least 3 times, which is the same as removing the documents where either count is below 3.
We are given a string and an integer. We have to tell which character would be at that position in the string if the characters were placed in sorted order.
For Example
String = LALIT
Index = 3
The sorted string is AILLT, and the character at position 3 is L.
Is it possible to solve this problem without sorting?
If yes, can someone provide pseudocode?
Yes, it's possible to do this. You're looking for something called a selection algorithm which, given a list of elements and a number k, returns what element would be in position k if the elements were to be in sorted order. Amazingly enough, it's possible to do this without sorting the entire list!
The simplest non-sorting algorithm for selection is called quickselect, which runs in expected time O(n) and, provided you're allowed to modify the original array, uses only O(1) auxiliary storage space. The idea behind quickselect is to do a single step of quicksort - pick a pivot element, partition the elements into elements less than the pivot, elements equal to the pivot, and elements greater than the pivot - then to see what happens based on that. If the pivot element ends up in position k after this step, then you're done - that's the element that would be at position k in the final sequence. If the pivot is at a position higher than k, recursively look to the left of the pivot (the kth smallest element is somewhere in there), and if the pivot is at a position lower than k, recursively look to the right of the pivot (the kth smallest element is somewhere in there).
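As a rough Python sketch of quickselect (illustrative only, using a random pivot and a 0-based k):

import random

def quickselect(a, k):
    """Return the element that would be at index k (0-based) if a were sorted."""
    lo, hi = 0, len(a) - 1
    while True:
        pivot = a[random.randint(lo, hi)]
        # three-way partition of a[lo..hi]: < pivot | == pivot | > pivot
        lt, i, gt = lo, lo, hi
        while i <= gt:
            if a[i] < pivot:
                a[lt], a[i] = a[i], a[lt]
                lt += 1
                i += 1
            elif a[i] > pivot:
                a[i], a[gt] = a[gt], a[i]
                gt -= 1
            else:
                i += 1
        if k < lt:        # the answer lies in the "< pivot" part
            hi = lt - 1
        elif k > gt:      # the answer lies in the "> pivot" part
            lo = gt + 1
        else:             # position k falls inside the block of elements == pivot
            return pivot

# usage: the 3rd character (1-based) of "LALIT" in sorted order is 'L'
print(quickselect(list("LALIT"), 3 - 1))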
Other approaches exist as well, such as the median-of-medians algorithm that always runs in worst-case O(n) time but is a classic "tricky algorithm to wrap your head around."
I have a grid with 200 rows and 200 columns.
I want to generate random pairs of coordinates i,j using a numeric seed. The seed is a value I increment each time I generate a pair of numbers.
After 40,000 pairs have been generated, all pairs of coordinates must be unique among themselves; that is, there are no two pairs i,j and m,n where i=m and j=n.
For example:
seed 0: generates 43,12
seed 1: generates 154, 62
and so forth...
Using a seed implies that the same input to the same function generates the same result; I am fine with that.
I am aware that I need some sort of pseudorandom algorithm, as using the computer time or something similar might generate two identical pairs, but where do I start?
If you want every seed to return a random point, and all of those points to be unique, the easiest way to do it is to put the points in an array, shuffle the array, and then use integer seeds to index the shuffled array. For example, seed=0 would get whatever element happened to be shuffled into the first position.
It seems a bit easier to me to let integers represent the pairs: make an array of the integers 0 to 39999 (i.e., 200x200 = 40,000 values), shuffle it, and then use seeds in the range 0 to 39999. To convert an integer n to a point pair, use i = n % 200 and j = (n - i) / 200.
Of course, since you want each seed to return a unique point, you cannot have more seeds than points.
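A small Python sketch of this idea (the names and the master seed 12345 are arbitrary choices for illustration; the incrementing "seed" from the question becomes an index into the shuffled array):

import random

ROWS, COLS = 200, 200

# Shuffle the integers 0..39999 once; 12345 is an arbitrary master seed so the
# whole mapping is reproducible from run to run.
points = list(range(ROWS * COLS))
random.Random(12345).shuffle(points)

def pair_for_seed(seed):
    """Map seed 0..39999 (the incrementing value from the question) to a unique (i, j)."""
    n = points[seed]
    i = n % COLS
    j = n // COLS            # equivalent to (n - i) / 200
    return i, j

print(pair_for_seed(0))
print(pair_for_seed(1))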
You need a random number generator for which you can set the seed value; it seems you're aware of that. You can't set the seed for Math.random(), but there are plenty of pseudorandom number generators out there. I suggest you take a look at seedrandom.js.
For quicksort that uses two pointers, one going from the left end to the right and the other from the right end to the left, a pointer stops when it finds an element out of place with respect to the pivot; when both have stopped, they swap the elements and continue on from those positions. But why and how does a bad pivot choice, for example when the pivot is the highest or lowest value in the array, make quicksort O(n^2)?
how does a bad pivot choice make Quicksort O(n^2)?
Let's say you always pick the smallest element as your pivot. The top-level iteration of quicksort will require n-1 comparisons and will split the array into two subarrays: one of size 1 and one of size n-1. The first one is already sorted, and you apply the quicksort recursively to the second one. Splitting the second one will require n-2 comparisons. And so on.
In total, you have (n-1) + (n-2) + ... + 1 = n * (n-1) / 2 = O(n^2) comparisons.
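To see this concretely, here is a small Python sketch (illustrative only) that counts the comparisons of a quicksort which always uses the first element as the pivot; on already-sorted input the count matches n * (n - 1) / 2:

def quicksort_count(a):
    """Quicksort using the first element as pivot; returns (sorted list, comparison count)."""
    if len(a) <= 1:
        return a, 0
    pivot, rest = a[0], a[1:]
    left, right = [], []
    for x in rest:                       # len(a) - 1 comparisons against the pivot
        (left if x < pivot else right).append(x)
    sorted_left, c_left = quicksort_count(left)
    sorted_right, c_right = quicksort_count(right)
    return sorted_left + [pivot] + sorted_right, len(rest) + c_left + c_right

for n in (10, 100, 500):
    _, comparisons = quicksort_count(list(range(n)))   # already-sorted input: worst case here
    print(n, comparisons, n * (n - 1) // 2)            # the two counts match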
If your chosen pivot happened to be the maximal value in your subset on every recursion, the algorithm would simply move every record it reads into the partition below the pivot and continue on with only one non-empty partition, whose size is only one element smaller.
In that case, quicksort's operation would be similar to a selection sort: it would find a maximal value, put it where it goes, and move on to the rest of the data in the next iteration. The difference is that selection sort searches for the maximal (or minimal) data point, whereas the worst-case quicksort happens to select the maximal value as the pivot and then discovers that it is, indeed, the maximum.
To my knowledge, this is quite a rare case.
Try a list with n copies of the same number.
Choose any way you like to pick a pivot.
Look at what happens.
(Edit: some hints:
The pivot value does not depend on how you pick the pivot, because every element is the same.
So in every iteration, for the current list of n elements, you will need n comparisons, and you will split the list into two sublists with 1 and n-1 elements.
You can quickly calculate the total number of operations: you need n, n-1, n-2, ..., 2, 1 operations.
Formally, that is the sum from i = 1 to n of i, which is n(n+1)/2, so it is O(n^2).)
I am on an interview ride here. One more interview question I had difficulties with.
“A rose is a rose is a rose” Write an algorithm that prints the number of times a character/word occurs. E.g.
A – 3, Rose – 3, Is – 2
Also ensure that when you are printing the results, they are in the order of what was present in the original sentence. All this in order n.
I did get a solution to count the number of occurrences of each word in the sentence, in the order present in the original sentence. I used a Dictionary<string,int> to do it. However, I did not understand what is meant by "order of n". That is something I need you guys to explain.
There are 26 characters, so you can use counting sort to count them; in your counting sort you can keep an index that records when a specific character was first visited, to preserve the order of occurrence. [They can then be sorted by their count and their first occurrence with a sort like radix sort.]
Edit: for words, the first thing everyone thinks of is using a hash table: insert the words into the hash and count them that way. They can be sorted in O(n), because all counts are within 1..n, so you can still sort them by counting sort in O(n); for the order of occurrence you can traverse the string and record where each value first appears.
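For the character case, a small Python sketch of the idea (assuming lowercase letters a-z; the names are illustrative):

def char_counts_in_order(s):
    """Count letters a-z in O(n) and report them in order of first appearance."""
    counts = [0] * 26            # one counting-sort style slot per letter
    order = []                   # letter indices in order of first appearance
    for ch in s.lower():
        if 'a' <= ch <= 'z':
            idx = ord(ch) - ord('a')
            if counts[idx] == 0:
                order.append(idx)
            counts[idx] += 1
    return [(chr(i + ord('a')), counts[i]) for i in order]

print(char_counts_in_order("A rose is a rose is a rose"))
# [('a', 3), ('r', 3), ('o', 3), ('s', 5), ('e', 3), ('i', 2)]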
Order of n means you traverse the string only once, or at most a constant number of times, where n is the number of characters in the string.
So your solution, which stores the words and the number of their occurrences, is O(n), order of n, as you loop through the complete string only once.
However, it uses extra space in the form of the list you created.
Order N refers to Big O computational complexity analysis, where you get a good upper bound on algorithms. It is a theory we cover early in a Data Structures class, so we can torment, I mean help, the student gain facility with it as we traverse, in a balanced way, heaps of different trees of knowledge, all different. In your case they want your algorithm's compute time to grow proportionally to the size of the text.
It's a reference to Big O notation. Basically the interviewer means that you have to complete the task with an O(N) algorithm.
"Order n" is referring to Big O notation. Big O is a way for mathematicians and computer scientists to describe the behavior of a function. When someone specifies searching a string "in order n", that means that the time it takes for the function to execute grows linearly as the length of that string increases. In other words, if you plotted time of execution vs length of input, you would see a straight line.
Saying that your function must be of order n does not mean that your function must be exactly O(n); a function with a Big O less than O(n) would also be considered acceptable. In your problem's case, this would not be possible (because in order to count a letter, you must "touch" that letter, thus there must be some operation dependent on the input size).
One possible method is to traverse the string linearly. Then create a hash and a list. The idea is to use the word as the hash key and increment the value for each occurrence. If the value does not yet exist in the hash, add the word to the end of the list. After traversing the string, go through the list in order, using the hash values as the counts.
The order of the algorithm is O(n). The hash lookup and list add operations are O(1) (or very close to it).
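A minimal Python sketch of this hash-plus-list approach (splitting on whitespace and lowercasing are simplifying assumptions):

def word_counts_in_order(sentence):
    """Count words in O(n) and print them in order of first appearance."""
    counts = {}        # hash: word -> number of occurrences
    order = []         # list: words in order of first appearance
    for word in sentence.lower().split():
        if word not in counts:
            counts[word] = 0
            order.append(word)
        counts[word] += 1
    for word in order:                 # one pass over the list, reading counts from the hash
        print(word, "-", counts[word])

word_counts_in_order("A rose is a rose is a rose")
# prints: a - 3, rose - 3, is - 2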