What is the fastest method for searching an unsorted vector?
Does it take longer to search the vector linearly from start to end, or to run std::sort and then use a binary search?
If you need to check whether a vector is sorted, the fastest method is std::is_sorted(), which runs in linear time.
If you need to sort it, use std::sort(), which is the most efficient general-purpose option at O(n * log n).
So checking is faster than sorting itself.
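For a single lookup, the linear scan wins: O(n), versus O(n * log n) just to sort first. Sorting only pays off once you search the same vector many times. A minimal illustrative sketch of the pieces above, with made-up values:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v{42, 7, 19, 3, 25};

        // O(n) check: is the vector already sorted?
        bool sorted = std::is_sorted(v.begin(), v.end());

        // One-off lookup on unsorted data: linear scan, O(n).
        bool foundLinear = std::find(v.begin(), v.end(), 19) != v.end();

        // Many lookups: sort once at O(n log n), then O(log n) per query.
        std::sort(v.begin(), v.end());
        bool foundBinary = std::binary_search(v.begin(), v.end(), 19);

        std::cout << sorted << ' ' << foundLinear << ' ' << foundBinary << '\n';
    }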
Generally, to check whether a string is in a slice, I write a function with a for loop and an if statement, but that is really inefficient for large slices of strings or of struct types. Is it possible to make this check concurrent?
Concurrent search over sequential data is usually not a great idea, simply because binary search already scales really well, even for billions of records. All you have to do to use it is build an index on top of the slice you are searching. To build the most trivial index, save the keys into another slice along with the position of the data they point to. Once you have that slice, sort it by the string keys, and the index is done.
Then perform binary searches on the index you just created; each lookup has O(log N) complexity.
Another, much simpler option is to create a map[string]int and insert all keys along with their indexes, then look the key up in the map, which is O(1) on average.
The important thing to note is that if you only need to perform a single search on a given slice, none of this is worth it: building the index is much heavier than one linear search.
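The answer above is about Go slices, but both techniques are language-agnostic. Purely as an illustration, here is a minimal C++ sketch of the two options, with made-up data:

    #include <algorithm>
    #include <string>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    int main() {
        // Stand-in for the large slice of strings in the question.
        std::vector<std::string> data{"banana", "apple", "cherry"};

        // Option 1: an index of (key, original position), sorted once by key.
        std::vector<std::pair<std::string, std::size_t>> index;
        for (std::size_t i = 0; i < data.size(); ++i)
            index.emplace_back(data[i], i);
        std::sort(index.begin(), index.end());  // O(n log n), paid once

        // Each lookup is then an O(log n) binary search on the index.
        auto it = std::lower_bound(index.begin(), index.end(),
                                   std::make_pair(std::string("apple"),
                                                  std::size_t{0}));
        bool foundViaIndex = it != index.end() && it->first == "apple";

        // Option 2: a hash map from key to position, O(1) average per lookup.
        std::unordered_map<std::string, std::size_t> positions;
        for (std::size_t i = 0; i < data.size(); ++i)
            positions[data[i]] = i;
        bool foundViaMap = positions.count("apple") > 0;

        return foundViaIndex && foundViaMap ? 0 : 1;
    }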
I am wondering why QuickSelect is supposed to be such a well-performing algorithm for finding an arbitrary element in an n-sized, unsorted set. If you go through the elements one by one until you find the desired one, it takes O(n) comparisons, which is as good as QuickSelect's best case and much simpler.
Am I missing something essential? Is there a case where QuickSelect performs better than linear search?
On average, QuickSelect is better at finding the k-th smallest (or largest) item in an unsorted array. That is a different problem from a membership test: a linear scan can tell you whether a given value is present, but it cannot tell you which element is the k-th smallest without sorting first (O(n * log n)), whereas QuickSelect does it in O(n) on average.
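In C++, for instance, this selection step is available as std::nth_element, which is typically implemented with a quickselect/introselect strategy; a minimal sketch with made-up numbers:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v{9, 1, 8, 2, 7, 3};
        std::size_t k = 2;  // 0-based: find the 3rd smallest element

        // Partially partitions v so that v[k] holds the k-th smallest
        // element, in O(n) average time -- no full O(n log n) sort needed.
        std::nth_element(v.begin(), v.begin() + k, v.end());
        std::cout << "3rd smallest: " << v[k] << '\n';  // prints 3
    }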
The complexity of search in a linked list is O(n).
Is it also O(n) for search in a queue or a stack?
If they are unordered then yes, it's O(n) since in the worst case you have to examine every element.
Not exactly.
That depends on your implementation. For example, if you keep a hash table inside the queue/stack, you get a "super" queue/stack with O(1) push/pop/search.
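A minimal sketch of that idea, assuming C++ and a queue of strings; the membership data lives in a hash map of element counts (counts rather than a set, because a queue can hold duplicates):

    #include <queue>
    #include <string>
    #include <unordered_map>

    // A queue that also tracks element counts in a hash map, so that
    // contains() is O(1) on average while push/pop stay O(1).
    class SearchableQueue {
        std::queue<std::string> queue_;
        std::unordered_map<std::string, int> counts_;

    public:
        void push(const std::string& value) {
            queue_.push(value);
            ++counts_[value];
        }

        void pop() {
            if (queue_.empty()) return;
            if (--counts_[queue_.front()] == 0) counts_.erase(queue_.front());
            queue_.pop();
        }

        bool contains(const std::string& value) const {
            return counts_.find(value) != counts_.end();
        }
    };

    int main() {
        SearchableQueue q;
        q.push("a");
        q.push("b");
        return q.contains("a") ? 0 : 1;  // O(1) average membership test
    }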
I want to check whether an element exists in a list (a very big one, on the order of 10,000,000 elements) in O(1) instead of O(n); lists with elem x ys take O(n).
So I want to use another data type/constructor, but it has to be in the Prelude (not Array); any suggestions? And if I have to build my own data type, what should it look like?
I also want to sort a big list of numbers of the same order (10,000,000) and index an element in the shortest time possible.
Unless you build some auxiliary structure (such as a hash table) first, the only way to find an item in a data set in O(1) time is to already know where it is, and then you don't need to search for it. For unsorted data, search is O(n) time; for sorted data, it is O(log n).
You should use either a Bloom filter or a hash table. Neither of them is in the Prelude; moreover, both rely on Array being available.
The only remaining option is some kind of tree; I would suggest a heap. It's not hard to implement, and it also gives you sorting for free.
UPDATE: oops, I had forgotten that a heap doesn't provide lookup. A BST is your choice, then.
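The question asks for Prelude-only Haskell, where you would hand-roll a small tree type, so the following is only an illustration of what the suggested BST buys you, sketched in C++: std::set is in practice a self-balancing BST (a red-black tree), so membership tests are O(log n) and in-order iteration yields the elements sorted for free.

    #include <iostream>
    #include <set>

    int main() {
        // std::set: O(log n) insert and lookup via a balanced BST.
        std::set<int> tree{5, 1, 9, 3};

        bool present = tree.count(3) > 0;          // O(log n) membership test
        std::cout << present << '\n';              // prints 1

        for (int x : tree) std::cout << x << ' ';  // prints 1 3 5 9
        std::cout << '\n';
    }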
I would like to know the best way to sort a long list of strings with respect to time and space efficiency; I prefer time efficiency over space efficiency.
The strings can be numeric, alphabetic, alphanumeric, etc. I am not interested in sort behavior such as alphanumeric vs. alphabetic ordering, just the sort itself.
Some ways I can think of are below.
Using code, e.g. the .NET Framework's Array.Sort() method. I think the way this works is that hash codes for the strings are calculated and each string is inserted at the proper position using a binary search.
Using a database (e.g. MS SQL). I have not done this, and I do not know how efficient it would be.
Using a prefix-tree data structure, i.e. a trie; see the sketch just after this list. Sorting requires traversing all the trie nodes using DFS (depth-first search), which is O(|V| + |E|) time. (Searching takes O(l) time, where l is the length of the string being compared.)
Any other ways or data structures?
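To make the trie option concrete, here is a minimal illustrative sketch (hypothetical TrieNode type, made-up words); using std::map for the children keeps each node's outgoing edges sorted, so the DFS emits the stored words in sorted order:

    #include <iostream>
    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    // Minimal trie; std::map keeps each node's outgoing edges in sorted
    // key order, so a DFS visits the stored words in sorted order.
    struct TrieNode {
        std::map<char, std::unique_ptr<TrieNode>> children;
        bool isWord = false;
    };

    void insert(TrieNode& root, const std::string& word) {
        TrieNode* node = &root;
        for (char c : word) {
            auto& child = node->children[c];
            if (!child) child = std::make_unique<TrieNode>();
            node = child.get();
        }
        node->isWord = true;
    }

    void dfs(const TrieNode& node, std::string& prefix,
             std::vector<std::string>& out) {
        if (node.isWord) out.push_back(prefix);
        for (const auto& edge : node.children) {
            prefix.push_back(edge.first);
            dfs(*edge.second, prefix, out);
            prefix.pop_back();
        }
    }

    int main() {
        TrieNode root;
        for (const char* w : {"cab", "ace", "cabin", "bad"}) insert(root, w);

        std::string prefix;
        std::vector<std::string> sorted;
        dfs(root, prefix, sorted);
        for (const auto& s : sorted) std::cout << s << '\n';  // ace bad cab cabin
    }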
You say that you have a database, and presumably the strings are stored in the database. Then you should get the database to do the work for you. It may be able to take advantage of an index and therefore not need to actually sort the list, but just read it from the index in sorted order.
If there is no index, the database might still be able to help you if you only fetch the first k rows for some small constant k, for example 100. Using ORDER BY together with a TOP (or LIMIT) clause allows SQL Server to use a special optimization called TOP N SORT, which runs in linear time (for constant k) instead of O(n log n) time.
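The same top-k idea is available outside the database too; for illustration, in C++ std::partial_sort orders only the first k positions, costing O(n log k) rather than O(n log n):

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        std::vector<std::string> names{"eve", "bob", "dan", "amy", "carl"};
        std::size_t k = 2;

        // Order only the first k positions, leaving the rest unspecified:
        // O(n log k) rather than the O(n log n) of a full sort.
        std::partial_sort(names.begin(), names.begin() + k, names.end());
        for (std::size_t i = 0; i < k; ++i) std::cout << names[i] << '\n';
        // prints: amy, bob
    }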
If your strings are not in the database already then you should use the features provided by .NET instead. I think it is unlikely you will be able to write custom code that will be much faster than the default sort.
I found this paper that uses a trie data structure to efficiently sort large sets of strings, though I have not looked into it in detail.
Radix sort could also be a good option if the strings are not very long, e.g. a list of names.
Let us suppose you have a large list of strings and that the length of the list is n.
Using a comparison-based sorting algorithm like merge sort, heapsort, or quicksort will give you a running time of O(d * n * log n), where n is the size of the list and d is the maximum length of the strings in it, because each string comparison can cost up to O(d).
We can try to use radix sort in this case. Let b be the base (the size of the alphabet) and let d be the length of the longest string; then the running time using radix sort is O(d * (n + b)).
Furthermore, if the strings are, say, over the lowercase English alphabet, then b = 26 is a constant and the running time is O(d * n).
Source: MIT OpenCourseWare algorithms lecture by Prof. Erik Demaine.
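To make the radix-sort option concrete, here is a minimal illustrative LSD (least-significant-digit-first) sketch, assuming equal-length lowercase ASCII strings; the function name and data are made up:

    #include <array>
    #include <iostream>
    #include <string>
    #include <vector>

    // LSD radix sort for equal-length lowercase strings: one stable
    // counting-sort pass per character position, from last to first.
    // Total running time is O(d * (n + b)) with b = 26.
    void radixSort(std::vector<std::string>& a, std::size_t d) {
        std::vector<std::string> aux(a.size());
        for (std::size_t pos = d; pos-- > 0;) {
            std::array<std::size_t, 27> count{};  // 26 letters + offset slot
            for (const auto& s : a) count[s[pos] - 'a' + 1]++;
            for (int c = 0; c < 26; ++c) count[c + 1] += count[c];
            for (const auto& s : a) aux[count[s[pos] - 'a']++] = s;
            a.swap(aux);
        }
    }

    int main() {
        std::vector<std::string> v{"dab", "cab", "fad", "bad", "ace"};
        radixSort(v, 3);
        for (const auto& s : v) std::cout << s << '\n';  // ace bad cab dab fad
    }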