Searching for simple problems naturally solved using stacks [closed] - haskell

I would like to know about simple problems that can be naturally solved using stacks with the usual interface (emptyS, isEmptyS, push, pop, top).
The problem's context should add no complexity of its own: I can't touch topics like parsing, compiling, or search algorithms at the moment, which rules out many classical examples.
The most beautiful example I have found so far is checking a string for balanced parentheses. In very few lines, and without any other background, the exercise shows the utility of the data structure:
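Here is a minimal sketch of that exercise (my own illustration, not the original poster's code), assuming the stack interface named in the question is implemented on top of a plain list:

    -- Assumed list-backed implementation of the interface named above.
    newtype Stack a = Stack [a] deriving Show

    emptyS :: Stack a
    emptyS = Stack []

    isEmptyS :: Stack a -> Bool
    isEmptyS (Stack xs) = null xs

    push :: a -> Stack a -> Stack a
    push x (Stack xs) = Stack (x : xs)

    pop :: Stack a -> Stack a
    pop (Stack (_ : xs)) = Stack xs
    pop (Stack [])       = error "pop: empty stack"

    top :: Stack a -> a
    top (Stack (x : _)) = x
    top (Stack [])      = error "top: empty stack"

    -- Balanced-bracket check: push opening brackets, pop on a matching
    -- closing bracket, and succeed only if the stack ends up empty.
    balanced :: String -> Bool
    balanced = go emptyS
      where
        go s []       = isEmptyS s
        go s (c : cs)
          | c `elem` "([{" = go (push c s) cs
          | c `elem` ")]}" = not (isEmptyS s) && matches (top s) c && go (pop s) cs
          | otherwise      = go s cs
        matches '(' ')' = True
        matches '[' ']' = True
        matches '{' '}' = True
        matches _   _   = False

    main :: IO ()
    main = print (map balanced ["([]{})", "([)]", "(("])  -- [True,False,False]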
Another good example is processing a string in which an asterisk means popping an item off the stack and a letter means pushing it onto the stack. The function must return the stack obtained by applying the operations described in the string to an empty stack.
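A sketch of this second exercise, reusing the Stack interface from the sketch above (the behaviour of '*' on an empty stack is my own assumption, since the post does not specify it):

    -- Letters are pushed; '*' pops (and is ignored on an empty stack).
    runOps :: String -> Stack Char
    runOps = foldl step emptyS
      where
        step s '*' = if isEmptyS s then s else pop s
        step s c   = push c s

For example, runOps "ab*c" leaves a stack with 'c' on top of 'a'.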
If you can share some other problems, I will appreciate it very much.
Thank you in advance.

Though this question is too broad, I am going to give some other applications. Some other common applications are listed below (a sketch of infix-to-postfix conversion follows the list):
Parsing
Recursive function calls
Function calls (the call stack)
Expression evaluation
Expression conversion:
  Infix to postfix
  Infix to prefix
  Postfix to infix
  Prefix to infix
Towers of Hanoi
Some details can be found here.
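To keep this answer self-contained, here is a small sketch of the infix-to-postfix item, written in Haskell since the question is tagged haskell. The single-character operands and left-associative operators are my own simplifying assumptions:

    import Data.Char (isAlphaNum)

    -- Operator precedence; '(' falls through to 0 so it is never popped
    -- by an ordinary operator.
    prec :: Char -> Int
    prec c
      | c `elem` "*/" = 2
      | c `elem` "+-" = 1
      | otherwise     = 0

    -- Shunting-yard style conversion using a list as the operator stack
    -- (head = top).  Operands go straight to the output; operators are
    -- pushed after popping anything of equal or higher precedence.
    toPostfix :: String -> String
    toPostfix = go []
      where
        go stack []       = stack                      -- flush remaining operators
        go stack (c : cs)
          | isAlphaNum c  = c : go stack cs
          | c == '('      = go ('(' : stack) cs
          | c == ')'      = let (ops, rest) = span (/= '(') stack
                            in ops ++ go (drop 1 rest) cs
          | otherwise     = let (ops, rest) = span (\o -> prec o >= prec c) stack
                            in ops ++ go (c : rest) cs

    main :: IO ()
    main = putStrLn (toPostfix "a+b*(c-d)")  -- prints "abcd-*+"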

Related

What is the relationship between "safe", "pure", and referential transparency in functional programming? [closed]

pure / impure: these come up when we talk about the difference between Haskell and the Lisp family.
safe / unsafe: these come up when we name functions like unsafePerformIO and unsafeCoerce.
referential transparency / referential opacity: these come up when we emphasize the benefits of purely functional programming.
The differences between these terms are very subtle. I have found posts discussing each of them individually, but I am still hoping for a clear comparison between them, and I can't find such a post here yet.
I've always been fond of Amr Sabry's 1998 paper that explored a similar question with the rigor it deserved: https://www.cs.indiana.edu/~sabry/papers/purelyFunctional.ps
A sample quote:
A language is purely functional if (i) it includes every simply typed lambda-calculus term, and (ii) its call-by-name, call-by-need, and call-by-value implementations are equivalent modulo divergence and errors.
While this question can generate a lot of "opinion"-based answers (which I am carefully avoiding!), reading through Amr's paper can put you in the right mindset for thinking about this question, regardless of whether you end up agreeing with him or not.
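As a concrete illustration of why that definition matters (my own example, not from the paper or the post): with unsafePerformIO, sharing decisions that should be unobservable become observable, so different evaluation strategies for the same expression disagree.

    import System.IO.Unsafe (unsafePerformIO)

    -- A "function" whose result is produced by a hidden side effect.
    -- NOINLINE keeps GHC from rewriting the definition away.
    effectful :: () -> Int
    effectful () = unsafePerformIO (putStrLn "effect!" >> pure 1)
    {-# NOINLINE effectful #-}

    main :: IO ()
    main = do
      -- Shared binding: the effect typically runs once (call-by-need)...
      let x = effectful ()
      print (x + x)
      -- ...while writing the application twice can run it twice.  With a
      -- pure expression these two programs would be indistinguishable,
      -- which is exactly the equivalence in the quote above.  (The exact
      -- output also depends on GHC's optimisation level, which is itself
      -- a symptom of lost referential transparency.)
      print (effectful () + effectful ())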

Is there any advantage to using higher-order functions (filter, map, fold) instead of pattern matching in Haskell? [closed]

I have some code in Haskell that uses pattern matching. However, I think that I can use folds and filters instead. For me this would be more readable, but I want to know if there is any advantage in terms of complexity.
The main reason to use higher-order functions instead of pattern matching and manual recursion is that it makes your code more concise and way easier to read.
Once you get the hang of them, you'll find that reading source code suddenly becomes way easier, as those functions are amongst the most popular in all of Haskell.
It's also considered good practice, and many people will appreciate that you abstract your code.
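To make the point concrete, here is a small illustrative comparison (not the asker's code); both versions are O(n), so the difference is readability rather than complexity:

    -- Explicit recursion with pattern matching:
    sumEvensRec :: [Int] -> Int
    sumEvensRec []     = 0
    sumEvensRec (x:xs)
      | even x    = x + sumEvensRec xs
      | otherwise = sumEvensRec xs

    -- The same function with filter and foldr:
    sumEvensHOF :: [Int] -> Int
    sumEvensHOF = foldr (+) 0 . filter even

    main :: IO ()
    main = print (sumEvensRec [1..10], sumEvensHOF [1..10])  -- (30,30)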

Which is better for parsing, the left-corner parsing algorithm or the CYK parsing algorithm, and why? [closed]

Which is better for parsing, the left-corner parsing algorithm or the CYK parsing algorithm, and why?
Generally speaking, (probabilistic) CYK computes a maximum-likelihood parse tree. It never gives you the best performance for that reason, and because it ignores contextual information when assigning probabilities. You need to modify it to take more context into account, or integrate it into something else; for example, a left-corner parser can use a CYK procedure internally. So the answer to your question is: LC is more powerful than CYK, though it is computationally more expensive. Have a look at Mark Johnson's paper.
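For readers who have never seen CYK, here is a minimal recogniser for a grammar in Chomsky normal form, written in Haskell to match the other examples in this collection; the toy grammar and names are my own illustrative assumptions (a real left-corner or probabilistic parser would add much more on top):

    import qualified Data.Map as M
    import qualified Data.Set as S

    type NT = String

    data Grammar = Grammar
      { unaryRules  :: [(NT, Char)]      -- A -> 'a'
      , binaryRules :: [(NT, (NT, NT))]  -- A -> B C
      }

    -- cell (i, j) holds the nonterminals deriving the substring w[i..j-1];
    -- the input is accepted if the start symbol derives the whole string.
    recognise :: Grammar -> NT -> String -> Bool
    recognise g start w =
      not (null w) && start `S.member` cell 0 n
      where
        n = length w
        table = M.fromList
          [ ((i, j), build i j)
          | len <- [1 .. n], i <- [0 .. n - len], let j = i + len ]
        cell i j = table M.! (i, j)
        build i j
          | j == i + 1 = S.fromList [ a | (a, c) <- unaryRules g, c == w !! i ]
          | otherwise  = S.fromList
              [ a | k <- [i + 1 .. j - 1]
                  , (a, (b, c)) <- binaryRules g
                  , b `S.member` cell i k
                  , c `S.member` cell k j ]

    -- Toy grammar: S -> A B, A -> 'a', B -> 'b'
    main :: IO ()
    main = print (recognise (Grammar [("A",'a'),("B",'b')] [("S",("A","B"))]) "S" "ab")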

Why use advanced compiler techniques over simple string manipulation in small compilers? [closed]

What is the advantage of using (what I understand as) advanced compiler techniques, like a formal grammar, an AST, etc., over simple string manipulation when making very small programming languages? I'm interested in compiler design and don't know whether I should learn all of this compiler theory if I'm only going to make small and simple languages. I know that as I start to make bigger languages I will probably have to use parser generators and the like, but until then, should I bother?
You should definitely know what an AST is and how to build one, even if you use parser generators later on. I mean, how can you be interested in compiler design but not in grammars and syntax trees? It always pays off to learn how things work under the hood rather than treating them as magic.
And seriously, parsing anything other than Whitespace or Brainf*ck with plain string manipulation becomes awful as soon as the language gets even slightly more complicated...
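A tiny illustration (my own, in Haskell to match the other examples here) of why an AST pays off even for a very small language: once expressions are a data type, evaluation and even a toy optimisation pass are simple pattern matches instead of fragile string surgery.

    data Expr
      = Lit Int
      | Add Expr Expr
      | Mul Expr Expr
      deriving Show

    eval :: Expr -> Int
    eval (Lit n)   = n
    eval (Add a b) = eval a + eval b
    eval (Mul a b) = eval a * eval b

    -- A small rewrite pass (drop multiplications by 1) that would be
    -- painful and error-prone on raw source strings.
    simplify :: Expr -> Expr
    simplify (Mul (Lit 1) e) = simplify e
    simplify (Mul e (Lit 1)) = simplify e
    simplify (Add a b)       = Add (simplify a) (simplify b)
    simplify (Mul a b)       = Mul (simplify a) (simplify b)
    simplify e               = e

    main :: IO ()
    main = print (eval (simplify (Mul (Lit 1) (Add (Lit 2) (Lit 3)))))  -- 5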

Effects of sound multiplication [closed]

What are the effects of multiplying two different sounds, where neither of them is constant? For example, two different songs, or one instrumental track and one vocal track.
A simple Google search came up with this:
http://crca.ucsd.edu/~msp/techniques/v0.11/book-html/node77.html
Did you search for it at all?
But basically what happens is that you end up creating an envelope, where the second signal acts as a "coefficient" of sorts.
You also end up with a reduction in level (since the product of two values with magnitude below 1 is smaller than either of them), so you'll need to amplify the signal a bit to retain volume.
The page I linked gives a lot more explanation and has much of the algebra needed to write code implementing it. Look there if you have any more questions.
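A short sketch of the idea (my own illustration, in Haskell to match the rest of this collection): multiply the two signals sample by sample, then apply make-up gain, since the product of two values in [-1, 1] is no louder than either input.

    sampleRate :: Double
    sampleRate = 44100

    -- An infinite sine wave sampled at sampleRate.
    sine :: Double -> [Double]
    sine freq = [ sin (2 * pi * freq * t / sampleRate) | t <- [0 ..] ]

    -- Pointwise product of two signals, with a gain factor to make up
    -- for the level drop caused by the multiplication.
    multiplySignals :: Double -> [Double] -> [Double] -> [Double]
    multiplySignals gain = zipWith (\x y -> gain * x * y)

    -- E.g. a 440 Hz tone enveloped by a slow 3 Hz oscillation.
    main :: IO ()
    main = mapM_ print (take 5 (multiplySignals 1.5 (sine 440) (sine 3)))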
