Somewhere some guy said (I honestly do not know where I got this from) that one should learn one programming language per year. I can see where that might be a good idea, because you learn new patterns and ways to look at the same problems by solving them in different languages. Typically, when learning a new language, I look at how certain problems are supposed to be solved in that language. My question now is: what, in your experience, are good, simple, and clearly defined tasks that demonstrate the differences between programming languages?
The idea here is to have a set of tasks that, when I solve all of them in the language I am learning, gives me a good overview of how things are supposed to be done in that language. I do not know if that is even possible, but it sure would be a useful thing to have.
A typical example one often sees, especially in tutorials for functional languages, is the implementation of quicksort.
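For reference, the version such tutorials usually show looks roughly like this (a sketch to illustrate the style, not an efficient in-place sort):

    -- Classic tutorial quicksort: concise, but not in-place and not fast on big lists.
    quicksort :: Ord a => [a] -> [a]
    quicksort []     = []
    quicksort (p:xs) =
      quicksort [x | x <- xs, x < p] ++ [p] ++ quicksort [x | x <- xs, x >= p]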
Search for "Code Kata" for some resources.
Pick a problem. Solve it in different languages.
http://slott-softwarearchitect.blogspot.com/2009/08/code-kata-resources.html
In today's world, I don't think simple tasks like implementing a bubble sort will really give you a taste for a language, the reason being that several of them have C at their core (Java, C#, PHP, JavaScript, etc.).
Instead, go for small apps like a simple contact manager. This will allow you to work with the chosen language's UI, database, and logic features.
There is a lot of hype around Haskell; however, it is hard to get information on how it is used in real-world applications. What are the most popular projects/uses of Haskell, and why does it excel at solving these problems?
What are some common uses for this language?
Rapid application development.
If you want to know "why Haskell?", then you need to consider the advantages of functional programming languages (taken from https://c2.com/cgi/wiki?AdvantagesOfFunctionalProgramming):
Functional programs tend to be much more terse than their ImperativeLanguage counterparts. Often this leads to enhanced programmer productivity.
FP encourages quick prototyping. As such, I think it is the best software design paradigm for ExtremeProgrammers... but what do I know?
FP is modular in the dimension of functionality, where ObjectOrientedProgramming is modular in the dimension of different components.
The ability to have your cake and eat it. Imagine you have a complex OO system processing messages - every component might make state changes depending on the message and then forward the message to some objects it has links to. Wouldn't it be just too cool to be able to easily roll back every change if some object deep in the call hierarchy decided the message is flawed? How about having a history of different states?
Many housekeeping tasks made for you: deconstructing data structures (PatternMatching), storing variable bindings (LexicalScope with closures), strong typing (TypeInference), GarbageCollection, storage allocation, whether to use boxed (pointer-to-value) or unboxed (value directly) representation...
Safe multithreading! Immutable data structures are not subject to data race conditions, and consequently don't have to be protected by locks. If you are always allocating new objects, rather than destructively manipulating existing ones, the locking can be hidden in the allocation and GarbageCollection system.
Apart from this, Haskell has its own advantages, such as:
Clear, intuitive syntax inspired by mathematical notation.
List comprehensions to create a list based on existing lists.
Lambda expressions: create functions without giving them explicit names. So it's easier to handle big formulas.
Haskell is completely referentially transparent. Any code that uses I/O must be marked as such. This way, it encourages you to separate code with side effects (e.g. putting text on the screen) from code without (calculations).
Lazy evaluation is a really nice feature:
Even if something would usually cause an error, it will still work as long as you don't use the result. For example, you could put 1 / 0 as the first item of a list and it will still work if you only use the second item (see the short sketch after this list).
It is easier to write search programs such as this sudoku solver because it doesn't load every combination at once—it just generates them as it goes along. You can do this in other languages, but only Haskell does this by default.
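A minimal sketch illustrating the referential-transparency and lazy-evaluation points above (my own example; it uses integer div rather than / so that forcing the bad element really would raise an error):

    -- Laziness: the first element is never forced, so no error is raised.
    safeSecond :: Int
    safeSecond = [1 `div` 0, 42] !! 1        -- evaluates to 42

    -- A pure function: no side effects, same result for the same input.
    double :: Int -> Int
    double x = 2 * x

    -- Code that touches the outside world is marked with IO in its type.
    main :: IO ()
    main = print (double safeSecond)         -- prints 84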
You can check out the following links:
https://c2.com/cgi/wiki?AdvantagesOfFunctionalProgramming
https://learn.microsoft.com/archive/blogs/wesdyer/why-functional-programming-is-important-in-a-mixed-environment
https://web.archive.org/web/20160626145828/http://blog.kickino.org/archives/2007/05/22/T22_34_16/
https://useless-factor.blogspot.com/2007/05/advantage-of-functional-programming.html
I think people in this post are missing the most important point for anyone who has never used a functional programming language: expanding your mind. If you are new to functional programming then Haskell will make you think in ways you've never thought before. As a result your programming in other areas and other languages will improve. How much? Hard to quantify.
There is one good answer for what a general purpose language like Haskell is good for: writing programs in general.
For what it is used for in practice, I've three approaches to establishing that:
A tag cloud of Haskell library and app areas, weighted by frequency on Hackage.
Indicates that it is good for graphics, networking, systems programming, data structures, databases, development, text processing ...
Areas it is used in industry - a lot of DSLs, web apps, compiler design, networking, analysis, systems programming, ...
And finally, my opinion on what it is really strong at:
Problems where correctness matters, domain specific languages, and parallel and concurrent programming
I hope that gives you a sense on how broad your question is, if it is to be answered with any specificity.
One example of Haskell in action is xmonad, a "featureful window manager in less than 1200 lines of code".
From the Haskell Wiki:
Haskell has a diverse range of use commercially, from aerospace and defense, to finance, to web startups, hardware design firms and lawnmower manufacturers. This page collects resources on the industrial use of Haskell.
According to Wikipedia, the Haskell language was created out of the need to consolidate existing functional languages into a common one which could be used for future research in functional-language design.
It is apparent, based on the information available, that it has outgrown its original purpose and is used for much more than research. It is now considered a general-purpose functional programming language.
If you're still asking yourself, "Why should I use it?", then read the Why use it? section of the Haskell Wiki Introduction.
Haskell is a general purpose programming language. It can be used for anything you use any other language to do. You aren't limited by anything but your own imagination. As for what it's suited for? Well, pretty much everything. There are few tasks in which a functional language does not excel.
And yes, I'm the Rayne from Dreamincode. :)
I would also like to mention that, in case you haven't read the Wikipedia page, functional programming is a paradigm like Object Oriented programming is a paradigm. Just in case you didn't know. Haskell is also functional in the sense that it works; it works quite well at that.
Just because a language isn't an Object Oriented language doesn't mean the language is limited by anything. Haskell is a general-purpose programming language, and is just as general purpose as Java.
I have a cool one: Facebook created an automated tool for rewriting PHP code. They parse the source into an abstract syntax tree and do some transformations, for example:
if ($f == false) -> if (false == $f)
I don't know why, but that seems to be their particular style; they then pretty-print the result.
https://github.com/facebook/lex-pass
We use Haskell for making small domain-specific languages. Huge amounts of data processing. Web development. Web spiders. Testing applications. Writing system administration scripts. Backend scripts, which communicate with other parties. Monitoring scripts (we have a DSL which works nicely together with munin and makes it much easier to write correct monitoring code for your applications).
All kinds of stuff, actually. It is just an everyday general-purpose language with some very powerful and useful features, if you are somewhat mathematically inclined.
From Haskell:
Haskell is a standardized, general-purpose purely functional programming language, with non-strict semantics and strong static typing. It is named after logician Haskell Curry.
Basically Haskell can be used to create pretty much anything you would normally create using other general-purpose languages (e.g. C#, Java, C, C++, etc.).
For example, for developing interactive, realtime HTML5 web applications. See Elm, the compiler of which is implemented in Haskell and the syntax of which borrows a lot from Haskell's.
This is a pretty good source for info about Haskell and its uses:
Open Source Haskell Releases and Growth
Why do most languages seem to exhibit only fairly basic control structures: if...then, else, loops, for each, switch statements, and so on? The standard list seems fairly basic from a logic point of view.
Why is there not much more in the way of logical syntactic sugar? Perhaps something like a proposition engine, where you could feed in an array of premises or functions that return complicated, self-referential, interdependent functions and results. Something where you could chain together a complex array of conditions, but represented in a way that is easy and clear to read in the code.
Premise 1
Premise 2 if and only if Premise 1
Premise 3
Premise 4 if Premise 2 and Premise 3
Premise 5 if and only if Premise 4
etc...
Conclusion
I realize that this kind of logic can be constructed with functions and/or nested conditional statements. But why are there not generally more syntax options for structuring these kinds of logical propositions without ending up with hairy-looking conditional statements that can be hard to read and debug?
Is there an explanation for the kinds of control structures we typically see in mainstream programming languages? Are there specific control structures you would like to see directly supported by a language's syntax? Does this just add unnecessary complexity to the language?
Have you looked at Prolog? A Prolog program is basically a set of rules that is turned into one big evaluation engine.
From my personal experience, Prolog is a bit too weird and I actually prefer ifs, whiles, and so on, but YMMV.
Boolean algebra is not difficult, and provides a solution for any conditionals you can think of, plus an infinite number of other variants.
You might as well ask for special syntax for "commonly-used" arithmetic expressions. Who is to say what qualifies as commonly-used? And where do you stop adding special-case syntax?
Adding to the complexity of a language parser is not preferable to using constructive expression syntax, combined with extensibility through defining functions.
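To make that concrete, here is how the question's premise chain could be encoded with nothing but ordinary boolean definitions (a Haskell sketch; the base premise values are made-up placeholders, and the "if"/"iff" wording is flattened into plain definitions):

    -- Hypothetical base premises; in a real program these would be computed.
    premise1, premise3 :: Bool
    premise1 = True
    premise3 = True

    premise2 :: Bool
    premise2 = premise1                 -- Premise 2 iff Premise 1

    premise4 :: Bool
    premise4 = premise2 && premise3     -- Premise 4 if Premise 2 and Premise 3

    premise5 :: Bool
    premise5 = premise4                 -- Premise 5 iff Premise 4

    conclusion :: Bool
    conclusion = premise5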
It's been a long time since my Logic class in college but I would guess it's a mixture of difficulty in writing them into the language vs. the frequency with which they'd be used. I can't say I've ever had the need for them (not that I can recall). For those times that you would require something of that ilk the language designers probably figure you can work out the logic yourself using just the basic structures.
Just my wild guess though.
Because most programming languages don't provide sufficient tools for users to implement them, it is not seen as an important enough feature for the implementer to provide as an extension, and it isn't demanded enough or used enough to be added to the standard.
If you really want it, use a language that provides it, or provides the tools to implement it (for instance, lisp macros).
It sounds as though you are describing a rules engine.
The basic control algorithms we use mirror what the processor can do efficiently. Basically, this boils down to simple test-and-branch instructions.
It may seem limiting to you, but many people don't like the idea of writing a simple-looking line of code that requires hundreds or thousands (or millions) of processor cycles to complete. Among these people are systems software folks, who write things like Operating Systems and compilers. Naturally most compilers are going to reflect their own writer's concerns.
It relates to the concern regarding atomicity. If you can express A, B, C, D in simpler structures Y, Z, why not skip supplying A, B, C, D and supply Y, Z instead?
The existing languages reflect 60 years of the tension between atomicity and usability. The modern approach is "small language, large libraries". (C#, Java, C++, etc).
Because computers are binary, all decisions must come down to a 1/0, yes/no, true/false, etc.
To be efficient, the language constructs must reflect this.
Eventually all your code goes down to microcode that is executed one instruction at a time. Until the microcode and accompanying CPU can describe something more colorful, we are stuck with a very plain language.
I have read numerous times that learning a language such as Haskell, Lisp or Smalltalk will somehow make you a better programmer while you program in other languages.
Is there more than just anecdotal evidence for that claim?
Or is it just the way people rationalize having spent a lot of time learning a programming language that they will never use?
IMHO, it is all about learning a new programming paradigm. If you know Java and then C#, there's not much gain, since both of them have almost the same "type of programming".
But if you get to learn a functional or dynamic language, for instance, you're forced to think another way, and that will probably help you to program better in your favorite language.
It is something like: "It is so easy doing this in {different language you learned}. There must be a better way to do this in {language you already know}". And then you rethink, and build up a more elegant way to do this in {language you already know}.
I don't have any hard evidence, but I have really appreciated the different way of looking at problems that I have developed since learning Lisp (the same goes for Python and C).
The key isn't necessarily learning different languages though, I believe that the key is actually the different viewpoints that you gain by learning different programming styles.
Good examples are functional, imperative, object-oriented, etc. Also, there are common design differences in interpreted vs. compiled languages, static vs. dynamic typing, etc.
Although most people do a majority of their programming using a single style (most commonly OOP over the past few years), I think that all programmers should know multiple styles so that they are better able to see the shortfalls of their own style.
Can't shed much light on this in terms of programming languages, but it seems very similar to the "why learn a dead language?" argument that surrounds Latin, and much of the reasoning there can be applied here.
Programming is a way of thinking, not writing code in programming language X: that is "coding", not "programming".
By knowing at least something about more than just one programming language - preferably across different paradigms, so imperative/OOP/functional/logical - you train that way of thinking about problems outside the context of the specific details and quirks of language X.
I think this always improves your abilities to be(come) a better programmer tremendously.
A great side-effect of learning new languages is the potential for application in your existing language.
For instance, I'm a Java programmer and I took the time to learn my first functional language (Haskell). I was recently asked to learn Scala for an upcoming project. This is extremely easy since I understand the concepts of guards, recursion, etc. from Haskell.
Deeply learning a language just for the sake of learning it has too few benefits. If you have a lot of tasks and you don't know the language that is ideal for solving them, then it makes sense to learn that language. Otherwise, it makes more sense to spend the time becoming an expert in the languages you already know.
I don't know that there will have been much rigorous study regarding the benefits of multi-programming language exposure on overall programming ability, but I would argue that the studies regarding why learning a foreign human language (which you may never use in practice) is beneficial would in general hold equally well for studying foreign programming languages. The benefits ascribed often include improved cognitive abilities as well as improved understanding of one's native language.
Here are some links to studies.
Anecdotally, I complained a great deal about taking COBOL, and I have never really used it, but I was able to apply things I learned in that class at my first job.
If you give any credence to the Pragmatic Programming guys, consider their advice from page 14 of their first book:
Learn at least one new language every year. Different languages solve the same problems in different ways. By learning several different approaches, you can help broaden your thinking and avoid getting stuck in a rut.
Some examples that come to mind:
Knowing C and having to deal with memory management and do-it-yourself data structures can help you understand performance issues when programming in a higher level language where those details are hidden from you.
Conversely, learning an OO language can affect your C programming - with, for example, the concept of Polymorphism prompting you to use function pointers in ways you might not have otherwise.
Learning a language where functions are first-class objects that can be passed around can make you think of similar techniques in other languages, even if, in those other languages, you have to make the functions methods in objects that get passed around (see the short sketch after this list).
Learning about the way Erlang handles concurrency can make you rethink how much shared state you use between threads in other languages.
Any language that has a built-in feature you find useful can prompt you to implement your own version of that feature in another language that doesn't have it, and thus allow you to solve problems in ways you might not have thought of if you hadn't been exposed to the feature in the language that has it built-in.
Learning about Interfaces in Java can make you think about the benefits of precisely specifying your (small "i") interfaces in other languages that don't have them as a formal construct in a type system.
No doubt there are others.
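As a tiny illustration of the first-class-functions point above (a Haskell sketch; the names are my own):

    -- A higher-order function: it takes another function as an argument.
    applyTwice :: (a -> a) -> a -> a
    applyTwice f x = f (f x)

    -- Functions can be stored in data structures and passed around like values.
    pipeline :: [Int -> Int]
    pipeline = [(+ 1), (* 2), subtract 3]

    runPipeline :: Int -> Int
    runPipeline x = foldl (\acc f -> f acc) x pipeline

    -- applyTwice runPipeline 10  ==>  37

In a language without first-class functions you would typically wrap each step in an object (a strategy or command) and pass those objects around instead.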
Learning a language is not a binary event. If you are a decent programmer, you should be able to trust your own instincts as to whether a language offers you a new take on your craft.
Virtually every language worth considering these days can be downloaded and test-driven in a couple of minutes. So do it -- pick one and try it out.
There are a limited number of cases where this "laissez-faire" approach falls short. If you're a complete beginner, of course it doesn't work. When I first learned C, I had to have it beaten into me, but it did turn out to be worth it because it made me understand pointers, memory reference and dynamic allocation in a way I hadn't previously.
But if you know that much already, just poke around and look for a language that makes your lightbulb go on.
Different languages have different ways of implementing the same ideas. By learning new languages, you get a different perspective on how things can be accomplished, and can then use that knowledge to approach how you program in your current environment. Think about object-oriented and functional programming. OO programmers can learn a lot about parallelization from languages like C.
Learning a language, especially one that practices a new paradigm, is very beneficial for every programmer. For example, learning Scheme will help someone understand functional programming. The programmer can later practice what he/she learned with other languages like C#, and can think of new ways of doing things.
Also, as languages evolve, it's highly likely that the language you use will adopt some features of other languages. Having taught myself Ruby, I was able to grasp the changes in C# 3.0 much more easily.
I think learning languages will always benefit you even if you don't use them again. I started playing with Ioke as an attempt to learn something experimental and because of it my JavaScript has improved because certain ideas have been cemented.
Learning a new language will possibly give you new insights that you will try to translate to your main language.
I don't think there will be any hard evidence; I think this is more of an intuitive thing. Learning a totally different language will help you look at things totally differently. Or maybe it won't. In any case, what's the harm in learning something?
It's entirely subjective, but way back when, after taking an undergraduate course in Haskell, I did notice that my programming style in C became more 'Haskell-like' for a while; I used a lot of simple, recursive functions. I also noticed that this programming style seemed to yield some of the same benefits programming in Haskell had; bugs were fewer, code was easier to understand (albeit slower).
So, while learning another programming language may not make everyone a better programmer, it definitely was a learning experience for me, personally.
What are the benefits of learning mathematics or physics that you won't use, or the benefits of studying philosophy or dead tongues?
It's the intellectual achievement and the enlightenment that matter; you will be a wiser person with any new thing that you learn, no matter whether it is programming languages, literature, role-playing games... Of course, if it's related to your working field, then you'll actually find a use for it, sooner or later :-)
I spent some time studying clojure even though I knew I wouldn't use it in the near-term (mostly because I can't really deploy on the JVM).
It has concepts that aren't supported by the languages I use (C#/C/C++/Python/Perl) and I wanted to know what I was missing and also if it would be worth looking into libraries that purport to add these features.
Specifically, I'm very interested in understanding Lisp-style macros and the direct concurrency support. I also spent some time reading the implementation, specifically the data structures, which was very educational; it is good to see a quality implementation of persistent data structures to learn how they work (and how they give you immutability without sacrificing much performance).
Beyond what has already been said, I really like new languages just because they can bring new interest to programming. You learn different ways to approach problems and the strengths/weaknesses of certain languages. It is something new to learn, and any good programmer should always be striving to learn new things. It mixes up your daily routine of possibly programming in the same language for years.
I also like what everyone has said about programming perspective.
Some good points have been made.
I would add that learning languages you won't use in production work can be of value:
To better appreciate and absorb the arguments and methods in texts and papers that will improve programming ability in languages I do use for production work (e.g. MIX/MMIX for Knuth's Art of Computer Programming; RATFOR for Kernighan and Plauger's Software Tools; I still use some ALGOL-based syntax for some pseudocode although I never wrote runnable code in ALGOL outside University)
To be able to check or prototype programs that will be written in a different language (e.g. some routines for numerical computing in C can be quickly checked or scaled using languages that have appropriate functionality built in such as Fortran, Python or Haskell)
Learning a new language can give insight as to how it could be used to more easily solve problems that were put to one side because of time or complexity constraints.
What is the real benefit of creating a new programming language? It is highly unlikely that you are going to actually use it.
In short, how will the process of creating a new language make you a better programmer?
You will understand the decisions behind language design and garner a better overall understanding of the compromises made between readability, performance, and reliability.
Your familiarity with concepts such as recursion, closures, garbage collection, reference management, typing, data structures and how these things actually work will increase. Most programmers will utilize resources and language features better.
Similar to the way we learn new ways to code solutions when we use other languages, when we write our own languages, we explore new ways to create solutions. See Metaprogramming. Contrary to what the question suggests, domain-specific languages are used in many environments.
If you're writing a compiler, you'll learn more about how computers work than you ever did before. (Depending on your goal, perhaps more than you intended to learn)
When I wrote my own sort routines in school, even re-implementations of good ones, it really drove home some of the weaknesses of some of the algorithms.
In short, there's an order of magnitude of difference in a programmer who knows how to use tools, and a programmer who knows how to make tools.
I can speak from experience here ...
Fun, domain-specific problem solving, complexity in context.
I love creating new languages for fun, and for tackling domain-specific problems. A very simple example might be Wikipedia markup, or something as complex as Erlang, which specializes in concurrent processing.
Many general purpose languages are similar, because they are general purpose. Sometimes you need a more accurate abstraction of the mechanics of the problem you are solving. Another example would be the M4 macro language.
Remember, a language is not magic; it is just a collection of defined grammatical structures with implied semantics. SQL is a good example of a language built for a purpose, with that purpose defined in its syntax and semantics.
Learning how languages work, what makes a language parsable, what makes semantics sensible, and how all of this is implemented can, I think, make you a better programmer.
Compilers embody a lot of theory that underpins computer science:
Translation, abstraction, interpretation, data structures, state... the list goes on. Learning these things will make you understand the implications of your program and what goes on under the hood. You can of course learn them independently, but compilers are a great context for learning complex topics such as DFA/NDFA automata, stack-based parsers, abstract syntax trees...
Compilers are beautiful machines, I think :)
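For a taste of what "abstract syntax tree" and "interpretation" mean in miniature, here is a toy expression language (my own example, in Haskell):

    -- An abstract syntax tree for a tiny arithmetic language.
    data Expr
      = Lit Int
      | Add Expr Expr
      | Mul Expr Expr
      deriving Show

    -- An interpreter: walk the tree and compute its value.
    eval :: Expr -> Int
    eval (Lit n)   = n
    eval (Add a b) = eval a + eval b
    eval (Mul a b) = eval a * eval b

    -- eval (Add (Lit 1) (Mul (Lit 2) (Lit 3)))  ==>  7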
Multiple reasons:
bragging rights
economic incentives
extreme boredom
dissatisfaction with the hundreds of existing languages
untreated insanity
desire to implement a language that facilitates new design concepts (like languages that make design patterns more straightforward to incorporate)
other reasons, perhaps
I think Jeff Atwood answers this well in this Coding Horror post. Though he's talking about a more general issue (why create any new library, framework, etc., when other artifacts in the same design space already exist), I suspect that exactly this broader viewpoint gives him a different and interesting perspective.
I will add that if you write a semantics, so that your language is an actual language and not merely what happens to be accepted by some particular implementation, you will learn an enormous amount about how to describe computational behaviors precisely:
You will learn what kinds of behaviors are and are not easy to describe—and prove correct.
You will learn how to trade off different kinds of formalisms for describing different kinds of features.
You will ultimately be a better programmer because the formalism and proof techniques you will learn will apply to all kinds of problems: locking techniques, safety properties in kernels, lock-free data structures, network protocols, and information security, to name just a few. All these areas are amenable to the same kind of formal treatment that is given to a programming language.
To pick just one example, if you give your language a static type system and you then prove that a well-typed program is guaranteed to be memory-safe, you will learn just as much (on a different dimension) as you will by writing an interpreter or compiler.
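A very small taste of that "well-typed programs can't go wrong" idea, using a Haskell GADT so that ill-typed expressions cannot even be constructed (my own illustration, not from the answer):

    {-# LANGUAGE GADTs #-}

    -- The type index guarantees, e.g., that an If condition is always a Bool,
    -- so eval can never encounter the wrong kind of value.
    data Expr a where
      IntLit  :: Int  -> Expr Int
      BoolLit :: Bool -> Expr Bool
      Add     :: Expr Int  -> Expr Int -> Expr Int
      If      :: Expr Bool -> Expr a   -> Expr a -> Expr a

    eval :: Expr a -> a
    eval (IntLit n)  = n
    eval (BoolLit b) = b
    eval (Add x y)   = eval x + eval y
    eval (If c t e)  = if eval c then eval t else eval e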
EDIT: If you want to learn this stuff I think the easiest starting point is Benjamin Pierce's series of two books on Types and Programming Languages. There is also a graduate textbook by Glynn Winskel which is a little harder but more oriented toward semantics and proof techniques.
Creating Domain Specific Languages is very valuable. Instead of thinking only about general purpose languages, consider creating so-called "little languages" that clearly express abstractions in your project.
For example, in a recent project I decided to use a Command Pattern to drive a Service Layer. I found some repetition in my command code, so I wrote a little compiler that accepts a simple language that expresses commands and emits command implementations in the "underlying" language.
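A rough sketch of that kind of "little language" in Haskell (the command format and the emitted code are hypothetical, purely for illustration):

    -- One line of the little language: "CommandName: field1, field2, ..."
    data CommandSpec = CommandSpec { cmdName :: String, cmdFields :: [String] }

    -- A trivial parser for the little language (no error handling).
    parseSpec :: String -> CommandSpec
    parseSpec line =
      let (name, rest) = break (== ':') line
          fields = words (map (\c -> if c == ',' then ' ' else c) (drop 1 rest))
      in CommandSpec name fields

    -- Emit a stub command implementation in the "underlying" language.
    emit :: CommandSpec -> String
    emit (CommandSpec name fields) =
      "class " ++ name ++ "Command {\n"
        ++ concatMap (\f -> "  private String " ++ f ++ ";\n") fields
        ++ "}\n"

    -- putStr (emit (parseSpec "CreateUser: name, email"))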
For the same reason that taking a Compiler Construction course at university will benefit you even if you never write a single compiler in your whole life. It's a look under the hood, if you may.
In addition to what altCognito said, which is a theoretical/academic perspective, some highly specialized languages are created to solve specific problems efficiently when existing "general-purpose" languages are either extremely inefficient for your task or there just isn't an easy-to-use existing alternative.
Granted, such cases tend to be rare, and if your first instinct on encountering a problem is "I need a new language for this", then it is most likely you're missing something. There needs to be a fairly substantial gap between "available" tech and your needs to warrant such an undertaking.
I think there are really two conceptually different answers to this. First, you gain an understanding of how compilers transform your code into executable code. This can help you make better decisions about how to structure your code to optimize (or allow it to be optimized) better. If, for instance, you knew that a certain construct would prohibit the compiler from inlining a code block or unrolling a loop, then you could avoid that if performance became a real concern.
Second, all current languages were invented (or derived) at some point in history. For each one of these, the likelihood that it would actually be used was potentially small, yet here they are. They all found their reason for being in the fact that someone wanted to do something that wasn't possible or easy to do in an existing language and decided to do something about it. Laziness (or the desire to let the computer do the work for you) is the mother of invention.
Just for fun... and then you'll realize that you cannot make anything better than all the languages that you thought sucked xD (so you stop complaining about them).
how will the process of creating a new language make you a better programmer?
You're right, you may or may not use the language, but at the very least, the experience you gain from doing it will help you understand the implementation of programming languages, and some of what you learn will apply to future computation problems that you run into.
Writing a compiler or interpreter requires a very firm understanding of computer science theory. And if you're compiling to machine code instead of to another language, it requires a firm understanding of hardware design as well.
In addition to that, knowing how to design a compiler means you will have a better understanding of languages in general, and the languages you work with specifically. You will have a better appreciation for syntax and trade-offs the language designers took when they wrote their specification.
It's not that writing compilers makes you a better programmer. It's the deep understanding of language theory and compiler design that makes you better.
Mostly you do this for fun or to broaden your comprehension of a subject.
I disagree that creating a new language influences performance. Performance of what? IMHO execution speed should not depend on the language constructs but on what the language is translated to, which is something different: like creating a syntax for a language and writing a compiler/virtual machine for it.
Because a talking frog is pretty neat.
I want a managed language that permits tinkering with its internals as standard practice. Kind of like Ruby's duck punching on a wider scale.
I should, as the client of a library, be able to swap out library functions that don't do what I want.
That's what drives me crazy with .NET. There are bugs in the framework that Microsoft will not fix, and thanks to GAC signing, I cannot. And even if it were not for GAC signing, hot-patching a global library is a bad idea (it might break some other application).
I, for one, don't care about how compilers work, don't care about learning new languages, and don't care about using scripting languages like Perl and JavaScript. I'm much more interested in the ways big programs are constructed (or should be constructed). There are still no good solutions for making LARGE software as easy to use as prototyped code. Programming languages are not helping with that. They solve trivial problems like sorting and memory deallocation, and leave you struggling alone with the problems that really matter (those that keep you or your firm from losing money).