Is it still worthwhile to learn ASM?
I know a little of it, but I haven't really used it or learned it properly because everything I learn to do in assembler I can do in 1/10th the time with some language like C or C++. So, should I really learn and use ASM? Will it do me any good professionally? Will it increase my resourcefulness? In short, would it make me a better programmer?
Note: I am talking about low-level assembly like FASM or NASM and not something like HLA (High-Level Assembler).
I learned from Kip Irvine's book. If you ignore the (fair) criticisms of his (irrelevant) libraries, I can recommend it as a good introduction to the language itself -- although for the really interesting stuff you have to hunt out obsessives on the net.
I think it's useful to understand what happens at the lower levels. As you research assembler you will learn about cpu pipelining, branch prediction, cache alignment, SIMD, instruction reordering and so on. Knowledge of these will help you write better high-level code.
Furthermore, the conventional wisdom is not to hand-optimise assembly most of the time, but to let the compiler worry about it. When you see some examples of the twisted things that compilers generate, you will better understand why the conventional wisdom holds.
Example: LFSRs run fast with the rotate-with-carry instruction. For specific cases like this, it's just as easy to write the assembler version as it is to discover whether or not the compiler is smart enough to figure it out. Sometimes you just know something that the compiler doesn't.
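As a hedged illustration (my own sketch, not the answerer's code), here is one step of a 16-bit Galois LFSR; 0xB400 is a common textbook tap mask. Whether a compiler turns the shift-and-conditional-xor into a rotate-with-carry idiom is exactly the sort of thing you can only settle by reading its output.

    #include <cstdint>
    #include <cstdio>

    // One step of a 16-bit Galois LFSR (taps chosen for illustration).
    std::uint16_t lfsr_step(std::uint16_t state) {
        std::uint16_t lsb = state & 1u;   // the bit that "falls off" the end
        state >>= 1;
        if (lsb)
            state ^= 0xB400u;             // apply the feedback taps
        return state;
    }

    int main() {
        std::uint16_t s = 0xACE1u;        // arbitrary non-zero seed
        for (int i = 0; i < 4; ++i) {
            s = lfsr_step(s);
            std::printf("%04x\n", s);
        }
    }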
It also increases your understanding of security issues -- write-xor-execute protections, stack overruns, etc.
Some concurrency issues only become apparent when you are aware of what is happening at the per-instruction level.
It can be useful sometimes when debugging if you don't have the complete source code.
There's the curiosity value. How are virtual functions implemented anyway? Ever try to write DirectX or COM programs in assembler? How do large structures get returned -- does the calling function offer space for them, or vice versa?
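To make that last question concrete, here is a hedged sketch (mine, not the answerer's) of how many ABIs lower a large struct return: the caller reserves the space and passes a hidden pointer to it.

    #include <cstdio>

    struct Big { int data[64]; };

    // What you write: a by-value return.
    Big make_big() {
        Big b{};
        b.data[0] = 42;
        return b;
    }

    // Roughly what many ABIs do instead: the caller supplies the storage and
    // passes its address as a hidden argument (often called "sret").
    void make_big_lowered(Big* hidden_result) {
        *hidden_result = Big{};
        hidden_result->data[0] = 42;
    }

    int main() {
        Big a = make_big();       // compiler-managed hidden pointer
        Big b;
        make_big_lowered(&b);     // the hand-written equivalent
        std::printf("%d %d\n", a.data[0], b.data[0]);
    }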
Then there are special assembly languages for graphics hardware, although shader languages went high-level a few years ago. Anything which lets you think about a problem a different way is good.
I find it interesting that so many people jump to say that yes, you need/should learn assembly. To me the question is how much assembly do you need to know? I don't think you have to know assembly like a programming language; that is, I don't believe everyone should be able to write a program in assembly. On the other hand, being able to read it and understand what it actually means (which might require more knowledge of the architecture than of the assembler) is enough.
I certainly cannot write assembly (i.e. any non-trivial piece of code in assembly), but I can read it, and that, together with knowledge of the actual hardware architecture and the calling conventions in use, is enough to analyze performance and identify which piece of C++ code was the source of that assembly.
Yes -- for C and C++ developers, the primary reason to learn assembly is that it helps you understand what's going on under the hood of C and C++ code. It's not that you will actually write code in assembly, but you will be able to look at a disassembly to assess its efficiency, and you will understand how different C and C++ features work much better.
It's worthwhile to learn lots of different languages, from lots of different paradigms. Learning Java, C++, C#, and Python doesn't count, since they are all instances of the same paradigm.
As assembly is at the root (well, close to the root) of all languages, I for one say that it is worthwhile to learn assembly.
Then again, it's worthwhile to learn a functional programming language, logic programming, scripting languages, math-based languages. You only have so much time, so you do have to pick and choose.
Knowing ASM is also useful when debugging, as sometimes all you have is "ASM dump of the error".
It depends on which programming level you wish to reach.
If you need to work with debuggers then YES.
If you need to know how compilers work then YES.
Any assembler/debugger is CPU-dependent, so there is a lot of work involved -- just look at how big and old the x86 family is.
Do you have any use for it in what you plan to do? Is it going to aid you in any way in what you currently do or plan to do? Those are the two questions you should ask yourself; the answers to those are the answer to your question.
In a more general sense, yes, in my opinion it is well worth learning asm (something like x86 or ARM); how well it serves you depends on what you're programming and how you're debugging it.
Disclaimer: Yes, I know this will take 3 years, at least.
I am planning to write a new interpreted programming language. I have a quite solid idea of what I want in terms of dynamism, syntax, object model, and so on.
Now that I have the idea, I have a few questions before I start:
Should I begin by writing the full specification and then implement it, or develop both together as I go?
I'm still undecided between C and C++. C++ would allow for a cleaner design and faster development, while C would (maybe) ensure portability to more platforms (microprocessors?). Performance is a must.
Should I try to interest people in the project before the first working prototype so they can cooperate (the end product will have a liberal license anyway), or keep working alone until I have something that runs?
How modular should it be? I am sure that I won't immediately start working on a bytecode interpreter but on something easier to implement (though slower) first, so modularity is a must in order to be able to extend it later. But I guess overdoing it will hamper performance and clarity.
The answers to your questions depend largely on why you're doing this- the primary reason. Are you trying to create the next Ruby, or is this a learning exercise?
Specification: If this is a personal project, this is not as important. PHP gets a bad rap for having been developed "on the fly," yet many people use it every day. A more complete spec will probably help get people involved if/when you want help.
If you want cross-platform and performance, C is the way to go.
If you want people to join in, prove something first. Write a killer-cool application with your language and blog/talk about why your language is different/special/better.
Modularity of what, the language itself or the compiler? If you want to extend the language, a good spec will help (see #1.) The compiler should be designed with all the best practices in mind, which should help make it extensible.
I hear the Dragon Book is good for learning to develop compilers.
Your specification will be broken unless you write it hand-in-hand with the implementation.
If you think C++ would give you cleaner design and faster development, you should probably use it.
You will have difficulty getting anyone interested in a project unless there is something that runs and demonstrates what is unique about your language.
If you think your language will ever require a byte-code interpreter (and you do say "Performance is a must") you should investigate the capabilities of existing byte-code interpreters before you finalize your language design.
I think you have set yourself too many goals. You say "performance is a must" but in a comment reply you say your goal is "to learn a lot about language design" and that it is "pretty unlikely" that you'll use it in a real project. New programming languages are created to solve problems; more precisely, they're created to help people express solutions to problems in better ways. Designing a language without using it seriously, intensely, continually is like writing software without any test cases: you're likely to wind up with something unusable.
If you want to try your hand at language design, then find a problem---one that you care about---that existing languages won't let you solve the way you want. Then do whatever you can to get a working implementation and start writing and running programs using it. You don't need a hand-crafted JIT compiler with a runtime written in highly bummed assembly code. If you target the JVM or .NET, you get a very high-performance GC, scalable threading system, libraries, and lots of other good stuff for free, even if it interferes with that awesome idea you had for ______.
On the other hand, if you just want to make something run fast, don't try to design a language at the same time. Just find one that you like, learn about implementation strategies, and see if you can do better.
I'm currently working on the topic of programming languages and interpreter design. I have already created several programming languages but haven't been able to reach my goal so far:
Create a programming language which focuses on giving the programmer a good feeling when writing code in it. It should just be fun and/or interesting, and never annoying, to write something in it.
I get this feeling when writing code in Python. I sometimes get the opposite with PHP and in rare cases when having to reinvent some wheel in C++.
So I've tried to figure out some syntactical features to make programming in my new language fun, but I just can't find any.
Which concrete features, maybe mainly in terms of syntax, do/could make programming in a language fun?
Examples:
I find it enjoyable to program in Ruby because of its use of code blocks.
It would be nice if you could include exactly one example in your answer.
Those features do not have to already exist in any language!
I'm doing this because I have experienced extreme rises in (my own) productivity when programming in languages I love (because of particular features).
You mentioned Ruby in your question. AFAIK, Ruby is the only programming language for which Joy is an actual, stated, explicit design goal. (In fact, it is the only design goal.)
The reason that Yukihiro Matsumoto was able to design Ruby this way is that he already knew and had used tons of programming languages before he started designing Ruby, and learned tons more in order to design it. (Interestingly, he didn't know Python, and has said that he probably wouldn't have created Ruby if he had.)
Here's just a tiny fraction of the languages that matz has either used himself or looked at for inspiration (or, in some cases, for inspiration on what not to do):
CLU
Sather
Lisp
Scheme
Smalltalk
Perl
Python
Haskell
Scala
PHP
C
C++
Java
C#
Objective-C
Erlang
And I believe that this is one way that good programming languages can be designed (what Larry Wall calls postmodernist language design): throw away everything that didn't work in the past, take everything that did work, and combine it tastefully.
Of course, this requires that you actually know all those languages from which you want to "steal" and in particular, it requires that you know lots of very different languages with different paradigms, different concepts and different "feels", otherwise the idea pool from which you steal is rather small and inbred.
Consistency.
It's the feeling that you already know something when you use an API or feature you've never used before. It also makes you more productive, as you don't have to learn something new for the sake of it.
I think this is also one of the Ruby 'likes', in that if you follow the naming convention, things start to 'just work' without bindings and glue and suchlike.
For example, using the STL in C++, many of the algorithms are the same for all containers -- even strings. That makes it nice to use... except for those parts that do not follow the same API (e.g. vector<bool>), where the difference is more noticeable.
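A small illustration of that consistency (my example, not the answerer's): the same algorithm call works unchanged on a vector and on a string.

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3, 4};
        std::string s = "hello";

        // std::find has the same shape for both containers.
        bool has_three = std::find(v.begin(), v.end(), 3) != v.end();
        bool has_l     = std::find(s.begin(), s.end(), 'l') != s.end();

        std::cout << has_three << ' ' << has_l << '\n';   // prints: 1 1
    }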
Two things to keep in mind are orthogonality and the principle of least surprise.
A programming language should make it easy to write correct programs and difficult (if not impossible) to write incorrect programs. For instance, in Java
long x = 2000000000 + 2000000000;   // int + int: wraps around before the widening to long
overflows, while
long x = 2000000000L + 2000000000;  // the long literal forces 64-bit arithmetic
doesn't. Is this obvious? I don't think so. Does anyone ever want something to overflow? I don't think so.
Hilarity.
http://lolcode.com/
Follow common practices (like using + for addition, & for bitwise/logical and)
Group logically similar code in namespaces
Have an extensive string processing library
Incorporate debugging facilities
For a cross-platform language, try to minimize platform differences as much as possible
A language feature that appears simple and easy to learn surprises and delights the programmer with its unexpected power. I nominate Haskell type classes :-)
What is the real benefit of creating a new programming language? It is highly unlikely that you are going to actually use it.
In short, how will the process of creating a new language make you a better programmer?
You will understand the decisions behind language design and garner a better overall understanding of the compromises made between readability, performance, and reliability.
Your familiarity with concepts such as recursion, closures, garbage collection, reference management, typing, data structures and how these things actually work will increase. Most programmers will utilize resources and language features better.
Similar to the way we learn new ways to code solutions when we use other languages, when we write our own languages, we explore new ways to create solutions. See Metaprogramming. Contrary to what the question suggests, Domain Specific Languages are used in many environments.
If you're writing a compiler, you'll learn more about how computers work than you ever did before. (Depending on your goal, perhaps more than you intended to learn)
When I wrote my own sort routines in school, even re-implementations of good ones, it really drove home some of the weaknesses of some of the algorithms.
In short, there's an order of magnitude of difference in a programmer who knows how to use tools, and a programmer who knows how to make tools.
I can speak from experience here ...
Fun, Domain specific problem solving, Complexity in context
I love creating new languages for fun, and for tackling domain specific problems. A very simple example might be Wikipedia markup or something as complex as Erlang which specializes in concurrent processing.
Many general purpose languages are similar, because they are general purpose. Sometimes you need a more accurate abstraction of the mechanics of the problem you are solving. Another example would be the M4 macro language.
Remember a language is not magic; it is just a collection of defined grammatical structures with implied semantics. SQL is a good example of a language for a purpose, with that purpose defined in its syntax and semantics.
Learning how languages work -- what makes a language parsable, what makes semantics sensible, and how these are implemented -- can, I think, make you a better programmer.
Compilers embody a lot of the theory that underpins computer science:
Translation, abstraction, interpretation, data structures, state... the list goes on. Learning these things will make you understand the implications of your program and what goes on under the hood. You can of course learn things independently, but compilers are a great context in which to learn complex topics such as DFA/NDFA automata, stack-based parsers, and abstract syntax trees.
Compilers are beautiful machines, I think :)
Multiple reasons:
bragging rights
economic incentives
extreme boredom
dissatisfaction with the hundreds of existing languages
untreated insanity
desire to implement language that facilitates new design concepts (like languages that make design patterns more straightforward to incorporate)
other reasons, perhaps
I think Jeff Atwood answers this well in this Coding Horror post -- though he's talking about a more general issue (why create any new library, framework, etc., when other artifacts in the same design space already exist), I suspect that exactly this broader viewpoint gives him a different and interesting perspective.
I will add that if you write a semantics, so that your language is an actual language and not merely what happens to be accepted by some particular implementation, you will learn an enormous amount about how to describe computational behaviors precisely:
You will learn what kinds of behaviors are and are not easy to describe—and prove correct.
You will learn how to trade off different kinds of formalisms for describing different kinds of features.
You will ultimately be a better programmer because the formalism and proof techniques you will learn will apply to all kinds of problems: locking techniques, safety properties in kernels, lock-free data structures, network protocols, and information security, to name just a few. All these areas are amenable to the same kind of formal treatment that is given to a programming language.
To pick just one example, if you give your language a static type system and you then prove that a well-typed program is guaranteed to be memory-safe, you will learn just as much (along a different dimension) as you will by writing an interpreter or compiler.
EDIT: If you want to learn this stuff I think the easiest starting point is Benjamin Pierce's series of two books on Types and Programming Languages. There is also a graduate textbook by Glynn Winskel which is a little harder but more oriented toward semantics and proof techniques.
Creating Domain Specific Languages is very valuable. Instead of thinking only about general purpose languages, consider creating so-called "little languages" that clearly express abstractions in your project.
For example, in a recent project I decided to use a Command Pattern to drive a Service Layer. I found some repetition in my command code, so I wrote a little compiler that accepts a simple language that expresses commands and emits command implementations in the "underlying" language.
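As a purely hypothetical sketch (the little language and the names below are invented for illustration, not taken from that project), the DSL input and the generated output might look something like this:

    // Hypothetical DSL input:
    //     command CreateUser(name)
    //
    // Hypothetical generated C++ output:
    #include <string>
    #include <utility>

    struct Command {
        virtual ~Command() = default;
        virtual void execute() = 0;
    };

    struct CreateUser : Command {
        std::string name;                                  // declared parameter
        explicit CreateUser(std::string n) : name(std::move(n)) {}
        void execute() override { /* call into the service layer here */ }
    };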
For the same reason that taking a Compiler Construction course at university will benefit you even if you never write a single compiler in your whole life. It's a look under the hood, if you will.
In addition to what altCognito said, which is a theoretical/academic perspective, some highly specialized languages are created to solve specific problems efficiently when existing "general-purpose" languages are either extremely inefficient for your task or there just isn't an easy-to-use existing alternative.
Granted, such cases tend to be rare, and if your first instinct on encountering a problem is "I need a new language for this", then it is most likely you're missing something. There needs to be a fairly substantial gap between the "available" tech and your needs to warrant such an undertaking.
I think there are really two conceptually different answers to this. First, you gain an understanding of how compilers transform your code into executable code. This can help you make better decisions about how to structure your code to optimize (or allow it to be optimized) better. If, for instance, you knew that a certain construct would prohibit the compiler from inlining a code block or unrolling a loop, then you could avoid that if performance became a real concern.
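For instance (a hedged sketch of my own, not the answerer's code), a call through a function pointer whose target the compiler cannot see typically blocks inlining, whereas a direct call to a small visible function usually does not:

    #include <cstdio>

    static int add_one(int x) { return x + 1; }

    int apply_direct(int x) {
        return add_one(x);              // target known: trivially inlinable
    }

    int apply_indirect(int (*f)(int), int x) {
        return f(x);                    // target unknown here: usually not inlined
    }

    int main() {
        std::printf("%d %d\n", apply_direct(1), apply_indirect(add_one, 2));
    }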
Second, all current languages were invented (or derived) at some point in history. For each one of these, the likelihood that it would actually be used was potentially small, yet here they are. They all found their reason for being in the fact that someone wanted to do something that wasn't possible or easy to do in an existing language and decided to do something about it. Laziness (or the desire to let the computer do the work for you) is the mother of invention.
Just for fun... and then you'll realize that you cannot make anything better than all the languages you thought sucked xD (so you stop complaining about them).
how will the process of creating a new language make you a better programmer?
You're right, you may or may not use the language, but at the very least the experience you gain from doing it will help you understand the implementation of programming languages, and give you things you will be able to apply to future computation problems you run into.
Writing a compiler or interpreter requires a very firm understanding in computer science theory. And if you're compiling to machine code instead of to another language, it requires a firm understanding in hardware design as well.
In addition to that, knowing how to design a compiler means you will have a better understanding of languages in general, and the languages you work with specifically. You will have a better appreciation for syntax and trade-offs the language designers took when they wrote their specification.
It's not that writing compilers makes you a better programmer. It's the deep understanding of language theory and compiler design that makes you better.
Mostly you do this for fun or to broaden your comprehension of a subject.
I disagree that creating a new language influences performance -- performance of what? IMHO, execution speed should depend not on the language constructs but on what the language is translated to -- which is something different: designing a syntax for a language versus writing a compiler/virtual machine for it.
Because a talking frog is pretty neat.
I want a managed language that permits tinkering with its internals as standard practice. Kind of like Ruby's duck punching on a wider scale.
I should, as the client of a library, be able to swap out library functions that don't do what I want.
That's what drives me crazy with .NET. There are bugs in the framework Microsoft will not fix and thanks to GAC signing I cannot. And even if it were not for GAC signing, hotpatching a global library is a bad idea (might break some other application).
I for one don't care about how compilers work, don't care about learning new languages, and don't care about using scripting languages like perl and javascript. I'm much more interested in the ways big programs are constructed (or should be constructed). There are still no good solutions for making LARGE software as easy to use as prototyped code. Programming languages are not helping with that. They solve trivial problems like sorting and memory deallocation, and leave you struggling alone with problems that really matter (that keep you or your firm from losing money).
I've been programming for several years now, and in that time I have learned several concepts and techniques that have made me a better programmer (e.g. OOP, MVC, regex, hashing, etc.). I also feel that by being able to learn several languages (BASIC, Pascal, C/C++, Lisp, Prolog, Python) I have widened my horizons in a very positive way. But for some time now I've felt like I'm not learning any new good "trick". Can you suggest some interesting concept/technique/trick that could help me get back into the learning flow?
A good paradigm shift always allows you to see things differently and become a better developer. I would suggest you read up on functional programming and maybe learn a functional language like Haskell or Scheme.
YAGNI (You Ain't Gonna Need It) and DTSTTCPW (Do The Simplest Thing That Could Possibly Work)
It's easy to spend a lot of time thinking about edge cases, and find that you've implemented something that's completely useless. I believe that a far better approach is to knock out a simple prototype, and then poke and prod it until you understand the domain well enough to create production code.
Recognize, however, that your prototype is going to evolve into production code whether you like it or not. So write it with that in mind.
Learning how to use your IDE and tools. For me, this resulted in a far greater productivity increase.
For example:
learning how to use a source-level debugger
using tools like Purify/BoundsChecker
FxCop
etc. I realize I am dating myself, but those were big steps. There are many more.
Any time you can change the way you think about a problem or solve a problem without having to undo previous work is a HUGE gain. Process, tools, etc. all can help with that. Don't limit yourself to finding silver-bullet techniques for productivity gains.
Watching productive people work and getting them to tell you what they are doing and why is also invaluable.
If I'm honest, using, and learning a great framework like .NET has really increased my productivity.
I'm often amazed what people are willing to reinvent due to their ignorance that the very same function already exists in the framework.
AGILE and especially Test Driven Development. Best thing to happen to software development since the invention of Object Oriented Design.
Concerning coding, I'd say design patterns and architecture patterns are always nice to look at and can help you write cleaner/better code.
For methodology I would advise Agile development, which is great. There are numerous techniques and methods (I'm personally a fan of extreme programming) and plenty of reading that can keep you busy and improve your general approach.
Finally I'd say learn new languages like Ruby
Design patterns
SCRUM process
DiSC assessment (and understanding of how it applies to collaborative s/w development)
StackOverflow.com (of course!)
Google
... other stuff too, I'm sure
Design Patterns. Learning how to break dependence upon implementation and inheritance, and depending on interfaces (contracts) instead changed the way I think about programming.
Debugging. Once I figured out how to actually step through the code and go line-by-line, examining the underlying state, it revolutionized how I troubleshoot code.
Practice, practice, practice: I didn't realize how important it is to keep working on my skills apart from work until a relatively short time ago. Mistakes and solutions I make at home make me a better programmer at work, and vice versa. Learning should never stop if you want to be good at something, and programming isn't an exception.
If I had to pick just one, I'd say Test-Driven Design, aka TDD: write unit tests (and check that they fail) before you incrementally add features.
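A minimal sketch of that rhythm (my example, using nothing but the standard library): state the expected behavior in a test first, watch it fail, then add just enough code to make it pass.

    #include <cassert>

    // Step 2: the minimal implementation, written only after the test below
    // existed (and failed).
    int add(int a, int b) { return a + b; }

    int main() {
        // Step 1: the test, written first, states the expected behavior.
        assert(add(2, 2) == 4);
        return 0;
    }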
Try to learn to see things from the user's standpoint.
For example:
learn how to write meaningful error messages
learn how to produce usable applications
learn some basic speed-optimization techniques
Remember that the user sees your application, not your code.
VIM Quick Reference Card. After I started using advanced Vim features (macros, plugins), I stopped doing repetitive coding actions manually.
Apart from that, Scrum and working at night, when no one interrupts you, gave me the biggest benefit.
If you want to expand your experience into web programming, you should try and get a good handle on the HTTP Request/Response paradigm. This will make creating web apps much easier on you because you understand the underlying framework.
http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol
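At its core the exchange is just text; a minimal, purely illustrative request and response look roughly like this:

    GET /index.html HTTP/1.1
    Host: www.example.com

    HTTP/1.1 200 OK
    Content-Type: text/html

    <html>...</html>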
I would look at some of the newer languages that combine OO and functional elements, like C# or Scala.
Learning Smalltalk has helped me become more productive. It is an easy language to learn, and things can be built extremely quickly. For a stunning productivity aid check out Seaside; it's a framework for building web applications. Moreover, if you have only been used to curly-brace languages, Smalltalk will also make you smile!
I was helped by the following paradigms in this order:
1) bottom-up programming
2) top-down programming (C, Pascal)
3) object-oriented programming (Smalltalk, Java)
4) functional programming (Lisp, Mathematica)
with some logic programming thrown in (Prolog).
NHibernate, hands down. The fact that I don't need to write database functionality for my business objects is very useful and time-saving.
High-level understanding -- creating good abstractions with proper dependencies -- is what pays off in the long term. For example, the Law of Demeter is an important guideline. I also recommend reading Eric Evans's Domain-Driven Design.
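A tiny illustration of the Law of Demeter (my example): let the object expose the behavior instead of making callers reach through its parts.

    struct Engine {
        void start() { /* ... */ }
    };

    struct Car {
        Engine engine;
        void start() { engine.start(); }                  // expose behavior, hide the part
    };

    void violates_demeter(Car& c) { c.engine.start(); }   // caller now depends on Engine
    void respects_demeter(Car& c) { c.start(); }          // caller only knows Car

    int main() {
        Car c;
        respects_demeter(c);
        violates_demeter(c);
    }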
Code generators. They're the best thing in software engineering.
Would you like to write all your projects in asm? Nope -- let's generate it from C++. Or from something sitting above the JVM which diligently generates the necessary machine code.
Duplicating the same source code all over the place because you're stuck with a language that insists on the line noise? Use macros.
Want to use lambdas in a language that doesn't have them? Work out how to fake the anonymous name and variable scoping required then generate the boilerplate.
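A sketch of the boilerplate such a generator would emit (my example): in a language without lambdas, something like [n](int x) { return x + n; } becomes a one-off functor struct with an invented name and an explicit captured field.

    #include <cstdio>

    struct AddN {                         // "anonymous" name chosen by the generator
        int n;                            // the captured variable
        int operator()(int x) const { return x + n; }
    };

    int main() {
        int n = 10;
        AddN add_n{n};                    // capture by value at the point of creation
        std::printf("%d\n", add_n(32));   // prints 42
    }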
None of the readily available languages quite fit your pattern of thought, desired syntax or even semantics? Write a compiler for a new one.
Better languages are nice. Better design patterns are nice. Emacs is awesome. But compilers are where all the power lies in our field. I suspect the only reason they aren't mentioned in any of the other answers is that we can't imagine programming without any.
Copy/paste technique
Functional languages are good because they avoid bugs by eliminating state, but also because they can be easily parallelized automatically for you, without you having to worry about the thread count.
As a Win32 developer though, can I use Haskell for some DLL files of my application? And if I do, is there a real advantage that would be taken automatically for me? If so, what gives me this advantage? The compiler?
Does F# parallelize functions you write across multiple cores and CPUs automatically for you? Would you ever see the thread count in Task Manager increase?
How can I start using Haskell in a practical way, and will I really see some benefits if I do?
It seems like the book Real World Haskell is just what you're looking for. You can read it free online.
F# does not contain any magic pixie dust that will pass functions off to different CPUs or machines. F#/Haskell and other functional programming languages make it easier for you to write functions that can be processed independent of the thread or CPU they were created on.
I don't feel quite right posting a link here to a podcast I participate in -- it seems a little off -- but in the Herding Code episode where we talked with Matt Podwysocki, we asked the same question and he gave some interesting answers. There are also a lot of good links relating to functional programming in that episode. I found one link titled "Why Functional Programming Matters". That may provide some answers for you.
This might also be interesting:
"Real World Functional Programming"
Examples are in F# and C#, but the theory is fairly generic.
From what I've read (pre-release) it is definitely interesting, but so far I think it is making me want to stick more and more with C#, using libraries like Parallel Extensions.
You didn't mention, but I'm assuming, that you're using C++. One potentially easy way to get into functional is via C++/CLI to F#. C++ contains "magic pixie dust" (called IJW: It Just Works) to allow you to call into and out of managed code. With this, calling F# code is almost as simple as it is from C#.
I've used this in one program (FreeSWITCH), which is written entirely in C/C++. With a single managed C++/CLI file (using the /clr switch), it magically transitions into managed code, and from there I can load my F# plugins and execute them. To make things even easier for deployment, F# can statically link all its dependencies, so you don't need to deploy the F# runtime files. One other thing that makes CLR code attractive is that you can pass managed code (delegates) to C code, and the runtime automatically makes a thunk for you.
If you decide to go the Haskell way, the feature you'll be looking for is FFI: Foreign Function Interface. However, I don't think it'll give you the same level of integration as C++/CLI with F#.
I'm currently learning Haskell myself. When you start out learning it, it doesn't seem very intriguing, because the learning experience is nothing like learning a language like C#.
It's a whole new world, but I noticed I could write very very complex expressions in just a few lines of code. When I looked back at the code, it was much more concise; it was small and tight.
I'm absolutely loving it! You can indeed write real-world programs that will be smaller, easier to maintain, and much more complex than most other languages allow. I vote for you to learn it!!
Since you mention Win32 and DLLs, I presume you're working with unmanaged code. In that case, GHC will work very well for you. Late last year I wrote a DDE server under Windows using FFI to talk to the Microsoft DDE libraries, and, surprisingly, it was an extremely pleasant experience (especially given that I'm a Unix guy).
Haskell's FFI is powerful (even supporting, e.g., callbacks into Haskell functions from C or other libraries), and having Haskell's type checking when writing C-level code is like a dream come true.
That last point is one of the major advantages of Haskell: the type system is amazing. That said, it's like any powerful tool; it needs time and effort to make good use of it.
So yes, it is possible to start out writing small bits of code in Haskell that link into the rest of your code (though you may find it easier to start with small Haskell programs that link to your other code), and it's well worth spending a fair amount of time learning about this and using it wherever you can. You may end up like me, planning a fairly major project tightly integrated with Windows code (in my case, a sophisticated Excel add-in) in Haskell.