I have lately been delving into F# and the new DSL stuff, as in the Microsoft SQL Server Modeling CTP, and I have some concerns.
Will this new idea be bad for skilled programmers?
Is code going to be dumbed down?
I know I sound like a Luddite, but this does worry me: after spending years practising my craft, I might now be scuttled by genius from within.
I am afraid, very afraid.
Will I now be trapped in a job where I only program against a DSL, so that for every job I work on I have to learn a whole new DSL built on top of a framework (.NET, Java), of which I'm only allowed to touch certain parts?
I don't think the world is ready for DSL, but the sales pitch is deafening!
DSLs will liberate programmers from doing non-programming stuff. Cleaner separation of responsibilities is always a good idea. Programmers should program (e.g., implement DSLs), and other domain experts should do what they are good at. I am personally doing whatever I can to turn this industry towards a wider use of DSLs. I do not want to code business logic. I love to implement compilers. So I'm not afraid. I am looking forward to a time when every little task will require its own little DSL.
There will always be a need for real, knowledgeable programmers to construct those DSLs and to add new capabilities to them as requirements change. No problem domain is static.
I wouldn't worry; DSLs are a long, long, long way from being embraced by your average corporate IT department.
I am a developer and I frequently design my own DSLs and DSL2Text transformations to speed up many boring parts of my job. To this end I use Eclipse plugins (e.g., EMF, Xtext, Acceleo) as explained on this site: http://lowcoupling.com/dslengineering
I think DSLs will in reality elevate the level of software engineering work. Here are the reasons:
Think of ORM or SQL DSLs. A SQL DSL models SQL access in your native programming language. It reduces complexity, allows higher-level constructs, is more readable, and is less error-prone than SQL string manipulation. A good SQL DSL is composable; trying to achieve what a SQL DSL does with ad hoc code on the fly is far too difficult and error-prone (there is a small illustrative sketch after these reasons). In this case, a DSL reduces mundane complexity and elevates your code into higher-level abstractions. (It's the same reason we don't hand-code the TCP/IP handshake every time we open a TCP connection.)
Many DSLs are not meant for software programmers. When we design DSLs, they are often handed off to domain experts (who are non-programmers) to use. These easier DSLs will not require programmers, so you won't be stuck using them. The software field is currently dumbed down because we have schools churning out programmers who code simple ASP.NET pages. Shifting this type of mundane work to non-programmers will force schools to produce higher-quality engineers.
Good DSLs hide complexity, but they are flexible and allow users to customize/fine-tune behavior when needed. This means they won't confine programmers to rigid/mundane coding behaviors.
When a problem is highly complex, it's usually better to reduce it to a domain-specific problem and construct a mini DSL for it. DSL code is simply more relevant and easier to maintain, and the business logic will be shorter, easier to verify, and easier to reason about.
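To make the first reason concrete, here is a minimal sketch of an embedded query DSL, written in Haskell purely for illustration (the Employee type and every combinator here are invented for the example, not taken from any real library). Because queries are ordinary values, they compose with ordinary functions, which is exactly what string-spliced SQL cannot do:

    import Data.List (sortBy)
    import Data.Ord (comparing)

    data Employee = Employee { name :: String, dept :: String, salary :: Int }

    -- Predicates are first-class values, so they combine like any other value.
    type Pred = Employee -> Bool

    inDept :: String -> Pred
    inDept d e = dept e == d

    earnsOver :: Int -> Pred
    earnsOver n e = salary e > n

    andAlso :: Pred -> Pred -> Pred
    andAlso p q e = p e && q e

    -- A "query" is just function composition; no string splicing, no injection.
    runQuery :: Pred -> [Employee] -> [String]
    runQuery p = map name . sortBy (comparing name) . filter p

    main :: IO ()
    main = mapM_ putStrLn (runQuery (inDept "R&D" `andAlso` earnsOver 50000) staff)
      where
        staff = [ Employee "Ada"   "R&D" 60000
                , Employee "Brian" "Ops" 70000
                , Employee "Grace" "R&D" 40000 ]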
How do you work with someone when they haven't been able to see that there is a range of other languages out there beyond "The One True Path"?
I mean someone who hasn't realised that the modern software professional has a range of tools in his toolbox. The person whose knee-jerk reaction is, for example, "We must do this in C++!" "Everything must be done in C++!"
What's the best approach to open people up to the fact that "not everything is a nail"? How may I introduce them to having a well-equipped toolbox, selecting the best tool for the job at hand?
As long as there are valid reasons for it to be done in C++, I don't see anything wrong with this monolithic approach.
Of course a good programmer must have many different tools in his/her toolbox, but these tools don't need to be new languages; it can simply be about learning new programming paradigms.
In my experience, learning many different languages doesn't in itself make you a much better programmer.
The same goes for finding the right language for the job. Yeah, OK, if you're doing concurrency you might want a functional language rather than an object-oriented language, but what are the gains of using another programming language?
At the end of the day; "Maintenance".
If it can be maintained without undue problems, then the debate may well be moot and comes down to preference, or at least company policy/adopted technology.
If that is satisfied then the debate becomes "Can it be built efficiently to be cost effective and not cause integration problems?"
Beyond that it's simply the screwdriver/build a house argument.
Give them a task that can be done much more easily in some other language/technology, and that is hard to do in the language/technology they suggest for everything.
That way, they will eventually search for alternatives as it gets harder and harder to accomplish the task using the language/technology they know.
Lead by example, give them projects that play to their strengths, and encourage them to learn.
If they are given a task that is obviously better suited to some other technology and they choose to use a less effective language, don't accept the work. Tell them it's not an appropriate solution to the problem. Think of it as no different than choosing COBOL to take the place of a shell script -- maybe it works, but it will be hard to maintain over time, take too long to develop, require expensive tools, etc.
You also need to take a hard look at the work they do and decide if it's really a big deal or not if it's done in C++. For example, if you have plenty of staff that knows that language and they finished the task in a decent amount of time, what's the harm? On the other hand, if the language they choose slows them down or will lead to long term maintenance problems they need to be aware of that.
There are plenty of good programmers who only know one language well. That fact in and of itself can't be used to determine whether they are a valuable member of a team. I've known one-language guys who were out of this world, and some I wouldn't have on a team if they worked for free.
Don't hire them.
Put them in charge of a team of COBOL programmers.
Ask them to produce a binary that outputs an infinite Fibonacci sequence.
Then show them the few lines (or one line, depending on the implementation) it takes in Haskell, and that it, too, can be compiled into a binary, so there are better ways forward.
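For reference, the usual Haskell definition (plus a main so it actually compiles into a binary) looks like this:

    -- The classic corecursive one-liner: laziness makes the infinite list legal.
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    main :: IO ()
    main = mapM_ print fibs  -- streams Fibonacci numbers forever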
How may I introduce them to having a well-equipped toolbox, selecting the best tool for the job at hand?
I believe that the opposite of "one true language" is "polyglot programming", and I will then refer to another answer of mine:
Is polyglot programming important?
I actually doubt that anybody can nowadays realize a project in one and only one language (even though there might be exceptions). The easiest way to show them the usefulness of specific tools and languages is to show them that they are already using several, e.g. SQL, build files, various XML dialects, etc.
Though I embrace the polyglot perspective, I also believe that in many areas "less is more". There is a balance to find between the number of languages/tools, the learning curve, and the overall productivity.
The challenge is to decide which small set of languages/tools fit nicely together in your domain and will push productivity and creativity to new limits.
Give them a screwdriver and tell them to build a house?
I am a long-time agile advocate, but one of the things that bothers me about Agile is that a lot of agile practitioners, especially the younger ones, have thrown out or are missing a whole lot of good (non-Scrum, non-XP) practices. Alistair Cockburn's style of writing Use Cases springs to mind; orthogonal arrays (pairwise testing) is another.
I read mostly Agile related books and articles and work with mostly Agile folk ... is there anything I'm missing?
It might be interesting in 5-10 years time to see how maintainable these systems are when nobody wrote down why a particular decision was made and all the people involved have left.
is there anything I'm missing?
Yes, I think a lot, but only if you are interested in software development processes.
I like this paraphrase:
Each project should be as agile as possible but not more agile.
Not every project can be agile... but I think 80%+ can.
I see Agile as the "car of the year". It is very well suited to most people, but if you need/want something special, for example a car that can do 300 km/h or a car that can carry 20 tons of goods, you need something else.
There are also so many cases where one may want something other than the "car of the year" that it would take a book to write them down :-) I recommend Agility and Discipline Made Easy: Practices from OpenUP and RUP. In this book you'll find many "missing parts" very well illustrated. The key to understanding is that agility is only a (requested) property of a software development process, which sometimes cannot be achieved. The book describes several Key Development Principles (which are the basis for RUP) and explains which level of "ceremony" and "iterativeness" follows from using them at different levels of adoption.
An example
Practice: Automate change management and change propagation
In your project you may require very advanced and strict change management, and decide to "Automate change management and change propagation" by implementing custom tools or re-configuring existing ones, and by using a Change Control Board.
Effect: This most probably increases the level of "ceremony" in your project.
(...) have thrown out or are missing a whole lot of good (non Scrum, non XP) practices.
Scrum is not prescriptive; it's up to you to choose how to do things. In other words, nothing forces you to use User Stories, for example (even if User Stories work for lots of teams, there is no consensus), so feel free to use (light) use cases if you think they are more appropriate in your context. To illustrate this, Jeff Sutherland reported he would never use User Stories again for PDA device projects (they use some kind of "light specifications" in his current company). And the same applies to testing: use whatever works for you. To summarize, if you find XP not flexible enough, use something else... and inspect and adapt.
Iterative development.
In practice, agile teams may do iterations (or anything, for that matter; agile is a kind of "no true Scotsman"), but agile processes don't require or define iterative development sufficiently.
Take RUP, for example - clumsy and bloated, it does compile a few good methods for long-term development that agile misses.
On a general note, agile is a way to steer clear of problems: how to avoid long-term planning, how to keep teams small, tasks short, customers involved, etc. It works more often than not, but sometimes you have to face and solve problems: how to meet a strict deadline, make a big team work, achieve distant and complex goals, get the customer to refine requirements. That's when one needs to look beyond agile.
How do you prevent an agile methodology with monthly sprints/iterations from causing a fragmented design? For example, take the design of Manhattan's streets vs. Boston's streets. The street grid of Manhattan was designed as a whole, resulting in easy maneuverability and driving. Boston's streets were designed piecemeal, and it's a nightmare getting around.
Streets are a bad analogy for software. Streets cannot be easily moved, rerouted, or changed without significant effort. Well-written software is a composition of orthogonal components that are easily reorganized and modified as needed.
The reason that agile development works is that both the developers and the software itself are agile. The developers respond quickly to change and the software they write is written in a manner that makes it responsive to change as well.
Refactor, refactor, refactor.
Agile development is not a mandate to throw away over-arching design. It makes a lot of sense to plan 'the big picture' - what the responsibilities of your major components are, and how they will interact, for example. In my own team, I find a reasonable compromise is to agree a broad public API up front, but defer detailed implementation decisions where possible. This allows individual developers the freedom to modify the design as the implementation evolves, while also gaining the benefit of hashing out different approaches at the design level, when changes are much cheaper. Modularity is also key - keep components specific in function and as decoupled from one another as possible. This minimises your overheads when you find you need to make changes to your implementation, and should increase the chances that individual components you write remain useful.
You can have a design phase that decides on the Streets. What buildings get built is determined in each phase.
I think there is a tendency to do too little design/architecture in Scrum projects. You can do constant refactoring, but that can be a lot of work if something has not been designed. You can get into the same problem if there is an error in the design.
Use of the SOLID principles results in code which is loosely coupled and highly cohesive.
Remember, design is king.
IMHO, analogies between software components and buildings or streets are just illogical. None of this is even remotely similar to software development or software design.
There is always a risk that Agile users will suggest adjusting the principles of Agile to the situation at hand; when this subsequently fails, it leaves those who support the move to Agile open to criticism because Agile is not being adopted fully.
Fragmented design is a continuous risk with Agile because it places no value on architecture. What is needed is a methodology that uses the advantageous aspects of Agile, such as Continuous Delivery, but solves problems like the un-scalability of Agile. A possible approach is The Game of Thrones Methodology, which does just this.
"Agile methodology" (and I assume you mean SCRUM, here) has little to do with the architecture of your design; the architecture being a property of the software or the system your creating (in the UML world you would maybe call it "artefacts").
"Agile" does not tell you anything about what kind of software you write, it can be used for any and all types of software or even easily for projects that have nothing to do with software at all.
So if you have the feeling that there is bad software architecture in your agile project(s), then you should look around for the actual cause. Just because you are agile does not mean that everybody does what they want without a plan, or that you need no documentation at all. While you won't specify every class down to the bit level before starting your work, you will still want a high-level architecture written down, even before the first sprint. Depending on your team's size and composition, you could imagine using your first sprint to create that architecture.
One of the key properties to designing comprehensible software (and, indeed, designing anything at all) is to develop a good set of abstractions. These days, those abstractions include things like functions, classes, interfaces, recursion, and higher-order functions. But what else is there? How can we further abstract our designs, so that I needn't think about anything but my immediate, direct goal? What novel abstractions have yet to be leveraged by existing technologies?
Also note that most of the items on my list (with the exception, perhaps, of recursion) are also tools used for code reuse. Code reuse is not the subject of this question, and is not what I see as a necessary aspect of a good abstraction. Functions are useful as abstractions because they hide what they are doing behind a descriptive name, not because I can call them from several different places.
A poorly-formed idea: is a driver function that only calls a sequence of other functions, without maintaining any state of its own, really the same as a function? We write it as a function, and call it as a function, but perhaps it represents a different concept? This is reflected in some languages by making a distinction between procedures returning values and procedures not returning values. But maybe there's a better way to view that difference, some different way to abstract the sequence of relatively unrelated steps?
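For what it's worth, some languages already draw that line in the types. A sketch in Haskell (function names invented for illustration): a value-producing function and an effect-sequencing driver are different kinds of things, and the compiler keeps them apart:

    -- A function in the mathematical sense: value in, value out, no state.
    area :: Double -> Double
    area r = pi * r * r

    -- A "driver": it merely sequences steps and produces no value (just ()).
    report :: IO ()
    report = do
      putStrLn "computing..."
      print (area 2.0)
      putStrLn "done"

    main :: IO ()
    main = report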
So to reiterate, how can future programming languages better facilitate abstraction?
A powerful abstraction tool: Lisp macros. Why not look into the past and present? :)
They can use self-exposing semantics to better allow metaprogramming of the environment/language presented as the end-user interface. Mutable language semantics.
Some areas that I think are potentially fruitful:
Intentional Programming, or something similar. Charles Simonyi's company Intentional Software has been keeping pretty quiet for a while but is now starting to show some promising early demonstrations.
Functional Programming: ideas from functional programming are increasingly finding their way into more mainstream languages like Python, C# (LINQ, lambdas, etc.) and even C++ (lambdas in C++0x). F# is becoming a first-class .NET language with full support in Visual Studio. The rise of multi-core development is another factor driving wider adoption of functional concepts.
Domain Specific Languages (DSLs): closely related to the ideas behind Intentional Programming, Microsoft seem to be putting some effort into supporting DSLs as part of the .NET ecosystem.
Much more sophisticated IDEs. There are already some positive developments with refactoring tools in IDEs like Visual Studio and IntelliJ but I think there's a lot of room for progress in this area. Moving away from dumb text source files towards something more like an abstract syntax tree representation could make it much easier to work at a higher level of abstraction. Again, this connects with many of the ideas behind Intentional Programming.
By having built in detection of stupid ideas that, when tripped, lock the developer out of the IDE and refuse to let them code ever again.
OOP facilitates abstraction quite nicely. It's developers that come up with poorly formed ideas.
Let's see, how about if we make abstraction mandatory for every data type, and then provide ways of generalizing our abstractions over type parameters? Wait! I've just reinvented CLU. Do I get a Turing Award?
Anyone interested in the role of abstraction in programming should study CLU.
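To give the flavour of what CLU pioneered, here is the idea rendered in Haskell rather than CLU (an illustrative sketch; save it as Stack.hs, there is deliberately no main): the representation is hidden behind the module boundary, and the abstraction is generalized over a type parameter.

    module Stack (Stack, empty, push, pop) where

    -- The representation (a plain list) is not exported, only the operations.
    newtype Stack a = Stack [a]

    empty :: Stack a
    empty = Stack []

    push :: a -> Stack a -> Stack a
    push x (Stack xs) = Stack (x : xs)

    pop :: Stack a -> Maybe (a, Stack a)
    pop (Stack [])       = Nothing
    pop (Stack (x : xs)) = Just (x, Stack xs)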
Eiffel code proofs. (warning: link to PDF!)
Functional programming, aspect oriented programming, design by contract and generally everything that takes us away from the dark age of imperative programming.
Also, I hope non-managed software development will cease to exist. C++ and other low-level stuff makes me sad. :-(
I like my LINQ, my lambda operator, my extension methods and my fluent interfaces. Oh, and I love PostSharp.NET. And F#, but I guess it's very hard NOT to love F#. :-)
I will give an indirect answer. Before we can develop better constructions in programming languages, we must first understand the theory of abstraction.
Oh yes, there is an actual theory that predates modern computing; it is called category theory.
I've been programming for several years now, and since then I have learned several concepts and techniques that have made me a better programmer (e.g., OOP, MVC, regex, hashing). I also feel that by learning several languages (BASIC, Pascal, C/C++, Lisp, Prolog, Python) I have widened my horizons in a very positive way. But for some time now I've felt like I'm not learning any new good "trick". Can you suggest some interesting concept/technique/trick that could help me get back into the flow of learning?
A good paradigm shift always allows you to see things differently and become a better developer. I would suggest you read up on functional programming and maybe learn a functional language like Haskell or Scheme.
YAGNI (You Ain't Gonna Need It) and DTSTTCPW (Do The Simplest Thing That Could Possibly Work)
It's easy to spend a lot of time thinking about edge cases, and find that you've implemented something that's completely useless. I believe that a far better approach is to knock out a simple prototype, and then poke and prod it until you understand the domain well enough to create production code.
Recognize, however, that your prototype is going to evolve into production code whether you like it or not. So write it with that in mind.
Learning how to use your IDE and tools. For me this resulted in a far greater productivity increase.
For example:
learning how to use a source-level debugger
using tools like Purify/BoundsChecker
FxCop
etc. I realize I am dating myself, but those were big steps. There are many more.
Any time you can change the way you think about a problem, or solve a problem without having to undo previous work, is a HUGE gain. Process, tools, etc. can all help with that. Don't limit yourself to finding silver-bullet techniques for productivity gains.
Watching productive people work and getting them to tell you what they are doing and why is also invaluable.
If I'm honest, using and learning a great framework like .NET has really increased my productivity.
I'm often amazed what people are willing to reinvent due to their ignorance that the very same function already exists in the framework.
Agile, and especially Test-Driven Development. The best thing to happen to software development since the invention of object-oriented design.
Concerning coding, I'd say design patterns and architecture patterns are always nice to look at and can help you write cleaner/better code.
For methodology I would advise Agile development, which is great. There are numerous techniques and methods (I'm personally a fan of extreme programming) and plenty of reading that can keep you busy and improve your general approach.
Finally, I'd say learn new languages, like Ruby.
Design patterns
SCRUM process
DiSC assessment (and understanding of how it applies to collaborative s/w development)
StackOverflow.com (of course!)
Google
... other stuff too, I'm sure
Design Patterns. Learning how to break dependence upon implementation and inheritance, and to depend on interfaces (contracts) instead, changed the way I think about programming.
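As a small illustration of depending on a contract rather than an implementation, sketched in Haskell with invented names (a type class plays the role of the interface):

    -- The contract: anything that can log a message.
    class Logger l where
      logMsg :: l -> String -> IO ()

    data ConsoleLogger = ConsoleLogger
    instance Logger ConsoleLogger where
      logMsg _ msg = putStrLn ("[console] " ++ msg)

    data NullLogger = NullLogger
    instance Logger NullLogger where
      logMsg _ _ = pure ()

    -- Client code names only the contract, so either implementation slots in.
    runJob :: Logger l => l -> IO ()
    runJob l = do
      logMsg l "starting"
      logMsg l "finished"

    main :: IO ()
    main = runJob ConsoleLogger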
Debugging. Once I figured out how to actually step through the code and go line-by-line, examining the underlying state, it revolutionized how I troubleshoot code.
Practice, practice, practice: I didn't realize how important it is to keep working on my skills outside of work until relatively recently. The mistakes I make and the solutions I find at home make me a better programmer at work, and vice versa. Learning should never stop if you want to be good at something, and programming isn't an exception.
If I had to pick just one, I'd say Test-Driven Design, aka TDD: write unit tests (and check that they fail) before you incrementally add features.
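A minimal flavour of that loop, sketched in Haskell with a hand-rolled check and an invented slug function (no test framework assumed): the checks are written first and fail, then the smallest implementation makes them pass.

    import Data.Char (toLower)

    -- Written second, only after the checks below had been written and failed.
    slug :: String -> String
    slug = map (\c -> if c == ' ' then '-' else toLower c)

    main :: IO ()
    main = do
        check "lowercases"      (slug "Hello" == "hello")
        check "replaces spaces" (slug "a b"   == "a-b")
      where
        check label ok = putStrLn ((if ok then "PASS " else "FAIL ") ++ label)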
Try to learn to see things from the user's standpoint.
For example:
learn how to write meaningful error messages
learn how to produce usable applications
learn some basic speed-optimization techniques
Remember that the user sees your application, not your code.
The VIM Quick Reference Card. After I started using advanced Vim features (macros, plugins), I stopped doing repetitive coding actions manually.
Apart from that, Scrum, and working at night when no one interrupts you, gave me the greatest benefit.
If you want to expand your experience into web programming, you should try and get a good handle on the HTTP Request/Response paradigm. This will make creating web apps much easier on you because you understand the underlying framework.
http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol
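The whole paradigm fits in a small text exchange: the client sends a request; the server replies with a status line, headers, and a body. A minimal, trimmed example of what goes over the wire (hostname and sizes made up):

    GET /index.html HTTP/1.1
    Host: www.example.com

    HTTP/1.1 200 OK
    Content-Type: text/html; charset=UTF-8
    Content-Length: 1256

    <!DOCTYPE html> ...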
I would look at some of the newer languages that combine OO and functional elements, like C# or Scala.
Learning Smalltalk has helped me become more productive. It is an easy language to learn and things can be built extremely quickly. For a stunning productivity aid check out Seaside, it's a framework for building web applications. Moreover, if you have only been used to curly brace languages Smalltalk will also make you smile!
I was helped by the following paradigms in this order:
1) bottom-up programming
2) top-down programming (C, Pascal)
3) object-oriented programming (Smalltalk, Java)
4) functional programming (Lisp, Mathematica)
with some logic programming thrown in (Prolog).
NHibernate, hands down. The fact that I don't need to write database functionality for my business objects is very useful and time-saving.
High-level understanding, creating good abstractions with proper dependencies, is what pays off in the long term. For example, the Law of Demeter is an important guideline. I also recommend reading Eric Evans's Domain-Driven Design.
Code generators. They're the best thing in software engineering.
Would you like to write all your projects in asm? Nope, let's generate it from C++. Or from something sitting above the JVM which diligently generates the necessary machine code.
Duplicating the same source code all over the place, but stuck with a language that insists on the line noise? Use macros.
Want to use lambdas in a language that doesn't have them? Work out how to fake the anonymous name and variable scoping required then generate the boilerplate.
None of the readily available languages quite fit your pattern of thought, desired syntax or even semantics? Write a compiler for a new one.
Better languages are nice. Better design patterns are nice. Emacs is awesome. But compilers are where all the power lies in our field. I suspect the only reason they aren't mentioned in any of the other answers is that we can't imagine programming without any.
Copy/paste technique