When is code a hack?
People seem to define a hack as ugly code written to solve a problem, but how is that different from just writing messy code?
Also, is the only difference between a problem coded badly and a problem hacked the mindset of the programmer?
When I say hack, I mean it in the programming/development sense, not the illegal sense.
A hack is a section of code you write to overcome a technology deficiency such as your programming language, communications protocol, hardware, or some other programmer's bug. You usually tag your code as a hack to let other people know you could have done it the "right" way if you just didn't have this limitation.
That being said, it is often misused and simply refers to a section of code where the programmer was too lazy to do it the "right" way, or where the code seems to work for what they designed, but they aren't sure of the unintended consequences. For example: one may "hack" at code that was poorly designed when they don't understand what the change is really going to do to the entire system. That isn't really a hack, it is just a lack of understanding.
A hack is an ugly solution which may be implemented in well documented, perfectly formatted code with exquisitely named variables and all that. As they say, you can put lipstick on a pig, but it's still a pig.
On the other hand, you can also have a messy, hard to read implementation of a beautiful algorithm. Poorly chosen names, bad formatting, and poor documentation make code harder to understand, but the underlying idea may still be sound. This sort of thing isn't lipstick on a pig, it's a diamond in the rough.
A hack implies that the programmer is using a system in a way it was not designed for. For a hack to take place, the system must have a clear intended design, and the hack violates that design.
Ugly coding usually implies there is no clear design to hack. A hack is no longer a hack if it is just as ugly as the surrounding code.
Normally hacks are quick and dirty solutions which the programmer intends to rectify some day. Ugly programming has a way of never getting fixed.
Can anybody tell me the difference between VB.NET and C#, considering all the factors like execution time, efficiency, etc.?
Which is more effective?
VB.NET is a "friendly" programming language. It supports dynamic programming right out of the box, no need to explicitly type your variables for example. Data conversions are automatic. Overflow checking is on by default. Passing properties by reference just works. You can assign an int to a byte without a cast. You can create a multi-window Winforms app without ever really understanding object-oriented programming. The compiler auto-generates a bunch of code.
None of this comes for free. In some cases, the extra overhead can be very substantial. Simply adding two numbers can be three times more expensive than it needs to be; the overflow checking is pretty dear. Automatic conversions between a string and a number are a frequent wart in a VB.NET program, and very expensive. You don't stand much of a chance of identifying such a bottleneck just by looking at the source code.
C# is much stricter: it (almost) never generates code that hides execution cost under the floor mat. That automatically makes it easier to write performant code. It does not, however, completely remove the need to use a profiler to identify a bottleneck.
I'd like to expand upon both answers given so far. They are both correct. The problem with VB.NET is typically the developer's mindset AND the flexibility of the VB.NET language.
If you use Option Explicit On and Option Strict On (Option Strict On enables Option Explicit) and do not use Option Infer, you will get better results at the expense of more complicated code. By complicated I mean you have to correctly cast your variables and objects, something that may be considered complicated for a BASIC developer.
Option Strict Information: http://support.microsoft.com/kb/311329
Option Infer On should not be used 99.99% of the time when writing new applications. I would say 100% of the time, but someone will have a legitimate reason; I just cannot think of any.
Option Infer Information: http://msdn.microsoft.com/en-us/library/bb384665.aspx
There should be none, because they both compile down to the same intermediate language (IL). The biggest variable factor is the programmer: they may do things in a more roundabout or inefficient way (for example, I can imagine that VB.NET programmers coming from a VB(A) background tend to solve problems differently from C# programmers coming from a C(++) background).
If you want to be sure, take a piece of code and inspect the IL.
I use the term all the time... but I was just thinking that I don't really have a solid denotational sense behind it (or at least behind the sense I want to discuss here). I'm interested in the sense of the word related to code, not the anthropomorphic idea. I'm also not interested here in the sense related to intentional malicious computing (e.g. a hack to unlock secret powers in a game). What I want to explore is what it means to 'hack' in terms of writing software to solve a problem.
Wikipedia's definition of 'hack' is, to me, a bit vague, but it's a decent starting point. It considers that a hack
can refer to a solution or method which functions correctly but which is "ugly" in its conception
works outside the accepted structures and norms of the environment
is not easily extendable or maintainable
can be slang for "copy", "imitation" or "rip-off."
These traits of a hack conform to my usage of the word--when applied to code it is always a term of derision. To my mind, a hack
Is likely to be difficult to maintain & hard to understand in the context of the rest of the code.
Is likely to cause failure of the app.
tends to indicate a poor understanding by the coder either of the problem space, usage of the language or both
tends to be the byproduct of aggressive schedules
suggests potential changes in requirements that have not been fully incorporated into the architecture of the solution (requiring an 'inorganic' workaround).
smells
all bad, bad, bad. To me, a hack in this sense is always negative, indicating either lack of time, incompetence, or sloth on the part of the developer, though a decent percentage of hacks must be written to compensate for ill-conceived designs or systems that have gained requirements which their original design cannot handle 'organically'.
I don't think I've really captured it totally though--it's like pornography a bit: I can't really define it, but I know it when I see it. So I ask you: what does it mean to 'hack' when you are trying to solve a problem in software?
I've always preferred Paul Graham's definition:
To add to the confusion, the noun "hack" also has two senses. It can be either a compliment or an insult. It's called a hack when you do something in an ugly way. But when you do something so clever that you somehow beat the system, that's also called a hack. The word is used more often in the former than the latter sense, probably because ugly solutions are more common than brilliant ones.
From the Jargon File, the glossary of hacker slang:
The Meaning of ‘Hack’
“The word hack doesn't really have 69 different meanings”, according to MIT hacker Phil Agre. “In fact, hack has only one meaning, an extremely subtle and profound one which defies articulation. Which connotation is implied by a given use of the word depends in similarly profound ways on the context. Similar remarks apply to a couple of other hacker words, most notably random.”
Hacking might be characterized as ‘an appropriate application of ingenuity’. Whether the result is a quick-and-dirty patchwork job or a carefully crafted work of art, you have to admire the cleverness that went into it.
An important secondary meaning of hack is ‘a creative practical joke’. This kind of hack is easier to explain to non-hackers than the programming kind.
When I think of "hack", I think of it as being a non-expected workaround to solve a problem, not necessarily a bad thing. Creative, innovative, and well-placed. "Hack" can apply to more than just computers, though I seldom hear it used that way.
Too often "hack" simply means: "Not the way I would do it."
This topic will turn into something like a question about love. Everyone's going to have their own definition. The best way to know the proper (default) definition is to look in the dictionary.
It's when you've stepped out of the idiomatic, natural, sensible and (sometimes) supported ways of doing something in a given language/framework/etc.
Sometimes that's a stroke of genius, usually it's an act of idiocy, occasionally it's one disguised as the other, and on rare occasions it's both.
(Incidentally, the judge who coined that statement about pornography you quote later retracted it in a subsequent ruling.)
When I use the term 'hack' it usually refers to a solution to a problem that was done usually in response to a pressing issue, and so not a lot of thought went into it in regards to the overall design of the application. Sometimes it works out, sometimes not so much, and sometimes it turns out to be a work of genius. But mainly, it's an admitted temporary solution that (hopefully) gets refactored and refined when possible.
Here's a great sentence I saw about the difference between hacking and scamming: "Hacking attacks are successful when the criminal knows how a particular computer system works. Scams are successful when the perpetrator knows how the human brain works." It brings out the idea that to hack into something, you need a deep understanding of how it works.
I'm currently working on programming languages and interpreter design. I have already created several programming languages but haven't reached my goal so far:
Create a programming language which focuses on giving the programmer a good feeling when writing code in it. It should just be fun and/or interesting, and in no case annoying, to write something in it.
I get this feeling when writing code in Python. I sometimes get the opposite with PHP and in rare cases when having to reinvent some wheel in C++.
So I've tried to figure out some syntactical features to make programming in my new language fun, but I just can't find any.
Which concrete features, maybe mainly in terms of syntax, do/could make programming in a language fun?
Examples:
I find it enjoyable to program in Ruby because of its use of code blocks.
It would be nice if you could include exactly one example in your answer.
Those features do not have to already exist in any language!
I'm doing this because I have experienced extreme rises in (my own) productivity when programming in languages I love (because of particular features).
You mentioned Ruby in your question. AFAIK, Ruby is the only programming language for which Joy is an actual, stated, explicit design goal. (In fact, it is the only design goal.)
The reason that Yukihiro Matsumoto was able to design Ruby this way is that he already knew and used tons of programming languages before he started designing Ruby, and learned tons more in order to design Ruby. (Interestingly, he didn't know Python, and has said that he probably wouldn't have created Ruby if he had.)
Here's just a tiny fraction of the languages that matz has either used himself or looked at for inspiration (or in some cases for inspiration on what not to do):
CLU
Sather
Lisp
Scheme
Smalltalk
Perl
Python
Haskell
Scala
PHP
C
C++
Java
C#
Objective-C
Erlang
And I believe that this is one way that good programming languages can be designed (what Larry Wall calls postmodernist language design): Throw away everything that didn't work in the past, take everything that worked and combine that tastefully.
Of course, this requires that you actually know all those languages from which you want to "steal" and in particular, it requires that you know lots of very different languages with different paradigms, different concepts and different "feels", otherwise the idea pool from which you steal is rather small and inbred.
Consistency.
It's the feeling that you already know something when you use an API or feature you've never used before. It also makes you more productive, as you don't have to learn something new for the sake of it.
I think this is also one of the Ruby 'likes', in that if you follow the naming convention, things start to 'just work' without bindings and glue and suchlike.
For example, using the STL in C++, many of the algorithms are the same for all containers - even strings. That makes it nice to use... except for those parts that do not follow the same API (e.g. vector<bool>); there the difference is more noticeable.
Two things to keep in mind are orthogonality and the principle of least surprise.
A programming language should make it easy to write correct programs and difficult (if not impossible) to write incorrect programs. For instance, in Java
long x = 2000000000 + 2000000000; // int arithmetic wraps to -294967296 before being widened to long
overflows, while
long x = 2000000000L + 2000000000; // long arithmetic, so the sum is 4000000000
doesn't. Is this obvious? I don't think so. Does anyone ever want something to overflow? I don't think so.
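For contrast, a small sketch in Haskell: its default Integer type is arbitrary precision, so the same sum simply gives the mathematically correct result instead of silently wrapping.

-- Integer is arbitrary precision, so this sum cannot overflow.
x :: Integer
x = 2000000000 + 2000000000

main :: IO ()
main = print x  -- prints 4000000000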
Hilarity.
http://lolcode.com/
Follow common practices (like using + for addition, & for bitwise/logical and)
Group logically-similar code in namespaces
Have an extensive string processing library
Incorporate debugging facilities
For a cross-platform language, try to minimize platform differences as much as possible
A language feature that appears simple and easy to learn surprises and delights the programmer with its unexpected power. I nominate Haskell type classes :-)
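For instance, a minimal sketch of a type class (the Describable class here is made up purely for illustration): you declare one interface, supply instances per type, and the right implementation is chosen by the type of the argument.

-- A minimal type class: one interface, several instances, chosen by type.
class Describable a where
  describe :: a -> String

instance Describable Bool where
  describe True  = "yes"
  describe False = "no"

instance Describable Int where
  describe n = "the number " ++ show n

main :: IO ()
main = do
  putStrLn (describe True)        -- prints "yes"
  putStrLn (describe (42 :: Int)) -- prints "the number 42"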
How do you work with someone when they haven't been able to see that there is a range of other languages out there beyond "The One True Path"?
I mean someone who hasn't realised that the modern software professional has a range of tools in his toolbox. The person whose knee-jerk reaction is, for example, "We must do this in C++!" "Everything must be done in C++!"
What's the best approach to open people up to the fact that "not everything is a nail"? How may I introduce them to having a well-equipped toolbox, selecting the best tool for the job at hand?
As long as there are valid reasons for it to be done in C++, I don't see anything wrong with this monolithic approach.
Of course a good programmer must have many different tools in his/her toolbox, but these tools don't need to be new languages; it can simply be about learning new programming paradigms.
In my experience, actually, learning many different languages doesn't make you much of a better programmer at all.
The same goes for finding the right language for the job. Yeah, OK, if you're doing concurrency you might want a functional language rather than an object-oriented language, but what do you really gain by using another programming language?
At the end of the day; "Maintenance".
If it can be maintained without undue problems then the debate may well be moot and comes down to preference or at least company policy/adopted technology.
If that is satisfied then the debate becomes "Can it be built efficiently to be cost effective and not cause integration problems?"
Beyond that it's simply the screwdriver/build a house argument.
Give them a task which can be done much more easily in some other language/technology, and which is hard to do in the language/technology that he/she is suggesting for everything.
This way they will eventually search for alternatives as it gets harder and harder for them to accomplish the task using the language/technology that they know.
Lead by example, give them projects that play to their strengths, and encourage them to learn.
If they are given a task that is obviously better suited for some other technology and they choose to use a less effective language, don't accept the work. Tell them it's not an appropriate solution to the problem. Think of it as no different than them choosing COBOL to take the place of a shell script -- maybe it works, but it will be hard to maintain over time, take too long to develop, require expensive tools, etc.
You also need to take a hard look at the work they do and decide if it's really a big deal or not if it's done in C++. For example, if you have plenty of staff that knows that language and they finished the task in a decent amount of time, what's the harm? On the other hand, if the language they choose slows them down or will lead to long term maintenance problems they need to be aware of that.
There are plenty of good programmers who only know one language well. That fact in and of itself can't be used to determine if they are a valuable member of a team. I've known one-language guys who were out of this world, and some that I wouldn't have on a team if they worked for free.
Don't hire them.
Put them in charge of a team of COBOL programmers.
Ask them to produce a binary that outputs an infinite Fibonacci sequence.
Then show them the few lines (or 1 line, depending on the implementation) it takes in Haskell, and that it too can be compiled into a binary so there are better ways forward.
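One common formulation, as a sketch - the classic lazily evaluated list, compiled into a program that prints forever:

-- Each element is the sum of the two before it; laziness makes the list infinite.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- Printing it one number per line runs forever, which is exactly the point.
main :: IO ()
main = mapM_ print fibs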
How may I introduce them to having a well-equipped toolbox, selecting the best tool for the job at hand?
I believe that the opposite of "one true language" is "polyglot programming", and I will then refer to another answer of mine:
Is polyglot programming important?
I actually doubt that anybody nowadays can complete a project in one and only one language (even though there might be exceptions). The easiest way to show them the usefulness of specific tools and languages is then to show them that they are already using several, e.g. SQL, build files, various XML dialects, etc.
Though I embrace the polyglot perspective, I also believe that in many areas "less is more". There is a balance to find between the number of languages/tools, the learning curve, and the overall productivity.
The challenge is to decide which small set of languages/tools fit nicely together in your domain and will push productivity and creativity to new limits.
Give them a screwdriver and tell them to build a house?
Are there any statistics for this? I realize it must vary from person to person, but it seems like there should be a general average.
The reason I ask is that the company I contract for has multiple software products, totaling ~75,000 lines of code - and they seem disappointed and shocked when they ask me a question about a specific portion that I don't immediately know the answer to. (I am the only programmer they have, and did not author the majority of the systems.) They think I should just know it all from memory. So I wanted something like a statistic to show them that an average programmer couldn't possibly have all that in his head at one time. Or should I?
You should remember where to find the stuff you need, not the stuff itself.
You should also be familiar enough with the code structure and architecture to make an educated guess about where a problem might originate, and where to find the stuff you know exists but aren't sure exactly where.
Your brain works like a cache. The stuff you used recently is kept there; older entries are erased. But there will never be enough memory to remember all the code at once. Because then you would want to remember all the API functions, then all the specs, then something else. None of that is feasible.
And their being surprised that you don't remember all the code is probably one more instance of those perverse notions of how programmers do things. Ignore them.
It depends not only on your memorization skills, but also a lot on the code. Obviously, clean, idiomatic code is much easier to memorize than a badly written inconsistent mess.
Probably because clean code can be broken down into much larger "abstract tokens".
An interesting question indeed, but I doubt there is an adequate answer at all. Here are just the obvious factors I see right from the start:
Overall design quality. Even if you are new to well-designed code, you can very quickly identify where you should look to get answers.
Project documentation quality. In poorly documented projects, even developers who have been on the project from the start can't say anything about some parts.
Implementation quality. OK, you have a good general architecture and good documentation for the interfaces, but even one really bad programmer can break all of this. This is why many companies are very strict about code reviews; I think it is the only technique that prevents such a situation.
Programmer experience. As you move ahead, you recognize more 'already known' code "bricks" in software that is new to you, and experience is a great help here. Contractors are often very experienced specialists familiar with various approaches, which gives the average contractor the ability to move much faster than a full-time programmer who is brilliant but has spent 10 years in only one project's context.
General smartness. In my opinion this is not as important as most of the other factors, but it still matters.
... but the common problem is that companies often hire contractors to improve some existing software and simply think it is like hanging a picture on the wall. You need to negotiate a bit to make them understand that part of the work is figuring out what really needs to be done to meet their requirements at all, and that such "learning" requires resources and is part of the work itself. But I think that is slightly off-topic for StackOverflow (even though I voted up ;) ). Is it more of a discussion for Startups?
Even if you had written all that code yourself, you might forget portions of it. But you'll be able to recall it once you review it.
I think it's natural for a programmer to forget some portions of his/her code after a long time.
Ask them how they want you to spend your time: surveying vast amounts of code you didn't write and perhaps writing up internal documentation, or whatever currently keeps you occupied. It's not a facetious question. If they want quicker responses to new issues, they need to invest in that research.
I don't think there's a meaningful answer to this measured in LOC. As a manager, what I want to know is that someone in your situation can answer a question in a reasonable amount of time -- and unless I know you're in the middle of something, I wouldn't expect that 'reasonable amount of time' to be 'instantaneously'.
You should be able to understand all the components within the system and how they interact so that when there is a problem you can isolate one or two likely components and drill down.
I find it helpful to draw a few diagrams and keep them handy so I can use them to communicate with my boss/customer as well as jog my memory.