Antonym for "optimize" in software development context

In the context of "optimizing" a set of data into a smaller, more compact form, what word would best describe the opposite operation (expanding data into a more usable form)?
The only one I could come up with was "expand", but that's a better antonym for "compress" than for "optimize"... any better suggestions?

For this data context, "unfolding" may work; I'm just not 100% sure it is commonly used to describe this.
Anyway, I would say it is still optimization; you're just not optimizing the same thing (size vs. readability/usability). So if you can't find the proper antonym you're looking for, you can still try to express things this way.


Functional approaches to designing the discrete side of hybrid systems

I'm working on developing controllers for hybrid systems in Haskell.
FRP libraries (right now I'm using netwire, but there are several good ones and a lot of interesting research on future ones) provide a great solution for the continuous-time side of the problem. Augmenting them with signal names, dimensions, preferred units, and so forth gets you a system that has modularity, is self-describing, and has a straightforward path to confidence in correctness.
I'm looking for information, folklore, or papers that provide similar properties for the discrete-time side. In some sense the problem is much easier: state machines are well-studied and simple. In other senses it's more difficult; I'll briefly explain how.
Correctness is obviously the most important thing, and thankfully it's also straightforward.
Self-description is more of a problem. You'd like the controller not just to be in the correct state, but to be capable of telling you what state it's in. Also how it got there. And where it might go next. So you can tack names on to everything, and it works, but it conflicts somewhat with modularity. You'd also like to be able to build complex discrete-time behaviors from simpler ones. But when you ask the system what state it's in, generally the high-level answer is more interesting than (or at least as interesting as) the low-level answer. How do you get this cleanly? I've tried a few naive approaches and have wrapped myself in spaghetti a few different ways, but it seems like there must be elegant solutions.
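For what it's worth, here is a minimal sketch of one possible shape for that hierarchical self-description (in Python rather than Haskell, purely for illustration; all names are invented): each machine reports its own state name and then delegates to whichever child machine is active, so a query returns the high-level answer first.

```python
# Hypothetical sketch: hierarchical, self-describing state machines.
# Each node knows its own name and delegates to the active sub-machine,
# so describe() yields the high-level answer first, then the low-level one.

class State:
    def __init__(self, name, child=None):
        self.name = name      # human-readable name for self-description
        self.child = child    # optionally, a nested state machine

    def describe(self):
        # High-level name first, then the active child's description.
        if self.child is None:
            return [self.name]
        return [self.name] + self.child.describe()

# A mission-level state containing a low-level maneuver state:
current = State("TransitToWaypoint3", State("HoldHeading", State("TrimThrottle")))
print(" / ".join(current.describe()))
# -> TransitToWaypoint3 / HoldHeading / TrimThrottle
```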
Another problem I've had with self-description is that I'd like to have a list of self-describing conditions (generally comparisons: has it been 10 seconds? am I within 3 feet of the next waypoint? has the battery power fallen below 15%? etc.) that are being monitored and might trigger the next state transition. There are tricky questions about what the desirable semantics even are here, since it seems like some of these events are better handled "from the bottom up" (e.g. expected termination conditions of whatever low-level step you are performing) and some "from the top down" (e.g. equipment failure detection, geofencing, ...). This can lead to spaghetti of its own even if you relax the goal of self-description.
In addition to diagnostics, accurate self-description information here could also be very useful for abstract interpretation: projecting the state of the system into the future by guessing which events are likely to occur and when. Many of the event conditions lend themselves to fairly simple guesses (e.g. using velocity made good, fuel consumption rate, timers). Others are more complicated but might still be worth the effort to develop projections for in some applications (e.g. expected orders from operators, weather forecasts, projected tracks for moving objects of interest). It would be nice to find a design that annotates conditions not only with names, but also with functions for this sort of thing.
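To make the condition-annotation idea concrete, here is a hedged sketch along the same lines (Python again, every name invented): each monitored condition carries a human-readable name, a predicate over the current state, and an optional projection function estimating when it will fire.

```python
# Hypothetical sketch: self-describing transition conditions.
# Each condition bundles a name, a predicate on the current state, and an
# optional projection that estimates when the condition will become true.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Condition:
    name: str
    holds: Callable[[dict], bool]                   # is it true now?
    eta: Optional[Callable[[dict], float]] = None   # seconds until true, if estimable

state = {"dist_to_waypoint_ft": 40.0, "speed_ftps": 5.0, "battery_pct": 52.0}

conditions = [
    Condition("within 3 ft of next waypoint",
              holds=lambda s: s["dist_to_waypoint_ft"] < 3.0,
              eta=lambda s: max(0.0, (s["dist_to_waypoint_ft"] - 3.0) / s["speed_ftps"])),
    Condition("battery below 15%",
              holds=lambda s: s["battery_pct"] < 15.0),
]

for c in conditions:
    eta = f", est. {c.eta(state):.1f}s away" if c.eta else ""
    print(f"{c.name}: {'TRUE' if c.holds(state) else 'false'}{eta}")
```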
Does anyone have experience with this that they are willing to share?
Okay, so I would say the "real" answer to your question is that some of the things you are asking for are open areas of research --- in particular, I think some of the self-describing features you desire may necessitate some degree of "spaghetti" simply because the problem you are trying to solve is inherently complicated.
That being said, your focus on modularity is exactly the right approach. I would say, take a look at Keymaera, as I believe it has the features you are looking for despite being written in Java. I would also recommend looking at the publications page on the Keymaera website, as it should give you valuable insight into the problem in general.
If you do not like Keymaera's approach, you can also look into Timed Automata, which are another modeling direction that should be sufficient for your problem description.

Why modeling/diagramming?

I'm thinking about general modeling (diagramming) standards and tools such as BPMN, UML, data modeling, etc.
Why do we need modeling?
In which phase is modeling usually essential for you? Communication? Documentation and reporting?
How can you benefit from them in work or daily life?
Thank you!
You already named the most important word: communication.
Not only with others, but also with your future self. If you write an application now and return to it after a few months' pause, it will be hard to find your way around. Certainly, this depends on the size and complexity of the application in question. But having some visual representation will always be helpful.
Also, diagramming the application before actual development is a great way to wrap your head around everything. Normally, you will detect problems in your initial idea that are easy to fix at that stage. And as said, these diagrams will become helpful in the future.
Showing others how the application works internally is equally important. Everybody has a different way to approach problems. So something that is intuitive for you might seem weird in someone else's eyes. This makes it hard to grasp the concepts of an application someone else has written.
Also remember that only a few of us (if any) write perfect code and design perfect architectures. We are human, we make errors.
Let's say for example that there is this nifty design pattern you heard of called "decorator", and you want to use it. Naturally, the word "decorator" will appear in your code, and people reading it will think: "Hey, I know this pattern. No need to read the gory details. I'll just handle it as a black box and use it." But what if you misinterpreted the pattern and implemented it incorrectly? Now the person using it as a black box will run into problems. These may range from small bugs to non-compiling API calls. If you have a diagram of the whole thing, it will be much easier to pinpoint the cause.
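For reference, here is a minimal sketch of what the decorator pattern is supposed to look like (Python, with invented names), so a suspect implementation can be compared against the intent: a decorator wraps a component, preserves its interface, and adds behavior by delegating.

```python
# Minimal sketch of the classic decorator pattern (illustrative names).
# A decorator wraps a component and exposes the same interface,
# adding behavior before/after delegating to the wrapped object.

class Notifier:
    def send(self, msg: str) -> None:
        print(f"email: {msg}")

class SMSDecorator:
    def __init__(self, wrapped):
        self.wrapped = wrapped          # the component being decorated

    def send(self, msg: str) -> None:
        self.wrapped.send(msg)          # preserve the original behavior...
        print(f"sms: {msg}")            # ...and add to it

SMSDecorator(Notifier()).send("build failed")
# A common mis-implementation: forgetting to delegate to the wrapped
# object, which silently *replaces* behavior instead of decorating it.
```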
The biggest problem with modeling is keeping the diagrams in sync with your code. During the project lifetime you will make changes, small ones and big ones. In a perfect world, you would update the diagrams, think about it, let it sink into your brain, and then implement it. But the world is hardly perfect. So you may end up implementing something before diagramming it (if it's actually diagrammed at all). This whole process is cumbersome, be it because the diagram software you use is crap or simply because of time constraints.
Personally, I like to create an initial diagram. And once I'm happy with it, I dive into implementation. I won't revisit the diagram for small changes. Yes, after a while the code will deviate from the diagram, but the Big Picture is still there. If I need to make a bigger architectural change, I will revisit the diagram for sure.
What I will do however is keep the in-code documentation up-to-date. Most documentation extraction tools (like javadoc) give you the possibility to use markup which is useful to make the generated docs readable and usable. Especially when using hyperlinks.
So, coming back to one of your points, where you ask what the benefit is during daily developer life, I think that the larger diagrams don't make that much of a difference there. Simply because once you've grasped the concepts of the project, you will not need the diagram anymore as you peruse the codebase on a daily basis. What comes in handy though are proper code docstrings. Primarily because many IDEs display them while coding, or at least make it easy to jump to them. With diagrams, that's not so much the case.
Diagrams however are useful to get started quickly with a project.
One more thought about flow-charts/activity-diagrams: I find these mostly useless, except for complex algorithms, where they help you visualize the cyclomatic complexity. But quite honestly, I have never needed to write a complex algorithm myself. I have always found ready-to-use implementations in either the standard library or a third-party library.
And one final note: This question should have been posted on https://softwareengineering.stackexchange.com/ ;)
I can only speak for flowcharts and diagrams for state machines, but the answer might apply here as well: if you chart something, you are forced to think about your underlying structures. Usually, bad structures tend to be harder and uglier to draw as a diagram.

What is a hack? [closed]

I use the term all the time... but I was just thinking that I don't really have a solid denotational sense behind it (or at least behind the sense I want to discuss here). I'm interested in the sense of the word related to code, not the anthropomorphic idea. I'm also not interested here in the sense related to intentional malicious computing (e.g. a hack to unlock secret powers in a game). What I want to explore is what it means to 'hack' in terms of writing software to solve a problem.
Wikipedia's definition of 'hack' is to me a bit vague, but a decent starting point. It says a hack:
can refer to a solution or method which functions correctly but which is "ugly" in its conception
works outside the accepted structures and norms of the environment
is not easily extendable or maintainable
can be slang for "copy", "imitation" or "rip-off."
These traits of a hack conform to my usage of the word: when applied to code, it is always a term of derision. To my mind, a hack
Is likely to be difficult to maintain & hard to understand in the context of the rest of the code.
Is likely to cause failure of the app.
Tends to indicate a poor understanding by the coder of the problem space, the language, or both.
Tends to be the byproduct of aggressive schedules.
Suggests potential changes in requirements that have not been fully incorporated into the architecture of the solution (requiring an 'inorganic' workaround).
Smells.
all bad, bad, bad. To me, a hack in this sense is always negative, indicating either lack of time, incompetence, or sloth on the part of the developer, though a decent percentage of hacks must be written to compensate for ill-conceived designs or systems that have gained requirements which their original design cannot handle 'organically'.
I don't think I've really captured it totally though--it's like pornography a bit: I can't really define it, but I know it when I see it. So I ask you: what does it mean to 'hack' when you are trying to solve a problem in software?
I've always preferred Paul Graham's definition:
To add to the confusion, the noun "hack" also has two senses. It can be either a compliment or an insult. It's called a hack when you do something in an ugly way. But when you do something so clever that you somehow beat the system, that's also called a hack. The word is used more often in the former than the latter sense, probably because ugly solutions are more common than brilliant ones.
From the Jargon File, the glossary of hacker slang:
The Meaning of ‘Hack’
“The word hack doesn't really have 69 different meanings”, according to MIT hacker Phil Agre. “In fact, hack has only one meaning, an extremely subtle and profound one which defies articulation. Which connotation is implied by a given use of the word depends in similarly profound ways on the context. Similar remarks apply to a couple of other hacker words, most notably random.”
Hacking might be characterized as ‘an appropriate application of ingenuity’. Whether the result is a quick-and-dirty patchwork job or a carefully crafted work of art, you have to admire the cleverness that went into it.
An important secondary meaning of hack is ‘a creative practical joke’. This kind of hack is easier to explain to non-hackers than the programming kind.
When I think of "hack", I think of it as being a non-expected workaround to solve a problem, not necessarily a bad thing. Creative, innovative, and well-placed. "Hack" can apply to more than just computers, though I seldom hear it used that way.
Too often "hack" simply means: "Not the way I would do it."
This topic will turn into something like a question about love. Everyone's gonna have their own definition. The best way to know the proper (default) definition is the dictionary.
It's when you've stepped out of the idiomatic, natural, sensible and (sometimes) supported ways of doing something in a given language/framework/etc.
Sometimes that's a stroke of genius, usually it's an act of idiocy, occasionally it's one disguised as the other, and on rare occasions it's both.
(Incidentally, the judge who coined that statement about pornography you quote later retracted it in a subsequent ruling.)
When I use the term 'hack', it usually refers to a solution that was put together in response to a pressing issue, so not a lot of thought went into it with regard to the overall design of the application. Sometimes it works out, sometimes not so much, and sometimes it turns out to be a work of genius. But mainly, it's an admittedly temporary solution that (hopefully) gets refactored and refined when possible.
Here's a great sentence I saw about the difference between hacking and scamming: "Hacking attacks are successful when the criminal knows how a particular computer system works. Scams are successful when the perpetrator knows how the human brain works." It brings out the idea that to hack into something, you need to have a deep understanding of how it works.

Using Polymorphic Code for Legitimate Purposes?

I recently came across the term Polymorphic Code, and was wondering if anyone could suggest a legitimate (i.e. in legal and business appropriate software) reason to use it in a computer program? Links to real world examples would be appreciated!
Before someone answers, telling us all about the benefits of polymorphism in object oriented programming, please read the following definition for polymorphic code (taken from Wikipedia):
"Polymorphic code is code that uses a polymorphic engine to mutate while keeping the original algorithm intact. That is, the code changes itself each time it runs, but the function of the code in whole will not change at all."
Thanks, MagicAndi.
Update
Summary of answers so far:
Runtime optimization of the original code
Assigning a "DNA fingerprint" to each individual copy of an application
Obfuscate a program to prevent reverse-engineering
I was also introduced to the term 'metamorphic code'.
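For concreteness, here is a toy illustration of the definition above (a pedagogical Python sketch, nowhere near a real polymorphic engine): the payload is stored XOR-encoded under a key that changes on every generation, so the stored bytes mutate while the decoded behavior stays the same.

```python
# Toy illustration of the polymorphic-code idea (not a real engine):
# the same payload is re-encoded with a fresh key on every "generation",
# so its stored bytes mutate while its behavior is unchanged.

import os

def encode(payload: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in payload)   # XOR is its own inverse

payload = b"print('hello from the payload')"

# Two "generations" of the same program: different bytes, same behavior.
for _ in range(2):
    key = os.urandom(1)[0]
    mutated = encode(payload, key)           # what would sit on disk
    print("stored bytes:", mutated.hex()[:24], "...")
    exec(encode(mutated, key).decode())      # decode and run: identical output
```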
Runtime optimization of the original code, based on actual performance statistics gathered when running the application in its real environment with real inputs.
Digitally watermarking music is something often done to determine who was responsible for leaking a track, for example. It makes each copy of the music unique so that copies can be traced back to the original owner, but doesn't affect the audible qualities of the track.
Something similar could be done for compiled software by running each individual copy through a polymorphic engine before distributing it. Then if a cracked version of this software is released onto the Internet, the developer might be able to tell who cracked it by looking for the specific variations produced by the polymorphic engine (a sort of DNA test). As far as I know, this technique has never been used in practice.
It's not exactly what you were looking for I guess, since the polymorphic engine is not distributed with the code, but I think it's the closest to a legitimate business use you will find for this kind of technique.
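To illustrate that idea, here is a hypothetical sketch (Python, all names invented; not a description of any real product): derive a per-customer choice among semantically equivalent code variants from the customer ID, so each shipped copy differs in form but not in behavior, and a leaked copy can be traced back by recognizing which variants it contains.

```python
# Hypothetical sketch of "DNA fingerprinting" builds: a per-customer hash
# chooses among semantically equivalent code variants.

import hashlib

# Pairs of equivalent statements; the watermark picks one from each pair.
VARIANTS = [
    ("x = a + b", "x = b + a"),
    ("y = x * 2", "y = x + x"),
    ("return y",  "return y + 0"),
]

def watermarked_source(customer_id: str) -> str:
    digest = hashlib.sha256(customer_id.encode()).digest()
    lines = [pair[digest[i] % 2] for i, pair in enumerate(VARIANTS)]
    body = "\n    ".join(lines)
    return f"def f(a, b):\n    {body}\n"

print(watermarked_source("customer-0042"))
print(watermarked_source("customer-0043"))  # different form, same function
```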
Polymorphic code is a nice thing, but metamorphic code is even nicer. As to the legitimate uses: well, I can't think of anything other than anti-cracking and copy protection. Look at vx.org.ua if you want real-world uses (not that legitimate, though).
As Sami notes, on-the-fly optimisation is an excellent application of polymorphic code. A great example of this is the Fastest Fourier Transform in the West. It has a number of solvers at its disposal, which it combines with self-profiling to adjust the code path and solver parameters on subsequent executions. The result is the program optimises itself for your computing environment, getting faster with subsequent runs!
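As a toy version of that self-profiling idea (my own sketch, not how FFTW actually works): time the candidate solvers on the real workload once, persist the winner, and take that code path on subsequent runs.

```python
# Toy sketch of self-profiling code-path selection (not FFTW's mechanism):
# time each candidate solver on the actual workload, remember the winner,
# and take that code path on subsequent runs.

import json, os, time

def solver_naive(data):   return sum(x * x for x in data)
def solver_builtin(data): return sum(map(lambda x: x * x, data))

SOLVERS = {"naive": solver_naive, "builtin": solver_builtin}
CACHE = "solver_choice.json"   # hypothetical "wisdom" file

def best_solver(data):
    if os.path.exists(CACHE):                    # reuse earlier profiling
        return SOLVERS[json.load(open(CACHE))["name"]]
    timings = {}
    for name, fn in SOLVERS.items():             # profile on real input
        t0 = time.perf_counter(); fn(data)
        timings[name] = time.perf_counter() - t0
    winner = min(timings, key=timings.get)
    json.dump({"name": winner}, open(CACHE, "w"))
    return SOLVERS[winner]

data = list(range(100_000))
print(best_solver(data)(data))   # later runs skip profiling entirely
```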
A related idea that may possibly be of interest is computational steering. This is the practice of altering the execution path of large simulations as the run proceeds, to focus on areas of interest to the researcher. The overall purpose of the simulation is not changed, but the feedback cycle acts to optimise the calculation. In this case the executable code is not being explicitly rewritten, but the effect from a user perspective is similar.
Polymorphic code can be used to obfuscate weak or proprietary algorithms, e.g. ones that use encryption. There are many "legitimate" uses for that. The term "legitimate" is kind of narrow-minded when it comes to IT; security is one of IT's core paradigms. Whether you use polymorphic shellcode in exploits or detect such code with an AV scanner, you have to know about it.
Obfuscate a program, i.e. prevent reverse-engineering, the goal being to protect IP (intellectual property).

Decoupling vs YAGNI

Do they contradict?
Decoupling is something great and quite hard to achieve. However, in most applications we don't really need it, so I can design highly coupled applications and it will change almost nothing, apart from obvious side effects such as "you cannot separate components", "unit testing is a pain in the arse", etc.
What do you think? Do you always try to decouple and deal with the overhead?
It seems to me decoupling and YAGNI are very much complementary virtues. (I just noticed Rob's answer, and it seems like we're on the same page here.) The question is how much decoupling you should do, and YAGNI is a good principle to help determine the answer. (For those who speak of unit testing -- if you need to decouple to do your unit test, then YAGNI obviously doesn't apply.)
I really sincerely doubt the people who say they "always" decouple. Maybe they always do every time they think of it. But I have never seen a program where additional layers of abstraction couldn't be added somewhere, and I sincerely doubt there is a non-trivial example of such a program out there. Everyone draws a line somewhere.
From my experience, I've decoupled code and then never taken advantage of the additional flexibility about as often as I've left code coupled and then had to go back and change it later. I'm not sure if that means I'm well-balanced or equally broken in both directions.
YAGNI is a rule of thumb (not a religion). Decoupling is more or less a technique (also not a religion). So they're not really related, nor do they contradict each other.
YAGNI is about pragmatism. Assume you don't need something, until you do.
Usually assuming YAGNI results in decoupling. If you don't apply that knife at all, you end up assuming that you need to have classes that know all about each other's behavior before you have demonstrated that to be true.
"Decouple" (or "loosely couple") is a verb, so it requires work. YAGNI is a presumption, for which you adjust when you find that it's no longer true.
I'd say they don't. Decoupling is about reducing unnecessary dependencies within code and tightening up accesses through clean, well-defined interfaces. "You ain't gonna need it" is a useful principle which generally advises against over-extensibility and overly broad architecture of a solution where there's no obvious and current use case.
The practical upshot of these is that you have a system where it's much easier to refactor and maintain individual components without inadvertently causing a ripple effect across the entire application, and where there are no unnecessarily complicated aspects to the design - it's as simple as is required to meet the current requirements.
I (almost) always decouple. Every time I did this I found it useful, and (almost) every time I didn't I had to do it later. I've also found it a good way to decrease the number of bugs.
Decoupling for the sake of decoupling can be bad.
Building testable components is very important though.
The hard part of the story is to know when and how much decoupling you need.
If "unit testing is a pain in the arse" then I would say that you do need it. Most of the time decoupling can be achieved with virtually zero cost as well, so why wouldn't you want to do it?
Furthermore, one of my biggest bugbears when working on a new codebase is having to decouple the code before I can start writing unit tests, when the introduction of an interface somewhere or the use of dependency injection could have saved a lot of time.
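To make that concrete, here is a small sketch (Python, invented names) of the kind of decoupling dependency injection buys you: the service receives its collaborator through the constructor, so a test can hand it a fake without ever touching the real payment provider.

```python
# Minimal dependency-injection sketch: the service depends on an interface
# it receives from outside, so tests can substitute a fake implementation.

class PaymentGateway:                       # the "interface" (by convention)
    def charge(self, cents: int) -> bool: ...

class RealGateway(PaymentGateway):
    def charge(self, cents: int) -> bool:
        raise RuntimeError("talks to the network; painful in unit tests")

class FakeGateway(PaymentGateway):
    def __init__(self): self.charged = []
    def charge(self, cents: int) -> bool:
        self.charged.append(cents); return True

class CheckoutService:
    def __init__(self, gateway: PaymentGateway):   # injected, not constructed
        self.gateway = gateway
    def checkout(self, cents: int) -> bool:
        return self.gateway.charge(cents)

# Unit test: no network, no real provider, just the fake.
fake = FakeGateway()
assert CheckoutService(fake).checkout(499)
assert fake.charged == [499]
```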
As your tag says, it's highly subjective. It rests entirely upon your own engineering wisdom to decide what you "ain't gonna need". You may need coupling in one case, but you won't in another. Who is to tell? You, of course.
For a decision so subjective, then, there cannot be a guideline to prescribe.
Well, YAGNI is little more than a bogus, simplistic phrase people throw around. Decoupling, however, is a fairly well understood concept. YAGNI seems to imply that one is some sort of psychic. It's just programming by cliché, which is never a good idea. To be honest, there is a case to be made that YAGNI is probably not related to decoupling at all. Coupling is typically "quicker", and "who knows if you are going to need a decoupled solution; you aren't gonna change X component anyway!"
YAGNI the mess :) ... really, we don't need to have all the code mixed to go "faster".
Unit tests really help you feel when code is coupled (given that one understands well what a unit test is vs. other types of tests). If you instead work with the "you cannot separate components" mindset, you can easily end up adding stuff that you ain't gonna need :)
I would say YAGNI comes in when you start twisting and changing logic all around, beyond the actual usage scenarios the current implementation calls for. Let's say you have some code that uses a couple of external payment providers that both work with redirects to an external site. It is OK to have a design that keeps everything clean, but I don't think it is OK to start designing for providers that we don't know will ever be supported and that have plenty of different ways of handling the integration and the related workflow.
