With the advancement of technology, industry has been moving towards automation and intelligence, and artificial intelligence and machine learning have played a vital role in this shift. Natural language processing (NLP) is a field of computer science and linguistics that focuses on methods for processing natural languages. So, which one is more reliable and efficient for natural language processing: a finite state machine (FSM) or a pushdown automaton (PDA)?
Even though there are many techniques for doing NLP, the state-of-the-art approach is to use deep learning. Many significant improvements in NLP have come from deep learning techniques, largely because of the enormous amount of processing power now available at low cost. If you want to read about cutting-edge techniques in NLP or any other research domain, go to Google Scholar (https://scholar.google.com/).
It seems like the real question you want to be asking is: "What are some efficient techniques in natural language processing?" But I will address your question first.
First of all, neither FSAs (finite state automata) nor PDAs (pushdown automata) are sufficient to model natural language. FSAs can handle regular languages; they cannot, however, even answer the question of whether a word is a palindrome. PDAs are a little more powerful and can answer such questions. Turing machines give universal computation and are useful for writing programs of arbitrary complexity.
Now to start bridging this gap: natural languages are not regular languages, so they cannot be handled by FSAs. Some context-free grammars, such as LR(k) grammars, can be handled by PDAs; however, natural human language is not context-free. As an example, consider the following three statements: "Jill drove to the grocery store to meet her friend Sally before she picked up her kids. Sally bought three boxes of cereal. Then she drove to the school." While this is poor grammar, it is "natural" in that these are utterances people make, and they are generally parseable by other people. The antecedent of the pronoun "she" in the third sentence most plausibly refers to Jill, as she is the one with children; however, it is ambiguous, and we have to infer that association.
The amount of ambiguity in context in natural human language makes it impossible to parse deterministically. Instead, we turn towards the fields of statistics and decision theory to make our inferences about the maximally likely model for the communication.
The locality, combined with the non-determinism, of speech and writing is one of the things that makes machine learning techniques such as deep recurrent neural networks so immensely effective compared to their classical rule-based counterparts.
While the term Neural Network is a bit of a misnomer as ultimately the human brain is far, far more complex than these rudimentary models from a neurological perspective, the general learning through approximate inference is ostensibly close to reality. We might better call these methods "Differentiable Computing" but that is a digression for another time.
In summary: the answer to the question you actually asked is that PDAs will produce better models than FSAs, but both are going to be absolutely worthless compared to even rudimentary statistical methods.
If you are curious about NLP, I would actually recommend a course in machine learning and a follow up in deep learning.
Andrew Ng has a good series of courses that are targeted toward beginners. After that, I would follow up with Siraj's course on deep learning in TensorFlow.
I'm a student of software engineering. My lecturer of "Software Architecture and Design" has told us that we can generate source code from all the UML diagrams (or most).
I have already generated code from a class diagram, but I'm unable to generate code from the other diagrams.
Do I have to somehow connect those diagrams with class diagrams to do that?
This is simply nonsense. You cannot generate code from any diagram at all. You can, however, generate code from a UML model, which may (but need not) have a couple of diagrams to help visualization for humans.
Now, code is related to classes. That means you need at least some classes defined in your model. A use case helps understanding why classes will do things they are supposed to do. But in no case can you create code from a use case.
There are other model elements that help support creating more detailed code, e.g. state machines, which can be translated into equivalent code sections.
Activity and sequence diagrams also help visualize how certain code sections run during execution. But you will not (seriously) use them to create code.
Yes, you can, but it's not as simple as what you're describing. Model-Driven Architecture is an active area of research right now, but it hasn't really "caught on" yet. Its proponents argue that it allows for a higher level of abstraction in much the same way that C offered a higher level of abstraction than assembly language and Java offered a higher level of abstraction than C. I think that this could be very useful in the future if they can get the tooling right.
Actually, this isn't even an entirely new idea - the idea of graphical programming in general (which, if you think about it, is basically a generalization of UML-derived programming) has been around at least since the 1980s that I know of (and probably a lot earlier). In fact, Frederick Brooks Jr. talks about it in No Silver Bullet – Essence and Accident in Software Engineering (which was originally published in 1986 and appears in current editions of The Mythical Man-Month):
A favorite subject for Ph.D. dissertations in software engineering is graphical, or visual, programming, the application of computer graphics to software design. Sometimes the promise of such an approach is postulated from the analogy with VLSI chip design, where computer graphics plays so fruitful a role. Sometimes this approach is justified by considering flowcharts as the ideal program design medium, and providing powerful facilities for constructing them.
Nothing even convincing, much less exciting, has yet emerged from such efforts. I am persuaded that nothing will...
His argument was that, at the time it was written, the tooling just wasn't "there" yet; for example, screen sizes were notoriously small. He also argues that the flow chart is actually a really bad design mechanism. He goes on:
More fundamentally, as I have argued above, software is very difficult to visualize. Whether we diagram control flow, variable scope nesting, variable cross-references, data flow, hierarchical data structures, or whatever, we feel only one dimension of the intricately interlocked software elephant. If we superimpose all the diagrams generated by the many relevant views, it is difficult to extract any global overview. The VLSI analogy is fundamentally misleading - a chip design is a layered two-dimensional object whose geometry reflects its essence. A software system is not.
I'll leave it to you to judge whether or not you agree with him or whether this still applies.
So, to summarize: yes, it's at least theoretically possible, and there have been considerable efforts to generate code from UML diagrams, but you'll need multiple diagrams to generate much more than basic class structures and method stubs. It's not like you can write a use case diagram, press a button, and magically have a complete software system.
I think I have found the answer. We can generate code. Say I have a "use case". I right-click on it, go to "advance", and select "instance classifier". There I can actually make my "use cases", "sequence diagram objects", etc. instances of an already created class, or I can even create a class right there.
I am looking for insight on modeling. I had an intro course on design patterns and basic class diagrams, sequence diagrams, and use cases.
The class diagrams I have found invaluable as a tool of organization in my programming. The use cases are moderately useful so far.
This semester I am in a class going into UML in much more depth i.e. Domain Analysis, Requirements Analysis, Software Design vs. Software Engineering etc.
There is a certain feeling that this is starting to become more voodoo-sciencey and less concrete as we start trying to be precise about the ambiguities in scenarios and changing requirements. Is UML beyond basic class diagrams and use-case diagrams practically useful for productivity in most applications?
It started out voodoo. Diagramming software designs has always been that way. It is a way of showing in pictures what you want to say about the design in a human language. If it was precise enough to generate code from, we'd go ahead and do that and dispense with the coding step altogether.
The only thing UML brings new to the older ways is that it is a standard. Even then, there are so many different kinds of "standard" diagrams that I have to snicker a little when calling it a standard.
However, the activity of design itself is extremely important for all but the most trivial of tasks. The question is whether you are going to spend some time up front designing your system, or whether you are going to do it on the fly, after having written a great deal of wrong or unnecessary code. If you want things done quickly and/or well, you do some design up front.
This doesn't just apply to writing software BTW. It is an inherent part of any complex creative activity. My father-in-law, a retired English teacher who writes his children longish postcards when he goes on vacation, actually writes outlines for his postcard messages. Most master painters and sculptors make test drawings first.
No.
All sorts and forms of documentation are only useful as a means of communication. Documentation for documentation's sake is a complete waste of time.
Writing UML is useful and productive only when it comes with a document that explains (in words) what it is you want, why, and how. Only then can UML help to illustrate what you are trying to say in the document.
Software teams that produce endless amounts of UML just for the sake of drawing squares are just wasting time.
You started out with modeling, which is a great thing to do, especially in computer science - you model all the time. Keep in mind UML is a standard for a modeling notation for software systems, nothing more (e.g. it is not an analysis or design methodology) and nothing less (e.g. it is not a way for developers to look productive by drawing nonsense).
You are on the right track; always keep in mind what is actually useful and gives you some value. This is not exactly relevant to your question, but use cases are not use case diagrams: they are much more, have a written form, and might help you with much of what you described as being in your next course.
As to your concern, modeling is about abstracting away unimportant details, so some ambiguities might occur. The point is that they should be unimportant for the purpose of modeling. For example, it does not really matter whether you include all the properties of your classes if you want to show the structure of a design, e.g. the use of some pattern. You can also use public properties without concerning yourself with whether they are private fields with getters and setters (Java), properties (C#), or generated object methods using metaprogramming (Ruby). The same holds for scenarios captured using use cases: of course you cannot (and should not try to) capture alternative branches using UML, but you can describe the conditions in use case descriptions just enough to avoid ambiguity, without having to develop the system first and find out it is wrong afterwards.
As to the voodoo stuff: the problem is that UML is large, and many developers don't know how to use it right and often create more mess than value. Don't be confused by the general disrespect for UML; the problem lies with tool vendors, committees, and lazy developers... Behind many concepts in UML are well-known formal models backed by academic work, e.g. the state diagrams come from Harel statecharts (http://linkinghub.elsevier.com/retrieve/pii/0167642387900359). So in my opinion it is not so much voodoo in principle; it is just oversold, with tools not supporting the standard, and the standard trying to be and combine everything (it is a unified language...), though this is slowly improving.
My advice for you would be to try to learn what is important - those formalisms and the analysis and design methods - try them practically, and decide for yourself what is useful. If for no other reason, learn UML because it is the language for analysis and design; although large, it is still better than its ~50 predecessors combined :).
From my experience: Not really.
I never came across a really useful sequence diagram. Sequence diagrams stop being useful when the documented process becomes too complex, as you have a hard time following all the lines. But to understand a trivial process, I don't need a sequence diagram. When used as a design tool you will waste a ridiculous amount of time adjusting the diagrams, cussing MS Visio or whatever you use.
The notation however can be useful for a small snapshot when discussing something on a whiteboard. But this is valid for any notation style; UML is just well established, increasing the chances you are understood correctly.
Class diagrams are useful, both in design and in a posteriori documentation. But IMHO you shouldn't be too pedantic about them.
Not in my humble opinion. It's completely superfluous as far as I am concerned.
What is a good way to design/structure large functional programs, especially in Haskell?
I've been through a bunch of the tutorials (Write Yourself a Scheme being my favorite, with Real World Haskell a close second) - but most of the programs are relatively small, and single-purpose. Additionally, I don't consider some of them to be particularly elegant (for example, the vast lookup tables in WYAS).
I'm now wanting to write larger programs, with more moving parts - acquiring data from a variety of different sources, cleaning it, processing it in various ways, displaying it in user interfaces, persisting it, communicating over networks, etc. How could one best structure such code to be legible, maintainable, and adaptable to changing requirements?
There is quite a large literature addressing these questions for large object-oriented imperative programs. Ideas like MVC, design patterns, etc. are decent prescriptions for realizing broad goals like separation of concerns and reusability in an OO style. Additionally, newer imperative languages lend themselves to a 'design as you grow' style of refactoring to which, in my novice opinion, Haskell appears less well-suited.
Is there an equivalent literature for Haskell? How is the zoo of exotic control structures available in functional programming (monads, arrows, applicative, etc.) best employed for this purpose? What best practices could you recommend?
Thanks!
EDIT (this is a follow-up to Don Stewart's answer):
#dons mentioned: "Monads capture key architectural designs in types."
I guess my question is: how should one think about key architectural designs in a pure functional language?
Consider the example of several data streams, and several processing steps. I can write modular parsers for the data streams to a set of data structures, and I can implement each processing step as a pure function. The processing steps required for one piece of data will depend on its value and others'. Some of the steps should be followed by side-effects like GUI updates or database queries.
What's the 'Right' way to tie the data and the parsing steps in a nice way? One could write a big function which does the right thing for the various data types. Or one could use a monad to keep track of what's been processed so far and have each processing step get whatever it needs next from the monad state. Or one could write largely separate programs and send messages around (I don't much like this option).
The slides he linked have a "Things we Need" bullet: "Idioms for mapping design onto types/functions/classes/monads". What are the idioms? :)
I talk a bit about this in Engineering Large Projects in Haskell and in the Design and Implementation of XMonad. Engineering in the large is about managing complexity. The primary code structuring mechanisms in Haskell for managing complexity are:
The type system
Use the type system to enforce abstractions, simplifying interactions.
Enforce key invariants via types
(e.g. that certain values cannot escape some scope)
That certain code does no IO, does not touch the disk
Enforce safety: checked exceptions (Maybe/Either), avoid mixing concepts (Word, Int, Address)
Good data structures (like zippers) can make some classes of testing needless, as they rule out e.g. out of bounds errors statically.
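For instance, a minimal sketch of the smart-constructor/newtype style this suggests (the Port and id types below are invented for illustration, not taken from this answer):

    module Invariants (Port, mkPort, unPort, UserId (..), OrderId (..)) where

    -- The Port constructor is not exported, so every Port in the program
    -- has gone through the range check below.
    newtype Port = Port Int
      deriving (Show, Eq)

    mkPort :: Int -> Maybe Port
    mkPort n
      | n > 0 && n < 65536 = Just (Port n)
      | otherwise          = Nothing

    unPort :: Port -> Int
    unPort (Port n) = n

    -- Distinct wrappers stop you from passing a user id where an order id
    -- is expected, even though both are Ints underneath.
    newtype UserId  = UserId Int  deriving (Show, Eq)
    newtype OrderId = OrderId Int deriving (Show, Eq)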
The profiler
Provide objective evidence of your program's heap and time profiles.
Heap profiling, in particular, is the best way to ensure no unnecessary memory use.
Purity
Reduce complexity dramatically by removing state. Purely functional code scales, because it is compositional. All you need is the type to determine how to use some code -- it won't mysteriously break when you change some other part of the program.
Use lots of "model/view/controller" style programming: parse external data as soon as possible into purely functional data structures, operate on those structures, then once all work is done, render/flush/serialize out. This keeps most of your code pure.
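A small sketch of that shape, with an invented whitespace-separated input format purely for illustration:

    import Data.List  (sortOn)
    import Data.Maybe (mapMaybe)
    import Text.Read  (readMaybe)

    data Record = Record { label :: String, value :: Double }

    -- Pure core: parsing and processing are ordinary functions,
    -- easy to test and refactor.
    parseRecord :: String -> Maybe Record
    parseRecord line = case words line of
      [l, v] -> Record l <$> readMaybe v
      _      -> Nothing

    process :: [Record] -> [Record]
    process = take 10 . sortOn (negate . value)

    render :: Record -> String
    render r = label r ++ ": " ++ show (value r)

    -- Thin impure shell: read, hand off to the pure core, write out.
    main :: IO ()
    main = do
      raw <- lines <$> readFile "input.txt"   -- hypothetical input file
      mapM_ (putStrLn . render) (process (mapMaybe parseRecord raw))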
Testing
QuickCheck + Haskell Code Coverage, to ensure you are testing the things you can't check with types.
GHC + RTS is great for seeing if you're spending too much time doing GC.
QuickCheck can also help you identify clean, orthogonal APIs for your modules. If the properties of your code are difficult to state, they're probably too complex. Keep refactoring until you have a clean set of properties that can test your code, that compose well. Then the code is probably well designed too.
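As a hedged sketch of what such properties can look like (normalize and both properties are invented for the example):

    import Test.QuickCheck

    -- Suppose this is a pure normalisation step somewhere in our pipeline.
    normalize :: [Int] -> [Int]
    normalize = map abs . filter (/= 0)

    -- Properties double as documentation of the API's contract.
    prop_idempotent :: [Int] -> Bool
    prop_idempotent xs = normalize (normalize xs) == normalize xs

    prop_nonNegative :: [Int] -> Bool
    prop_nonNegative xs = all (>= 0) (normalize xs)

    main :: IO ()
    main = do
      quickCheck prop_idempotent
      quickCheck prop_nonNegative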
Monads for Structuring
Monads capture key architectural designs in types (this code accesses hardware, this code is a single-user session, etc.)
E.g. the X monad in xmonad, captures precisely the design for what state is visible to what components of the system.
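In that spirit, a rough sketch of a custom application monad whose type states exactly which environment and state the code may touch (Config and AppState are placeholder names, not xmonad's actual types):

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    import Control.Monad.Reader
    import Control.Monad.State

    data Config   = Config   { logFile :: FilePath }
    data AppState = AppState { counter :: Int }

    newtype App a = App (ReaderT Config (StateT AppState IO) a)
      deriving (Functor, Applicative, Monad, MonadIO,
                MonadReader Config, MonadState AppState)

    runApp :: Config -> AppState -> App a -> IO (a, AppState)
    runApp cfg st (App m) = runStateT (runReaderT m cfg) st

    -- Code written against App can only see Config and AppState;
    -- the type documents (and enforces) that part of the architecture.
    tick :: App Int
    tick = do
      modify (\s -> s { counter = counter s + 1 })
      gets counter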
Type classes and existential types
Use type classes to provide abstraction: hide implementations behind polymorphic interfaces.
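A minimal sketch of that idea (the Logger class is illustrative, not from any standard library):

    {-# LANGUAGE ExistentialQuantification #-}

    class Logger l where
      logMsg :: l -> String -> IO ()

    data StdoutLogger = StdoutLogger
    instance Logger StdoutLogger where
      logMsg _ msg = putStrLn msg

    data FileLogger = FileLogger FilePath
    instance Logger FileLogger where
      logMsg (FileLogger path) msg = appendFile path (msg ++ "\n")

    -- An existential wrapper lets callers hold "some logger" without
    -- knowing which implementation sits behind it.
    data SomeLogger = forall l. Logger l => SomeLogger l

    withLogger :: SomeLogger -> String -> IO ()
    withLogger (SomeLogger l) = logMsg l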
Concurrency and parallelism
Sneak par into your program to beat the competition with easy, composable parallelism.
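For example, a tiny sketch using the parallel package's strategies (the workload is made up; in real code you would profile first):

    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- Some pure, CPU-heavy function (illustrative only).
    expensive :: Int -> Integer
    expensive n = sum [1 .. fromIntegral n * 100000]

    -- Because the code is pure, evaluating the elements in parallel
    -- cannot change the result, only the running time.
    main :: IO ()
    main = print (sum (parMap rdeepseq expensive [1 .. 100]))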
Refactor
You can refactor in Haskell a lot. The types ensure your large scale changes will be safe, if you're using types wisely. This will help your codebase scale. Make sure that your refactorings will cause type errors until complete.
Use the FFI wisely
The FFI makes it easier to play with foreign code, but that foreign code can be dangerous.
Be very careful in assumptions about the shape of data returned.
Meta programming
A bit of Template Haskell or generics can remove boilerplate.
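One widely used generics idiom, shown here with the aeson library purely as an illustration (it is not named in the answer): the serialisation code is derived instead of written by hand.

    {-# LANGUAGE DeriveGeneric #-}
    import GHC.Generics (Generic)
    import Data.Aeson (ToJSON, FromJSON, encode)
    import qualified Data.ByteString.Lazy.Char8 as BL

    data User = User { name :: String, age :: Int }
      deriving (Show, Generic)

    -- Empty instances: the encoders are derived generically, so there is
    -- no hand-written serialisation boilerplate to maintain.
    instance ToJSON User
    instance FromJSON User

    main :: IO ()
    main = BL.putStrLn (encode (User "Ada" 36))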
Packaging and distribution
Use Cabal. Don't roll your own build system. (EDIT: Actually, you probably want to use Stack now for getting started.)
Use Haddock for good API docs
Tools like graphmod can show your module structures.
Rely on the Haskell Platform versions of libraries and tools, if at all possible. It is a stable base. (EDIT: Again, these days you likely want to use Stack for getting a stable base up and running.)
Warnings
Use -Wall to keep your code clean of smells. You might also look at Agda, Isabelle or Catch for more assurance. For lint-like checking, see the great hlint, which will suggest improvements.
With all these tools you can keep a handle on complexity, removing as many interactions between components as possible. Ideally, you have a very large base of pure code, which is really easy to maintain, since it is compositional. That's not always possible, but it is worth aiming for.
In general: decompose the logical units of your system into the smallest referentially transparent components possible, then implement them in modules. Global or local environments for sets of components (or inside components) might be mapped to monads. Use algebraic data types to describe core data structures. Share those definitions widely.
Don gave you most of the details above, but here's my two cents from doing really nitty-gritty stateful programs like system daemons in Haskell.
In the end, you live in a monad transformer stack. At the bottom is IO. Above that, every major module (in the abstract sense, not the module-in-a-file sense) maps its necessary state into a layer in that stack. So if you have your database connection code hidden in a module, you write it all to be over a type MonadReader Connection m => ... -> m ... and then your database functions can always get their connection without functions from other modules having to be aware of its existence. You might end up with one layer carrying your database connection, another your configuration, a third your various semaphores and mvars for the resolution of parallelism and synchronization, another your log file handles, etc.
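A minimal sketch of that shape (Connection and the "query" are placeholders, not a real database library):

    {-# LANGUAGE FlexibleContexts #-}
    import Control.Monad.Reader
    import Control.Monad.IO.Class (MonadIO, liftIO)

    data Connection = Connection   -- stand-in for a real DB handle

    -- Database code asks only for the capability it needs; other modules
    -- never have to thread the connection around explicitly.
    getUserCount :: (MonadReader Connection m, MonadIO m) => m Int
    getUserCount = do
      _conn <- ask
      liftIO (pure 42)   -- imagine a real query against _conn here

    main :: IO ()
    main = do
      n <- runReaderT getUserCount Connection
      print n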
Figure out your error handling first. The greatest weakness at the moment for Haskell in larger systems is the plethora of error handling methods, including lousy ones like Maybe (which is wrong because you can't return any information on what went wrong; always use Either instead of Maybe unless you really just mean missing values). Figure out how you're going to do it first, and set up adapters from the various error handling mechanisms your libraries and other code uses into your final one. This will save you a world of grief later.
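A hedged sketch of what "adapt everything into one error type" can look like (the AppError constructors and the noteE helper are invented for illustration):

    data AppError
      = ParseError String
      | MissingField String
      | DbError String
      deriving (Show)

    -- Adapter: upgrade a Maybe from some library into our error type,
    -- attaching the information that Maybe by itself cannot carry.
    noteE :: AppError -> Maybe a -> Either AppError a
    noteE err = maybe (Left err) Right

    lookupAge :: [(String, Int)] -> String -> Either AppError Int
    lookupAge table name =
      noteE (MissingField ("age for " ++ name)) (lookup name table)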
Addendum (extracted from comments; thanks to Lii & liminalisht) —
more discussion about different ways to slice a large program into monads in a stack:
Ben Kolera gives a great practical intro to this topic, and Brian Hurt discusses solutions to the problem of lifting monadic actions into your custom monad. George Wilson shows how to use mtl to write code that works with any monad that implements the required typeclasses, rather than your custom monad kind. Carlo Hamalainen has written some short, useful notes summarizing George's talk.
Designing large programs in Haskell is not that different from doing it in other languages.
Programming in the large is about breaking your problem into manageable pieces, and how to fit those together; the implementation language is less important.
That said, in a large design it's nice to try and leverage the type system to make sure you can only fit your pieces together in a way that is correct. This might involve newtype or phantom types to make things that appear to have the same type be different.
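For example, a small phantom-type sketch (the Raw/Sanitized tags are invented for illustration) in which two values share a representation but cannot be confused:

    {-# LANGUAGE EmptyDataDecls #-}

    newtype Text' tag = Text' String

    data Raw         -- values straight from the outside world
    data Sanitized   -- values that have passed through 'sanitize'

    sanitize :: Text' Raw -> Text' Sanitized
    sanitize (Text' s) = Text' (filter (/= '<') s)   -- toy sanitiser

    -- Only sanitized text can reach the renderer; mixing the two up is a
    -- type error, even though both wrap a plain String.
    render :: Text' Sanitized -> String
    render (Text' s) = s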
When it comes to refactoring the code as you go along, purity is a great boon, so try to keep as much of the code as possible pure. Pure code is easy to refactor, because it has no hidden interaction with other parts of your program.
I first learned structured functional programming with this book.
It may not be exactly what you are looking for, but for beginners in functional programming, it may be one of the best first steps towards learning to structure functional programs - independent of the scale. At all abstraction levels, the design should always have clearly arranged structures.
The Craft of Functional Programming
http://www.cs.kent.ac.uk/people/staff/sjt/craft2e/
I'm currently writing a book with the title "Functional Design and Architecture". It provides you with a complete set of techniques for building a big application using a purely functional approach. It describes many functional patterns and ideas while building 'Andromeda', a SCADA-like application for controlling spaceships, from scratch. My primary language is Haskell. The book covers:
Approaches to architecture modelling using diagrams;
Requirements analysis;
Embedded DSL domain modelling;
External DSL design and implementation;
Monads as subsystems with effects;
Free monads as functional interfaces;
Arrowised eDSLs;
Inversion of Control using Free monadic eDSLs;
Software Transactional Memory;
Lenses;
State, Reader, Writer, RWS, ST monads;
Impure state: IORef, MVar, STM;
Multithreading and concurrent domain modelling;
GUI;
Applicability of mainstream techniques and approaches such as UML, SOLID, GRASP;
Interaction with impure subsystems.
You may get familiar with the code for the book here, and the 'Andromeda' project code.
I expect to finish this book at the end of 2017. Until that happens, you may read my article "Design and Architecture in Functional Programming" (Rus) here.
UPDATE
I shared my book online (first 5 chapters). See post on Reddit
Gabriel's blog post Scalable program architectures might be worth a mention.
Haskell design patterns differ from mainstream design patterns in one important way:

Conventional architecture: Combine several components together of type A to generate a "network" or "topology" of type B.

Haskell architecture: Combine several components together of type A to generate a new component of the same type A, indistinguishable in character from its substituent parts.
It often strikes me that an apparently elegant architecture tends to fall out of libraries that exhibit this nice sense of homogeneity, in a bottom-up sort of way. In Haskell this is especially apparent - patterns that would traditionally be considered "top-down architecture" tend to be captured in libraries like mvc, Netwire and Cloud Haskell. That is to say, I hope this answer will not be interpreted as an attempt to replace any of the others in this thread, just as a suggestion that structural choices can and should ideally be abstracted away in libraries by domain experts. The real difficulty in building large systems, in my opinion, is evaluating these libraries on their architectural "goodness" versus all of your pragmatic concerns.
As liminalisht mentions in the comments, The category design pattern is another post by Gabriel on the topic, in a similar vein.
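As a deliberately tiny illustration of the quoted pattern (the names below are invented): functions into a Semigroup are themselves a Semigroup, so combining validation components yields another component of exactly the same type:

    -- Each component has the same type as the whole.
    type Check a = a -> [String]

    nonEmpty, noSpaces :: Check String
    nonEmpty s = ["must not be empty" | null s]
    noSpaces s = ["must not contain spaces" | ' ' `elem` s]

    -- (<>) on functions combines results pointwise, so the composite
    -- checker is indistinguishable in kind from its parts.
    validateName :: Check String
    validateName = nonEmpty <> noSpaces

    main :: IO ()
    main = print (validateName "hello world")   -- ["must not contain spaces"]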
I have found the paper "Teaching Software Architecture Using Haskell" (pdf) by Alejandro Serrano useful for thinking about large-scale structure in Haskell.
Perhaps you have to go a step back and think of how to translate the description of the problem into a design in the first place. Since Haskell is so high level, it can capture the description of the problem in the form of data structures, the actions as procedures, and the pure transformations as functions. Then you have a design. Development starts when you compile this code and find concrete errors about missing fields, missing instances, and missing monad transformers in your code, because, for example, you perform a database access from a library that needs a certain state monad within an IO procedure. And voilà, there is the program. The compiler feeds your mental sketches and gives coherence to the design and the development.
In this way you benefit from the help of Haskell from the beginning, and the coding is natural. I would not worry about making something "functional" or "pure" or general enough if what you have in mind is a concrete, ordinary problem. I think that over-engineering is the most dangerous thing in IT. Things are different when the problem is to create a library that abstracts a set of related problems.
UML is a great language for modeling software against business requirements, but there is a growing community that points out some of its disadvantages and lacking features.
What are the most significant disadvantages that you find crucial in UML, and what would be a good alternative that addresses these lacking features?
The biggest one is that it's yet another layer of red tape that gets in the way of just $#%$#% coding the thing and making it work.
The fact that people use it to "model software for business requirements", as you put it, and other such process-oriented claptrap. UML started out as a conventionalised way for programmers to communicate software to other programmers in a pictorial form. In that sense it's just formalised napkin-scribbling - and as such it is very effective. You can draw a UML class diagram on a whiteboard and I can understand it without quibbling over notation.
But somewhere along the line someone got the idea that a drawing notation could somehow be a process in its own right, or at least a formal part of a larger process. And that's just silly. UML diagrams are a fine way to illustrate books, and quite useful as a means for engineers to scribble ideas back and forth. But that's where it should have ended.
I can say at least three:
It takes a lot of time to keep the diagrams reasonable and synchronized with the actual code. UML diagrams don't run, but they require a lot of time. So they are good only if your organization's size can support them.
You cannot represent every condition in a sequence diagram. It's impossible if you want to deliver. So state diagrams should convey basic facts, not all the possible outcomes.
Good UML software costs money and it takes some time to master properly.
So, I think UML is good in a complementary documentation role, and only if the size of your organization allows it.
Solutions... well, in the end, diagramming is just a way to convey high-level information to another person, across space or time (e.g. it could be you in a year's time). Extreme Programming shifts the burden of information retrieval from dead trees to living brains. Of course, it assumes that the living brain never forgets and never quits. Extreme Programming uses redundancy to reduce the impact of such occurrences. In a large company, a strong layoff round could wipe out entire teams, so storing information in brains can be risky. On the other hand, large companies have human power to waste, hence the diagramming.
Also, as WDuffy points out, if you are a designer, and you have to communicate to a team of programmers what they have to implement, it's much easier to use a UML diagram. Of course, a small company with a small team has generally small goals, and you can organize people with a different style. A small company UMLing will only produce UML diagrams of their revolutionary product, and then it will be bankrupt.
UML is neither good nor bad. It can be a good tool, but it must be used in the proper context.
Lacking features?
Well, I found that UML is strongly aimed at an object-oriented vision of the world. Our company mainly developed in Python, with a strong focus on module-level routines. Objects were lightweight data containers, but all the logic was done at the module level. It's difficult to properly model this implementation style in UML, unless you resort to some "hacks" in the terminology. I guess it's difficult to model functional or procedural languages in UML.
Another thing I find annoying is the assumption that use case modeling needs a diagram. My experience is that the best way to convey a use case is to write a short story or a short piece of code exercising the feature you want to convey. The story should be short, one page maximum.
This approach has two advantages: if your story is written prose, the Q/A team can read and test it easily; if your story is code, you can add it as a functional test and let it run during the night. A diagram does not satisfy either of these value-added needs.
One issue with UML is due to its universality: things in UML cannot always be implemented directly in the target language, or some languages have capabilities that cannot be expressed in UML. So it can be better to know the implementation language beforehand, which restrains its universality.
See also the criticisms section on UML wikipedia page:
Standards bloat
Problems in learning and adopting
Cumulative Impedance/Impedance Mismatching
Dysfunctional interchange format
It's not Agile
What should have been the last word on UML was written by the frustrated student "Candide Smith" - well, really by Eiffel author Bertrand Meyer.
Another disadvantage of UML is that it tends to overemphasize design, which can lead to 'analysis paralysis' (people over-analyze their problem) and feature creep (losing sight of the actual problem). A UML design can only take you so far in solving a problem, and you have to be careful to jump into the code soon enough (but not sooner ;-).
UML is somewhat less applicable to the brave new world of loose typing and NoSQL databases. It has the OO idea of a class as a data structure, rather than as a classification, embedded in it.
Another disadvantage, although not self-inflicted, is that it doesn't explicitly facilitate abstraction. Everyone I know uses UML tools for more abstract modelling, but given the way the standard is written, that is not obvious.
Another problem with UML (and big design up front in general) is that it's sometimes hard to anticipate all the nitty gritty implementation problems that you'll run into that may affect your design until you actually start implementing something. Granted, I'm a bioinformatics research programmer that works on small one-man projects, but I don't even believe in any design up front, at least for small projects. I believe in the following:
Make it work. (Just get a prototype up and running that has all the basic functionality, no matter how much it sucks. This forces you to see all the little nitty gritty details that might not come through in a formal analysis. Having an actual implementation of your idea makes it easier to see whether the idea was even really worth doing in the first place or whether it should be scrapped altogether.)
Make it right. (Only now, when you have a working prototype and you know that all the nitty gritty implementation problems are at least in principle solvable do you worry about good design. Refactor the heck out of it to follow good programming practices, reduce coupling, do proper error handling, yada yada yada.)
Make it fast. If it's application code, you'd better have proof that you've found the slow part. If it's generic library code, you'd better have good reason to believe that the piece of code could reasonably be the slow part in some use case for the library, i.e. don't optimize a function that no one would ever call in a loop.
For class diagrams in UML, it only makes sense to use them if there is an automated way to generate code directly from the diagram. I have implemented such a UML editor tool based on the 4-tier meta-levels recommended by the OMG (Object Management Group), and we had great success using UML in a team of 5 devs over 2 years, doing around 20-30 architectural iterations. The diagram was the root artifact of an automated build chain, driving hundreds of derived artifacts: APIs, generated docs, DDLs, projects, tests, etc.
So by itself, the class diagram part of UML is a great "programming" language, if you actually do programming in it.
For class diagrams in UML, if they are not translatable in an automated way, then it's a fail.