How to design a project (in detail)? [closed] - programming-languages

My question is really very important to me.
When I program, I end up with lots of errors in my logic and structure, and my code isn't flexible when it comes to testing. I have read many books on OOP and all the concepts are clear to me, but I do not know where to start the design of my code or project. Can anybody help me improve this part of my programming skill?
Although I work with PHP and JavaScript, this question is for all the programmers on Stack Overflow.
Note: usually when I pick up paper and pen, I get stuck thinking about where to start from.
And if I do build something, the problem is how to simplify it, plus many other issues which you all are facing or have faced.

Well, I think everyone is different as is every project. But here is what I personally do...
For my own projects, i.e. no client requirements, I start at one end or the other, either with the database structure or the UI. I then work down through the layers making sure that I maintain clear separation of concerns to make testing (unit and system) as well as maintenance as easy as possible.
One thing to note is that regardless of your approach, I think the process is iterative. I will often work, refactor, work, refactor, etc., so don't get too bogged down with the details or feel you have to stick to them. The requirements are the key thing (whether for yourself or for a client); the technical implementation is largely incidental.
When dealing with clients the process is somewhat different. You will need to do a fair amount of design up front so again think from one layer to the next trying to keep as much of the logic in the correct layers as possible. As an example you have your DB, then you want a data access layer (DAL) to abstract your code from the DB access. Then you want specific business logic libraries which use the DAL, this abstracts the higher portions of code from the data (they go through the business layer) etc etc.
Just think of each level and try and keep it as generic as possible, that way when you wish to change the storage for the data, you simply change the DAL and everything else works as before...
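To make that layering concrete, here is a minimal sketch in TypeScript (chosen only because the asker mentions JavaScript; the advice itself is language-agnostic). The names User, UserRepository, InMemoryUserRepository and UserService are made up for illustration: the point is that the business layer depends only on the DAL interface, so the storage can be swapped without touching anything above it.

```typescript
interface User {
  id: number;
  name: string;
}

// Data access layer: the only code that knows where the data lives.
interface UserRepository {
  findById(id: number): Promise<User | null>;
  save(user: User): Promise<void>;
}

// One concrete implementation; a database-backed or file-backed version
// could replace it later without changing any caller.
class InMemoryUserRepository implements UserRepository {
  private users = new Map<number, User>();

  async findById(id: number): Promise<User | null> {
    return this.users.get(id) ?? null;
  }

  async save(user: User): Promise<void> {
    this.users.set(user.id, user);
  }
}

// Business layer: depends only on the UserRepository interface.
class UserService {
  constructor(private repo: UserRepository) {}

  async rename(id: number, newName: string): Promise<void> {
    const user = await this.repo.findById(id);
    if (!user) throw new Error(`No user with id ${id}`);
    await this.repo.save({ ...user, name: newName });
  }
}

// Wiring it together; swapping the storage means changing this one line.
const service = new UserService(new InMemoryUserRepository());
```

The UI layer would then talk only to UserService, never to a repository directly, which is what keeps each layer testable and replaceable on its own.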

As far as starting the design of your project is concerned, a whole lot depends on what you are developing, that is, on the requirements of the application. So the first thing is that you must collect information about the purpose of your application. And when we start to program, a plain pattern must be kept in mind, which is, as a universal fact of programming, Input-Process-Output. So, design starts with input. Just collect as much information as you can about what will be required as the input of your application. If the input is not provided by your user, then it does not need to appear in your front-end design (in Windows language, the so-called "Form"). What the user will provide is the matter of concern when designing the input area (the very first step of the project).
During the design phase, constant interaction with the user is required to make an effective and flexible initial design, as ultimately he or she is going to use it. When I start designing a project, I always consider the user a lazy person; if we keep that in mind, our application will be simpler and easier to use. Once you kick off the start, it's just a flow that will suggest the next step to you.
Hope this helps. :-)

Related

Where are programming tasks detailed in Scrum? [closed]

When you have a sprint task in Scrum, where do you put how you want to program something? For example, say I am making a Tetris game and I want to build the part of the game that tracks the current score and a high-score table. I have my feature, my user story and my task, but now I want to talk about how to design it.
Is that design recorded somewhere on the sprint, or is it just something the programmer figures out? Do you write "for task X, use database such-and-such, create these columns," etc.? If not, do you record that at all? Is that what Trac is for? I don't mean too high-level a design.
I touched on it here: Where in the scrum process is programming architecture discussed?
But my current question is about later in the project, after the infrastructure. I'm speaking more about the middle now, the actual typing in of the code. Some said they decide along the way, some said team leads decide. Is this even documented anywhere except in the code itself with docs and comments?
Edit: does your boss just say, "okay, you do this part, I don't care how"?
Thank you.
There can be architectural requirements in addition to user-specified requirements that can muddy this a bit. For instance, one could be told, "You will use MVP on this," which does limit the design somewhat.
In my current project, aside from requirements from outside the team, the programmer just figures it out is our standard operating procedure. This can mean crazy things can be done and re-worked later on as not everyone will code something so that the rest of the team can easily use it and change it.
Code, comments and docs cover 99% of the places where coding details would be found. What else is left, if one assumes that wikis are part of the docs?
Scrum says absolutely nothing about programming tasks. Up to you to work that out...
Scrum doesn't necessarily have anything explicitly to do with programming - you can use it to organise magazine publication, church administration, museum exhibitions... it's a management technique, not specifically a way of managing software development.
If you do extreme programming inside scrum, you just break your user stories for the iteration down into task cards, pair up and do them.
When I submit tasks to my programming team, the description usually takes the shape of a demo: a description of how the feature will be shown in order to be reviewed.
How the task will be implemented is decided when we evaluate the task. The team members split the task into smaller items. If a design is necessary, the team will have to discuss it before being able to split it. If the design is too complex to be done inside this meeting, we will simply create a design task; agile/scrum doesn't dictate how this should be done (in a wiki, in a doc, in your mind, on a napkin, your choice) aside from saying to produce as little documentation as possible. In most cases the design is decided on the spot, after a bit of debate, and the resulting smaller tasks are the description of how things will be done.
Also, sometimes the person doing it will make discoveries along the way that change the design and thus the way to work on it. We may then trash some cards and make new ones. The key is to be flexible.
You do what you need to do. Avoid designing everything up front, but if there are things you already know will not change, then just capture them. However, a corollary to YAGNI is that you shouldn't try to capture too much too soon, as the understanding of what is needed will likely change before someone gets to do it.
I think your question sounds more like you should be asking who, not when or where. The reason Agile projects succeed is that they understand that people are part of the process. Agile projects that fail tend to favor doing things according to someone's idea of "the book" without understanding the people and project they have. If you have one senior team lead and a bunch of junior developers, then maybe the senior should spend more of their time on such details (emphasis on maybe). If you have a bunch of seniors, then leaving these to the individual may be a better idea. I assume you don't have any cross-team considerations. If you do, then hashing out some of the details like the DB schema might need to come early if multiple teams depend on it.
If you (as a team member) feel the need to talk about design, to do some design brainstorming with other team members, then just do it. As for the how, many teams will just use a whiteboard and brain juice for this and keep things lightweight, which is a good practice IMHO.
Personally, I don't see much value in writing down every decision and detail in a formalized document, at least not in early project phases. Written documents are very hard to maintain and become outdated pretty fast, so I tend to prefer face-to-face communication. Actually, written documents should only be created if they're really going to be used, and in the very short term. This can sound obvious, but I've seen several projects very proud of their (obsolete) documentation yet without a single line of code. That's just ridiculous. In other words, write extensive documentation as late as possible, and only if someone values it (e.g. the product owner).

How to make an Agile methodology work when developing long-term complex systems? [closed]

My team is building a product that has a lot of components that rely on each other. For example, whenever we add a new type of data to the system, we also have to add logging code to track the changes that use that data type. Or, when we add a new UI screen, we have to make sure that its strings are externalized so they can be translated. These things slow down almost every task we do, and sometimes one of the steps gets forgotten.
The traditional way to handle this problem is to add required checklists and documentation and things like that. How do Agile methodologies handle it?
The design you describe sounds like it might be a little too tightly-coupled. A renewed focus on enterprise patterns (such as Inversion of Control, programming to interfaces, etc) could help a lot.
If you are doing pair programming, you should be checking each other's work, making sure all of the i's are dotted and the t's are crossed.
If you are doing Test Driven Development, your tests should not be passing until all requirements for that particular portion of the development effort are satisfied.
If you are developing a large, complex system, you need experienced developers who understand the design and development process. You may also need a hands-on (read:coding) architect who can oversee the whole process.
Oh, and checklists (despite their traditional nature) are good too.
I'd suggest reading Alistair Cockburn's "Agile Software Development: The Cooperative Game" - he takes quite an intelligent approach to Agile that's largely "do what gets the job done". That might help you work out how to get some kind of checklist / documentation into what you're doing without making everything horribly top-heavy.
Could some of your problems be solved by better tests? When you talked about not doing things that need to be done, my first thought was "why hasn't a test failed?" Maybe you need to look at tools for testing user interfaces? (Edit: or even some small script on commit that greps the code for whatever indicates the need for translation strings and checks it against the files that contain the translations?)
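As a rough illustration of that commit-time idea, here is a small TypeScript/Node sketch. It assumes a hypothetical convention where user-facing strings go through a t("...") translation helper, so any quoted string on a line without t( gets flagged; the src directory, file extensions and regex are placeholder assumptions, not a drop-in tool.

```typescript
// Naive pre-commit check: warn about quoted strings that are not wrapped
// in the (assumed) t("...") translation helper.
import { readFileSync, readdirSync, statSync } from "fs";
import { join } from "path";

function* walk(dir: string): Generator<string> {
  for (const entry of readdirSync(dir)) {
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) yield* walk(full);
    else if (full.endsWith(".ts") || full.endsWith(".tsx")) yield full;
  }
}

let failed = false;
for (const file of walk("src")) {
  readFileSync(file, "utf8").split("\n").forEach((line, i) => {
    // Very crude heuristic: a double-quoted word with no t( on the line.
    if (/"[A-Za-z][^"]*"/.test(line) && !/t\(\s*"/.test(line)) {
      console.warn(`${file}:${i + 1}: possible untranslated string`);
      failed = true;
    }
  });
}
if (failed) process.exit(1); // non-zero exit fails the commit hook
```

A real check would need a smarter parse than a regex, but even a crude script like this turns "we forgot the translations again" into a failure you see before the code is merged.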
Also, can you change your design so that it's both less coupled and "forces" you to do the right thing? Perhaps making those data types implement a logging interface that the logger delegates to, or similar...?
Depending on your IDE there are various tools to help identify strings that need to be externalized, but if you are in the habit of just not putting in static strings this can be avoided.
If you need to add logging, I would suggest AOP (aspect-oriented programming), because at some point you will want to remove the logging code, and doing that by hand risks breaking the application.
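AOP support differs per language, but as one way of seeing the idea, here is a sketch using a TypeScript method decorator (the legacy experimentalDecorators style). Logging is attached as an annotation rather than written into the method body, so removing it later means deleting one line. The logged decorator and OrderProcessor class are made-up examples, not a prescribed implementation.

```typescript
// Cross-cutting logging added via a decorator instead of inline calls.
function logged(
  target: object,
  propertyKey: string,
  descriptor: PropertyDescriptor
): PropertyDescriptor {
  const original = descriptor.value;
  descriptor.value = function (this: unknown, ...args: unknown[]) {
    console.log(`enter ${propertyKey}`, args);
    const result = original.apply(this, args);
    console.log(`exit ${propertyKey}`);
    return result;
  };
  return descriptor;
}

class OrderProcessor {
  @logged
  process(orderId: string): string {
    // Real business logic stays free of logging concerns.
    return `processed ${orderId}`;
  }
}

new OrderProcessor().process("A-42"); // logs enter/exit around the call
```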
But a long-term, complex system is ideal for agile development, because while you are developing, the needs of the client/customer may change, and you can adapt to them.
You need to ensure that the customer has feedback on a regular basis (ideally daily, and in a perfect system the customer has a rep sitting by for questions).
When I have many steps that must be done, especially for something like data types, I will resort to using a spreadsheet: when you add a data type, you add a row to the spreadsheet. Then you can track everything that needs to be done before that data type is completely added to the application.
Have cross-component teams. That way when you add some functionality, the member(s) from the logging component will update their part and the translator(s) will update the strings.
I think it is important to understand that Agile methodology is only a process framework, not a process in itself. For example, it says to follow test-driven development and do pair programming, but it does not say how to write the tests, or suggest a review checklist, or suggest a coding guideline, or say what documents to write. Those parts are entirely up to you or your organization to define and follow.
When planning a feature, you can add an engineering task called "review" and allot time for it. You could do the review task in whatever way works best for you and your organization. If pair programming doesn't work as well as a formal inspection for you, you should do formal inspections.
Do what needs to be done, but not more:
better definition of done (the definition of done is a kind of checklist to me),
better testing,
"just enough" documentation (Agile != no documentation),
etc.
Well, where I work we have QA that can catch some of the bugs if there are missing requirements or something slips through. I'm not saying the development team is intentionally putting in bugs, but as a codebase grows, it becomes harder and harder to remain nimble and thorough in checking everything.
Wikis can be useful for capturing the methods used to try to get the clearest requirements. By clear I mean that the story card isn't likely to list all the requirements, and that there may be discussions with an end user or business analyst to get their understanding of what is desired. Part of our sign-off involves getting an end user to see the functionality and approve it before it moves out of the development environment.
Once every handful of sprints, we may devote most of a sprint to bug fixing and refactoring so that things can be cleaned up which would otherwise not get done because they aren't seen as important. These can be cosmetic bugs or broken windows that, while they have little business value initially, are worth fixing in the long run.

How to implement code in a manner that lessens the possibility of complete re-works [closed]

I had a piece of work thrown out due to a single minor spec change that turned out not to have been spec'ed correctly. If it had been done right at the start of the project, most of that work would never have been needed in the first place.
What are some good tips/design principles that keep these things from happening?
Or tips to lessen the amount of rework to the code that is needed in order to implement feature requests or design changes mid-implementation?
Modularize. Make small blocks of code that do their job well. However, that's only the beginning. It's usually a large combination of factors that contribute to code so bad it needs a complete rework: everything from highly unstable requirements to poor design to lack of code ownership; the list goes on and on.
Adding on to what others have brought up: COMMUNICATION.
Communication between you and the customer, you and management, you and the other developers, you and your QA department: communication between everyone is key. Make sure management understands reasonable timeframes and make sure both you and the customer understand exactly what it is that you're building.
Take the time to keep communication open with the customer that you're building the product for. Set milestones and set up a time to show the project to the customer at each milestone. Even if the customer is completely disappointed with a milestone when you show it, you can scratch what you have and start over from the last milestone. This also requires that your work be built in blocks that work independently of one another, as csunwold stated.
Points...
Keep open communication
Be open and honest about the progress of the product
Be willing to change daily as the needs of the customer's business and the specifications for the product change.
Software requirements change, and there's not much one can do about that except for more frequent interaction with clients.
One can, however, build code that is more robust in face of change. It won't save you from throwing out code that meets a requirement that nobody needs anymore, but it can reduce the impact of such changes.
For example, whenever this applies, use interfaces rather than classes (or the equivalent in your language), and avoid adding operations to the interface unless you are absolutely sure you need them. By building your programs that way you are less likely to rely on knowledge of a specific implementation, and you're less likely to implement things that you would not need.
Another advantage of this approach is that you can easily swap one implementation for another. For example, it sometimes pays off to write the dumbest (in efficiency) but the fastest to write and test implementation for your prototype, and only replace it with something smarter in the end when the prototype is the basis of the product and the performance actually matters. I find that this is a very effective way to avoid premature optimizations, and thus throwing away stuff.
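A hedged sketch of that "dumbest first, swap later" idea in TypeScript: both classes satisfy the same interface, so callers never notice when the naive version is replaced. WordIndex and the two implementations are illustrative names only.

```typescript
interface WordIndex {
  add(word: string): void;
  count(word: string): number;
}

// Quickest thing that could possibly work: a linear scan. Fine for a prototype.
class LinearWordIndex implements WordIndex {
  private words: string[] = [];
  add(word: string): void { this.words.push(word); }
  count(word: string): number {
    return this.words.filter(w => w === word).length;
  }
}

// Drop-in replacement for when performance actually matters.
class MapWordIndex implements WordIndex {
  private counts = new Map<string, number>();
  add(word: string): void {
    this.counts.set(word, (this.counts.get(word) ?? 0) + 1);
  }
  count(word: string): number { return this.counts.get(word) ?? 0; }
}

// Calling code is written against the interface only, so either works.
function countAgileMentions(index: WordIndex, words: string[]): number {
  words.forEach(w => index.add(w));
  return index.count("agile");
}
```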
Modularity is the answer, as has been said. But it can be a hard answer to use in practice.
I suggest focusing on:
small libraries which do predefined things well
minimal dependencies between modules
Writing interfaces first is a good way to achieve both of these (with interfaces used for the dependencies). Writing tests next, against the interfaces, before the code is written, often highlights design choices which are un-modular.
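As a small sketch of that order of work (interface first, then a test against it, then the code), here is what it might look like in TypeScript. ScoreStore, the contract-check function and the fake are hypothetical names invented for the example.

```typescript
import { strictEqual } from "node:assert";

// 1. The interface comes first and defines the dependency boundary.
interface ScoreStore {
  record(player: string, score: number): void;
  best(player: string): number;
}

// 2. A contract test written against the interface, before any real
//    implementation exists; later it can be run against the real store.
function checkScoreStoreContract(store: ScoreStore): void {
  store.record("alice", 10);
  store.record("alice", 25);
  strictEqual(store.best("alice"), 25);
}

// 3. A trivial fake so the test runs today and other modules can be
//    built against the interface in the meantime.
class FakeScoreStore implements ScoreStore {
  private bestScores = new Map<string, number>();
  record(player: string, score: number): void {
    this.bestScores.set(player, Math.max(this.bestScores.get(player) ?? 0, score));
  }
  best(player: string): number { return this.bestScores.get(player) ?? 0; }
}

checkScoreStoreContract(new FakeScoreStore());
```

If the contract test is awkward to write, that is usually the early warning that the interface (and therefore the module boundary) is un-modular.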
I don't know whether your app is UI-intensive; that can make it more difficult to be modular. It's still usually worth the effort, but if not, then assume that it will be thrown away before long and follow the iceberg principle: 90% of the work is not tied to the UI and so is easier to keep modular.
Finally, I recommend "The Pragmatic Programmer" by Andrew Hunt and Dave Thomas as a book full of tips. My personal favourite is DRY -- "don't repeat yourself" -- any code which says the same thing twice smells.
iterate small
iterate often
test between iterations
get a simple working product out asap so the client can give input.
Basically, assume stuff WILL get thrown out, so code appropriately, and don't get so far into something that having it thrown out costs a lot of time.
G'day,
Looking through the other answers here I notice that everyone is mentioning what to do for your next project.
One thing that seems to be missing, though, is having a wash-up (post-mortem) to find out why the spec was out of sync with the actual requirements needed by the customer.
I'm just worried that if you don't do this, then no matter what approach you take to implementing your next project, if you've still got that mismatch between the actual requirements and the spec for your next project, you'll once again be in the same situation.
It might be something as simple as bad communication, or maybe customer requirement creep.
But at least if you know the cause, you can try to minimise the chances of it happening again.
Not knocking what other answers are saying and there's some great stuff there, but please learn from what happened so that you're not condemned to repeat it.
HTH
cheers,
Sometimes a rewrite is the best solution!
If you are writing software for a camera, you could assume that the next version will also do video, or stereo video, or 3D laser scanning, and include hooks for all this functionality. Or you could write such a versatile, extensible, architecture-astronaut design that it could cope with the next camera including jet engines - but it will cost so much in money, resources and performance that you might have been better off not doing it.
A complete rewrite for new functionality in a new role isn't always a bad idea.
Like csunwold said, modularizing your code is very important. Write it so that if one piece falls prone to errors, it doesn't muck up the rest of the system. This way, you can debug a single buggy section while being able to safely rely on the rest.
Beyond this, documentation is key. If your code is neatly and clearly annotated, reworking it in the future will be infinitely easier for you or whoever happens to be debugging.
Using source control can be helpful too. If you find a piece of code doesn't work properly, there's always the opportunity to revert back to a past robust iteration.
Although it doesn't directly apply to your example, when writing code I try to keep an eye out for ways in which I can see the software evolving in the future.
Basically I try to anticipate where the software will go, but critically, I resist the temptation to implement any of the things I can imagine happening. All I am after is trying to make the APIs and interfaces support possible futures without implementing those features yet, in the hope that these 'possible scenarios' help me come up with a better and more future-proof interface.
Doesn't always work, of course.

How to educate a development manager about the difficulties of software design? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 5 years ago.
Improve this question
I have had a few development managers who don't seem to understand or appreciate the difficulties of software design and implementation.
Such managers believe that processes and methodologies completely solve the problem, and I have a tough time explaining to them that it is not so, and that you cannot read a book on the latest process fad and hope to get results by applying it as-is.
The latest frustration I have is to convince my manager to
(a) Give me requirements not piecemeal but in as large a set as possible.
(b) Give my team lead time to think about how to design, thrash out a few alternatives, work out an implementation sketch, plan out the tasks, etc.
The frustrations are compounded by Agile methodology and the interpretation of it that says not to do up-front design (as opposed to BIG up-front design in Waterfall), that the product owner can change requirements at any time, and so on.
So far I haven't had much success and have to put up with the resulting frustrations.
Can you give me some arguments that can convince such managers?
EDIT-1:
Retrospectives are done, though not always at the end of every sprint, and the problems are brought up. But as I mentioned, my manager doesn't appreciate the need for design lead time or the frustrations with piecemeal requirements.
EDIT-2
I don't have a problem with changing requirements. I understand that it will be so, but imagine this: you want a small feature to begin with, and then you keep adding more around it. After a few iterations, the design cannot handle it anymore and a redesign (not refactoring) is required. This could have been solved better with an upfront design in the first place, had the related features been investigated together. It's not BDUF, it's the natural way of doing it (what I call software engineering common sense).
My manager doesn't understand why I ask for time to redesign (a few times I just call it refactoring so that it fits the Agile way of doing things, but it really is redesign) rather than developing and demoing new features.
Every time requirements are changed (or increased), so should:
the estimate to complete, and
the assessment of risk.
Start giving updated estimates (even if you have to guess) and lists of risks every time you get an updated or new requirement. This will help your manager make the connection.
Try to do this in a spirit of helpfulness--"for planning purposes"--so that you aren't perceived as obstructive or lacking "can-do attitude." Remember that estimates can (in theory) come down, and risks can be reduced.
Business requirements are going to change no matter where you work. It's not your fault, it's not your boss's fault, it's not anybody's fault. The entire point of taking the requirements on piecemeal is to encourage you to think about the problem at hand, not some other problem that you might or might not need to solve. It's quite liberating once you get into the rhythm of it.
Think of upfront design as premature optimization. You may not need it, and even if you know you need it, you'll know more about your design two weeks from now than you know about it today. It'll help you solve your engineering problem with the best possible knowledge about the state of your code.
That having been said, edg is absolutely right. When you add more requirements, the estimate changes. This isn't the fault of the developers or anyone else; more work means more work no matter how you square it. If your boss doesn't realize that adding requirements will result in a larger estimate for the project, you need to explain to him that Agile isn't a magic bullet that allows you to add more features without paying anything for them.
Agile Simple Design doesn't mean don't do ANY design/architecture up front.
It means do the minimal design up front, so that you will not pay a horrible price for reasonable change requests.
Scott Ambler talks about Change Cases - http://www.agilemodeling.com/artifacts/changeCase.htm
James Coplien talks about Agile Architecture - http://www.infoq.com/presentations/Agile-Architecture-Is-Not-Fragile-Architecture-James-Coplien-Kevlin-Henney
http://blog.jaoo.dk/2009/03/04/handling-architecture-in-the-agile-world/
The art/craft in all of this is in how to slice the architecture in a way that allows:
relatively fast convergence on overall architecture/infrastructure - on the order of days per months of estimated development time.
developing "just enough" architecture/infrastructure per each feature/requirement
doing the right balance of preparations for the future compared to focus on the features of today.
It's important that your Product Owner is aware of all of this balancing act as well, and that you work collaboratively. He should understand that if you disregard all thinking for the future, each change will be very costly. There is a price to be paid for flexibility.
It is, by the way, very similar to investment in QA and test automation. You pay something now that will pay off only after you have tested the code X times. If the code never changes, it was a waste of effort; but everyone knows that most code changes...
Buy your manager this book. That's what I did, and it worked great :)
First of all this issue seems quite sensitive, so all I wrote below is just my personal opinion, and not necessarily a wise thing to do.
In my opinion you cannot make software if you do not know what problem it should solve. If requirements come in parts that are too small to give an overview of the problem, then I would just fire off questions about the parts that seem to be missing. Like: "okay, so the software should do X, but does that also mean Y, or otherwise maybe Z? Because if it is Y then ..., but if it is Z then ..." Of course if the manager is in the middle of extracting the requirements then he cannot answer, but at least he knows that there are still open issues that influence development.
About no lead time for design: design and development are an iterative process that can go hand in hand; it is just a matter of what you call the thing. If the manager wants to see some code at the end of the day, okay, then I would just use the first half of the day to design and the second half of the day to write some code based on that design. If the manager does not want to see the design, fine with me; then I'll just show the code.

How does off-the-shelf software fit in with agile development? [closed]

Maybe my understanding of agile development isn't as good as it should be, but I'm curious how an agile developer would potentially use off-the-shelf (OTS) software when the requirements and knowledge of what the final system should be are changing as rapidly as I understand them to (often after each iteration of development).
I see two situations that are of particular interest to me:
(1) An OTS system meets the initial set of requirements with little to no modification, other than potential integration into an existing system. However, within a few iterations of development, this system no longer meets the needs without rewriting the core code. The developers must choose to either spend additional time learning the core code behind this OTS software or throw it away and build from scratch. Either would have a drastic impact on development time and project cost.
(2) The initial needs are not like any existing OTS system available; however, in the end, when the customer accepts the product, it ends up being much like existing solutions due to requirement additions and subtractions. If the developers had had more requirements and spent more time working on them up front, an existing solution could have been used instead of building again. The project was delivered, but later and at a higher cost than necessary.
As a software engineer, part of my responsibilities (as I have been taught), are to deliver high-quality software to the customer on time at the lowest possible cost (among other things). Agile development allows for high-quality software, but in some cases, it might not be apparent that there are better alternatives until it is too late and too much money has been spent.
My questions are:
How does off-the-shelf software fit in with agile development?
How do the agile manager and agile developer deal with these cases?
What do the agile paradigms say about these cases?
Scenario 1:
This can occur regardless of the OTS nature of the component. Agile does not mean near-sighted: you'd need to know the big chunks, the framework bits, and spend thinking time on them beforehand. That said, you can only build to what you know; delay only until the last responsible moment, then pick one of the alternatives and start on it. (I'd avoid third-party applications unless the cost of developing in-house makes that infeasible, but that's just me.) Prototype multiple solutions to check feasibility against the list of known requirements. Keep things loosely coupled (replaceable), easy to change and fully tested. If you reach the fork of "keep hacking" or "rewrite", you need to think about which has better value for the business and pick that option. It comes down to: "Now that we're here, what's the best we can do now?"
Scenario 2:
This can happen, although the chances are slim compared to the team spending 2-3 months trying to get the requirements "finalized" only to find that the market needs or the customer's mind have changed to "Now we want it this way". Once again, it's a question of how long you are prepared to investigate and explore before committing to a path of action. Decide wisely with whatever information you have up to that point. Hindsight is always 20/20, but the customers won't wait forever. You can't wait until the requirements coalesce to fit a known OTS component :)
Agile says: do whatever makes sense and strip out the non-value-adding activities :) Agile is no magic bullet. Just my 2 agile cents :)
Not a strict answer per se, but I think that using off the shelf software as a component in a software solution can be very beneficial if:
Its data is open, e.g. an open database or a web service to interact with it
The off-the-shelf system can be customised easily using a similar programming paradigm to the rest of your solution
It can be seamlessly adapted to the rest of your work-flow
I'm a big fan of not re-inventing the wheel, and using your development skills to design the 'glue' between off-the-shelf solutions can be a big win.
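For illustration, a minimal TypeScript sketch of such glue: wrap the off-the-shelf product's API behind an interface your own code defines, so the rest of the solution never depends on the vendor directly. TicketSystem, VendorTicketClient and the /tickets endpoint are hypothetical names, not any real product's API.

```typescript
interface Ticket {
  id: string;
  title: string;
}

// The interface the rest of the solution depends on.
interface TicketSystem {
  open(title: string): Promise<Ticket>;
}

// Adapter ("glue") over the vendor's assumed REST API.
class VendorTicketClient implements TicketSystem {
  constructor(private baseUrl: string) {}

  async open(title: string): Promise<Ticket> {
    const res = await fetch(`${this.baseUrl}/tickets`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ title }),
    });
    if (!res.ok) throw new Error(`vendor error: ${res.status}`);
    const data = (await res.json()) as { id: string; title: string };
    return { id: data.id, title: data.title };
  }
}

// If the OTS product is later swapped out, only this adapter changes;
// callers keep depending on TicketSystem.
```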
Remember 'open' is the important part, and a vendor will often tout their solution as open when it isn't really.
I think I read somewhere that if during an iteration you discover that you have more than 20% more work than you initially thought, then you should abandon the sprint and start planning a new one, taking the additional work into account.
So this would mean replanning with the business to see if they still want to go ahead with the original requirements now that you know more.
At our company we also make use of prototyping before the sprint to try and identify these kind of situations before they arise on a sprint. Although of course that still may not identify the kind of situation that you describe.
C2 wiki discussion: http://c2.com/cgi/wiki?BuyDontBuild
