Fast parallel picking/checking of IDs (think a non-game application of loot lag) - multithreading

So I am verifying new operational management systems, and one of these systems sends pick lists to a scalable number of handheld devices. It sends these using messages, and the pick lists may contain overlapping jobs. So in my virtual world I need to make sure that two simulated humans don't pick the same job: whenever someone picks a job, all the job lists get refreshed so that the picked job no longer appears on anyone else's handheld, but on my side the original message is still in the queue being handled, so I have to make sure to discard that option.
Basically I have this giant list with a mutex, and the more "people" hit it, and the faster they hit it, the slower I can handle messages, to the point where I'm no longer running in real time. That's bad, because then I can't actually validate the system: I can't keep up with the messages. (Two guys in the same aisle will recognize that one is going to pick one object and the next guy should pick the second item, but I need to check every single job I'm about to pick and see whether it has already been claimed by someone else.)
I've considered localized binning of the lists, but that doesn't actually solve the problem in the degenerate case that breaks it anyway: tons of people working on the same row. Granted, this would probably be confusing for real people as well, since in real life they need to do the same resolution, but I'm curious what the currently accepted "best" solution to this problem is.
PS - I'm already implementing this in C++ and it's fast, fast enough that in any practical test I don't "need" this question answered; I'm asking more out of curiosity.
Thanks in advance!

I see a problem in the design "giant list with (one) mutex". You simply can't serve the whole list in a synchronized fashion if the list size and/or access rate is unbounded; basic math works against you. So what I would do is put a mutexed flag on each job. You can't prevent a job from being displayed on someone's screen, but you can ensure that they get a graceful "no longer available" error and THEN the updated list. If you have ever tried to reserve a seat at a highly popular gig, you may have witnessed this solution.
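To illustrate the idea, here is a minimal C++ sketch of a per-job claim flag (the Job type and field names are just illustrative, not your actual code): the first picker to flip the flag wins, everyone else gets the graceful failure without ever touching a global lock.

```cpp
#include <atomic>
#include <cstdint>

struct Job {
    std::uint64_t id;
    std::atomic<bool> claimed{false};  // per-job flag instead of one list-wide mutex
};

// Returns true if this picker won the job, false if someone else got there first.
// A stale pick message simply fails here and the handheld gets the refreshed list.
bool tryClaim(Job& job) {
    bool expected = false;
    return job.claimed.compare_exchange_strong(expected, true);
}
```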

Related

How to name an event describing the acknowledgment of the existence of an entity in an event sourced system?

I am new to Event Sourcing and I am considering using it for an industrial application to track events happening in a production facility.
Since the book of record is the production facility itself and not the system, and also because not everything is automated, workers will need to report at a given point in time (the recorded time) what they did at another point in time (the effective time). Therefore, I will be using events such as: TankFilledRecorded, TankOutputConnectedToPipeInputRecorded, ContainerMovedToFacilityAreaRecorded, etc. where these events refer to entities such as a tank, a pipe, or a facility area for example. These events will have both a recorded time and an effective time. Note that there is no submission or approval process for a record to be considered legit.
Domain-driven design (DDD) encourages you to design events that are representative of what happens in the domain (like the ones above).
However, in my domain I don't care so much about how a tank, a pipe or a facility area came into existence. I just need to know that something exists from a particular point in time, and I also need to know if it is no longer there after a particular point in time. The main objective of the software is to track liquids and powders flowing in a circuit made of these pipes, tanks and other components. It is not an asset management system and should not become one.
Therefore, what would be the correct DDD way to design an event that represents the fact that there is a tank, a pipe or an area in the production facility?
It is a subtle question but language is important, particularly in DDD.
Here is what I came up with:
Option 1: EntityExistenceAcknowledgmentRecorded
TankExistenceAcknowledgmentRecorded
PipeExistenceAcknowledgmentRecorded
FacilityAreaExistenceAcknowledgmentRecorded
TankDisappearanceAcknowledgmentRecorded
PipeDisappearanceAcknowledgmentRecorded
FacilityAreaDisappearanceAcknowledgmentRecorded
It seems awful to use this in the ubiquitous language. I don't see myself talking in these terms or providing a UI with such vocabulary. But it does represent exactly what happens.
Option 2: EntityRegistered
TankRegistered
PipeRegistered
FacilityAreaRegistered
TankUnregistered
PipeUnregistered
FacilityAreaUnregistered
It seems much simpler, and it also seems meaningful, except for one thing. "Registered" conveys the existence of the representation of an entity in the system with immediate effect, without the possibility of saying now that the entity existed 2 days ago. Think about a UserRegistered event on a website that would indicate that the user "existed" from 10 days ago. What would that even mean?
Events are facts and you cannot change the past. However, I do need a way for my users to invalidate a record in which they made a mistake such as a typo. They can record now that they acknowledged the existence of a facility area a week ago, and might realize later that there was something wrong, such as a typo in the name of the entity. They would invalidate the record and create a new one. But invalidating something that has been "registered" does not sound right.
Option 3: Keep looking
Try to dig deeper into the domain (event storming) and find the real events that brought the entities into existence, even if these events are of no use for the problem that needs to be solved.
TankBuiltRecorded
PipeBuiltRecorded, PipeDeliveredRecorded
FacilityArea<something_meaningful>Recorded
TankDestroyedRecorded, TankDecommissionedRecorded
PipeDecommissionedRecorded
FacilityArea<something_meaningful>Recorded
A caution
TankFilled
TankFilledReported
TankFilledReportSubmitted
TankFilledReportSubmissionReceived
Think carefully about whether the increased precision is motivated by business value.
Therefore, what would be the correct DDD way to design an event that represents the fact that there is a tank, a pipe or an area in the production facility?
What is the business doing today? Is there already a process in place for tracking the lifetime of the hardware in the plant (a maintenance log, perhaps)? There's likely to be vocabulary there that gives you ideas as to what spellings would make sense in the code.
Events are facts and you cannot change the past.
That's true - but you can backdate events. The effective date of the information is often distinct from the reported date of the information.
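For illustration only, here is a rough sketch of what such a bi-temporal event could look like (the type and field names are assumptions, not your model): the effective time says when the fact was true in the facility, the recorded time says when it was reported to the system.

```cpp
#include <chrono>
#include <string>

using Timestamp = std::chrono::system_clock::time_point;

struct TankExistenceAcknowledgmentRecorded {
    std::string tankId;       // identity of the tank being acknowledged
    Timestamp effectiveTime;  // when the tank was actually there (may be in the past)
    Timestamp recordedTime;   // when the worker entered the record
};
```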
I do need a way for my users to invalidate a record in which they made a mistake such as a typo.
Yes - error correction is an important part of the process that you are modeling.
You should probably review Greg Young's talk Answering a Question, which was based on this thread. It's a discussion of capturing and modeling of temporality.
Here's the good news: you are running into the right problem. Because you are capturing information about an external system, there are going to be opportunities for errors and conflicts, and you need to (a) figure out the protocols for addressing them, and then (b) model that process correctly. That might include exception reports generated by the system when it observes conflicting information, or compensating events, or even automated conflict resolution (for the easy cases -- see also Stop Over Engineering).
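As a hedged example of the compensating-event idea for the "typo in a record" case (the names are made up for illustration): the mistaken record is never rewritten, a later event invalidates it by reference, and a corrected acknowledgment is recorded afterwards.

```cpp
#include <string>

// Compensating event: points at the event that contained the mistake instead
// of editing history. A corrected acknowledgment follows as a separate event.
struct RecordInvalidated {
    std::string invalidatedEventId;  // id of the event that contained the mistake
    std::string reason;              // e.g. "typo in the facility area name"
};
```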

How to resolve Order and Warehouse bounded contexts dependency?

I am working on a DDD project and I am currently focused on two bounded contexts, Orders and Warehouse.
What confuses me is the following situation:
Orders keeps track of all the placed orders, and Warehouse keeps track of all the available inventory. If a user places an order for a certain product item, that would mean one less item of that product in the warehouse. I am oversimplifying this process, so please bear with me.
Since the two domain models are placed inside different BCs, I am currently relying on eventual consistency, i.e. after one item has been sold, it would eventually be removed from the warehouse.
That situation unfortunately leads to the problem scenario where another user could simultaneously place another order for the same item, and it would appear as available until eventual consistency kicks in. That is something the domain expert finds unacceptable.
So IMO I am stuck with two options:
- merge Orders and Warehouse (at least the part regarding product inventory, i.e. units available in the warehouse) into one BC
- have the Order BC (or microservice, if you wish) depend on the Warehouse BC (microservice) in order to pull live product unit availability
Which option actually follows the DDD pattern? Is there another way out?
You could use a reservation system with a timeout.
Using a messaging analogy: With a broker-style queuing mechanism (such as RabbitMQ) you get a message from the queue and you have control over it until you either acknowledge that it can be removed from the queue or you release it back to the queue.
You could do the same thing in your ordering process. You reserve any items on your order. So when you add them they get a status of, say, "reserving", and you send a message to reserve the items. When the response comes back you can decide how to proceed. Perhaps you could add any items that cannot be reserved to a back order, or try again later.
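As a rough sketch of the reservation-with-timeout idea (all names are illustrative, not a particular framework): the order process reserves stock, and if the order is never confirmed the reservation simply lapses and the units become available again.

```cpp
#include <chrono>
#include <string>

using Clock = std::chrono::system_clock;

struct Reservation {
    std::string orderId;
    std::string sku;
    int quantity;
    Clock::time_point expiresAt;  // e.g. creation time + 15 minutes
};

// A reservation that was never confirmed lapses; the warehouse can treat
// expired reservations as available stock again.
bool isExpired(const Reservation& r, Clock::time_point now = Clock::now()) {
    return now >= r.expiresAt;
}
```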
There are going to be different ways to approach this. Depending on your business case it may be acceptable to only check availability when someone really accepts the order.
If your domain expert reckons it is totally unacceptable to have this resolved at the end of the process, then you could move it to the start. The issue, of course, is that user A could reserve and never buy, thereby losing user B as a customer; whereas only applying the real "taking" of the item at the end of the process is closer to ensuring a purchase. But that is a business decision.
This issue is a really great example of where reality actually is eventually consistent. Is it really the best thing to decline an order if there is no inventory currently in the warehouse - even if there was a replenishment due in the next 20 minutes?
What if the item was actually on the shelf, but the operator hadn't yet keyed it into the system?
Sometimes designers and domain experts assume that people want 100% consistency, when really, users might be willing to accept a delay in confirmation of their order, if it increased the chance that their order would be accepted rather than rejected.
In the case above, why make it the user's job to retry their order N minutes later? In an eventually consistent system, you can accommodate such timing flexibility by including a timeout to retry the attempt to fulfill the order for a period of time before confirming to the client that it really wasn't possible.
There are technical solutions that will give you 100% consistency, but I think this is really not a technical challenge but a cultural/mindset one: changing people's minds about what is possible and acceptable in order to achieve what is actually a better outcome.
IMO you can build a PlaceOrderSaga which will ask for inventory availability before placing the order.
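For what it's worth, here is a hypothetical outline of that saga (the types and functions are illustrative stubs, not a real saga framework): reserve inventory in the Warehouse BC first, place the order only on success, and compensate on failure.

```cpp
#include <string>

struct Order { std::string id; std::string sku; int quantity; };

// Placeholder steps standing in for messages between the two BCs.
bool reserveInventory(const Order&) { return true; }   // ask Warehouse BC for a reservation
void placeOrder(const Order&) {}                       // command handled by the Order BC
void releaseReservation(const Order&) {}               // compensating action

void runPlaceOrderSaga(const Order& order) {
    if (!reserveInventory(order)) {
        return;  // reservation rejected: back-order or inform the user
    }
    try {
        placeOrder(order);
    } catch (...) {
        releaseReservation(order);  // compensate so stock is not left reserved
        throw;
    }
}
```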

Online test security measures

I'm developing a feature for a client in which users voluntarily take an important test online. The test is difficult and the users will be highly motivated to do well (think SATs or GRE, etc)... so there's also a high incentive to cheat. Apparently there are 3rd party services in which a human virtually monitors the test taker via a webcam, but they're really expensive and we don't quite have the budget. We still need to make it as hard as possible for a user to game the system. Some of the things we suspect they might try are:
Getting someone else to take the test for them (a pinch hitter).
Taking the test multiple times with different profiles to practice and gain an unfair advantage.
Taking the test alongside friends, or while in contact with a friend who can tell them the answers.
The question order will change, as well as the order of the answers. The test will be timed, and an "open book" format, so we're not really worried about the user looking things up online, but we can't have them sharing their screen and having others assist them. So the main concern at this point is ensuring that the user is, in fact, who they say they are (and not someone else).
Here are a few of the security measures we're considering:
Requiring the user's device to have a webcam, which we'll activate and either record/photograph the user during the test (with the user's consent of course).
Asking users to verify an arbitrary bank deposit amount (presumably via PayPal). There's nothing to stop them from opening up multiple bank accounts, but at least it's a big hassle.
Really scary terms of use that threaten legal action if the user is caught cheating.
QUESTION: Are there any other measures we can/should take to make sure our test is secure and the results are reliable?
CLARIFICATION: We realize that with enough resources and determination, any security system can eventually be beaten. The goal of this question is not to find a magically unbeatable solution, but to find ways to raise the stakes enough so that it won't be worth it for most users to cheat. In this spirit, I'd much prefer answers that focus on what can be done as opposed to what can't.
As you know, there are many ways of cheating. Your goal is to limit the possibility of cheating as much as possible. Cheating in online courses has been a hot topic.
A pinch hitter:
This type of attack can be conducted in a number of ways. Even if you have a cam looking at the person, the screen the test taker is seeing could be mirrored on another screen. A pinch hitter could see the question and just read them the answers, or otherwise feed answers to the test taker over a covert channel.
Possible counters to this attack are to also enable the mic to check whether they are talking to anyone, and to record the screen while they take the test. This could prevent them from opening a chat window or viewing other unauthorized content. (Kind of like the Elance tracker.)
User verification:
In order to register, the person should attach a scanned copy of their photo ID. This way you are linking a photo of the person to a unique identifier, such as a driver's license number. Before the person starts taking the test, ask the user to look directly at the camera and make sure you get a good image of them that can be verified against their photo ID.
A simple attack against this system is to use Photoshop to modify the ID. To make this attack more difficult you could verify their name against a credit/debit card transaction; the names on both should match.
An evercookie could be used to track machines to see if the same computer is being used. This could happen for legitimate reasons, but it could also be used to flag tests for further review. A variant on the evercookie is to drop a file with a random value, or set a registry key with a random value, to "mark" that machine.
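As a rough sketch of the "drop a file with a random value" variant (the path handling and names are illustrative only): write a random marker on first run, read it back on later attempts, and flag the test for review if the same marker shows up under a different profile.

```cpp
#include <fstream>
#include <random>
#include <string>

// Reads the marker if the file already exists; otherwise generates a random
// value, stores it and returns it. Seeing the same marker under a different
// profile is a signal to flag the test for further review.
std::string loadOrCreateMarker(const std::string& path) {
    std::ifstream in(path);
    std::string marker;
    if (in >> marker) {
        return marker;  // machine already marked
    }
    std::mt19937_64 gen{std::random_device{}()};
    marker = std::to_string(gen());
    std::ofstream out(path);
    out << marker;
    return marker;
}
```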

Giving up Agile, Switching to waterfall - Is this right? [closed]

I am working in an Agile environment and things have gotten to the point where the client feels they would prefer Waterfall because of the failures (as they see them) of the current Agile setup. What made them think this way was the immense number of design-level changes that happened during the end stages of the sprints, which we (developers) could not complete within the time they specified.
As usual, both sides were blaming each other. From our perspective, the changes requested at the end were too many and the design/code alterations were too extensive. From the client's perspective, they complain that we (developers) are not understanding the requirements fully and are coming up with solutions that are not what they intended in the requirements (as if they had asked us to draw a tiger, and we drew a cat).
So the client felt (not us) that the Agile process is not working and they want to switch to a Waterfall mode, which IMHO would be disastrous. The simple reason: if their satisfaction levels were not high enough even in Agile mode, how are they going to tolerate the output after spending so much time in the design phase of a Waterfall development?
Please give your suggestions.
First off - ask yourself are you really doing Agile? If you are then you should have already delivered a large portion of usable functionality to the client which satisfied their requirements in the earlier sprints. In theory, the "damage" should be limited to the final sprint where you discovered you needed large design changes. That being the case you should have proven your ability to deliver and now need a dialogue with the client to plan the changes now required.
However given your description I suspect you have fallen into the trap of just developing on a two week cycle without actually delivering into production each time and have a fixed end date in mind for the first proper release. If this is the case then you're really doing iterative waterfall without the requirements analysis/design up front - a bad place to be usually.
Full waterfall is not necessarily the answer (there's enough evidence to show what the problems are with it), but some amount of upfront planning and design is generally far preferable in practice to the "pure" Agile ethos of emergent architecture (which fits with a Lean approach actually). Big projects simply cannot hope to achieve a sensible stable architectural foundation if they just start hacking at code and hope it'll all come good some number of sprints down the line.
In addition to the above another common problem with "pure" Agile is client expectation management. Agile is sold as this wonderful thing that means the client can defer decisions, change their mind and add new requirements as they see fit. HOWEVER that doesn't mean the end date / budget / effort required remains fixed, but people always seem to miss that part.
The agile development methodologies are particularly appropriate when you have unclear requirements and when you may need to make design changes at later stages in your project. Waterfall is a less appropriate approach in this case. The waterfall approach is appropriate for projects which are well understood and when the requirements are unlikely to change during the project's lifetime. It doesn't sound like that is the case here.
How long are your sprints? An alternative approach might be to decrease the sprint length - at least at the start of the project. Deliver new versions to the customer more often and discuss the changes with the customer. If you aren't doing what they want this will become apparent more quickly so less time will be wasted on implementing solutions that don't meet the customer's requirements.
I'm not sure what kind of shop you run, so it's hard for me to come up with good recommendations. I can offer two guiding principles though:
If you have bad communication with the customer, no development methodology will save you.
It's none of the diner's business how a chef organizes the kitchen, as long as the meal is tasty.
It sounds like you have serious project management and architecture/design issues, and it sounds like your communications have also broken down. Fundamentally I don't think changing your dev methodology is going to fix any of that, and is therefore the wrong thing to be doing (though it may restore some client confidence).
I would be especially concerned about moving towards waterfall since you are now choosing to essentially capture the requirements just once (which we know you have a problem with) with no capacity for input. That rigidity is good for inflexible delivery targets, but it's completely inappropriate here where you have changes all the time - that's agile!
Short term I'd step back and double check your requirements at this stage with them. Renegotiate and confirm your current state in relation to those.
Medium term, I'd open up more communications with the client - try and get them involved in a daily scrum for a while (until you restore confidence, then you can be more flexible).
Long term, you have to be worried about how your PMs and senior devs have managed to get you into this position. If the client is being unreasonable that's one thing (but it's still up to the PM to manage that, so you're not absolved). It's not reasonable to complain about having too many changes; that just means you screwed up in determining requirements (which is a dialogue, not a monologue), or that you need more numerous, but probably shorter, sprints.
Above all, I can't see moving towards waterfall is possibly correct. It doesn't fix anything directly and I can only see it exacerbating the problems you've already highlighted.
Caveat: I'm not really capable of a balanced view on waterfall since I've never seen it work effectively and imho it's just completely outdated for enterprise projects.
Agile development does not save you from the burden of actually coming up with a design which both you and the customer understand similarly. Agile just makes it possible to come up with the design in smaller increments rather than all at once. And, in the case of a difficult customer, coming up with a proper design takes time.
So, I would spend more effort in sitting down with the customer, with a whiteboard, going over what is it that they actually want. I don't think it really matters in this case if the development process is agile or waterfall.
Agile or waterfall are just words. There are only things that work, and things that don't.
Software development seems virtual to many people and they don't understand why it's hard to change a small thing they request.
Your customers should understand that building software is like building a house: once you have built all the foundations and walls, it's hard to change the final floor plan and room design.
Some practices help avoid this kind of problem: data modeling, data dictionaries, data flow diagrams... the goal being to know every requirement in complete detail. Cutting your product into many independent blocks helps you start coding while continuing to design or specify other parts of your final product.
See Steve McConnell's book Rapid Development: Taming Wild Software Schedules for all the practices that work.
What made them think this way was the immense number of design-level changes that happened during the end stages of the sprints, which we (developers) could not complete within the time they specified.
Scrum is in a way a "short waterfall", and you should be isolated from changing requirements for the duration of a sprint. It seems that this is not happening! Therefore, I don't see that you will gain anything from switching to traditional waterfall, but you should stick to freezing requirements for the sprint duration.
Maybe your iterations are too long?
(I assume you follow Scrum, since you mention sprints).
Talk to your clients and agree on the following:
- Shorter iterations, up to 3 weeks max.
- No changes in requirements during the iteration.
- Features are planned at the beginning of the iteration
- Every iteration ends with a deliverable: fully functional software with all features fully operational
- Iteration length does not change. Unfinished features are left for the next iteration (or maybe discarded if client changes his mind).
- Number of "feature points" you can deliver in a single iteration should be based on the team metric, not client insistence. This is your "capacity".
- Client decides what features (but not how many of them) are planned for the iteration
Another thing you should ask yourself is why there are so many "design level changes" in your application. By now, you should have the basic architecture and design in place. Maybe you should review the actual design and try to impose some design guidelines and implement some patterns. For example, in a typical enterprise web app you will probably end up using something like DAOs. When you add a new feature, you create a new DAO, but the basic architecture and design do not change.
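To illustrate the point (the names are made up, not from the original project): new features add new DAOs behind the same narrow interface, so the overall layering does not change when features are added.

```cpp
#include <optional>
#include <string>
#include <vector>

struct Customer { std::string id; std::string name; };

// The stable seam: every feature talks to storage through a small interface.
class CustomerDao {
public:
    virtual ~CustomerDao() = default;
    virtual std::optional<Customer> findById(const std::string& id) = 0;
    virtual std::vector<Customer> findAll() = 0;
    virtual void save(const Customer& customer) = 0;
};
// A new feature (say, invoices) gets its own InvoiceDao implementation;
// the layering and the overall design stay put.
```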
It seems, however, that you are not delivering what the client wants. In that case, it is of utmost importance to deliver a working product to the client, so they can provide sensible feedback for the next iteration.
Regarding "we (developers) could not complete within the time they specified":
The client should not be the one to specify the iteration time-frame. The iteration length should always be the same. The requirements that enter the iteration should be obtained as a result of client prioritization, but the amount of requirements planned for an iteration should be based on the estimation the team performs and the number of "points" you are able to deliver during an iteration.
To me it sounds as if there was no "Big Plan[TM]" in the agile project. Using an agile process does not mean that there is no long-term plan; it is more about dealing with the increasing uncertainty the further into the future you look. For example, there should be a release plan with the planned features for all releases in the next 2 months (and a less detailed plan with features for the releases after that), so it is clear to the customer when to expect a feature, and when there is a possibility to change requirements.
Also to me it seems that there was not (enough) customer involvement in the process. I know that this is a very problematic point, but it helps a lot if the current progress can be discussed with the customer at the end of each iteration. As #Mark Byers already wrote, the more feedback you can get from your customer the better you are.
Also try not to assign blame, as this causes people to block. Try to use the inspect-and-adapt approach to get a better process instead.
It's not clear what sort of design changes you mean. Graphical design? User experience design? Code design?
In any event, the best solution is more, and earlier, discussions with the client. Jointly develop explicit, concrete examples that satisfy the client's requirements. You can turn these examples into regression tests to ensure that you continue to satisfy them.
Also, continue the discussions as you progress. Show your output as it is available--don't wait until near the end of the sprint. And work on the part most likely to generate problems first. Also look at ways to make it easier to change the things you're finding often change.
The point is to get the client more involved, even to the iteration of a design. Perhaps you'll want to have some discussions focused only on the design.
Your client does not know about how to develop software, or how to manage the software development process. Don't expect the client to provide meaningful instruction on these matters. As a special case, the client does not really know what terms such as 'waterfall' and 'agile' mean; don't expect them to provide meaningful input on your development methodology. Moreover, the client will not really care about these details, as long as the requirements are met within the agreed budget and timeframe. Don't expect them to care, and don't confuse them with lots of inadequate builds and irrelevant information on your internal process.
Here is what the client does care about, and is trying to talk to you about (partly using your own technical jargon): their requirements, their disappointed expectations, and the way you communicate with them. On these matters, the client is the absolute authority. Interpret what they are saying as being about your relationship and the product, not as usable commentary on internal process. Don't cloud the water with your internal deadlines and processes, discuss progress and expectations and the relationship. (If they insist on talking about internals you can remap the terms: e.g. what they understand as being 'the next release' may be internally known as 'the next major release', or whatever).
It sounds to me like the client may want a higher threshold before they get asked for feedback or play with a bad build. It's worth verifying if this is true. If so, you should honor that - and still use agile methods internally if that is what your team feels is best. If they say "waterfall," you may be able to interpret that internally as meaning "we set a deadline for requirements, and then we don't allow more features to be added for a while." Discuss with the client whether it will suit them to have a requirements deadline followed by this sort of freeze.
Someone on your team needs to be the client advocate, and sit on top of the client's issues and fight for them. This advocate must not be sidelined, nor can they take the team's side against the client; they should be the proxy-boss. Then you can separate the internal process communication (team to advocate) from the external communication (advocate to client). The advocate can in some measure insulate the client from the chatter and the builds they don't appreciate, without artificially imposing a certain sort of management or scheduling on your internal process.
To clarify, I do not at all think that you should be secretive or distant with the client, but you should (A) listen to what the client is saying about the relationship and how you are communicating and honor that, (B) keep that separate from internal development process, which should be managed in whatever way will ultimately meet client's expectations.
Fire the client. Even if it is your fault for not understanding what they mean, waterfall would give them 1 chance to give you feedback instead of a chance at the end of each sprint. Some people/clients are literally so stupid that they are not worth working for. Fire them, or tell them that you're using Waterfall without actually switching.
The obvious problem here is communication with the customer. If you really want to do agile you have to communicate with the customer on a daily basis. Only the customer should be able to make the decisions. If you communicate with the customer only at mid-sprint and at the end of the sprint, it is natural that later on you will find problems in your application. Also, features implemented in a sprint have to be accepted and tested by the customer; until then, the features are not complete.
I'm writing this because I have a similar problem on my current project, but I know where we failed.
If the communication issue between the Team and the Customer is not fixed, the situation could be worse with waterfall, if the customer only sees the product once it is complete (tunnel effect).
You commented that changes from sprints 6-7 started to cause rework of tasks achieved in earlier sprints. Those changes should have been detected earlier, during the Sprint Review.
If there is a misunderstanding in a feature description, and the Team does not implement what the customer is expecting, this should be detected no later than the Sprint where the feature is implemented, and ideally fixed in the current Sprint.
If the customer changed their mind, the new ideas should be added to the Product Backlog, prioritized and selected for a Sprint, like any other backlog item. This should not be deemed rework.
Do you deliver the software to the customer after each sprint, or are you just demoing it ?
The origin of the miscommunication could be at the Sprint Planning: the Team should only commit to Backlog Items that are clearly defined. The definition of the items should comprise the acceptance criteria. Is the customer the Product Owner, and if not, who is?
Remote debugging of a development process is sufficiently difficult that I would hesitate to offer any opinion about what you should do. It seems to me no one outside your team can plausibly have enough information to make a very useful judgement about that.
A lesser jump to a conclusion would be to make a guess as to what went wrong. From your description, it sounds like early deliverables, which you thought were progress in the bank, ended up being majorly reworked.
One common cause of that is the late discovery/creation of 'all' requirements, things that are supposed to be true about everything in the scope of the project. These can be pretty fatal if taken seriously: something as simple as 'all dialog boxes must be resizable' is, for example, apparently beyond the capability of Microsoft to retrofit to Windows.
A classic account of this kind of failure (albeit in a non-agile project) can be found here
"Once they saw the product of the code we wrote, then they would say, 'Oh, we've got to change this. That isn't what I meant,'" said SAIC's Reynolds. "And that's when we started logging change request after change request after change request."
For example, according to SAIC engineers, after the eight teams had completed about 25 percent of the VCF, the FBI wanted a "page crumb" capability added to all the screens. Also known as "bread crumbs," a name inspired by the Hansel and Gretel fairy tale, this navigation device gives users a list of URLs identifying the path taken through the VCF to arrive at the current screen. This new capability not only added more complexity, the SAIC engineers said, but delayed development because completed threads had to be retrofitted with the new feature.
The key phrase there is 'all the screens'. In the face of changes of that nature, then, unless you have some pre-existing tool support you can just switch on (changing all background colours really should be trivial), you are in trouble. The progress you think you had made up to that point will have retroactively turned out to be illusory.
The only known approach to such issues is to get them right first time. If that fails, live with having them wrong.
A lot of shops add Agile trimmings to make themselves "look Agile" to customers who expect it. Maybe you just need to add some Waterfall trimmings, and show them the product once every 2 sprints.
I believe your client is wrong to move to waterfall. It's curing the symptom, not the disease.
The problem you describe is one of communication - the client wants a tiger, you're giving them a cat.
The waterfall model includes many steps to verify that the requirements as written are being delivered - but it doesn't ensure that the written requirements are what the business meant.
I would look at techniques like impact mapping, behaviour-driven development (BDD) and story mapping to improve communication.

How to deal with clients and iterations in Agile team? [closed]

This thread is a follow up to my previous one. It's in fact 2 questions, so I hope no one minds, as they are dependent on each other.
We are starting a new project at work and we see it as a great opportunity to try Agile techniques in action. We had a brainstorming session about ideas we read in several books and articles, and came up with a concept that would suit us best: 2-week iterations, followed by a call with the clients, who would choose what they want in the next iteration. I just have a few more questions, which we couldn't figure out ourselves.
What to do in the first iteration?
What should we generally do in the first few iterations if we start from scratch? Just give it a month of development to code the core of the application, or start with simple wireframes with limited pre-coded functionality? What do clients usually want to see: shiny stuff that doesn't work, or ugly stuff that does work?
How to communicate with clients?
Our initial thought is to set the process up something like this:
(diagram: http://img690.imageshack.us/img690/2553/communication.png)
Is it a good idea to have a Focal Point on client side or is it better to communicate straight with all the clients to prevent miscommunication?
Any thoughts are welcome! Thanks in advance.
In my opinion, a key success factor for agile development is to focus on delivering value for the customer in each iteration. I would definitely pick "ugly stuff that does work" over "shiny stuff that doesn't work". Building shiny UIs and then trying to get the client to understand that the business logic takes a lot of time to implement is always risky, which Joel Spolsky has written a good article about.
If the client wants enhancements to the UI, they can always put that as a requirement for the next iteration.
Regarding communication with clients, I think your sketch should be slightly adjusted. In Scrum terms your "focal point" is called the "product owner". Having one person coordinate with the clients is good, as it can take quite a lot of time to get the different stakeholders to agree on the needs. However, the product owner (or focal point) should be in direct contact with the developers, without going through the project manager. In fact, the product owner and the project manager have quite distinct roles that gain a lot by being split between two people.
The product owner is the stakeholders' voice to the development team. The project manager, on the other hand, is responsible for the wellbeing of the project team and often keeps track of the budget etc. These roles sometimes have opposing agendas, and having them split between two people gives a healthy opportunity for negotiation between conflicting interests. If one person has both roles, that person often tends to favour one of them, automatically shortchanging the other. You don't want to work on a team where the project manager always puts the client before the team's needs. On the other hand, no customer wants a product owner that always puts the team's needs first, neglecting the customer. Splitting the responsibilities between two people helps remedy that situation.
I'd agree with Anders' answer. My one extra observation is that many clients find it impossible to ignore the ugly. They get concerned about presentation rather than function. Hence you may need to bite the bullet and do at least one "nice" screen to show that you will pay attention to presentation details.
What should we generally do in the first few iterations if we start from scratch?
Many teams use an Iteration Zero to:
set up the development infrastructure (source control, development machines, the automated build, a continuous integration process, a testing environment, etc.),
educate the customer and agree with them on the methodology,
create an initial list of features, identify the most important ones and do an initial estimation,
define the times of meetings (planning meeting, demo, retrospective) and choose the iteration length.
Iteration Zero is very special because it doesn't deliver any functionality to the customer but focuses on what is necessary to run the next iterations in an agile way. Subsequent iterations, however, should start to deliver value to the customer.
Just give it a month of development to code the core of the application, or start with simple wireframes with limited pre-coded functionality?
No, don't spend a month developing the core of your application. Instead, start delivering vertical slices of the application (from the UI to the database) immediately, not horizontal slices. This doesn't mean that a screen has to be complete (e.g. implement only one search field in a search screen), but it should ideally be representative of the final look & feel (unless you agreed with the customer on an intermediate step). The important part is to build things that provide immediate value to the customer, incrementally.
What do clients usually want to see? Shiny stuff that doesn't work or ugly stuff that does work?
In my experience, they want to see demonstrable progress, and you want to get feedback as soon as possible.
Is it a good idea to have a Focal Point on client side or is it better to communicate straight with all the clients to prevent miscommunication?
You need one person to represent the clients (who is called the Product Owner in Scrum):
he provides a single authoritative voice
he has a perfect knowledge of the business (i.e. he can answer questions)
he knows how to maximize the ROI (i.e. how to prioritize functionalities)
Agile generally wants to provide the client something valuable, quickly.
So I certainly would not spend a "month of development to code the core of the application". To me, that smells of the "big design up front" anti-pattern. Also, see YAGNI.
Get as much information from the clients about what they need soonest, and implement that in your first iteration. "Valuable" is in the eye of the client. They will know whether they want to see a slick UI (maybe they want to give a slide show about the product at a trade show, so functionality can be faked) or simple working features (maybe you're developing something they need to start using ASAP). Business value is whatever they say will help them do their job.
I'd make my iterations as short as I can (your 2 weeks could work; I suggest considering 1 week). If you absolutely can't have your dev team and your clients co-located, then instead of having a call with the clients, I suggest a meeting. Demo what you've done over the previous iteration and solicit feedback about what should stay, what should change, and what should be added.
As others have said, your "Focal point" sounds like a Product Owner. What worries me about your drawing is if it is meant to imply that devs don't interact with the PO or the clients. One thing that makes Agile work is when there is lots of communication. Having communication to/from the dev team always filtered through the Project Manager is almost certainly bound to result in miscommunication, unnecessary work, and missed details.
I agree with the two answers given, but I would just add one thing from personal experience. Are your customers bought into the change towards quick iterations? Providing feedback after each iteration is going to require the customer to perform usability tests on each feature.
Now, I don't know what your group's relationship is with your customer, but it's not unusual for customers to take a "put request in, get working system out" attitude, in that they are enthusiastic when giving requirements but not so forthcoming with their time when it comes to testing the features.
This may be totally inapplicable to your situation, but it's always worth considering how your customer's workflow will have to change, as well as your group's.
Cheers
