I have to write an essay on scheduling non-CPU resources. I can't find any information on what exactly this entails; could anybody give me an idea of what this is?
That's actually anything that you can plan, for example train scheduling at a station (movement between tracks, arrivals, departures, etc.). This particular example is a topic for this year's ROADEF challenge; you can read about it here.
Transportation is a frequently solved real-life scheduling problem that many companies face.
My purpose in this discussion is to find a good method for choosing the best learning rate scheduler for a model.
I know there is no free lunch: continuously training and trying many schedulers until one performs best is sometimes the only solution. But I hope you can share some of your experiences, so we can collect them and use them in the future.
First, I searched for kernels that introduce some learning rate schedulers (LRS):
https://www.kaggle.com/code/tolgadincer/tf-keras-learning-rate-schedulers/
https://www.kaggle.com/code/snnclsr/learning-rate-schedulers/
https://www.kaggle.com/code/isbhargav/guide-to-pytorch-learning-rate-scheduling
Second, I have some questions to begin with:
What do you consider when choosing an LRS?
If we have to choose between two LRSs, what criterion do you use to decide which one is better?
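For the second question, the most common yardstick I know of is: train the same model from the same initialization once per candidate scheduler, with everything else held fixed, and compare a held-out validation metric. Below is a minimal PyTorch sketch of that idea; the model, data loaders, optimizer settings, and epoch budget are placeholders to substitute with your own:

```python
# Minimal sketch: compare two LR schedulers by validation accuracy.
# Everything except the schedule (init, data, optimizer, epochs) is
# held fixed so the scheduler is the only variable. `model`,
# `train_loader`, and `val_loader` are assumed to exist.
import copy
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR, StepLR

def evaluate(model, val_loader):
    """Placeholder metric: mean accuracy over the validation set."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in val_loader:
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total

def train_with_scheduler(model, train_loader, val_loader, make_scheduler, epochs=10):
    model = copy.deepcopy(model)  # identical starting weights for a fair comparison
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    scheduler = make_scheduler(optimizer)
    best_val = 0.0
    for _ in range(epochs):
        model.train()
        for x, y in train_loader:
            optimizer.zero_grad()
            torch.nn.functional.cross_entropy(model(x), y).backward()
            optimizer.step()
        scheduler.step()  # these schedulers step once per epoch
        best_val = max(best_val, evaluate(model, val_loader))
    return best_val

candidates = {
    "step":   lambda opt: StepLR(opt, step_size=3, gamma=0.1),
    "cosine": lambda opt: CosineAnnealingLR(opt, T_max=10),
}
# scores = {name: train_with_scheduler(model, train_loader, val_loader, make)
#           for name, make in candidates.items()}
```

Whichever candidate yields the better validation score under the identical budget is the one I would keep; repeating the comparison with a couple of random seeds guards against noise.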
I'm a college student who volunteers as a program manager for a local community service organization. One big part of my job involves matching volunteer schedules (submitted to me via text and email) with tutee schedules (submitted by teachers via a Google Form). For the past two years, I've been matching the requested time slots with volunteer availabilities manually with Excel sheets and color coding. This has been easy so far because I've received a relatively small number of tutor requests and volunteer sign-ups.
Over the past two months, I've worked hard to grow the tutoring program at the school I manage. This semester, I received 18 request forms for over 25 students. Matching volunteer schedules manually for this many people will take hours, if not days, for me to complete. Given my workload, I figured there'd have to be a better way to approach this problem.
I am curious whether any of you with constraint programming experience could help me (1) solve my scheduling problem or (2) recommend software that can help. Below I will outline the scheduling process in more detail and list the constraints that must be taken into consideration when scheduling shifts.
THE SCHEDULING PROCESS
I ask my volunteers to send me their Monday - Thursday availability in a format like so:
M: 9:30 - 12:00
T: 2:00 - 4:30
W: 12:00 - 1:30
Th: 10:00 - 11:30
The school is a 15-20 minute drive away, so I rely on 'drivers' to carpool other volunteers to their 1-hour shift. If a volunteer has a car and is willing to carpool, then I try to match at least 2 other volunteers with the same availability with that driver (given that the car has enough room and a tutor has been requested by a teacher for that time slot).
I then pray that a teacher has requested a tutor for that carpool's time-slot. If not, then the carpool is no good and I have to manually come up with another solution.
THE CONSTRAINTS AND VARIABLES
Obviously, there are several constraints and variables that come into play when making the schedule. I will list as many as I can below:
Is the tutor a 'driver'?
If the tutor is a 'driver,' how many seats does he have?
Does the driver's availability match up with any of the requested time slots?
Are there other, non-driving tutors that have the same availability as the driver? (i.e. is there anyone that can carpool with the driver)
Do all volunteers in the carpool have a student they could tutor?
Did the teacher request individual or group tutoring (i.e. one or more tutor for one or multiple students)?
If yes, how many tutors did the teacher request?
REMEMBER, one volunteer can tutor more than one student
That is a small list of the constraints and variables I can come up with off the top of my head.
So can anyone offer a solution to this scheduling problem? Would someone who has no knowledge about constraint programming be able to use OptaPlanner to solve this problem?
Thank you for giving this a read and offering your advice.
You can use OptaPlanner to solve this problem, but MiniZinc might be a better option. My point is that with MiniZinc you describe the required properties of a solution, whereas with OptaPlanner you implement workflows and algorithms to manage variables, constraints, parsers for input data, and so on.
For someone not experienced with constraint satisfaction, that could be much simpler: just describe the allowed/disallowed configurations in a text file and run a solver. You could even provide a simple GUI for generating the input data.
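To make the "describe the solution's properties and let a solver search" idea concrete, here is a minimal sketch using Google OR-Tools CP-SAT from Python, as one alternative alongside MiniZinc and OptaPlanner. The volunteers, seat counts, slots, and request numbers are all hypothetical stand-ins, and the model encodes only a few of the listed constraints (availability, request sizes, carpool capacity):

```python
# Sketch of the tutoring match as a constraint model (OR-Tools CP-SAT).
from ortools.sat.python import cp_model

volunteers = ["ann", "bob", "cat", "dan"]
drivers = {"ann": 3}                       # driver -> free passenger seats
available = {                              # volunteer -> slots they can make
    "ann": {"M930", "T200"},
    "bob": {"M930"},
    "cat": {"M930", "W1200"},
    "dan": {"T200"},
}
requested = {"M930": 2, "T200": 1}         # slot -> tutors requested by teachers

model = cp_model.CpModel()
# x[v, s] == 1 iff volunteer v tutors in slot s.
x = {(v, s): model.NewBoolVar(f"x_{v}_{s}")
     for v in volunteers for s in requested}

for v in volunteers:
    for s in requested:
        if s not in available[v]:
            model.Add(x[v, s] == 0)        # respect stated availability
    # at most one shift per volunteer (an assumption; relax as needed)
    model.Add(sum(x[v, s] for s in requested) <= 1)

for s, need in requested.items():
    # don't assign more tutors than the teacher requested
    model.Add(sum(x[v, s] for v in volunteers) <= need)
    # carpool capacity: everyone in slot s must fit in assigned drivers' cars
    model.Add(sum(x[v, s] for v in volunteers)
              <= sum(x[d, s] * (1 + seats) for d, seats in drivers.items()))

model.Maximize(sum(x.values()))            # fill as many requested shifts as possible

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for (v, s), var in x.items():
        if solver.Value(var):
            print(v, "->", s)
```

Group tutoring, multiple students per tutor, and the other constraints in your list would each become one or two more Add(...) lines of the same shape.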
We've been recently implementing Scrum and one of the things we often wonder is the granularity of tasks within stories.
A few people inside our company state that ideally those tasks should be very finely grained; that is, every little part that contributes to delivering a story should be represented as a task. They argue that this enables tracking of how we are performing in the current sprint.
That leads to a high number of tasks detailing many technical aspects and small actions that need to be done, such as 'create a DAO for component X to persist to the database'.
I've also been reading Ken Schwaber and Mike Beedle's book, Agile Software Development with Scrum, and my understanding is that tasks should really have this kind of granularity; in one of the chapters, they state that tasks should take between four and sixteen hours to complete.
What I've noticed, though, is that with such small tasks we often tend to overspecify things, and when our solution differs from what we've previously established in our planning meetings, we need to create many new tasks or replace the old ones. Team members also balk at having to track every little thing they do inside the sprint and create new tasks, since that means incrementing the total tasks in our burndown chart without necessarily adding a task that adds value.
So, ideally, how granular should tasks be inside each story?
Schwaber and Beedle say "roughly four to sixteen hours."
The upper bound is useful. It forces the team to plan, and helps provide daily visibility of progress.
The lower bound is a useful target for most tasks, to avoid the fragility and costs of overspecification. However, occasionally the team may find shorter tasks useful in planning, and is free to include those. There should be no mandated lower bound.
For example, one of our current stories includes a task to send something to another team -- a task that will take 0 hours, but one we want to remember to finish.
The number of tasks in your burndown chart is irrelevant. It's the remaining time that matters. The team should feel free to change the tasks during the sprint, as Schwaber and Beedle note.
On my last assignment we had between 4 and 32 hours per task. We discovered that when we estimated a task at more than ~32 hours, it was because we did not understand what the task was or how to do it at estimation time.
The effect was that the actual implementation time of those tasks varied much more than that of smaller tasks. We also often got "stuck" on those tasks, picked the wrong path, or misunderstood the requirements.
Later we learned that estimating a task to be that long was a signal to try to break it down further. If that was not possible, we rejected the task and sent it back for further investigation.
Edit
It also gives a nice feeling to complete tasks at least a couple of times a week.
It also gives rather fast feedback when something does not go as planned. If someone did not complete an 8h task in two days we discussed if the person was stuck on some part, if somebody else had some ideas how to progress or if the estimate was simply wrong from the beginning.
Tasks should probably take one-half day to a day, maybe as much as two days sometimes.
Think about it this way: on a more macro level, short iterations promote agility by creating small amounts of value quickly and allowing plans to change as business needs change. On a more micro level, the same is true for tasks. Just like you don't want to spend 3 months on a single iteration, you don't want to spend a week on a single task.
Daily standup meetings can give you a clue that your task size is too big. If team members frequently answer "What did you do yesterday?" and "What will you do today?" with the same answer that they gave the day before, your tasks are probably not small enough.
For example, if a team member regularly answers "I worked on BigComplexFeatureObject today and will work on it tomorrow" for more than one day in a row, that's a clue that your tasks may be too big. Hopefully, on the majority of days a team member will report having completed one task and being about to start another.
Short tasks, 4-16 hours as others have said, also give the PO and team good feedback about project progress. And they prevent team members from going down "rabbit trails" and spending a lot of effort on work that might not be needed if business desires change.
A nice thing about having many smaller tasks is that it potentially gives the PO room to prioritize tasks better and optimize delivered value. You'd be surprised how many "important" parts of big tasks can be postponed or eliminated if they are their own small task.
Generally a good yardstick is that a task is something you do on a given day. This is ideal, which means it's rare. But it does fit nicely into that 4-16 hour estimate (some take half a day, some take two days, etc.) that you gave. Granted, I don't think I've ever spent an entire uninterrupted day on a single task. At the very least, you have to break for the scrum meeting. (At a previous job a day of coding was considered 6 hours to account for overhead.)
I can understand the temptation of management to want to plan every single granular detail. That way they can micro-manage every aspect of it. But in practice that just doesn't work. They may also think that they can then use the task descriptions to somehow generate detailed documentation about the software, essentially skipping that as an actual task itself. Again, doesn't work in reality.
Agile development does call for small work items, but taking it too far defeats the purpose entirely. It ends up becoming a problem of too much up-front planning and having to put in a ton of extra re-planning any time anything changes. At that point it's no longer agile, it's just a series of smaller waterfalls.
I don't think there is a universal answer to this question that fits every situation. I think you should try what your colleagues are proposing, and after the first sprint or two evaluate and see whether the process needs tweaking to accommodate everyone's needs and wishes.
That 4-hour figure sounds like a good minimum to me. I like to think in terms of visible results: we don't have a task per line of code, per label on a screen, or per refactored utility method, surely? But when we get to something that someone else can use, like a public class used by someone else, or a set of fields on a screen that allow some useful action, then that sounds like a trackable task to me.
For me the key question is "Do we know we've finished it?" With individual helper functions there's a pretty good chance of refactoring and change, but when I say to my colleague "Here, use this", it either works or it doesn't. The task's completeness can be evaluated.
Summary
We are a startup providing software development services. We develop Windows, web, service, and mobile applications. We are familiar with agile and are Scrum-certified developers. We do user-story-based estimation and task planning. No issues there.
Issue
We are approached by many small customers. A customer describes a few very high-level features or says a few words about the concept of their dream project, then asks for an effort estimate and a cost estimate. Mostly they are interested in the cost.
For each customer, we created user stories, estimated them, derived the effort in days from the story points, and converted the days to a cost based on an hourly rate. We involve a team of 3 or 4 people to get the estimation done, spending at least 20 to 30 hours of total team time per estimate (a team of 4 discussing for 5-6 hours).
The problem is that many customers never come back. We do not want to spend 20-30 hours of team effort, and we don't want to use the exact user-story estimation process that we follow for signed contracts.
Question
What could be done to provide an approximate estimate for small customers with small businesses?
I don't know that there is a solution, other than finding 'better' customers. It sounds to me like you're doing it right. Non-technical customers often want you to spend 30 minutes on the phone with them and then give them a price for the whole thing, so it's good that you take the time to do it properly. But that also means you often waste your time.
Maybe you need to say 'no' to customers who you don't think are serious. Or charge for the time spent doing highly skilled estimation work.
By 'better' customers I mean bigger companies, who are more experienced with software (and also probably have bigger budgets). The downside is more paperwork - you are much more 'free' dealing with small firms but also more at risk.
You don't have to stick to a fixed-price contract; for requirements that are vague, you should look at doing time and materials (T&M).
Basically you need to spread the risk of the cost of overrun between you and the client.
A hybrid option might be to do some T&M proof of concept work then fixed price for the rest when you understand it better.
Alternatively, if your client has a pot of money, then use your agile strengths to work with the customer to incrementally deliver functionality until the money runs out.
"We do not want to spend 20-30 hours of team effort."
Then don't.
If your estimating method is too costly, stop doing it.
"Customers ... are interested in Cost."
Then get them a cost more quickly. Do less work. Don't use a team of 3-4 for 20-30 hours. Have one person do it quickly.
One person can create a spreadsheet with stories, story points, priority, hours and cost. That's the project backlog for Scrum. That's the estimate. That's enough to start a conversation. It's not a fixed-price estimate.
A simple spreadsheet with stories, story points, price and priority is all you need. Then you can work with the customer to adjust priorities to determine how many of those stories they can actually afford to buy.
If they want a fixed price, you simply need to review each story summary to see if the points are right. You already have the spreadsheet, and the priorities, and the formula to compute price.
The unfortunate short answer is that you are going to have to significantly reduce your estimation costs. The only way to do that is to reduce the number of people to one and use a formula-based approach.
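For what the formula approach could look like in practice, here is a small Python sketch; the hours-per-point calibration, hourly rate, and backlog entries are hypothetical and would come from your own historical data:

```python
# Sketch: one-person, formula-driven ballpark from a story backlog.
HOURS_PER_POINT = 6   # calibrate from your own past projects
HOURLY_RATE = 40      # dollars per hour (illustrative)

backlog = [
    # (story, points, priority) -- all made-up examples
    ("User can register and log in", 5, 1),
    ("Admin dashboard with reports", 8, 2),
    ("Email notifications", 3, 3),
]

print(f"{'Story':35} {'Pts':>4} {'Hours':>6} {'Cost':>8}")
total = 0
for story, points, _prio in sorted(backlog, key=lambda s: s[2]):
    hours = points * HOURS_PER_POINT
    cost = hours * HOURLY_RATE
    total += cost
    print(f"{story:35} {points:>4} {hours:>6} {cost:>8}")
print(f"Ballpark total: ${total:,}")
```

Assigning points to each story is the only manual step; the rest is arithmetic the spreadsheet (or script) does for free, which is what makes a one-person estimate feasible.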
Take a best guess. Put reasonable assumptions in the contract. If you do a decent job, some will be a little high, some will be a little low and it will average out in the end. If you are off by too much, track changes within the project and charge for them. The key will be in the assumptions placed in the contract, usually in the form of a statement of work.
This is normal business, from small projects to enterprise projects. Just a matter of scale. It doesn't have to be done this way, but it often is.
This reminds me of the trade-off between time, money, people, and overall quality. Some people can see this easily and others may struggle with the idea. Part of the key point here is to understand what kind of expectations you want to set and what kind of leeway you have with the customer; for example, how are bugs in the software or overall support factored into the project?
You may want to consider how much work you want to do upfront and at what scale you are spending the 20-30 hours estimating a project. Compare the cost of that much time spent generating an estimate, which at $40/hour is $800-1,200 by the way, to what the revenue from the project would be. If the entire project is $400, then was it worth spending twice that coming up with an estimate? On the flip side, for million-dollar projects, it may well make sense to spend that kind of money.
My suggestion would be to see whether there is a cookie-cutter approach that could be taken, so that there isn't as much variability between projects, if that is possible.
There are two sides to estimation. First, you create the initial 'ballpark' figure; this should be done relatively quickly, and it should be emphasized that it is a ballpark figure and the start of your conversation with the customer, not a contract. Second, you do your more detailed, team-based estimation.
This is how this consulting company does it - Ball Park Estimating
Create a template spreadsheet with times and costs for typical pieces of work, then look at the initial requirement and update your template. Start the conversation with: this is the ballpark, but we will need to work together to confirm a final price and get a more accurate estimate; that more accurate estimation process will take x hours and cost x dollars.
I am in the early stages of design of an application that has to be highly available and scalable. I want to use an eventual consistency data model for this for a number of reasons. I know and understand why this is an unpopular architectural choice for many solutions, but it's important in my case.
I am looking for real-world advice, best-practices and gotchas to look out for when dealing with distributed / document-style databases. And particularly areas around e-commerce (shopping cart style) apps that traditionally are easier to put together with a relational db.
I understand that using these types of DBs is challenging, but hey, Google and eBay use them, so they can't be that hard ;-) Any advice would be appreciated.
If you want to have a distributed system (that "eventual consistency" thing), you need people to build, maintain, and operate it.
I have found that there are three classes of people who have very few problems with "eventual consistency":
People with a solid background in distributed systems. They have learned about eventual consistency, Byzantine failures, and the like. If you understand that Paxos is not about holidays, you are probably one of them.
People experienced in network programming. They might lack the theoretical background but have an intuitive understanding of asynchrony and the "no global clocks & counters" paradigm. If you own at least 8 books by Richard Stevens, you are probably one of them.
Very experienced coders who have had little exposure to RDBMSs. Kernel guys, and people from scientific computing and the gaming industry, come to mind.
All in all, these people are very sought after in the job market. For example, 75% or so of the academics in distributed systems leave for institutions that run big, self-designed distributed systems, e.g. the stock exchanges.
The whole thing got somewhat simpler with offerings like Hadoop, SimpleDB, and CouchDB, but it is still a big challenge to build something on distributed systems technology.
On the other hand, RDBMSs are a very fine piece of engineering. They are well understood, and expertise on them is available in the job market. There are a lot of decent tools and education opportunities, and lots of highly skilled experts are available to be rented by the hour. So think twice if you can't get by with an RDBMS approach, perhaps coupled with some clever cheating. I usually point students to the LiveJournal architecture.
For distributed databases there is much less experience. That's exactly why you have found so little advice so far.
If you are determined to use "eventual consistency", I think that besides immature tools the main challenge is the mindset of everyone involved. Are your API users (coders) and application users (your employees and your customers) willing and able to accept the inconsistency? Can you hide it from certain classes of users? We are not used to the mindset that computers are inconsistent. Something is in stock or it isn't; "maybe" isn't an answer users expect.
Also keep in mind that "eventual" can mean a very long time to algorithm designers. For how long can you accept inconsistency?
For a shopping cart application you might want to go truly distributed: use the client's browser as the data store. On checkout, you can submit the cart to a server-side batch processing system. This means the catalog only needs read-only high availability (easier), and the cart submission is a very narrow interface with no need for transactions. Later on, the processing of the order has no (soft) real-time requirements and is thus easier.
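As a rough illustration of how narrow that server-side interface can be, here is a sketch of a checkout endpoint (Flask assumed; the route and file names are made up) that just appends submitted carts to an intake log for a batch worker to process later, with no transactions anywhere:

```python
# Sketch: narrow, transaction-free cart intake for batch processing.
import json
import uuid
from flask import Flask, request, jsonify

app = Flask(__name__)
INTAKE = "cart_submissions.ndjson"   # append-only intake log (hypothetical)

@app.post("/checkout")
def checkout():
    cart = request.get_json()        # the browser held the cart until now
    order_id = str(uuid.uuid4())     # the client can poll this id later
    with open(INTAKE, "a") as f:     # a single append is the whole write path
        f.write(json.dumps({"order_id": order_id, "cart": cart}) + "\n")
    return jsonify({"order_id": order_id, "status": "accepted"}), 202

# A separate batch worker drains cart_submissions.ndjson and processes
# orders with no (soft) real-time requirement.
```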
BTW: the last time I checked on eBay's architecture, they were big on RDBMSs, but that may have changed since then. (Edit: it did change; see comments.)
The only solution to your problem is to decide which trade-offs in the CAP theorem are right for you, then begin implementing them.
mdorseif has a great point. There are many configurations of the extent to which you trade off consistency, availability, and partition tolerance. You have two main options:
Go the route of an in-house distributed system (this takes lots of expertise and research).
Vet and experiment with a number of distributed databases to decide which can handle your requirements at scale.
This is probably an over-simplification; a real production-ready pipeline is an ecosystem. But it will at least get you on the right track.
AppNexus is an ad platform that uses HBase for very high availability and eventual consistency. They talk a lot about this here.
An article on http://highscalability.com outlines how the New York Times implemented RabbitMQ alongside Cassandra across a WAN for fault tolerance and high availability.
MongoDB provides a great deal of flexibility in balancing consistency with availability through its implementation of write concerns. It has excellent documentation that highlights exactly how to use them, with all the gotchas (including partitioning). It implements a two-phase commit to maintain state across the network (on its config servers).
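For a flavor of what that per-operation tuning looks like, here is a minimal PyMongo sketch; the connection string, database, and collection names are placeholders:

```python
# Sketch: per-collection consistency/availability tuning via write concerns.
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # placeholder
db = client.shop

# Fast and lossy-ish: acknowledge once the primary alone has the write.
fast_carts = db.carts.with_options(write_concern=WriteConcern(w=1))
fast_carts.insert_one({"user": "u1", "items": ["sku-42"]})

# Durable: block until a majority of replicas (and the journal) have it.
safe_orders = db.orders.with_options(
    write_concern=WriteConcern(w="majority", j=True))
safe_orders.insert_one({"user": "u1", "total": 19.99})
```

The same application can thus treat cart writes as cheap while holding order writes to a majority acknowledgement.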
Google has a great paper on this subject: their Photon project implements a highly scalable, highly reliable system with the Paxos algorithm at its heart, alongside a few other techniques. It also happens to be very consistent (with end-to-end latency of about 10s) and fault tolerant, standing up to regional failures.
All systems built on distributed computing models are built on CAP and BASE. The main concern is that if our system provides availability and partition tolerance, we cannot have true consistency, but we can have eventual consistency.
The idea behind eventual consistency is that each node is always available to serve requests. As a trade-off, data modifications are propagated in the background to other nodes. This means that at any time the system may be inconsistent, but the data is still largely accurate.
Source: http://www.techspritz.com/eventual-consistency-and-base-model/
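A toy Python model of that propagation behavior, with two in-memory "replicas" and a background queue standing in for the real replication machinery, shows where the stale reads come from:

```python
# Toy illustration: local writes succeed at once, replication lags behind.
from collections import deque

nodes = [{"stock": 5}, {"stock": 5}]     # two replicas of one record
replication_queue = deque()

def write(node_id, key, value):
    nodes[node_id][key] = value          # the local node is always available
    replication_queue.append((node_id, key, value))  # propagate in background

def replicate_once():
    src, key, value = replication_queue.popleft()
    for i, node in enumerate(nodes):
        if i != src:
            node[key] = value

write(0, "stock", 4)                     # a purchase hits replica 0
print(nodes[1]["stock"])                 # 5 -- stale read ("maybe" in stock)
replicate_once()                         # background propagation catches up
print(nodes[1]["stock"])                 # 4 -- eventually consistent
```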
How to achieve high availability and scalability using relational databases is well known, and there is a vast body of knowledge out there on how to do it!
Google is a special case that does not apply to most sites: very, very high volumes of queries, very, very large amounts of data, and, most importantly, no service level agreements with most of its users. There is no correct answer to a web search, only better answers; for the average user Google is good enough, and if Google misses a vital page from a search listing, you as a user cannot complain.
eBay is a rather different case: somehow they have persuaded their users and customers to accept poor service in exchange for theoretically lower prices. Good on them, but this is not an option for every business.