Can I charge my clients for debugging the code I have developed for them? - payment

I charge my clients on an hourly basis. Sometimes they come back with an error or bug in the code and ask me to resolve it. That takes time, sometimes 2-3 hours. Most clients think this should not be charged because it was my fault and I should fix it for free. Is that so? It's almost impossible to write 100% error-free code.

To me it depends. Is the product you sold working as described in the contract? If not, you can't decently ask for more money, since you didn't do your job in the first place. You should test your software and debug it for free. It is true that no software is bug free, but that isn't the customer's fault, and as long as you didn't explicitly state that debugging has a cost, I don't think it's okay to charge for it. (Be sure not to let them slip in new features by calling them bugs, though!)


How do I determine virtual user amount and pacing if the client cannot give me any real data about the website?

I've come across many clients who aren't really able to provide real production data about a website's peak usage. I often do not get peak pageviews per hour, etc.
In these circumstances, besides just guessing or going with what "feels right" (i.e. making it all up), how exactly does one come up with a realistic workload model with an appropriate # of virtual users and a good pacing value?
I use Loadrunner for my performance/load testing.
Ask for the logs for a month.
Find the stats for session duration, then count the number of distinct IPs, grouped into blocks by session duration. The hour with the most distinct IPs is your high-volume hour.
Once you have the high volume hour, count the number of page instances. Business processes will typically have a termination page which is distinct and allows you to understand how many times a particular action takes place, such as request new password, update profile, business process 1, etc...
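As a rough sketch of the log crunching (Python, assuming combined/common-format access logs; the file name and terminal-page URLs are placeholders to adjust for your site), something like this yields distinct IPs per hour, the high-volume hour, and per-process counts:

    # Estimate peak concurrency and business-process counts from a month of access logs.
    # Assumptions: combined/common log format; file name and terminal pages are placeholders.
    import re
    from collections import defaultdict

    LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[(\d{2})/(\w{3})/(\d{4}):(\d{2})')
    TERMINAL_PAGES = ("/password/reset/done", "/profile/updated", "/checkout/complete")

    ips_per_hour = defaultdict(set)   # hour bucket -> distinct client IPs
    hits = []                         # (hour bucket, raw log line)

    with open("access.log") as f:
        for line in f:
            m = LOG_LINE.match(line)
            if not m:
                continue
            ip, day, mon, year, hour = m.groups()
            bucket = "%s-%s-%s %s:00" % (year, mon, day, hour)
            ips_per_hour[bucket].add(ip)
            hits.append((bucket, line))

    # The hour with the most distinct IPs is the high-volume hour.
    peak_hour = max(ips_per_hour, key=lambda b: len(ips_per_hour[b]))
    print("peak hour:", peak_hour, "distinct IPs:", len(ips_per_hour[peak_hour]))

    # Count terminal-page hits inside the peak hour to size each business process.
    per_process = defaultdict(int)
    for bucket, line in hits:
        if bucket != peak_hour:
            continue
        for page in TERMINAL_PAGES:
            if " " + page + " " in line or " " + page + "?" in line:
                per_process[page] += 1
    print(dict(per_process))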
With this you will have a measurement of users and actions. You will want your stakeholder to take ownership of this data. As quality assurance, we should not own both the requirement and the test against it; we should own one, but not both. If your client will not own the requirement, cascading it down to the rest of the organization, assume you will be left out in the cold with a result they do not like, i.e., defects that need to be addressed before deployment to production.
Now comes your largest challenge, which is a process issue your client needs to fix: you are about to test against requirements that no other part of the organization (architecture, development, platform engineering) had when they built the solution. Even if your recovered requirements are perfect, plus some amount for growth, any defects you find will be challenged aggressively.
Your test will not match any assumptions or requirements used by any other portion of the organization.
And, in a sense, these other orgs will be correct in aggressively challenging your results. It really isn't fair to hold their designed solution to a set of requirements which were not in place when they made decisions which impacted scalability and response times for the system. You would be wise to call this out with your clients before the first execution of any performance test.
You can buy yourself some time. If the client does have a demand for a particular response time, such as an adoption of the Google RAIL model, then you can implement a gate before accepting any code for multi-user performance testing: the code SHALL BE compliant for a single user. It is not going to get any faster for two or more users. Implementing this hard gate will solve about 80% of your performance issues, because the changes required to bring code into compliance for a single user most often have benefits on the multi-user front as well.
You can buy yourself some time in a second way as well. Take a look at their current site using tools such as Google Lighthouse and GTmetrix. Most of us are creatures of habit, and that includes architects, developers, and ops personnel. We design, build, and deploy to patterns we know and are comfortable with, usually the same ones over and over again until we are forced to make a change. It is highly likely that the performance antipatterns pulled from Lighthouse and GTmetrix will be carried forward into a future release unless they are called out for mitigation. Begin citing defects directly off of these tools before you even run a performance test. You will need management support, but you might consider not even accepting a build for multi-user performance testing until GTmetrix scores at least a B across the board and Lighthouse reports a score of 90 or better.
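To make that gate concrete, here is a minimal sketch (Python driving the Lighthouse CLI; the URL and threshold are placeholders, and it assumes lighthouse and Chrome are installed) of a check you could run before accepting a build for multi-user testing:

    # Single-user performance gate: reject a build for multi-user testing until
    # its Lighthouse performance score clears a threshold.
    # Assumes the Lighthouse CLI (npm install -g lighthouse) and Chrome are available.
    import json
    import subprocess
    import sys

    URL = "https://staging.example.com/"   # placeholder
    THRESHOLD = 0.90                       # Lighthouse category scores run from 0 to 1

    subprocess.run(
        ["lighthouse", URL, "--quiet", "--output=json", "--output-path=report.json",
         "--chrome-flags=--headless"],
        check=True,
    )

    with open("report.json") as f:
        report = json.load(f)

    score = report["categories"]["performance"]["score"]
    print("Lighthouse performance score: %.2f" % score)
    if score < THRESHOLD:
        sys.exit("Gate failed: score below %.2f, not accepting this build" % THRESHOLD)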
This should leave edge cases when you do get to multi-user performance testing, such as too early allocation of a resource, holding onto resources too long, too large of a resource allocation, hitting something too often, lock contention on a shared resource. An architectural review might pick up on these, where someone might say, "we are pre-allocating this because.....," or "Marketing says we need to hold the cart for 30 minutes before de-allocation," or "...." Well, you get the idea.
Don't forget to have the database profiler running while functional testing is going on. You are likely to pick up a few missing indexes or high cost queries here which should be addressed before multi-user performance testing as well.
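As one concrete example, assuming the back end is SQL Server (other engines have equivalents, such as a slow-query log), a quick pull of the missing-index suggestions accumulated during functional testing might look like this; the connection string is a placeholder:

    # Pull missing-index suggestions recorded by SQL Server while functional testing ran.
    # Assumes SQL Server and the pyodbc driver; the connection string is a placeholder.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=test-db;DATABASE=app;Trusted_Connection=yes"
    )

    query = """
    SELECT TOP 20
        d.statement          AS table_name,
        d.equality_columns,
        d.inequality_columns,
        d.included_columns,
        s.user_seeks,
        s.avg_user_impact
    FROM sys.dm_db_missing_index_details d
    JOIN sys.dm_db_missing_index_groups g       ON d.index_handle = g.index_handle
    JOIN sys.dm_db_missing_index_group_stats s  ON g.index_group_handle = s.group_handle
    ORDER BY s.user_seeks * s.avg_user_impact DESC;
    """

    for row in conn.cursor().execute(query):
        print(row.table_name, row.equality_columns, row.user_seeks, row.avg_user_impact)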
You are probably wondering why I am pointing out all of these things before your performance test takes place. Darn it, you were hired to run a performance test! The test you are about to conduct is politically very high risk. Even if it finds something ugly, because the other parts of the organization did not benefit from the requirements, the result is likely to be rejected until the issue shows up in production. By shifting the focus to objective measures before you ever need to run two users in anger together, there are many avenues to finding and fixing performance issues that are far less politically volatile. Food for thought.

Best/common practices for website transfer (what time of day?)

Basically, my question is to get a feel for what time of day you do things like DNS transfers when moving sites across servers, and provide updates to sites. Does anyone do this during the day, or do most developers do this at night?
I think different approaches can be valid, depending on your situation, the actual change to be performed, how often a change happens and what your users expect.
If you do it during the day, you potentially have a longer window to fix issues if there are any, including issues that need to be dealt with by your service provider (which might be more responsive during business hours than at night or on weekends).
If you enjoy working at night or on weekends and you don't rely on any service provider to solve issues related to the move/migration you're asking about, doing it then might be less disruptive to your users.
But all of that also depends on who your actual users are. More and more user bases are spread across the globe and can interact with your sites at any time of day or night.
Common systems administration practice, when dealing with big changes or migrations, is to avoid doing them just before off-time (be it night, weekend, or holidays), unless you're sure that you (or the team) are available to fix issues. Basically, you need to plan for issues, because they always happen.
Something else to consider is any SLA you, or your company, might have with the users. In that case, you might need to perform your maintenance outside of business hours, or with advance notice, or risk having a penalty applied.
A proper answer to the question you're asking really depends on your situation and the business you're in.

Does GAE offer default quantitative abuse protection?

If I were to, say, upload the sample application written in Python, would Google protect me from malicious bots trying to eat up my resources? DoS attacks?
Exactly how much security can I expect from Google?
Background:
I've read this article and it looks like you have the option to manually request certain blocks of IP addresses to be blocked. I am not very knowledgeable when it comes to security, but I would have imagined that Google would automatically blacklist suspicious IPs. But then I realized I really didn't know what kind of protection Google did provide, if any, so I thought it might be best to ask.
They will not protect you. You have to block the IPs manually, and even that requires redeploying code (there's no UI for it).
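For reference, blocking addresses at the time meant keeping a dos.yaml blacklist next to your app, something like this sketch (the subnets are just example addresses):

    blacklist:
    - subnet: 192.0.2.10
      description: a single abusive IP (example address)
    - subnet: 203.0.113.0/24
      description: an abusive subnet (example range)

You then pushed it with the SDK (appcfg.py update_dos), which is exactly the redeploy-with-no-UI step described above.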
I'm speaking from the experience of a surprise $1000/week bill on a normally $5/day app. I had raised the spending limit to do a major data import that consumed a ton of resources, and then didn't set it back down again. Big mistake. They did give me system credits for less than a third of it; I'm not sure whether that was because it happened the day after the billing change (pre-billing change it wouldn't have cost more than $5/day) or whether it's general policy after a DoS attack.
Even if you set the budget low, they will simply stop serving your resources as soon as the budget is used up, and no warning email is sent, so you have to use a third-party monitoring service or watch your site 24/7, which makes the DoSer's job much easier.
Bottom line: tread carefully.

Question re: Dropbox Security Breach and how this happens in production environment

For reference: http://news.cnet.com/8301-31921_3-20072755-281/dropbox-confirms-security-glitch-no-password-required/. Can someone explain to me how this bug can happen if they properly tested this on a staging server with an identical environment to production? I'm trying to understand if it was just a random mistake that could happen to anyone, or if it was just negligence on their part. Thx in advance for any input!
I suppose it could happen to anyone in testing; after all, once you've tested a very secure Dropbox system for a few years, you don't expect to need to test blank passwords. But I do think it was negligence on the development team's part. When you think about it, it is hard to see how a flaw like that could be unintentional (maybe the developers wanted a way to skip entering passwords while trying things out, I don't know), because they should be using password hashing, and even without any other protection against injection, a blank password could never match a stored hash.
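For illustration only (a generic Python sketch, not anything from Dropbox's code), salted-hash verification makes it clear why a blank password should never authenticate:

    # Generic salted password verification; a blank password can only match a hash
    # that was itself derived from a blank password.
    import hashlib
    import hmac
    import os

    def hash_password(password, salt=None):
        salt = salt or os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
        return salt, digest

    def verify(password, salt, stored):
        _, candidate = hash_password(password, salt)
        return hmac.compare_digest(candidate, stored)

    salt, stored = hash_password("correct horse battery staple")
    print(verify("correct horse battery staple", salt, stored))  # True
    print(verify("", salt, stored))                              # False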
I'm not on the Dropbox development team, so I don't know what exactly happened. All you can do is guess. I'm probably completely wrong about this and maybe it was some sort of small technical problem that could easily be overlooked. I don't know.
It often comes down to how extensive the test phase can be - far too often the budget goes on making sure the product looks good to customers, or is out the door by a specific deadline.
It is exceedingly rare to find a company that builds security testing in from the start and has it as required for go-live. Security spend is almost always the first to be cut when a project is over budget or time.
So my guess is that there was a time budget with only enough room for the main functional tests and the 'highest risk' security tests, and then it was released.

Do you use 30 day trial servers to do development work? [closed]

I know this is an odd question, but I need to ask it to get information to present to a client. Their lead network admin wants me to work on 30-day trial servers such as SharePoint and SQL Server to develop projects for their clients. While I will do as they ask, I'm not convinced this is the best way to go about developing software or troubleshooting previously developed software. To be honest, I've never done custom development on any server/software using a trial version.
What arguments are there for and against working on trial software/servers?
Pro: It enables you to mock up a concept and see if it seems like the development path will be easy before you shell out large amounts of money for the real deal.
Con: It could trap you in a vicious cycle of wiping your virtual machine and re-installing the OS, the trial version, and your product (you do use source control, correct?) if they are hoping that this will alleviate the need for ever paying for the real product.
Suggestion: If you don't mind unsolicited advice, then I would determine why the lead admin wants to use the trial versions -- and then go from there. Until you know the reasons you cannot respond to them.
If they are doing it for the pro reason, then determine if you feel comfortable working with the possibility of switching technologies 30 days into your build. (Can you do it efficiently?)
If they are doing it to avoid spending money, present some of the alternate open source / free options that you are comfortable developing with. If they will not change their modus operandi at that point, then do what is necessary, knowing what you will be walking away from / getting in to.
(And if you don't mind one more bit of unsolicited advice: if they are doing it for the con reason and will not change, WALK AWAY.)
Point them at BizSpark. Microsoft is begging people to use their stuff. A hundred bucks will get you everything on the map for 3 years or until you start making money.
Oh, to answer your question: if I need to get funding for technology not present in the infrastructure or to do a proof of concept, I would not think twice about using evals. That is what they are for. I would be evaluating the suitability of the product for use with my designs. Seems easy to me. Maybe I am just, hold on, I have to give my parrot a cracker... ;-)
Apart from the ethical arguments, there are practical ones:
What are you supposed to do if development overruns? Start reinstalling everything, wasting several days doing so?
Additionally, if the client is so strapped for cash that they want to do this, how can you be certain they will pay you (either due to cash flow problems, or simply because of their shady ethics)?
I'm pretty sure that that kind of use is a violation of license terms. Trial editions of servers are for evaluating a product. And if you are in fact creating a product, then you have gone way beyond evaluation.
I would never work under such terms. If you are developing a concrete product, get proper licenses for the development tools. I know the developer edition of SQL Server is not hugely expensive (compared to a version licensed for production use), so I would imagine the same goes for SharePoint.
And then there is, of course, as already mentioned, the question of what you do when the trial period expires.
I wouldn't mind doing this as long as the job is shorter than 30 days. Make sure your contract says they're paying for time worked and not for specific deliverables, because your deliverables are time-bombed.
Also be prepared to walk away. If this company doesn't have resources to get the right software, you don't want to be there longer than 30 days anyhow.
Microsoft provides several pre-built virtual machines that contain full stacks:
(Server 2008 / SQL 2008 / SharePoint), (Server 2003 / SQL / Project Server), etc.
They are time-bombed, but often (not always) Microsoft will provide a new image after the timeout.
The benefit of using these images is that they are already configured and good to go.
As an example, here is a beta of SharePoint 2010 (http://www.microsoft.com/downloads/details.aspx?FamilyID=0c51819b-3d40-435c-a103-a5481fe0a0d2&displaylang=en).
If the project has a quick timeline, it gives the developers access to the configured stack right away, with no ramp-up time spent building new virtual machines.
This is especially great when working on beta/early-release software.
The SQL Server evaluation's download page mentions that the evaluation license is good for 180 days, and specifically advertises it as a tool you can use for mission-critical applications. This tells me MS is fine with your using it for development work.
To answer a question with more questions:
How long does this project run?
What phase of the effort are you in now?
Is this an internal/proof-of-concept project, or something that your customer(s) will be using for a long time?
If you are going to need to use SQL Server for Operations & Maintenance support months past the initial evaluation period, you ought to get a license for the full version of it. And also consider what your customers are using so that you can reproduce any bugs that come back from them.
I don't think it's ethical to continually renew evaluation licenses to have a longer evaluation period. Companies call them "evaluations" as a try-before-you-buy, not a keep-trying-without-buying.
I'm not sure what others are seeing as unethical here. If the project is short enough to be completed within the 30-day trial, I don't see any issues. I think that's a great use of trials: if they can't handle a client's applications, then they aren't a good option and you can use something else.
I think others here have given some good advice regarding projects longer than 30 days, along with some good contract ideas.
How in-house do the servers have to be? Would a hosted solution work for them (DreamHost, Amazon Web Services, whatever)? Some hosting systems provide pretty complex machine images (lots of stuff pre-installed; definitely AWS, presumably most others), decreasing setup time and effort. I think those come with licenses, though I honestly don't know. Plus, in at least some cases, you (they) only pay for what you (they) use.
Obviously no good if the physical machine needs to be in-house, or if things are otherwise super-sensitive.
