Everybody has heard about the 80-20 rule, which says that 80% of the results come from 20% of the causes.
It can be applied to almost any field, for example:
- 80% of a company's revenue comes from 20% of its clients
- 80% of the donations to a charity come from 20% of the donors
- 80% of the books in a bookstore are purchased by 20% of the customers
For software, this could mean that:
- 80% of the clients use 20% of the functionality
- 80% of the bugs are caused by 20% of the functionality
I used to think that this is how things are, since the rule is so attractive in its common sense and simplicity.
The problem is that once you investigate it a little, things become more complicated.
How does this apply to testing?
Well, the project release date is fixed, so you cannot test everything well.
So, test only 20% of the application, since that is what the majority of the users will use.
Select the 20% of the application's functionality with the highest risk and test it thoroughly.
Test the remaining 80% of the functionality on the happy paths only, roughly along the lines of the sketch below.
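To make the idea concrete, here is a minimal sketch of how that risk-based split might look in practice. The feature names, risk scores, and the 20% cutoff are all hypothetical, just to illustrate the selection, not a real prioritization model.

```python
# Minimal sketch of risk-based test prioritization (hypothetical data).

features = {
    "checkout": 9,           # risk score: likelihood x impact, on an agreed scale
    "login": 8,
    "search": 6,
    "reporting": 4,
    "user profile": 3,
    "email preferences": 2,
    "help pages": 1,
}

# Rank features from highest to lowest risk.
ranked = sorted(features.items(), key=lambda item: item[1], reverse=True)

# The top 20% get thorough testing; the rest get happy-path coverage only.
cutoff = max(1, round(len(ranked) * 0.2))
deep_testing = [name for name, _ in ranked[:cutoff]]
happy_path_only = [name for name, _ in ranked[cutoff:]]

print("Test in depth:", deep_testing)
print("Happy paths only:", happy_path_only)
```

The catch, of course, is that the risk scores are guesses, and the 80% left with happy-path coverage is exactly where the story below goes wrong.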
You think you did a good job, and the project manager is happy with the results.
Then, after the release, the support team receives lots of issues from clients about the 80% of the application that was not tested well.
What's more, the company's senior management starts noticing problems all over the application too.
The solution is, of course, an endless stream of patches with bug fixes for the issues discovered by customers, which frustrates the customers and wastes the time of both the development and the testing teams.