In this post I will continue discussing what is needed before any actions can be taken toward improvement, and I will focus on knowledge of your current quality.
To gain this knowledge, data must be collected and analyzed, and this can be done in many different ways. I will discuss some of them here.
Collect data from users
Users are those who use your product in any phase, from working with the actual code to being the end user of the final product.
- What does the software developer think about the code? Is it easy to read? Is it easy to modify? And so on.
- What do the testers think about the product? What's their feeling? Is it well tested?
- And finally, is the end user satisfied?
Once when I developed a user interface for a customer, I was really satisfied with it. You could do almost everything from one screen. The error handling was rock solid. The color contrast was chosen well. Then one day it was time to show it to the end user. We installed the product in the customer's vehicle and went out to the forest. The computer screen was shaking all the time during the demo, and my choice of small buttons, edit fields and drop-down boxes really sucked! There was no chance you could hit the right button.
So... what did I miss? I didn't know enough about the environment it would be used in (I had only tested it in a car that was parked outside the office). This was early in my career, and the project was 100% waterfall. RUP was not yet hot (if it ever was).
Lesson learned: make sure you know your customer. Physical meetings and demos are extremely valuable.
Test results
How did the tests go? Were a lot of errors found? Is the test coverage good enough? Were the tests performed under good conditions, or was the test phase cut short due to late deliveries? Any loose ends?
This is a combination of measurable results (PASS and FAIL) and a feeling about the whole test scope: "Yes, I feel very comfortable with the test results" or "Well... we didn't have time to test this special case X, but it happens extremely rarely..."
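The measurable half of this can be sketched with a few lines of code. The result list, case names, and outcome labels below are hypothetical illustrations, not output from any real test framework:

```python
# Summarize PASS/FAIL counts and a pass rate from a list of test outcomes.
# The data is made up for illustration; "SKIPPED" stands in for a loose end.
from collections import Counter

results = [
    {"case": "login_ok",       "outcome": "PASS"},
    {"case": "login_bad_pw",   "outcome": "PASS"},
    {"case": "special_case_x", "outcome": "SKIPPED"},  # never ran - a loose end
    {"case": "export_csv",     "outcome": "FAIL"},
]

counts = Counter(r["outcome"] for r in results)
executed = counts["PASS"] + counts["FAIL"]
pass_rate = counts["PASS"] / executed

print(counts)
print(f"pass rate of executed tests: {pass_rate:.0%}")
```

Note that the skipped case never shows up in the pass rate at all, which is exactly why the numbers alone are not enough: the "feeling about the whole test scope" is what catches the loose ends.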
Metric results
Today, the range of tools for measuring code quality is enormous, and which one is chosen is mostly a matter of taste and platform. Some of them measure:
- Static code analysis
- Memory profiling
- Cache misses
- Cyclomatic complexity
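To make the last item concrete, here is a minimal sketch of computing cyclomatic complexity for Python code. It uses the simplified counting rule "1 + number of decision points" (real tools refine this, e.g. counting each `and`/`or` operator separately); the function name and sample code are my own illustrative choices:

```python
# Rough cyclomatic complexity: 1 + the number of decision points
# (if/for/while/except handlers/boolean expressions/conditional expressions).
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Return 1 + the count of branching nodes in the parsed source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

sample = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""

# Two branch points (the if and the elif) give a complexity of 3.
print(cyclomatic_complexity(sample))
```

A value like 3 is trivially fine on its own; it is trends across a codebase, and comparisons between modules, that make such a number meaningful.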
All of these give an indication of the non-functional quality and, in some sense, the functional quality. I want to point out that metrics need to be put in context to be useful. A single "bad" value doesn't mean that the quality is low.
Summary
The end user might not care if the code is extremely tightly coupled, but it will affect him when he wants an update fast. The best knowledge of your current quality is gained by gathering information from several sources continuously.
This was the last post about what is needed before any actions can be taken toward improvement. The coming posts will be about what we can do to improve the quality.