Recently somebody told me that we need to check code coverage for functional tests and try to get it as high as possible. Seriously? I am not a big fan of code coverage even for unit tests. In most cases the build ends up red after some refactoring, and then people force-feed tests (written just for the sake of writing them) so that the number crosses the magic threshold. It is like a target for a salesman: he has to sell stuff even if the client doesn't need it. Code coverage for functional tests just takes it too far. For me, functional tests (or acceptance tests) are there to test business scenarios. They are a form of black-box testing that makes sure a happy message appears after I make a payment in my application. What objects I use for making that payment is none of your business. We have unit tests for that (and yes, we run NCover to check code coverage for unit tests).
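To make the black-box idea concrete, here is a minimal sketch in Python (the post's stack is .NET, so treat this as illustrative; `PaymentPage` and its methods are hypothetical stand-ins for a real UI driver such as a Selenium page object). The point is that the test asserts only on what the user sees, never on the objects behind the scenes:

```python
# Hypothetical black-box functional test: assert only on the
# user-visible outcome (the "happy message"), never on internals.

class PaymentPage:
    """Stand-in for a real UI/API driver (e.g. a Selenium page object)."""

    def __init__(self):
        self._message = ""

    def submit_payment(self, amount):
        # The internals are deliberately opaque to the test;
        # unit tests cover the objects used in here.
        if amount > 0:
            self._message = "Payment successful"
        else:
            self._message = "Payment failed"

    @property
    def message(self):
        return self._message


def test_payment_happy_path():
    page = PaymentPage()
    page.submit_payment(49.99)
    # Black-box assertion: only the visible outcome matters.
    assert page.message == "Payment successful"


test_payment_happy_path()
print("happy path passed")
```

Measuring code coverage of a test like this tells you nothing useful: it exercises whatever code path the scenario happens to touch, by design.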
However, functional coverage is very important for my functional test suite (note the absence of the word 'code'). It is simply the amount of business functionality the application exercises. How do you quantify it? Unfortunately, there is no way to quantify functional coverage. There is, however, a way to improve it. Whenever a defect is found manually, make sure you automate it, and you have done your bit for the greater good of improving your functional coverage. So the next time a project manager asks you for the magic code coverage number for her shiny dashboard, tell her it is somewhere between zero and a hundred. Because THERE IS NO SUCH THING!!!
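The "automate every manually found defect" practice can be sketched like this (a hypothetical example; the zero-amount bug and `payment_message` function are invented for illustration). A defect found by hand becomes a regression test that guards the suite from then on:

```python
# Hypothetical example: a defect found manually ("a payment of 0.00
# showed the success message") is pinned down as an automated test.

def payment_message(amount):
    """Stand-in for the application under test, post-fix."""
    if amount > 0:
        return "Payment successful"
    return "Payment failed"


def test_zero_amount_defect():
    # Written the day the bug was found manually; from now on this
    # scenario is part of the suite's functional coverage.
    assert payment_message(0.00) == "Payment failed"


test_zero_amount_defect()
print("regression test passed")
```

Each such test grows the business functionality the suite covers, even though no single number captures that growth.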