I've seen similar questions around "How can I start with unit testing?" and I believe I've even run across a couple that ask how to best introduce it to the team. IIRC, the responses were mostly to the tune of "if you build it, they will come"...meaning once others see what you're doing with it, they'll start using it too. While I'm definitely not knocking the approach (I've used it myself in several other areas of improvement and seen good results), it seems that unit testing is a bit of a tall hill to push the team over.
While I could certainly go ahead and start using it myself on projects I create, inevitably others will take over those projects or work on them simultaneously, and I'm afraid the unit tests will start to break down, not to mention productivity. The #1 argument I've heard against it is that it's just too complicated for the rest of the team to absorb right now, given our workload (and we are slammed). I don't want to introduce something that others don't have free time to learn (I'm bad with the work-life balance thing and will do it in my spare time), that would only bog them down, and that wouldn't be widely adopted, leaving maintenance headaches for anyone down the road who picks up code where testing isn't the team standard. We just got some new headcount, so theoretically the workload should lessen over time and we could maybe attack it then.
Anyone successfully introduced unit testing to their environment against all odds? How'd you do it? Am I being too paranoid about the reaction?
-
This is how I introduced it to my colleagues:
I asked them: if you build this method that gets data from the server, how do you test it? Most of them replied: I make a test form with a button and a grid and use that to test my method.
Then why not put the same amount of time into writing a simple unit test for it? (Strictly speaking it won't be a real unit test but an integration test, because it accesses the database.) After that I demonstrated how it should be used: before every check-in, run the tests. I also demonstrated what happens when someone breaks one.
My advice is to start with simple integration tests: get data from your domain and check that it's not empty. Once the team gets used to that, you can start thinking about mock objects and other advanced techniques.
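A minimal sketch of what such a first test could look like with JUnit 5 (CustomerRepository and Customer are placeholders for whatever data-access class your team already has; it hits the real database, which is exactly why it's an integration test rather than a unit test):

    import static org.junit.jupiter.api.Assertions.assertFalse;

    import java.util.List;
    import org.junit.jupiter.api.Test;

    class CustomerRepositoryIT {

        @Test
        void findAllReturnsAtLeastOneCustomer() {
            // CustomerRepository is a stand-in for your existing data-access class.
            CustomerRepository repository = new CustomerRepository();
            List<Customer> customers = repository.findAll();

            // The same check people do by eye in a throwaway test form.
            assertFalse(customers.isEmpty(), "expected at least one customer from the test database");
        }
    }

It's deliberately trivial; the point is that it replaces the button-and-grid form and can be re-run by anyone.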
Another story is about a task I had to hand to a junior. Instead of just mailing him the requirements, I sat down with him at his PC and wrote the outline of a unit test. I told him: if that test passes without any errors, your task is done. As a bonus, you have a test that will keep running in the future and keep checking your code. I also reminded him of the test form with one button that most people make.
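The outline looked roughly like the sketch below (InvoiceCalculator and the VAT rule are invented stand-ins for the actual task):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class InvoiceCalculatorTest {

        // Written before InvoiceCalculator exists: the test is the task description.
        // It fails until the junior's implementation makes it pass.
        @Test
        void grossTotalIncludesVat() {
            InvoiceCalculator calculator = new InvoiceCalculator();

            // Invented requirement: 100.00 net at 21% VAT should come out as 121.00 gross.
            assertEquals(121.00, calculator.grossTotal(100.00, 0.21), 0.001);
        }
    }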
The overall message they should get is this:
- no more silly throwaway code just to test new methods
- tests will always be run in the future
- overall quality will go up
- easier to outline tasks
- safety check when committing code (continuous integration)
-
You could introduce them as smoke tests on your continuous integration system: the build is not valid if any test fails. You can add just a few tests at first.
Once the team sees that failing builds are catching real bugs, it becomes easier to push the notion of unit tests further.
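As a sketch of how those first smoke tests might be marked (JUnit 5 tags; Application.start() is a placeholder for whatever entry point your build already produces), so the CI job can begin by running only the "smoke" group and failing the build when any of them break:

    import static org.junit.jupiter.api.Assertions.assertNotNull;

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    class ApplicationSmokeTest {

        // Tagged so the CI build can be told to run just the smoke suite at first.
        @Tag("smoke")
        @Test
        void applicationStartsUp() {
            Application app = Application.start(); // placeholder for your real entry point
            assertNotNull(app, "application failed to start");
        }
    }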
-
You might be interested in the book, Fearless Change: Patterns for Introducing New Ideas. It has a lot to say about how to go about bringing in new ideas, like unit testing, TDD, etc.
To counter the argument that unit testing takes time, I would point out that while developing with unit tests takes more time initially, eventually the tests let you develop faster. They provide a safety net that makes it hard to introduce breaking changes without noticing, and the pattern of failing tests (or even a single failing test) gives you a map of exactly where the breakage happened. Without unit testing, development time is eventually lost to debugging time. Unit testing minimizes the amount of debugging you need to do in existing code (the code covered by tests): if, while developing new code, you break something else, the tests will immediately tell you, and tell you where to look to fix it.
-
One of the key things to know about unit testing (which, in all honesty, I only learned recently myself) is that you don't have to test everything right from the get-go. Only write tests for the really sketchy parts of the system, the ones that will be deal-breakers for the software if they fail. Once people see that the tests are there, and run them after making a change (and maybe breaking something), they'll immediately see the benefit.
-
First make sure the infrastructure is there and easily accessible for people who want to do unit testing. Check the libraries needed for writing unit tests into source control so people can reference them from their test projects, make sure those test projects exist so adding a test is trivial, and make sure the test runners are accessible to everyone.
Start by showing people that unit testing can save time. Code that is hard to reach in the running application is an ideal place to begin, because testing it that way actually costs less time: instead of clicking through the application to get to the code you're working on, you just run the unit test.
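For instance (a sketch; DiscountRule stands in for whatever buried validation code you'd otherwise have to click your way to):

    import static org.junit.jupiter.api.Assertions.assertThrows;

    import org.junit.jupiter.api.Test;

    class DiscountRuleTest {

        // Reaching this branch through the UI might mean logging in, creating an
        // order and editing it several times; calling the method directly is instant.
        @Test
        void rejectsDiscountAboveOneHundredPercent() {
            DiscountRule rule = new DiscountRule();

            assertThrows(IllegalArgumentException.class, () -> rule.apply(1.5));
        }
    }

    // Minimal stand-in so the sketch compiles; in reality this is your existing code.
    class DiscountRule {
        void apply(double fraction) {
            if (fraction < 0.0 || fraction > 1.0) {
                throw new IllegalArgumentException("discount must be between 0 and 1");
            }
        }
    }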
Finally, make sure you have a CI server running and run the unit tests there. Having the tests run on every build, with the results visible, can be a great motivator for people to add to them.
Spoike: +1 on the CI server. Watching builds be checked by their tests is a good motivator for people to write their own and make sure they don't commit code that turns the build red.
WestDiscGolf: +1 on automated builds and running unit tests.
-
First of all, tell your colleagues that they already do this; they're just not aware of it. What you're proposing is a tiny change in their workflow that will yield greater results the longer they do it.
The basic workflow of most developers is:
- Write feature
- Run the program once to test it
- Write next feature
- etc.
Your change: instead of testing the program manually, you write a tiny program that does the test. After a while that takes about the same amount of time. BUT you also get an "expert system" that knows whether your application is "correct" or not. With the standard cycle, you only know that your current feature works; you have no idea how much else you broke today.
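As a sketch of what that "tiny program" amounts to (OrderService and the free-shipping rule are invented for the example):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class OrderServiceTest {

        // Written while building the feature instead of running the app and checking by eye.
        // It keeps guarding the feature long after you've moved on to the next one.
        @Test
        void shippingIsFreeAboveFiftyEuro() {
            assertEquals(0.00, new OrderService().shippingCost(59.90), 0.001);
        }
    }

    // Minimal stand-in so the sketch is complete; invented rule: free shipping from 50.00 up.
    class OrderService {
        double shippingCost(double orderTotal) {
            return orderTotal >= 50.0 ? 0.0 : 4.95;
        }
    }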
Guidelines:
- Tests must help. If you feel that adding another test is just a waste of time, don't write that test.
- If you can't think of a way to test something, then make a note and bring it up during the next team meeting and move on. Someone else might have an idea. Also, bigger changes must have the support of the whole team.
- Never attempt to achieve 100% test coverage. It's a waste of time.
- Tests must be simple. If you can't create a simple test, that tells you something about code rot in your application. Don't try to fix code rot with more complex tests; that's a lose-lose solution.
- Accept that some things would simply take too much effort to test. You can only do your best; there is no way to become better than "best".
See my other answers for more details:
- http://stackoverflow.com/questions/556006/do-you-need-to-do-unit-and-integration-testing-if-you-already-do-functional-testi/556042#556042
- http://stackoverflow.com/questions/301693/why-didnt-unit-testing-work-out-for-your-project/301919#301919
I also have a series of articles on my blog.
-
I'd recommend starting on a greenfield: a new project, a new codebase, or at least a new component, as retrofitting unit tests into legacy code is more difficult and time-consuming. If that's not possible, try unit testing on a standalone piece of code such as a new library.
Also, test-driven development (TDD) can ease the transition to unit testing, as the code it produces is, by nature, unit testable.
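A rough sketch of the rhythm (SlugGenerator is an invented example): write the failing test first, then only enough code to make it pass, then refactor.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class SlugGeneratorTest {

        // Step 1 (red): this test is written before SlugGenerator exists, so it fails.
        // Step 2 (green): implement just enough of SlugGenerator to make it pass.
        // Step 3 (refactor): clean up freely; the test keeps you honest.
        @Test
        void replacesSpacesAndLowercases() {
            assertEquals("hello-world", new SlugGenerator().toSlug("Hello World"));
        }
    }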
-
In general I'm a bit against adding unit tests to existing code unless you are trying to ferret out a problem that you "know" is in a particular function. If it already works and you know it works, then why are you adding a test? Where I have found unit tests useful is when I'm about to refactor code and want to make sure it continues to work as it did before while I add the new functionality.
In short, target the code that will benefit the most from the tests rather than trying to get some tool to tell you that you have 95% coverage. If you take that attitude you will probably get more buy-in from your team.
BTW: I'm a big fan of "integration tests" or "end-to-end" tests that exercise cases where users (or management) have found problems in the past, to make sure those issues don't come back.
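One way to capture those, sketched here with an invented scenario (ReportExporter and Report are placeholders), is a regression test named after the incident so it's obvious why it exists:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class ReportExportRegressionTest {

        // Invented example: suppose exporting an empty report once crashed in production.
        // Keeping that exact scenario as a test ensures the complaint never comes back.
        @Test
        void exportingAnEmptyReportProducesAnEmptyFileInsteadOfCrashing() {
            ReportExporter exporter = new ReportExporter();

            assertEquals(0, exporter.export(Report.empty()).length);
        }
    }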
-
Gosh, we went through this about 15 years ago. It started with a small group of people who first decided to add assertions to all new code they wrote. Soon afterwards someone designed an automated unit test harness, something similar to JUnit, but this was back in about 1994 and it was for Fortran and C++.
At that time we had dedicated teams developing company-wide libraries: graphics subsystems, geometric modeling, GUIs, etc. Come library integration day, the librarians started running the unit tests on their libraries to ensure that recent changes hadn't broken things. They also started insisting that any change to a library had to have a unit test associated with it.
Meanwhile, back in the applications, the integrators (who were also part-time librarians) started to insist that every change to application code come with an automated unit test too. As the applications could all be scripted, test scripts were also produced to test at the application level.
By 1996 the main applications had developed a system where no code change could be submitted unless it had a test and all the application and unit tests had been run and passed. By 2000 the system was fully automated: a code change is put into a holding area and picked up by a cron job, the code is integrated into a copy of the top level, the unit tests are run, and any application tests submitted in the current cycle are run as well. At day's end the entire suite of application tests is run against a test integration build.
You DO NOT break the automated test system, whether by submitting code that doesn't compile, code that is improperly merged with the top level, code that fails a unit test, or code that fails an application test.
We do not work late nights, and we do not work weekends.
-
The #1 argument I've heard against it is that it's just too complicated for the rest of the team to absorb right now, given our workload (and we are slammed).
Perhaps the lesson you want to teach your team over the long run is that practices like unit testing and others that increase the quality/reliability of your software will help you get out from underneath that giant workload (of which I assume a large part is bugfixes, emergency changes, "oh no we deployed X and Y broke", etc).
-
Start small, plant the seed.
Next time you're working on some code, add just a few unit tests to the project for the most critical functions.
Then whenever the opportunity arises just say
"hey, check this out. Now before each relase I can regression test this functionality to make sure it didn't break"
Or whatever unit test example you think would best convey the value of unit testing to your team.
Then you can only hope others will see the value. Lead by example.