Benefits of automated functional testing (was: Why unit testing is a waste of time)


Update 2009-06-30: Despite the attention seeking title and first few paragraphs, the point I was trying to make with this post was about the benefits of automated functional testing. Since it’s been posted to proggit and hacker news, a number of people have objected to the misleading intro. Apologies for that, and please start reading from “Benefits of automated functional tests” if you’re not interested in the misdirection! — Tim.

In the last few years of writing “enterprise” software in Java I’ve created and maintained thousands of unit tests, often working on projects that require 80% or better code coverage. Lots of little automated tests, each mocking out dependencies in order to isolate just one unit of code, ensuring minimal defects and allowing us to confidently refactor code without breaking everything. Ha! I’ve concluded that we’d be better off deleting 90% of these tests and saving a lot of time and money in the process.

The root of the problem is that most “units” of code in business software projects are trivial. We aren’t implementing tricky algorithms, or libraries that are going to be used in unexpected ways by third parties. If you take most Java methods and mock out the services or components they call then you’ll find you aren’t testing very much at all. In fact the unit test will look very similar to the code it is testing. It will have the same mistakes and need parallel changes whenever the method is modified. How often do you find a bug thanks to a unit test? How often do you spend half the day fixing up test failures from perfectly innocent code changes?

The complexity that does arise in this kind of software is all down to interactions between components, messy and changing business requirements and how the whole blob integrates at runtime.

Should we do away with automated testing altogether then? No. In fact I have found “functional” end-to-end automated tests to be incredibly useful. By functional tests, I mean those that test software in more-or-less the same way as a human tester does, covering wide swathes of the application without checking the result of every possible variation.

The benefits of functional tests are outlined below, and after discussing the one big downside I admit where unit tests can be appropriate.

Benefits of automated functional tests

Smoke testing

If we deploy this release of the application, is the application going to release a deployment of toxic fumes and metal scraps?

I’ve worked on a project where, with the “final” release looming, we were creating a release for manual testing at the end of every day. And the next morning, testers would try to run a few simple scenarios and the application would fall over at the first hurdle. Incorrect database patches, configuration errors, NullPointerExceptions in critical paths and the like.

Later on I developed a set of a dozen or so functional tests, covering just a few basic scenarios. Nevertheless, this was sufficient to ensure that manual testers were able to find useful bugs in the application rather than having to sit around for days twiddling their thumbs because nothing works.

New developers

It’s tough being a new developer on a project. Faced with a mountain of code I haven’t seen before, performing some process I don’t really understand – wait, what does this application do again? – the first thing I like to do is run the damned thing and get a few ideas on what it actually does. To play tester for a while. Unit tests aren’t very useful here. We want a high-level overview of the application rather than worrying about how some tiny part of it might be implemented.

So how do you get the application up and running in your development environment? If you’re lucky, there’ll be some up-to-date instructions for getting it to kind of start up. Then you’ll get one of the other developers to show you how to run a few things. “Um, let’s do it on my machine,” they’ll say. See, you need to hack up the reference data in the database like this and take this sample XML file I’ve got sitting around and modify this, this and this field, stick it in this table with these dozen values, and then … it goes on. Good luck ever replicating that yourself.

If, however, you have some automated functional tests, you can say “run this test that covers the simplest scenario with everything going right”. They can then explore the database and see what kinds of information have been written, and even set breakpoints in a debugger to step through the application and get an idea of how everything fits together.

This is massively handy, not just for the new developer but for everyone around them. Instead of having to spend weeks or months babysitting each new developer you can just point them at a set of tests and tell them to play around for a while.

What’s more, a “new developer” is not just one who is new to the entire project. With any project of a reasonable size, you will naturally find that each developer has parts of the application that they don’t see, touch or understand. If you don’t want to maintain the same part of the application for all time then it’s in your best interests to make sure you can hand it over to another person with the least impact possible. Having functional tests is a great way of doing this.

The benefits described here give us a few clues to how our functional tests should be implemented.

First, they need to set up their own reference data so that a new developer can sit down and run them straight away without hacking their database. In fact, the first thing any test should do is to clear out all the reference data, and set up just the absolute minimal data required to run through this test successfully. This helps people understand the impact of setting different reference data values (this is often woefully under-documented!) and by having the data be independent, we avoid the problem where adding a new test requires jumping through hoops to avoid breaking every other test in the system. (Aside – I have a convenient method for setting up reference data in code that I may share at some future stage.)
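A minimal sketch of the kind of set-up this implies follows. All class and method names here are invented for illustration; a real project would write these rows to its actual database rather than an in-memory map.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical in-memory stand-in for the reference data tables.
class ReferenceData {
    private final Map<String, String> rows = new LinkedHashMap<>();

    // Every test starts from completely empty reference data...
    static ReferenceData empty() {
        return new ReferenceData();
    }

    // ...and adds only the handful of rows its scenario needs.
    ReferenceData withCurrency(String code, String description) {
        rows.put("currency:" + code, description);
        return this;
    }

    ReferenceData withClient(String id, String name) {
        rows.put("client:" + id, name);
        return this;
    }

    int rowCount() {
        return rows.size();
    }
}
```

A test for, say, a simple NZD trade would then begin with `ReferenceData.empty().withCurrency("NZD", "New Zealand Dollar").withClient("ACME", "Acme Ltd")` and nothing more, making explicit exactly which data each scenario depends on.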

Second, if people are going to be running through the test in a debugger and stopping at breakpoints to explore the runtime state of the application, then we can’t get away with putting in sleep(fixedTime) calls through the tests. You can’t say “the application should have finished processing by now” if someone has actually paused it for half an hour. Tests that sleep are slow and unreliable in any case.
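One common replacement for fixed sleeps is a small poll-and-timeout helper along these lines (a sketch; libraries exist for this, but the idea fits in a few lines). The assertion passes as soon as the condition holds, instead of assuming the application “should have finished by now” after a fixed pause:

```java
import java.util.function.BooleanSupplier;

// Sketch of a poll-until-true helper to use in place of sleep(fixedTime).
class Wait {
    static void until(BooleanSupplier condition, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError(
                        "Condition not met within " + timeoutMillis + "ms");
            }
            try {
                Thread.sleep(50); // short poll interval, not a fixed wait
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new AssertionError("Interrupted while waiting", e);
            }
        }
    }
}
```

A test then writes something like `Wait.until(() -> outbox.contains(expectedMessage), 30_000)`, and the generous timeout costs nothing in the normal case because the poll returns the moment the work completes.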

Experimentation and Precise Requirements

If a client asks you “how does the application behave in this weird scenario,” how long does it take you to give an answer? The requirements documents probably don’t have that level of detail, and in any case the question is how the application really behaves.

If you have functional tests already, then it’s probably straightforward to modify one that checks a related scenario. After you’ve written it, you can read over it again to check that you’re actually testing what you intend to. Checking it manually would be error-prone and involve a lot of set up time.

Odd special cases

I’ve had cases where we decided that one particular client would be able to submit input data that was incorrect in a couple of specific ways, and we’d accept it anyway. This was one of those good times where something that’s causing major headaches at a high level in the business turns out to be an issue that I as an individual developer could make go away with some minor code changes. I think sometimes we take for granted how fortunate we are as developers to be able to make such a direct impact on problems instead of being wholly dependent on others.

Right, so I’ve put this worthwhile hack in. What’s to stop another developer coming along and changing the code again? There’s no obvious reason why it behaves this way – it’s just down to history. I can put comments in the code and get requirements to update their documents, but no-one reads documents anyway ;-) What’s more, even though I only had to make some minor changes, that was down to coincidental details of how other parts of the code happen to be implemented at the moment.

What I did was to write a functional test that takes the exact sample file from the client and runs through the applicable scenario, checking that it is accepted. (I also wrote another test to check that the same file is rejected if it’s from any other client.) If a developer ever makes a change that breaks the test, the first thing they’ll do is open it up and find a big comment at the top that I wrote explaining all the history around why this is necessary and what it does.

This makes the requirement discoverable when it matters. Bonus points!
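As a rough sketch of the shape such a test pair takes (the validator, client ids and file markers below are all invented; the real test used the client’s actual sample file):

```java
// HISTORY (the "big comment at the top"): Client ACME's upstream system
// sends a legacy date format and omits the batch total. It was agreed with
// the business that we accept these two defects from ACME only. If a test
// against this behaviour breaks, read the history before "fixing" it!
class InputValidator {
    static boolean accepts(String clientId, String fileContents) {
        boolean hasKnownDefects = fileContents.contains("legacy-date")
                || fileContents.contains("missing-batch-total");
        if (!hasKnownDefects) {
            return true; // well-formed input is accepted from any client
        }
        return "ACME".equals(clientId); // the one client we bend the rules for
    }
}
```

The paired tests then assert `accepts("ACME", sampleFile)` and `!accepts("OTHER", sampleFile)`, pinning down both the exception and its limits.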

Fixing a bug, really

A tester raises a defect. “When I do this, the application gives an error.” You check the code and find, oops! We’re doing something stupid. A quick fix, commit, mark the defect as fixed. Weeks later they re-test it, re-open the defect because the same series of steps still fails in the same place. It’s a different error message though! As a developer you can try to say that the original defect was “fixed”, it’s just that the application is now getting further along and hitting a different issue. No-one cares though.

If you write a functional test to verify the scenario now works (even if this is just an ad-hoc test that you don’t commit) you’ll ensure that the tester will agree with you that it’s been fixed.

Downsides of automated functional tests

There’s really only one – they are SLOW. Whereas you can easily run a hundred unit tests in a second if you’re using a lot of mock objects, a single functional test may take 2 minutes to complete. This is a big problem. Whenever I’ve been talking about writing an ad-hoc test, and it being beneficial even if you don’t commit it, I’m alluding to the issue that it’s practical to write a functional test for every defect raised, but not so practical to run them all every time someone commits, or even once a day, as a way to guard against regressions.

Because I think that functional tests have enormous benefits, I also think it’s worth investing time getting them to run as fast as you can. This may include running tests in parallel on a cluster of servers (hey, seems I’ve found a way lots of companies could utilise “cloud computing”). It also tends to be better to add more assertions to existing tests than to construct an entirely new independent test. You can make tests that are “data-driven”, so that a single test may have a table of a dozen different options that it loops through, without having to start from the beginning each time.
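A data-driven test might be sketched like this (the pricing scenario and its fields are invented; the point is paying the expensive end-to-end set-up once and then looping a table of variations through the same scenario):

```java
import java.math.BigDecimal;

// Sketch of a data-driven functional test: one set-up, many table rows.
class PricingScenario {
    static final String[][] CASES = {
        // quantity, unitPrice, expectedTotal
        {"1", "10.00", "10.00"},
        {"3", "10.00", "30.00"},
        {"0", "10.00", "0.00"},
    };

    // Stand-in for "run one variation through the application".
    static String totalFor(String quantity, String unitPrice) {
        return new BigDecimal(quantity)
                .multiply(new BigDecimal(unitPrice))
                .setScale(2)
                .toPlainString();
    }

    // One pass, a dozen assertions, without restarting each time.
    static void runAll() {
        for (String[] row : CASES) {
            String actual = totalFor(row[0], row[1]);
            if (!actual.equals(row[2])) {
                throw new AssertionError("For quantity " + row[0]
                        + " expected " + row[2] + " but got " + actual);
            }
        }
    }
}
```

Adding another variation is then a one-line change to the table rather than another two-minute test.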

The slowness means that you’ll only be able to have a small number of functional tests – under a hundred – and this means that there’ll be a lot of scenarios (particularly negative scenarios) that you’re still not testing.

On the other hand, an automated functional test still runs faster than a human tester. It’s not unusual to have teams of testers that take a week to run through a hundred scenarios. With every release they have to go through this entire process again. What’s the cost of this, in both money and time? More than the cost of a set of servers?

(The other option, a key feature of Rails and becoming popular in other recent web frameworks as well, is to support “integration tests” that work at a lower level than functional tests, but still well above unit tests. Such a test would, for example, request a URL and assert that the response is to be forwarded to a different URL, without actually going through a web server or web browser. The exact level at which you test will depend on the type of application, and how easy it is to simulate a user.)

There’s an interesting post on the RunCodeRun Blog called It’s Okay to Break the Build. It argues that it’s unreasonable for developers to have to run an entire set of slow tests before every commit. I agree, and have found that simply running a couple of “relevant” functional tests before committing ensures that it is unlikely that you will break any other tests. Breakages can be quickly reverted after the test failure shows up on the main build server. (Perhaps there are some DVCS workflows that would be useful here?)

Continuous Deployment at IMVU: Doing the impossible fifty times a day is another fascinating article. It describes a project that has 4.4 hours of automated tests, including an hour of tests that involve running Internet Explorer instances and simulating a user clicking and typing. By running them across dozens of servers, they can run all the tests in 9 minutes. What’s more, they have so much faith in these tests that their code is automatically deployed to live when the tests pass. This happens 50 times a day on average. Brilliant!

Concluding by backtracking

Although I’ve dismissed the benefit of having lots of unit tests in the types of projects I’ve recently been working on, what if your environment or project isn’t the same as mine?

Consider the strength of interfaces and the separation of developers in your project.

Are you writing a library or framework that people may use in ways you don’t expect? Do you have a lot of “generic” code that has strong interface boundaries? What is the impact if a bug is found? If all your development is within one team, as soon as you discover a bug in one section of the code it’s easy enough to write a functional test and fix it. That doesn’t hold, though, if the developers who discover the bug and need the fix have little or no hope of writing such a test themselves.

In some cases it is useful to test code in isolation and mock out its dependencies – because you actually don’t know what these will be at runtime. In a lot of “business” code though, while people may claim some code is generic it isn’t really. Or if it is, it’s a case of premature indirection (not abstraction!) being the root of all evil.

Another dimension to consider is the rate of change of requirements documents.

If requirements are fuzzy and constantly changing, then what’s the point of testing all sorts of weird edge cases? The desired behaviour won’t be well-defined in any case! With changing requirements, functional tests are very useful to keep track of how the application is supposed to (and does) behave at the current point in time.

I would go so far, in certain circumstances, as to say that good testers should be able to write automated functional tests themselves, based on higher-level requirements, using a domain-specific language (DSL) written by the developers. This provides a precise language with which business analysts, testers and developers can communicate. Imagine receiving a bug report with a test case that you can immediately reproduce instead of having to guess at! With enough skill, a large set of functional tests would act in concert with higher-level requirements documents to produce a system that has well-documented behaviour. Unfortunately the “certain circumstances” I’m talking about is “Tim’s dream world”. In the real world, you need to have at least the first big set of tests written by a decent developer who has good taste.
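A rough sketch of what such a tester-facing DSL could read like (everything here is invented; the point is only that a test mirrors the wording of a requirement):

```java
// Hypothetical fluent DSL: testers write the scenario, developers implement
// the verbs. This stub fakes the outcome; a real implementation would drive
// the deployed application end to end.
class Scenario {
    private String client;
    private boolean accepted;

    static Scenario given() {
        return new Scenario();
    }

    Scenario clientSubmits(String clientId, String fileName) {
        this.client = clientId;
        // Stub rule standing in for the application's real behaviour.
        this.accepted = fileName.endsWith(".xml");
        return this;
    }

    Scenario expectAccepted() {
        if (!accepted) {
            throw new AssertionError("File from " + client + " was rejected");
        }
        return this;
    }

    Scenario expectRejected() {
        if (accepted) {
            throw new AssertionError("File from " + client + " was accepted");
        }
        return this;
    }
}
```

A bug report could then arrive as `Scenario.given().clientSubmits("ACME", "trades.xml").expectAccepted();`, a test case you can run immediately rather than a description you have to reconstruct.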

To conclude, delete most of your unit tests and replace them with just 10 functional tests for a start. Forget about code coverage metrics, and let me know how it goes!

(If you liked this post, you may also be interested in It’s not done yet, where I talk about some of the issues around getting the kinds of applications I typically work on into a live environment.)

Posted by: Wednesday, February 25th, 2009 Testing

39 Comments to Benefits of automated functional testing (was: Why unit testing is a waste of time)

  • Mr Internets says:

    Your post is correct.

    Unfortunately, it is also thoroughly out of vogue.

    At my current job, if I tried to replace any unit tests with automated functional tests I would be immediately murdered. There would be no reasoning. Just murdering.


  • ine8181 says:

    I think that unit testing has been around for long enough that it must be fashionable now to criticise it.

    Of course I’m not insinuating that this blog post tries to be a fashion leader instead of a well-thought out technical blog post.

  • Mr Internets: Well, you can change your organization, or you can change your organization.

  • scdf says:

    You make some interesting points, and in certain cases I agree with you.

    On paper, unit tests are really amazing. On paper, they would do what your integration tests do, but faster, and they would isolate the very ‘unit’, as it were, that was failing, and probably why. On paper you can refactor with confidence, knowing that all your requirements are covered by tests that will point you to the exact point of failure. On paper you can add or modify functionality due to changing requirements by changing tests then coding till they all pass.

    On paper.

    If wishes were horses then we’d all be eating steak. Or something.

    The point is, this ideal is far, *far* from reality. The amount of work required to code to a standard that makes your code unit testable and then actually write the unit tests ranges from improbable to impossible.

    Unit tests are an inefficient way to increase confidence in your code. They are that 5% that takes the *other* 95% of the time.

    In my short career I’ve seen one, and only one use of unit tests. Incidentally, it’s happening right now.

    At work we are doing some major refactoring to an existing code base. How major? No more hibernate, no more database, hello completely different way of dealing with data.

    It’s kind of a big deal. And I honestly can’t imagine how we would deal with this situation without unit tests. Requirements are thin on the ground, the code is not well written, understanding of the code base is low. Without unit tests… well, make a few changes and before you know it you’re ankle-deep in shit and can’t smell death any more[1].

    But when are you in that situation, really? How many times are you in the situation where you have a major change to perform on bad, unfamiliar code with little requirements but *stellar* unit tests. Not never. But not often.

    One could make the observation that, if people have the discipline to write unit tests they probably have the discipline to write good code. But I won’t go there, as I’ve rambled long enough.

    In conclusion, unit tests are great in a fairy-tale land that no one lives in, or in a few improbable scenarios that don’t come up often enough to care. Unit tests are developers jacking off over a concept. An ideal that has a decidedly minor place in reality.

    Rant complete.

    [1] (credit where credit’s due)

    • Josh says:

      > The amount of work required to code to a standard that makes your code unit testable and then actually write the unit tests ranges from improbable to impossible.


      > In my short career I’ve seen one, and only one use of unit tests. Incidentally, it’s happening right now.

      Well, that certainly increases your credibility. ;-)

    • Sidu says:

      “The point is, this ideal is far, *far* from reality. The amount of work required to code to a standard that makes your code unit testable and then actually write the unit tests ranges from improbable to impossible.”

      You’re right; bad code cannot be unit tested. But test driven code naturally is. So test drive your code (spec your code? ‘test’ is such a wrong notion) and make sure it has quality.

      I’ve been writing code of that standard every day for years, and I expect the same standard from anyone working with me.

      Putting my money where my mouth is:

  • Reid says:

    I disagree. I think it’s a balance kind of thing. Too many mocked unit tests, and you’re writing trivial unit tests. Too many integration tests, and you won’t be able to pinpoint bugs because you just get weird action-at-a-distance bugs with no way to pinpoint the culprit.

  • Dave Brosius says:

    I agree with a lot of what you say here. A point that keeps coming back to me is that 99% of the time when you have a unit test failure, you go fix the unit test. That tells me I’m doing something wrong….

    But, IMO, the value of a unit test is to help force (or at least prod) people into writing code that is decoupled and easy to manage. Perhaps we should write unit tests, and then throw them out when we’re done ;)

  • Nick Bauman says:

    Full disclosure: You will take TDD out of my cold, dead hands, buddy.

    I have worked on both types of systems: those without or with less than 50% coverage and those with tests over 50% covered. I’ll take the latter any day of the year. Of course either system is significantly worse if the tests were written after the code was written. Test First, man.

    Certainly there are mountains of tests out there that are poorly written by people who were simply checking a box on a list of things they are supposed to do when they write code. This is not the fault of TDD, though. And when I look closely at the kind of testing you’re doing, I’ll notice that the practice of completely isolating the unit by mocking out all of its dependencies is one of the easiest patterns in unit testing to get wrong. The reason for this is that it’s too easy to “bake in” to the test deep, questionable assumptions about your architecture. So maybe your problem is that you’re just doing it wrong. Might I suggest at your next job (because this one is not long for this world, sorry to say), you begin by writing the tests first and you mock or fake at the logical or physical system boundary instead of directly up against the unit in question.

    So, in summary, I think you might be saying “unit tests are bad” because you have crappy unit tests.

    • Tim Sutherland says:

      Nick wrote:
      >Might I suggest at your next job (because this one is not long for this world, sorry
      >to say), you begin by writing the tests first and you mock or fake at the logical or
      >physical system boundary instead of directly up against the unit in question.

      Hi Nick,

      That’s exactly what I did – I created a set of “functional tests” that mocked the system/application boundary. These were massively useful in terms of improving the quality and maintainability of the code-base and especially for ensuring we could make changes without introducing regressions. The project I was working on would probably have failed without these automated tests so I’m very much aware of their value.

      I’ve just added an Update to the top of the post and changed the title, because what I was meaning to say with the post is “automated functional tests are really really useful” but because of the trolling where I make it sound like I’m just bashing unit testing, a lot of people have had the same response you’ve had.


      • Nick Bauman says:

        I completely apologize for my too polemic response. On balance I think we likely “agree violently” and sometimes the real problem is a kind of weird attachment to what is a “unit test” and what is an “integration test”.



  • I think you’re neglecting one of the big benefits of TDD: When you write the tests first, you write code that can be tested and refactored. generally, when you don’t – not so much.

    Of course, it’s also true that if you’re writing tests that seem trivial and look a lot like the code under test, you could be testing at a granularity that’s too fine, but that doesn’t mean you should dump unit testing altogether.

    My 2c.

  • David says:

    Interesting rant, Tim (et al.). Sadly, from my perspective, it sounds like something written by the typical developer who just knows something’s no good because he’s never heard anyone say anything nice about it. I can only say that you must not be in a test-driven environment, and you’ve probably never had your job saved by one of those useless unit tests. I am, and I have.

    When done properly, the unit test allows you to wrap your head around the piece you’re coding — to test-drive the concept, as it were — and then verify your mileage. If you happen to be working in an environment where you can guarantee that no other change will break your current code, then unit testing is a waste of time. But, my projects have always included plenty of code overlap and special ingredient mixing to make me sweat just thinking about what the other developers are doing. Unit testing has been critical and effective.

    Of course unit testing can be taken to the extreme and become overburdening and wasteful.

    I suggest that you practice test-first development (even if nobody else is), where you write the test to prove whatever requirements you are coding to, then code to make your test work. Divide your unit tests into logical pieces so you can run only the ones you’re certain you want to verify (and not the whole package) — this also prevents you from having to re-create mock objects redundantly, as you can group elements that use the same objects and test them as a group. Then integrate fearlessly once your tests pass. Mocking data and objects is tiresome and error-prone if you have to do it manually each time you run your tests, so it is best to take care of your mocks initially.

    The BEST thing about unit testing is that, when properly set up and executed, you can be ASSURED that your new/changed code is playing nicely with the other kids in the playground every time, and if it isn’t you will know it.

    As you’ve stated, unit testing is no substitute for integration, regression, and end-to-end functional testing, but that’s not its purpose. Its purpose is to verify and guarantee that each object that you write behaves EXACTLY as you have defined it in the scenarios under which it is placed. If the business rules change, so must the unit tests and the functional code. It’s as simple as that.

    When I first started writing unit tests (and hating every day because of it), they made me angry because they reminded me how much of an idiot I was to be making such stupid mistakes. It wasn’t long before I began to rely on those annoying tests, soon learning how to streamline their processing without sacrificing quality. Prior to integrating our code, our little team would run our tests, update our code from the server, and rerun the tests (several times each day) on our personal dev boxes prior to check-in, then we would re-run the tests in the dev environment. It took time (thus we learned how to streamline), but we rarely had any unexpected problems. Further testing would find bugs, but those were generally due to a developer’s lack of understanding the larger picture and thus not coding for the proper exceptions. Those changes were relatively easy, quick and thus inexpensive to fix.

    Since that first experience, I’ve worked at a number of organizations, some where the majority of developers and management sounded much like you do, and others where the goal or mantra was to do unit testing along with everything else. I can say I have learned to have greater faith in the quality of product coming from the unit testing developers than the others. By far.


    • Tim Sutherland says:

      Hi David,

      Thanks for the thoughtful comment. I did actually end up introducing a (mostly) test-driven development approach and it worked really well.

      What I mean by automated “functional” tests is that the tests I wrote were at quite a high level, to the point that they were similar to the tests that human testers would do by hand when reading the requirements. In some cases the tests almost became a more precise and detailed version of the requirements that present and future developers could refer to in order to understand the system. (And when there were changes in requirements, we could immediately modify the corresponding tests as well.)

      Some people would still call these “unit tests”, which is maybe where some of the difference of opinion is coming from. There’s no way I’d try to develop software these days without automated tests.


  • Michael Feathers says:

    I wrote a blog about a year ago that takes the opposite point of view. I think that when people ding unit tests in preference to functional tests they are missing something. Often they have a view of how unit testing contributes to quality but it isn’t quite right:

  • Sean Tindale says:

    Yes, in theory 90% of unit tests could be deleted because they don’t test the “real” logic of the application. But the real problem is not unit test coverage but rather the people.

    Most programmers just aren’t that good and can’t identify what code should actually be tested. They also become lazy and neglect testing in the long run.

    So IMHO code coverage becomes a management decision (part of the reason many here are frustrated by it). Code coverage makes very little sense to the technically enlightened, but it does make a lot of sense when you are a manager who has to introduce policy that caters to the lowest common denominator.

    If you are coding with a team of aces (chances are you’re not) then ignore code coverage. If you are like me and you are coding in a team whose abilities span the full range then code coverage becomes a necessary evil. I have tried turning off code coverage checking before and it was a disaster! The sub-par coders (and the ones that just don’t care) introduced all sorts of regressions into the system that ultimately made us less productive. For me the proof is where the rubber hits the road: we turned off code coverage and things got worse.

    Of course the better option is always going to be to hire good coders, but we all know that doesn’t always happen…

  • jack rabbit says:

    just learn to read… half the time the devs I work with are more interested in playing WoW than in READING the code. Good God man, you’re supposed to know how to program, not play freaking video games all day.

  • Tungano says:

    Hey Tim,

    Grats on triggering a response from the TDD inquisition, which is still very much alive. Always the same weak argumentation: you don’t understand it, you are incompetent, you aren’t doing it right. Followed by some chest pounding and heroic tales of how they solve testing code with more code.

    With 100% code coverage you can still be doing the WRONG thing ‘bug’ free. TDD is not testing.

    Your post would have been a very good read without doing the versus TDD thing!

    • Sean Tindale says:

      If TDD is not testing then what is @Tungano?

      It’s very easy to just say you’re wrong, but harder to actually provide a suggestion….

      Suggest something @Tungano

      No testing????

      • Tungano says:

        Quote from wikipedia:

        “Note that test-driven development is a software design method, not merely a method of testing.”

        Read into some of the argumentation around BDD. Various blog posts have taken the position that TDD is about design, how it funnels developers into creating quality code, and not about testing.

        I am not saying TDD holds no value, I think it does. Comparing functional tests to TDD based unit testing is somewhat comparing apples to oranges though.

    • I think you have the wrong idea about what TDD is. When you’re doing TDD right, you should be able to throw away the tests when you’re done and still have gotten value from the process. The tests themselves are just an artifact – a waste product, as it were. The desired output is good code; it just so happens that TDD also produces tests.

      There’s nothing that constrains TDD to unit testing. You can apply TDD to functional testing as well.

  • Recently I have come to appreciate 4 things from tests.


    A lot of the time we get caught up in how we achieve these. It sounds like you are progressing with TDD to the point where you are starting to see the advantages of BDD. It also sounds like you went a bit too far, to straight functional testing. I would guess you will soon settle a bit closer to BDD.

    I screen-captured a bit of the 4 stages here, in regards to testing GUIs.

  • Brendan says:

    I agree. Unit tests are kind of useless for enterprise Java software. The TDD zealots here are probably using Ruby :-)

    • Sidu says:

      So ‘enterprise java software’ == ‘poorly factored, horrible code’?

      I beg to disagree – I’ve worked on ‘enterprise’ java, C# and Ruby projects with well factored codebases and good coverage.

      If you have bad code, you can’t spec/test it. Stop blaming unit tests and start blaming the people who don’t do their jobs right.

  • rationalist says:

    A huge benefit of unit testing, beyond the tests themselves, is the functional decomposition of the code, which increases readability and reusability. By forcing my staff to write unit tests, they write more and smaller functions, each with a more defined purpose.
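
    A minimal sketch of that decomposition effect (hypothetical `InvoiceCalculator`, names invented for illustration, not from the comment): the pressure to unit test nudges one monolithic calculation into small, single-purpose methods that need no mocks to verify.

```java
// Hypothetical example: each small, pure method has one defined purpose
// and can be unit tested in isolation without any mocking.
public class InvoiceCalculator {

    // Sum the line items; trivially testable on its own.
    public static double subtotal(double[] lineItems) {
        double sum = 0.0;
        for (double item : lineItems) {
            sum += item;
        }
        return sum;
    }

    // Assumed business rule for illustration: preferred customers get 5% off.
    public static double discount(double subtotal, boolean isPreferredCustomer) {
        return isPreferredCustomer ? subtotal * 0.05 : 0.0;
    }

    // The composed calculation stays short and readable.
    public static double total(double[] lineItems, boolean isPreferredCustomer) {
        double sub = subtotal(lineItems);
        return sub - discount(sub, isPreferredCustomer);
    }

    public static void main(String[] args) {
        // 100 + 50 = 150 subtotal, minus 5% discount (7.5) = 142.5
        System.out.println(total(new double[] {100.0, 50.0}, true));
    }
}
```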

  • SF says:

    I have a question for the TDD advocates. David speaks of requirements and business cases. That sounds completely foreign to my work, where a typical case would be something like, “Here’s two dozen files and a half-assed description of the file format. Please make sense of the geometric data in them.”

    How the heck do you test first with something like that? Until I've done a lot of experimenting, I usually have only a rough idea what the right answer is. What sense does it make to write tests first if you don't know the answer ahead of time?

    (Mind you, I do my best to do lots of automated functional tests, and unit tests where it seems to make sense. But except for when I’m writing mathematical support routines for my own code, I rarely write the tests first.)

  • [...] people complain that writing unit tests is a burden that they just don’t have time for. In fact, I came across a post today that said just that. I think that this is a myth that is generally driven by poorly designed [...]

  • Kyle Wendling says:

    Well done sir, I agree and applaud your wisdom.

  • Steve Donie says:

    The main point of TDD in my book is to drive the design of the code. As one of the earlier commenters said, tested code is well designed code that is easy to refactor, etc. The ‘tests’ are strictly a side benefit. I agree with the folks who say that we should rename TDD (Test Driven DESIGN) to DBE (Design By Example)

  • [...] recently read a blog article, Benefits of automated functional testing, which included a section arguing that unit testing was not as important as functional testing. I [...]

  • Elroy says:

    I know a couple of guys who think exactly like you. Their code works completely fine. But, you’d rather commit suicide than maintain that code. It’s B.S.

    That's where UT helps. It's cool to have a different opinion. Probably your work involves mundane stuff that is quite straightforward and does not require UT that much; functional tests suffice in that case. But when you're talking about complex systems involving complex arithmetic and algorithms, proper UT alone can save you.

    I'm also sometimes of the opinion that you need not unit test each and every method. There are critical methods in your code that DEMAND unit testing. A method that is at least 10 lines long totally demands a unit test. That's when you can be sure that your unit is working completely fine and you've set a well-defined contract for the method.
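
    As a sketch of what pinning down "a well-defined contract" can look like (hypothetical `slugify` method; plain asserts instead of a JUnit test class to keep it self-contained):

```java
// Hypothetical example: a single non-trivial method whose contract is
// stated up front, plus checks that pin each clause of that contract.
public class SlugUtil {

    // Contract: lowercase the input, replace each run of non-alphanumeric
    // characters with a single hyphen, and trim leading/trailing hyphens.
    public static String slugify(String title) {
        return title.toLowerCase()
                .replaceAll("[^a-z0-9]+", "-")
                .replaceAll("^-+|-+$", "");
    }

    public static void main(String[] args) {
        // Each check exercises one clause of the contract.
        if (!slugify("Hello, World!").equals("hello-world")) throw new AssertionError();
        if (!slugify("  spaces   and  CAPS ").equals("spaces-and-caps")) throw new AssertionError();
        if (!slugify("---").isEmpty()) throw new AssertionError();
        System.out.println("contract holds");
    }
}
```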

  • jane says:

    Chanced upon this site and was truly envious. I work in an entirely different field and grew up in the pre-computer age. You guys may disagree strongly sometimes, but what fantastic professional development you are getting out of it. Fight on and grow!

  • [...] is the perfect place to address flammable jugs of kerosene such as Tim Sutherland taking a dump on unit testing. The Software Developers (K)ollaborative blogger launches: The stem of the problem is that most [...]

  • Thanks for stating the obvious… my comrade just did the same and makes a GREAT analogy of why testing is important.

  • Steve C says:

    I happen to be on a project where we’re building an api at the moment, and we have a weird holy-grail situation – automated functional/acceptance tests that are FAST. You’re right to call out speed as the problem.

    It's "weird" in that, from feature to feature, I'm not sure to what degree I ought to drive out code using just the functional tests. They're so fast they feel like unit tests. You pay some price for testing only this way, though: people expect a Foo class to have a matching FooTest.

    So I've been working on Schnauzer (not very actively, I'm ashamed to say). Were it to get to completion (there are some problems in WebKit around Ajax posts, and I have no time), it would be (it *is*) a blazing fast, REAL (i.e. real Safari) end-to-end web functional testing API in Ruby, with no traffic going over any wires.

    I'm convinced it (or things like it) would change how we think about these testing problems in ways I can't predict right now. I'd love to get Schnauzer working in full, maybe with Webrat integration, with a simple API that other implementations (Chrome, FF) could conform to. Sigh.

  • Steve C says:

    I find there's a low level of unit testing that's very beneficial, and then I combine that with fast, clear functional tests. You run into problems every so often… but the trick is to find the right black boxes, to sense which classes and components are cohesive. If something isn't clearly cohesive, I find the tests that get written act as "casts" (code that gets in the way) rather than the really valuable feedback mechanisms that raise the project's level of agility.
