> I believe any methodology must have built-in feedback to ensure that it's applied correctly to be practical or successful.
That will be difficult; TDD is a design activity. You can test artifacts of the process, such as regressions over time, amount of rework necessary, time spent refactoring, quality of releases, and a few other metrics, but how do you measure the quality of a design in and of itself?
You might as well measure the efficacy of individual design patterns on their own.
> ... today the case for TDD as a one-size-fits-all methodology is far from conclusive.
Who suggests that it is? I'd very much like to see references to (well-respected) people who've written "If you don't use TDD, you cannot produce great -- or even working -- software."
I guess you've never read anything by "Uncle" Bob Martin, a leading TDD proponent who has put forward the notion on several occasions that the only "professional" software developers are those using TDD (he has a lot of people who agree with him, too).
And yes, you're right, it's difficult to measure the usefulness of a design activity. But in order to make progress we must be able to do so. Otherwise everyone in the entire world could be doing TDD, most of them doing it "wrong", and we'd be no better off, and possibly worse off, than we are today. I stand by my statement: any worthwhile development practice must have objectively measurable results, because otherwise there's no way to tell whether it's being used correctly or whether it has any value at all. Without that, it becomes the software development version of a Ouija board or a dowsing rod.
> Any worthwhile development practice must have objectively measurable results....
I thought you were asking for objectively measurable results in isolation. You'll still have terrible trouble trying to isolate a single variable (and good luck with a control group for a creative design activity!), but I suspect you can get decent results by measuring the cost of a change, the correctness of a change, and the amount of rework in a change when using TDD.
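For instance, here's a rough sketch of how you might start to quantify the "rework" part from version-control history. It assumes a git checkout, and the 90-day window and the deleted-to-added line ratio are my own arbitrary choices as a crude churn proxy, not an established metric:

```python
#!/usr/bin/env python3
"""Crude 'rework' estimate from git history (a sketch, not a real study).

Idea: for a given time window, count lines added vs. lines deleted per
commit; a high deleted/added ratio is treated as a rough proxy for rework.
The window and the ratio itself are arbitrary assumptions.
"""
import subprocess
from collections import defaultdict

def numstat(since="90 days ago"):
    """Yield (commit_hash, added, deleted) per file from `git log --numstat`."""
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--numstat", "--format=@%H"],
        capture_output=True, text=True, check=True,
    ).stdout
    commit = None
    for line in out.splitlines():
        if line.startswith("@"):
            commit = line[1:]                 # new commit header
        elif line.strip():
            added, deleted, _path = line.split("\t")
            if added != "-" and deleted != "-":   # skip binary files
                yield commit, int(added), int(deleted)

def churn_ratio(since="90 days ago"):
    """Average deleted/added line ratio per commit over the window."""
    totals = defaultdict(lambda: [0, 0])
    for commit, added, deleted in numstat(since):
        totals[commit][0] += added
        totals[commit][1] += deleted
    ratios = [d / a for a, d in totals.values() if a > 0]
    return sum(ratios) / len(ratios) if ratios else 0.0

if __name__ == "__main__":
    print(f"average churn (deleted/added) per commit: {churn_ratio():.2f}")
```

Run it in two comparable codebases (one built with TDD, one without) and you at least have a number to argue about, which is more than we have now.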