Everyone loves Agile. Built from the ashes of the waterfall methodology, it solves many of the problems that derail conventional development processes. A well-run Agile project can add new features, react to changing requirements, and fix bugs at a lightning pace. But this quick turnaround requires constant refactoring, which can potentially introduce bugs. The only real solution is to obsessively test every aspect of your application. Agile essentially trades time spent on communication and planning for time spent devising effective test strategies.
An Agile project should, theoretically speaking, test everything. In practice, however, a line must always be drawn. There comes a point where the value of additional tests fails to justify the time spent writing them. For integration and functional tests it’s almost impossible to be too detailed; for model tests the line is less clear cut. Why? Because in general you should test behavior, not implementation. You don’t want your test code to be a perfect facsimile of your model code. If it were, changing any implementation detail in your model would mean changing it twice: once in the model code and once in the failing test. This distinction makes perfect sense for things like custom model methods (state inquiries, calculators, special finders, etc.) and complex validations. For example, instead of checking the exact format pattern (the implementation) behind a validates_format_of, you should feed it actual test strings and check that the validation behaves properly.
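To make that concrete, here is a minimal sketch of behavior-driven validation testing. The User class and its EMAIL_FORMAT regex are hypothetical stand-ins written in plain Ruby (with the stdlib Minitest) so the example is self-contained; in a real Rails model the regex would live in a validates_format_of :email declaration.

```ruby
require "minitest/autorun"

# Hypothetical stand-in for a Rails model. In a real app this would be:
#   validates_format_of :email, :with => EMAIL_FORMAT
class User
  EMAIL_FORMAT = /\A[^@\s]+@[^@\s]+\z/ # implementation detail

  attr_accessor :email

  def initialize(email)
    @email = email
  end

  def valid?
    !!(email =~ EMAIL_FORMAT)
  end
end

# Behavioral tests: we assert on concrete strings, never on the regex
# itself, so the implementation can change without breaking these tests.
class UserEmailTest < Minitest::Test
  def test_accepts_a_plausible_address
    assert User.new("alice@example.com").valid?
  end

  def test_rejects_a_string_with_no_at_sign
    refute User.new("not-an-email").valid?
  end
end
```

If you later tighten or loosen the regex, these tests only fail when the observable behavior changes, which is exactly what you want.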
Things get somewhat more complicated with extremely simple, declarative ActiveRecord statements, such as validates_presence_of validations and basic associations. Statements like these blur the line between implementation and behavior, especially since ActiveRecord tends to use natural English for its class method names.
The Ruby community is hardly of one mind on this issue. Some argue that testing something like has_many or validates_presence_of amounts to testing the Rails framework itself, and should therefore be avoided altogether. I disagree with this position, because at the very least you should test for the existence of these model declarations. What if a section of model code was commented out during refactoring and accidentally left that way? Without a bare minimum of model testing, the problem could take a while to surface.
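An existence check along these lines can be sketched as follows. The Model and Post classes here are toy stand-ins (not the real ActiveRecord API) so the example runs without Rails; in a real test you would ask Post.reflect_on_association(:comments) directly.

```ruby
# Toy stand-in for ActiveRecord's class macros, so the existence check
# runs without a database. Hypothetical classes for illustration only.
class Model
  def self.has_many(name)
    (@associations ||= []) << name
  end

  def self.reflect_on_association(name)
    (@associations || []).include?(name) ? name : nil
  end
end

class Post < Model
  has_many :comments # the declaration under test
end

# Existence test: fails loudly if `has_many :comments` is commented out
# during a refactor and forgotten.
raise "missing has_many :comments" unless Post.reflect_on_association(:comments)
```

The point of the test is simply that the declaration is present; it says nothing about how the association is implemented.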
If we assume, then, that ActiveRecord declarations should be tested, we end up with a large number of extremely repetitive tests. The test code for validates_presence_of validations will generally look the same everywhere. Even though test code is somewhat more “unwound” than application code, such exact repetition begs to be refactored into some sort of macros or custom matchers. This solution creates its own problem, however: tests that use macros end up matching one-to-one with their associated ActiveRecord statements. That makes many coders uncomfortable, because by creating a parallel copy of our model code we appear to be breaking the principle outlined earlier about testing behavior rather than implementation.
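Such a macro might look like the following sketch. Article and should_require are hypothetical names (not a real library API), and the hand-rolled valid? stands in for a validates_presence_of :title declaration so the example runs on plain Ruby with the stdlib Minitest.

```ruby
require "minitest/autorun"

# Toy model: valid? stands in for `validates_presence_of :title`.
class Article
  attr_accessor :title

  def valid?
    !(title.nil? || title.to_s.strip.empty?)
  end
end

class ArticleTest < Minitest::Test
  # A homemade test macro: it generates one presence test per call,
  # giving exactly the one-to-one mirroring of model code described above.
  def self.should_require(attr)
    define_method("test_requires_#{attr}") do
      record = Article.new
      record.public_send("#{attr}=", nil)
      refute record.valid?, "#{attr} should be required"
    end
  end

  should_require :title
end
```

Libraries such as shoulda took this idea further with matchers like should_validate_presence_of, which is where the one-to-one mirroring with model declarations becomes most visible.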
At the end of the day, I think it makes sense to test all ActiveRecord model declarations. Beyond verifying their existence, these tests confirm that the database is connected properly and that the schema actually supports your model code. It’s also theoretically possible that there is a bug in Rails itself. Without proper model testing, such a bug could eventually bubble up to your production application, and I doubt your users will be very understanding when you tell them it’s pointless to test framework code.