I hope you're looking at metrics on your team's code just about as often as you look at those numbers. One tool that's helped me do that very efficiently is the Code Metrics Viewer extension for Visual Studio.
So what kind of metrics does this tool collect for you? As the author of the tool puts it, he endeavors to measure "evolvability of a software system, which is an indicator of the inner quality of software". Seems like it would be a good idea to keep an eye on that! Code Metrics Viewer makes this visible through five categories, which I'll briefly explain (technical explanations are available through the links):
- Maintainability Index - In my opinion, this is the most important metric per class. It is a weighted calculation of the other metrics listed below; consider it the "overall" rating. A score of 20 indicates reasonably maintainable code, but I suggest you look for at least double that, if not more. The last two major projects I ran (both multi-tenant web applications) averaged a score of 79 across all assemblies.
- Cyclomatic Complexity - The number of possible paths through the code, generally determined by logic blocks (IF .. THEN; FOR EACH; DO .. WHILE; SWITCH .. CASE). To me, this is a close second in importance; think of it as the number of test cases (read: team cycles) that might be required to properly cover all the possible outcomes of the module. 1 is the lowest score; keep this number as low as your business needs allow.
- Class Coupling - A count of how many other classes a given class depends on. Look for a score below 25.
- Depth of Inheritance - How many levels of base classes sit above this class in its inheritance hierarchy? The deeper the chain, the tougher it can be to follow the flow of the program when modifying or debugging. A score around 3 is good; above 6 is a warning.
- Lines of Code - Pretty simple: how many lines of code are in the class.
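To make the two headline metrics above concrete, here is a small sketch in Python (for brevity; the tool itself targets .NET). The first function implements the Maintainability Index formula that Visual Studio documents, rescaled to 0-100; the second is a purely hypothetical method used to illustrate how decision points drive cyclomatic complexity.

```python
import math

def maintainability_index(halstead_volume, cyclomatic_complexity, lines_of_code):
    # Visual Studio's documented Maintainability Index, rescaled to 0-100.
    # halstead_volume is a size/vocabulary measure the tool computes for you.
    raw = (171
           - 5.2 * math.log(halstead_volume)
           - 0.23 * cyclomatic_complexity
           - 16.2 * math.log(lines_of_code))
    return max(0.0, raw * 100 / 171)

def shipping_cost(weight, express, international):
    # Three decision points (if, elif, if) -> cyclomatic complexity of 4,
    # i.e. four test cases to cover the independent paths.
    if weight <= 0:
        raise ValueError("weight must be positive")
    elif international:
        cost = 25.0 + 2.0 * weight
    else:
        cost = 5.0 + 1.0 * weight
    if express:
        cost *= 2
    return cost
```

Plugging hypothetical numbers into the formula (a Halstead volume of 1000, complexity of 10, 100 lines) yields an index around 34 - well into warning territory.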
If you don't have a metrics tool for your .NET projects, I suggest giving this one a try.
I ran into an odd situation while writing this article that I need to investigate further. I ran Code Metrics Viewer on a small Solution that builds an .exe file. Even though the three main classes, which made up 80% of the codebase, averaged only a 56 Maintainability Index rating, the Solution received an 80. I think this is because a few very small classes and enums scored 90-100. My initial assumption is that the overall Maintainability Index is a simple average of all the class-level ratings, not a weighted calculation like the class-level ratings themselves. In my case this was confusing: most of my major classes showed the yellow warning signals, yet the assembly as a whole still received a high rating. I will try to clarify this with the author.
UPDATE (11/27/2012): I had some extra time off over Thanksgiving weekend and was finally able to reach out to the author, Matthias, with my question. Matthias was very helpful; he confirmed my thought that the Overall Maintainability Index is in fact an average of the Maintainability Index ratings for each class, and shared the following information:
"... the formula to calculate the maintainability index for the module ... is correct ... but again, I would not give that much into that single value. The maintainability index is of course a magic number, which demands some quality feeling when working with it ... Whenever I do reviews based on metrics I use the maintainability index as some kind of pre-filter criterion to unveil the hot-spots, but afterwards I take a look at all numbers (even LoC) to detect code smell ... I am not sure if the maintainability index is good enough to express how clean and evolvable a software system is." He also said he has started work on a similar plugin for Visual Studio 2012, but ran into an impediment; the Professional version now includes a very similar feature.
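Matthias's answer is easy to sketch with hypothetical scores loosely mirroring my solution: under a plain unweighted mean, a handful of tiny, high-scoring types pulls the module score well above the rating of the classes that actually dominate the codebase.

```python
# Hypothetical class-level Maintainability Index scores: three large
# classes averaging ~56, plus a few tiny classes and enums at 90-100.
main_classes = [55, 56, 57]
small_types = [95, 98, 100, 92]

scores = main_classes + small_types
overall = sum(scores) / len(scores)  # unweighted mean across all classes
print(round(overall))  # ~79: the tiny types mask the yellow-flagged classes
```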
My thanks to Matthias for his time in answering my questions, providing some guidance on the tool, and for building and sharing it in the first place.