Code quality sometimes seems like an inherently subjective term - you like OOP, I like procedural; you prefer CamelCase and I like delimited_identifiers. Explicit self is ugly; explicit self is explicit and therefore Pythonic. And some areas of code quality are even harder to quantify - what makes an API elegant? How do you measure Pythonic-ness? OK, that's not even a word - but I want to issue this disclaimer at the outset: high quality code will continue to be a matter of opinion. Low quality code - well, that we can measure.
One more quick disclaimer here - why do we care about code quality? Let's face it - there are two reasons that we need to improve the quality of our code. The first reason is that I suck. It's true! Sometimes I write really poor code; usually, even. There are lots of reasons; excuses really - maybe I'm exploring the problem space. Maybe I thought it would be a one-off script. Maybe I'm new to the language or the library. Maybe I'm under time pressure so I just want it to work. Maybe I just don't care.
The second reason is that you suck too. In fact the only code worse than my code is other people's code - and I'm confident that you all could make that statement yourselves.
Seriously though - every working programmer at some point deals with maintenance, with bugfixing, with legacy code, and has to start refactoring. I can't recommend Martin Fowler's book Refactoring strongly enough - it isn't just for Java programmers, and it will help you think practically about how to get from "here" (a large code base of varying quality) to "there" (better quality code that's easier to maintain, bugfix, etc.).
So let's look at a couple of tools for finding crappy code in your Python projects. Just for fun I ran these on the current Django trunk to see if I could find any dusty corners.
First up is a tool called clonedigger. Clonedigger is really cool and does exactly what it says - it looks for clones or regions of similar code. Often these are evidence of copy-n-paste style programming and should generally be refactored. DRY!
Clonedigger installs via easy_install; running it on the Django trunk took over an hour and produced an HTML report. It found 1323 clones and reports that 6,143 out of 50,782 lines (12%) are duplicates, but most of these were legitimate duplication - locale files, for instance.
You can see an example of the output here. Basically clonedigger has detected that the classes defining the DateInput, DateTimeInput, and TimeInput widgets are 18 lines of code apiece but differ only in the class name passed to super. Introducing a common parent class, or having two of the three subclass the third, would eliminate the duplication (cutting 36 lines) and, more importantly, make it clear that all three widgets currently function in exactly the same way - something that isn't instantly obvious when you scan the code.
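To make the idea concrete, here's a sketch of that common-parent refactoring. These are hypothetical, simplified stand-ins - the class names echo the Django widgets, but the base Widget, the format strings, and the render signature are all invented for illustration, not the real django.forms code:

```python
class Widget:
    """Minimal stand-in for a form widget base class."""
    def render(self, name, value):
        return '<input name="%s" value="%s">' % (name, value)

# Before the refactoring: three near-identical 18-line classes.
# After: one parent holds the shared logic, subclasses keep only the data.

class DateTimeBaseInput(Widget):
    format = '%Y-%m-%d'  # overridden by each subclass

    def render(self, name, value):
        try:
            value = value.strftime(self.format)
        except AttributeError:
            pass  # already a string (or empty) - render it as-is
        return super().render(name, value)

class DateInput(DateTimeBaseInput):
    format = '%Y-%m-%d'

class DateTimeInput(DateTimeBaseInput):
    format = '%Y-%m-%d %H:%M:%S'

class TimeInput(DateTimeBaseInput):
    format = '%H:%M:%S'
```

The duplication disappears, and the fact that all three widgets behave identically apart from their format string is now stated once, in one place, instead of being something you have to notice by eye.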
There might be good reasons not to introduce another class into the hierarchy, and arguably you shouldn't have DateTime subclass Date (or vice versa). Similarly, the clones found in django/db/models/fields/__init__.py might best be left alone (should IntegerField subclass FloatField, or the other way around?). Clonedigger did find some repetition in the generic views, however, and if you're interested you can download the whole output (137k gzipped) here.
OK - the Django codebase certainly has less duplication than the stuff I produce, but clonedigger has found some nice areas needing refactoring in my own code. I've also used another tool to find different sorts of problems to good effect, so let's take a look at cyclomatic complexity in Django.
Cyclomatic complexity is basically a measure of how complicated a unit of code is - it counts the independent paths through the code to produce a single score. Obviously a function with a very high cyclomatic complexity score (say 100) needs to be refactored: it's doing too much for you to get your head around, it can't be unit tested, and it can't safely be changed. The necessary refactoring might simply be to extract a number of methods or functions, but I frequently find that an area of high cyclomatic complexity indicates a problem whose whole approach needs rethinking.
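To give a rough feel for how such a score is computed, here's a minimal sketch of a complexity counter built on Python's ast module. It uses the common approximation - start at 1 and add a point for each branching construct - and is deliberately simplistic; real tools handle many more cases:

```python
import ast

class ComplexityCounter(ast.NodeVisitor):
    """Approximate cyclomatic complexity: 1 + one point per branch point."""
    def __init__(self):
        self.score = 1  # one path through straight-line code

    def generic_visit(self, node):
        if isinstance(node, (ast.If, ast.For, ast.While,
                             ast.IfExp, ast.ExceptHandler)):
            self.score += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b and c' short-circuits: n-1 extra branch points
            self.score += len(node.values) - 1
        super().generic_visit(node)  # recurse into child nodes

def cyclomatic_complexity(source):
    counter = ComplexityCounter()
    counter.visit(ast.parse(source))
    return counter.score

example = """
def classify(n):
    if n < 0:
        return 'negative'
    elif n == 0:
        return 'zero'
    for d in (2, 3, 5):
        if n % d == 0 and n != d:
            return 'composite'
    return 'other'
"""
print(cyclomatic_complexity(example))  # prints 6
```

The if, elif, for, inner if, and the short-circuiting `and` each add a path, giving 1 + 5 = 6. A function scoring 100 on this metric has a hundred such paths tangled together - which is why you can't hold it in your head or test it exhaustively.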
I've used David Stanek's tool pygenie to scan Python code and report on cyclomatic complexity - it isn't formally released, but you can check it out of svn and use it to scan your own source.
Running it on the Django trunk produces a text report listing any functions with a CC score above the recommended ceiling of 7. Django is actually outstandingly well written by this measure - the high scores are in some dense thickets of third-party code. The doctest and pure-Python Decimal implementations have functions with scores in the 20s, the 30s, and one of 52!
The highest scores in code that originated in the Django project seem to live in the utils module. The normalize function in django/utils/regex_helper.py is pretty scary, with a CC score of 25! To be fair, it's reversing regex patterns - a legitimately complicated task that may be written the way it is for performance reasons.
A more likely candidate for refactoring by mere mortals is the truncate_html_words function in django/utils/text.py - although with a CC score of only 14, and given that it uses regexes to parse HTML and close any open tags in the truncated portion, it too is legitimately complicated. The _html_output method of the BaseForm class could probably be tackled safely, but even it doesn't look too bad.
Pygenie is actually a more useful tool than this demonstration shows - on my own codebases it picked up some unmaintained messes of procedural code that were complex only because I was lazy. You can look at the report for the rest of the Django code base, but I encourage you to run this tool on your own code - it's fast, and you ignore it at your peril.
Code quality is subjective. It's possible to have crappy code with low duplication and low cyclomatic complexity. But removing duplication from your code and making sure that the codebase stays in discrete (testable) chunks definitely helps.