Our Software Development Process
Given the multiple and shifting demands on our time, we try to adhere to agile software development methodologies as much as possible, drawing on elements of both Extreme Programming and Scrum. These are documentation-light methodologies that favor spending effort on getting the right software written over generating lots of intermediate documentation. This approach has been key to our productivity as an organization.
We have used Subversion for our source control needs for about 6 years. Recently, the advantages of a Distributed Version Control System have become clear, so we are in the process of migrating our code from Subversion to Mercurial, one of the best-of-breed and most customizable DVCS packages available.
Bug and Time Tracking
We used TRAC to keep track of bugs and as a wiki for collaborative documents for 5 years, but it lacked time-tracking capabilities. About a year ago, in response to the need for better accounting of our team's efforts, we switched to FogBugz, a hosted service. This provided the time tracking we needed and allowed us to focus more on the software our customers needed, rather than on our internal tools.
When looking for solutions, we lean toward Commercial Open Source when it's available. This approach allows us to get a lot accomplished even with minimal resources, to fix any bugs we find, and to customize the apps to our unique institutional needs. For example, Gato, TRACS, our Event Calendar, and our CRM software (OTRS) have all been based on this strategy, and have been implemented by a combined team that has never included more than 6 programmers.
We tend to be fairly language-agnostic, and will happily work in whatever language the best tool for a job is written in. When developing from scratch, we use Perl, Python, Ruby, PHP, and Java, according to their various strengths. We have considered consolidating on a single language, but decided that "the right tool for the job" worked better for our team, especially given the breadth of projects we work on.
We work as closely with the customer as possible. When a single person is asking for an application or a feature, we have the programmer sit down with that person directly to make sure they understand the requirements. When that's impossible, as with a large system like TRACS or Gato, we rely on the support organization to act as a proxy for the user, telling us how things should work according to the needs of the system's users. In either case, we record the requirement in a ticket, with as much information as is needed to communicate it.
Design and Estimation
For small tasks that one person can manage, both design and estimation are often left to the individual programmer. Anything larger is discussed by the programming team in our "Planning Poker" sessions. (In Planning Poker, every programmer estimates a task independently, then the group discusses the estimates and implementation details until it reaches a consensus estimate.) We've found these sessions to be terrific tools for our team: they harness "the wisdom of crowds" for our estimates, and provide an excellent forum for technical design discussions.
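The mechanics of a Planning Poker round can be sketched in a few lines. This is a simplified illustration, not our actual tooling; the spread threshold and the estimates are hypothetical.

```python
# A minimal sketch of one Planning Poker round. Each programmer
# estimates independently; if the estimates diverge widely, the
# outliers explain their reasoning and everyone re-estimates.

def needs_discussion(estimates, spread_factor=2.0):
    """Return True when the highest estimate is more than
    `spread_factor` times the lowest, a sign of hidden assumptions."""
    return max(estimates) > spread_factor * min(estimates)

# Round 1: hours estimated by four programmers for one ticket.
round_one = [4, 6, 16, 5]
assert needs_discussion(round_one)  # 16 vs. 4: talk it through

# After discussion (the "16" knew about work the others had missed),
# the estimates converge and the consensus becomes the ticket's estimate.
round_two = [10, 12, 12, 10]
assert not needs_discussion(round_two)
consensus = round(sum(round_two) / len(round_two))
print(consensus)  # 11
```

The value of the technique is in the discussion the divergence triggers, not in the arithmetic; the averaging at the end is only a tiebreaker once the team already agrees.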
Prioritization and Release Planning
We provide updated releases for each of our applications on a regular schedule (usually monthly). To determine what goes into a release, we sit down with the support team and show them our list of tickets. (Ideally the tickets have estimates from our Planning Poker sessions, but if a particular ticket hasn't yet been estimated, we can provide an estimate on the fly.) The support team balances the time each ticket will take against its importance, and selects enough tickets to fill the development time we have available for the release.
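The selection step above amounts to filling a time budget with the most important work first. A greedy sketch, with hypothetical ticket names, priorities, and hours:

```python
# A sketch of the release-planning step: pick the highest-priority
# tickets that fit in the available development time.

def plan_release(tickets, hours_available):
    """tickets: list of (name, estimated_hours, priority) tuples,
    where a higher priority number means more important.
    Returns the names of the tickets selected for the release."""
    chosen, remaining = [], hours_available
    for name, hours, _priority in sorted(tickets, key=lambda t: -t[2]):
        if hours <= remaining:
            chosen.append(name)
            remaining -= hours
    return chosen

tickets = [
    ("fix login bug",    8, 5),
    ("new report page", 24, 4),
    ("calendar export", 16, 3),
    ("UI polish",        6, 2),
]
print(plan_release(tickets, hours_available=40))
# -> ['fix login bug', 'new report page', 'UI polish']
```

In practice the support team makes these trade-offs by judgment rather than by algorithm; the sketch just shows why estimates matter, since without them there is no way to know when the budget is full.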
Once the scope of the release has been determined, our programmers do the actual implementation work, tracking their time as they go so we can see how each estimate compared to reality. (FogBugz later uses this history to adjust its timelines to each programmer's individual estimation record.) If the team runs out of time before all of the features are implemented, we postpone the lowest-priority items to the next release. If we're able to finish the chosen features early, then either the programmers or support folks can select additional tasks to include in the release.
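The idea behind adjusting timelines from estimation history can be shown in miniature. FogBugz's actual approach (Evidence-Based Scheduling) is Monte-Carlo based and more sophisticated; this sketch, with hypothetical numbers, just applies a programmer's historical estimate-to-actual ratio.

```python
# A simplified illustration of correcting future estimates using a
# programmer's track record of estimated vs. actual hours.

def velocity(history):
    """history: list of (estimated_hours, actual_hours) pairs.
    Returns the mean ratio of actual to estimated time."""
    return sum(actual / est for est, actual in history) / len(history)

# A hypothetical programmer who consistently underestimates by 25%.
history = [(8, 10), (4, 5), (16, 20)]
v = velocity(history)        # 1.25
print(round(8 * v, 1))       # an 8-hour estimate likely means 10.0 hours
```

Tracking time against tickets is what makes this possible: each completed ticket adds one data point, and the correction gets better as the history grows.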
A week before we publish the changes we've been working on, we declare a "code freeze". During that week, we test heavily, running regressions, verifying look and feel, and doing whatever else is required by our test plan. If we find problems, we fix them with the minimum amount of change possible. In order to minimize risk, we do not make any other changes to the code during this period.
Our goal for more-agile verification is to move increasingly toward automated testing, so that we can eventually do away with the code freeze and keep working on features right up until deployment, relying on our automated tests to quickly catch any errors we introduce. This is a substantial effort, however, and we're still some way from reaching it.
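This is the kind of automated check we mean: a regression test that runs on every change, so a bug we would otherwise catch by hand during code freeze is caught in minutes instead. The function under test here is a hypothetical stand-in for real application code, not part of any of our systems.

```python
# A minimal automated regression test using Python's standard
# unittest module. The function is a made-up example.

import unittest

def format_event_date(year, month, day):
    """Hypothetical app function: render a calendar date for display."""
    return f"{year:04d}-{month:02d}-{day:02d}"

class TestEventFormatting(unittest.TestCase):
    def test_pads_single_digits(self):
        # Regression guard: single-digit months and days must be
        # zero-padded, or date sorting breaks in the UI.
        self.assertEqual(format_event_date(2010, 3, 7), "2010-03-07")

if __name__ == "__main__":
    unittest.main()
```

A suite of such tests, run automatically on every commit, is what would let feature work continue safely right up to deployment.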
Once our testing is complete, we publish our changes and verify that they work properly in production.
After this, the process starts over with another round of Prioritization and Release Planning. (Requirements Gathering and Design and Estimation are continuous tasks.)
By using agile methodologies and open source software, and by carefully hiring skilled programmers, we're able to get an enormous amount accomplished with a small team while remaining responsive to customer needs.