The first piece of advice I can give is to write functional specs. In the linked article, Joel Spolsky explains the motivation and method of writing a functional spec. A functional spec describes how the end-user will use and interact with the program. It is not a technical spec, which explains how the software will work internally.
From my experience, functional specs are fun to write, reveal many problems in the initial design, and make you think about how the software will work on the inside.
Here are three examples of functional specs I wrote:
Functional Spec for a Freelancers Board (with a Biblical theme).
Functional Spec for a better classification of CPAN Modules (with a theme of FOSS world celebrities).
Functional Spec for a Windows package management system (featuring characters from Ozy and Millie).
You should write automated tests, such as unit tests, integration tests and system tests, that run automatically against the code and report whether all of them pass or any of them fail.
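As a minimal sketch of such a test, here is a unit test written with Python's built-in unittest module; the slugify() function under test is a hypothetical example, not something from the specs above:

```python
import unittest


def slugify(title):
    """Convert a post title into a URL-friendly slug (hypothetical example)."""
    return "-".join(title.lower().split())


class TestSlugify(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  Many   spaces  "), "many-spaces")


# Run the suite and report whether all tests pass or any of them fail.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
)
```

The point is not this particular function but the automation: the same command runs every test and gives a single pass/fail answer, which is what makes daily builds possible.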
There are many good practices for automated testing, such as having daily builds or reaching 100% test coverage. Here are some resources to get you started:
The Perl Quality Assurance project, also see their wiki.
Note that having automated tests is not a substitute for having dedicated software testers (and vice versa). Both are necessary for any serious operation.
Most people agree that designing software, planning it and thinking about it is a good idea. Extreme Programming suggests having one design meeting every day. I personally feel that this much design ends up being equivalent to very little of it, but design and planning should still be done.
Refactoring is the process of improving the internal quality of the code (from "a big mess" to "squeaky-clean and modular code") without changing its external behaviour. It is done on code that is mostly functional and bug-free, but sub-optimally written, so that it can be better managed.
"Joel on Software" features two excellent articles about the motivation for refactoring: "Things You Should Never Do, Part I" (why rewriting functional code from scratch is a bad idea), and "Rub a dub dub" (how and why to do refactoring).
There are many other resources for that online, along with many refactoring patterns.
There are many types of refactoring: grand refactoring sessions (what Joel describes), continuous refactoring (refactoring as you go), "just-in-time refactoring" (refactoring just enough to achieve a certain task), etc.
But refactoring is important, makes development faster in the long run (and even in the short-run), and can prevent the code from deteriorating into an ugly, non-functional mess that would be hard to salvage.
There are a few software engineering methods that I find pointless, so I'd like to briefly point them out.
The first is the "huge design up-front" approach, where an "architect", or even the entire team, spends a very long time writing an extremely comprehensive and detailed technical document that specifies how the software will work.
The problem with this approach is that it is a huge waste of time, and that it is impossible to design a large project top-down like that. A better way is to involve the entire team in a good design session (a week long at first) while writing some functional specs, diagrams and other documents, and then to build the software incrementally.
A similar fallacy is the "mountains of documentation" fallacy: superfluous commenting, literate programming, etc. The problem with this approach is that the extra documentation is often redundant if the code is well written and properly factored out [Extract Method].
Some documentation (especially API documentation such as Perl's POD or Doxygen) is good, and you shouldn't feel too guilty about writing a comment for an interesting trick, but too much documentation slows things down, doesn't help with the design process, and may eventually turn out to be misleading or harmful.
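To illustrate the balance, here is a short hypothetical Python function that carries its API documentation in a docstring (the rough analogue of POD or Doxygen) while reserving inline comments for the one genuinely interesting trick:

```python
def gcd(a, b):
    """Return the greatest common divisor of two non-negative integers.

    This docstring is the API documentation: it tells callers what the
    function does and what it expects, without describing every line.
    """
    while b:
        # Euclid's algorithm: the one trick worth a short comment, no more.
        a, b = b, a % b
    return a
```

Anything beyond this, such as a comment restating each assignment, would be exactly the kind of redundant documentation described above.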
Something else I consider a bad idea is over-engineering; its antidote is YAGNI ("You Ain't Gonna Need It"). The basic idea is not to implement too many features at once, or over-complicate the design of the code, and instead to focus on getting the necessary functionality working.
YAGNI aside, I still believe that some forward planning is good.
The code-with-accompanying-documentation problem is a sub-case of a more general fallacy: the belief that a company can simultaneously maintain two different codebases (say, two codebases implementing the same functionality in two different languages) while keeping them both synchronised.
The Extreme Programming experts have warned against this, and it makes sense that it is hard to do: programmers, by nature, lack the discipline to keep maintaining two or more codebases at once.
One possible solution to this problem has been illustrated by Fog Creek Software: they implemented a compiler from a common language to several target languages. Naturally, this compiler also needs to be maintained and extended, but that is less work and less error-prone than maintaining several different codebases.
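A toy illustration of the same idea (this is my own sketch, not Fog Creek's actual compiler): keep one source of truth and generate the per-language artifacts from it, instead of hand-maintaining synchronised copies.

```python
# One authoritative definition of some shared constants (hypothetical values).
CONSTANTS = {"MAX_RETRIES": 5, "TIMEOUT_SECONDS": 30}


def emit_python(constants):
    """Generate the Python version of the constants."""
    return "\n".join(f"{name} = {value}" for name, value in constants.items())


def emit_c(constants):
    """Generate the C version of the same constants."""
    return "\n".join(f"#define {name} {value}" for name, value in constants.items())


print(emit_python(CONSTANTS))
print(emit_c(CONSTANTS))
```

A real cross-language compiler is vastly more involved, but the economics are the same: one generator to maintain instead of N codebases to keep in lock-step by hand.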
[Extract Method] A good example: suppose we have the following pseudo-code:
    function create_employee(params) {
        my emp = Employee.new();
        emp.set_birth_year(params[year]);
        emp.set_experience_years(params[exp_amount]);
        emp.set_education(params[education]);
        ### Calculate the salary:
        emp.set_salary(
            emp.calc_education_factor(emp.get_education())
            * emp.get_experience_years()
        );
        ### More stuff snipped.
        return emp;
    }
Then it would be a good idea to extract the complex emp.set_salary() call into a simple emp.calculate_salary() method. This method extraction will make the intention of the code self-documenting (as the method will have a meaningful name) and much more robust against future changes than adding a comment would be.
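In concrete terms, the extracted version might look like this Python sketch (the Employee class and its fields are assumed from the pseudo-code above, with the education factor simplified to a stored number):

```python
class Employee:
    def __init__(self, experience_years, education_factor):
        self.experience_years = experience_years
        self.education_factor = education_factor
        self.salary = None

    def calculate_salary(self):
        # The once-inline salary formula now lives behind a meaningful name,
        # so create_employee() no longer needs a comment explaining it.
        self.salary = self.education_factor * self.experience_years


emp = Employee(experience_years=4, education_factor=1000)
emp.calculate_salary()
print(emp.salary)  # 4000
```

If the salary formula later grows more complicated, only calculate_salary() changes; every caller keeps reading the same self-explanatory line.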
And this is just a small example.