Config Editor
This is one of several stories about cool stuff I’ve done. See the Portfolio Intro post for more info.
This project started out as a simple configuration file editor, but turned into a mini IDE, doing both less and more than originally envisioned. Along the way, lessons were learned about waterfall vs. iterative development, and special guest appearances were made by the Golden Hammer problem and the 80/20 rule.
In essence, what the company did was receive text documents from our various customers and process them to extract specific bits of information. Some of the processing was standard across all customers, but there were custom options and processing rules that the customers could specify. Processing rules would also have to be added as idiosyncrasies in the customer’s data were discovered. These modifications constituted a large portion of the work handled by one of the teams.
This processing team was composed of subject matter experts (SMEs) and engineers. The SMEs would work with our customers to identify changes and customizations to the way we processed their data. The SME would document the request and hand it off to an engineer to implement. The engineer would interpret the request and make changes to the processing configuration, then hand it back to the SME to review. Each handoff involved a delay while waiting for the other person to become available.
To speed this process up, they came up with the idea of developing some sort of web interface that would allow the SME to safely modify the configuration themselves. They took this idea to a separate application engineering team, and proposed it as a “config file editor”. The applications team went at it with a standard waterfall approach: They gathered requirements, designed interfaces, and created mock-ups. They were Java web developers, so they focused on the UI challenge. They thought the hard part would be coming up with forms to enter the config specifications; actually modifying the config files would be a fairly straightforward operation, and it was left as an implementation detail.
I had started out on the applications team, but switched to the processing team at about this point. For administrative reasons, the project came along with me. I had been involved in this initial phase, but had doubts about the approach. So I took the opportunity to “reboot” the project. Rather than implement all the forms first, I decided to take a more iterative approach: Get one or two common cases working end-to-end, then build out the rest.
The first thing I learned was that there was more going on on the back end than we thought. The config files were actually managed in a version control system (Subversion), so I had to be able to check out and commit workspaces. I had to deal with multiple workspaces, as the SME would normally be working on several issues at once. Why? It took a long time for the tests to run. What tests? Ah, that’s another part that had been overlooked. Once changes were made to a configuration, we had to create a test data set, process it, and report on the results. The processing engineers already had separate tools for each of these stages, but they weren’t connected up in a way that the SMEs could use.
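To make that back-end work a little more concrete, here's a rough sketch of the kind of Subversion workspace handling involved. The repository URL, paths, and subroutine names are invented for illustration; this isn't the production code.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Illustrative only: the URL and paths are made up.
my $SVN_URL   = 'svn://repo.example.com/processing/config';
my $WORK_ROOT = '/var/workspaces';

# Check out a fresh workspace for an issue.
sub checkout_workspace {
    my ($issue_id) = @_;
    my $dir = "$WORK_ROOT/$issue_id";
    system('svn', 'checkout', $SVN_URL, $dir) == 0
        or die "svn checkout failed for $issue_id: $?";
    return $dir;
}

# Commit the SME's changes once the test results look good.
sub commit_workspace {
    my ($dir, $message) = @_;
    system('svn', 'commit', '-m', $message, $dir) == 0
        or die "svn commit failed in $dir: $?";
}
```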
All of this back-end work was happening on a Linux server, and the existing tools were all shell and Perl scripts. This application was shaping up to be a lot of script invocation and file system interaction. Editing the config files themselves would be a lot of text parsing and regular-expression work. This project had originally been specified as a Java webapp, but that was making less and less sense. You can do all of that in Java, but it’s really awkward. Add to that the fact that the engineers on the processing team, who would have to maintain it, were almost all Perl developers. So I implemented it as a simple Perl CGI script.
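To give a feel for what that looked like, here's a stripped-down sketch of the CGI flavor of the app. The parameter names, script path, and config format are made up for illustration, and a real handler would validate its inputs.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use CGI;

my $q = CGI->new;
print $q->header('text/html');

# Illustrative parameters; a real version would validate these.
my $workspace = $q->param('workspace') // '';
my $rule      = $q->param('rule')      // '';

# Most of the work was invoking the existing shell/Perl tools...
my $output = `/opt/processing/bin/run_tests.sh $workspace 2>&1`;

# ...and doing regex-driven edits to text config files.
my $config = "$workspace/customer.conf";
if ($rule) {
    open my $in, '<', $config or die "Can't read $config: $!";
    my @lines = <$in>;
    close $in;
    s/^rule\s*=.*/rule = $rule/ for @lines;   # rewrite an existing rule line
    open my $out, '>', $config or die "Can't write $config: $!";
    print {$out} @lines;
    close $out;
}

print $q->escapeHTML($output);
```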
Working closely with the lead SME, I got the first usable version up in about a month. Even in that release, some significant things had changed. The workspaces turned out to be too large to check out clean for each issue, so I had to manage a pool of them. More and more error conditions and edge cases appeared and had to be handled or designed around. Over the next six months or so, we got the rest of the SMEs using it, and I gradually added in more functionality: More configuration editors, svn conflict detection and resolution, closer integration with the tools for creating test data sets and analyzing results, and more. In the end, only a handful of the original editing features were implemented. The others were too rare to justify the development time. A key part of the design was that it was always possible for the engineers to go in “under the hood” to edit the configurations, so I never had to implement a feature that wasn’t justified in cost/benefit terms.
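The svn conflict detection, for instance, boiled down to something like the following (a simplified sketch, not the actual code): parse the output of `svn status` and flag anything Subversion marks as conflicted.

```perl
# Simplified sketch: report files that `svn status` marks
# as conflicted ('C' in the first status column).
sub conflicted_files {
    my ($workspace) = @_;
    my @conflicts;
    open my $st, '-|', 'svn', 'status', $workspace
        or die "Can't run svn status: $!";
    while (my $line = <$st>) {
        push @conflicts, $1 if $line =~ /^C\s+(\S.*)$/;
    }
    close $st;
    return @conflicts;
}
```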
The app has been in regular daily use for over ten years as of this writing. The processing team engineers report that it’s been easy to extend and virtually trouble free. (One went so far as to describe the code as “beautiful.”) The same SMEs are handling a greater number of change requests with quicker turn-around, and they’ve been able to reduce the full-time engineers from four to one, freeing up the others to work on new features for the processing engine. So, pretty big win there.