One of my persistent worries is whether a particular technology I am in love with is really a boondoggle. In other words, the technology solves a hard problem in an interesting way, and it is useful because of some abnormal discontinuity in computer system performance, but in reality it is of transitory interest, so spending much time and energy on the problem is of limited value.
Having said that, I thought to myself: well, what is a computer science boondoggle, exactly?
So I came up with the following prerequisites for a boondoggle:
- There must be a free compute resource.
- An existing application can only take advantage of the free compute resource if it is modified.
- The performance gain of the modified application must be significant.
The field goes on a boondoggle when someone decides that the barrier to using the compute resource is the application modification, and so tries to come up with ways to take advantage of said resource transparently. In effect, the allure of this free compute resource, and the challenge of making it usable, drags people down a rabbit hole of trying to make it trivially easy to use!
The boondoggle ends when the compute resources that require no application modification eventually catch up, eliminating the need for special modifications or libraries to get superior performance.
As an example, consider Software Distributed Shared Memory (SDSM). SDSM began life when people observed that in any engineering office there were idle workstations that could be connected together to create a cheap supercomputer. People then observed that, to take advantage of said resource, applications would have to be modified. So some applications were modified, and the performance gains were real. And it could have all ended there.
The SDSM boondoggle took hold when some bright folks realized that modifying every application was going to take too much time. So they decided to invent a rather elegant way to allow applications to run, unmodified, on these networks of workstations. Of course, the problem was that applications assumed uniform memory access, while SDSM provided non-uniform memory access whose non-uniformity was measured in tens of milliseconds. Because of that latency, these unmodified applications performed poorly. It could have all ended right there, but the allure of a transparent way to harness the performance of these idle workstations was so tantalizing that folks spent a significant amount of energy trying to make it work.
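To get a feel for the scale of that non-uniformity, here is a back-of-the-envelope sketch in C. The latency figures are assumptions chosen for illustration (roughly 100 ns for a local memory access, 10 ms for a network page fault), not measurements of any particular SDSM system:

```c
#include <stdio.h>

int main(void) {
    /* All numbers here are illustrative assumptions, not measurements. */
    double local_access_ns = 100.0;  /* assumed local DRAM access time       */
    double remote_fault_ms = 10.0;   /* assumed SDSM network page-fault time */

    /* How many local accesses fit in the time of one remote page fault? */
    double ratio = (remote_fault_ms * 1e6) / local_access_ns;

    printf("One remote SDSM fault costs roughly %.0f local accesses.\n", ratio);
    return 0;
}
```

At a ratio on the order of 100,000 to 1, an unmodified program that faults on remote pages even occasionally ends up spending nearly all of its time waiting on the network.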
They failed.
What did happen was that computers got bigger, faster, and cheaper, making the performance gains less and less interesting given the amount of effort required to achieve them using SDSM.
So SDSM was trying to exploit a discontinuity in computing resources (the relative cost of a collection of workstations versus a supercomputer), and was trying to do it in an interesting way, but in reality it was not of long-term value because of the hardware trends that were in place at the time.