copyright 2000, Rex Ballard
Almost since my earliest days working with Unix at Computer Consoles, I had been fascinated by the ability to strap together multiple smaller computers to create something far greater than the sum of its parts. The early CCI clusters were 8 PDP-11 processors interfaced to cages containing as many as 24 8085 processors.
By the time I left CCI, I had been working with clusters of up to 1,000 processors for the British Telecom project. We used System Interface Controllers to interconnect clusters of clusters, and used message content to route updates to all of the appropriate clusters and to read from any appropriate cluster. The clusters were also spread across several locations.
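Content-based routing of the kind described here can be sketched in a few lines. This is a generic illustration, not the actual CCI/British Telecom design; the routing key, cluster names, and node names are all hypothetical.

```python
# Hypothetical sketch of content-based routing: updates fan out to every
# node in the owning cluster, while a read can be served by any one node.
CLUSTER_MAP = {
    "london": ["node-l1", "node-l2"],
    "leeds": ["node-d1"],
}

def route_update(message):
    """Return (node, message) pairs so the update reaches every replica."""
    region = message["region"]  # the routing key, extracted from content
    return [(node, message) for node in CLUSTER_MAP[region]]

def route_read(message):
    """Any node in the owning cluster can answer; pick the first."""
    region = message["region"]
    return CLUSTER_MAP[region][0]

update = {"region": "london", "op": "update", "record": 42}
print(route_update(update))  # fans out to both London nodes
print(route_read(update))    # a single node serves the read
```

The essential idea is that the router inspects the message itself, rather than a fixed destination address, to decide which cluster (or clusters) should see it.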
When NASA started using Beowulf clusters, I was already very familiar with the technology. I had used remote shells to schedule work across systems, several methods to interconnect pipelines, and firewalls to protect less secure nodes. For me, the challenge was finding ways to make clusters, even very large ones, practical in real-world commercial applications.
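Scheduling work across nodes with remote shells amounts to little more than distributing commands round-robin. The sketch below only builds the command lines rather than executing them; the node names, jobs, and use of `rsh` are illustrative assumptions, not a record of any actual setup.

```python
# Hypothetical sketch of remote-shell job scheduling: assign each job to
# the next node in rotation and build the rsh command that would run it.
import itertools

NODES = ["node01", "node02", "node03"]

def schedule(jobs):
    """Round-robin jobs onto nodes, returning one rsh command per job."""
    # itertools.cycle repeats the node list; zip stops when jobs run out.
    return [f"rsh {node} {job}" for node, job in zip(itertools.cycle(NODES), jobs)]

for cmd in schedule(["render frame1", "render frame2", "render frame3", "render frame4"]):
    print(cmd)
```

With four jobs and three nodes, the fourth job wraps back to the first node, which is the whole scheduling policy in this simple form.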
In 2006, I got my chance to make this dream a reality with a project for a company called ViewPointe. IBM was a partner in the venture, along with six of the largest banks in the United States.
Soon after that project, I began working on a number of projects for companies that wanted to use Service Oriented Architecture (SOA) to interface with other businesses. For example, a bank might interface with Salesforce for lead management, with another company for credit checks, and with a third for document archival and retrieval.
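The integration pattern behind this kind of SOA work can be sketched as a set of small service interfaces, with the bank's own code depending only on the contracts, never on a particular vendor. Everything here is a hypothetical illustration: the class names, the stubbed responses, and the idea that Salesforce sits behind a `LeadService` interface are my assumptions, not a description of any real system.

```python
# Hypothetical sketch: each external provider is wrapped in a service
# interface so the bank's code stays vendor-neutral.
from abc import ABC, abstractmethod

class LeadService(ABC):
    @abstractmethod
    def record_lead(self, name: str) -> str: ...

class CreditService(ABC):
    @abstractmethod
    def credit_score(self, customer_id: str) -> int: ...

class SalesforceLeads(LeadService):
    def record_lead(self, name: str) -> str:
        # In production this would call the vendor's web service;
        # here it just returns a token standing in for the response.
        return f"lead:{name}"

class StubCredit(CreditService):
    def credit_score(self, customer_id: str) -> int:
        return 700  # canned value standing in for the remote call

class BankFacade:
    """The bank's code talks only to the interfaces, not the vendors."""
    def __init__(self, leads: LeadService, credit: CreditService):
        self.leads = leads
        self.credit = credit

    def onboard(self, name: str, customer_id: str):
        return self.leads.record_lead(name), self.credit.credit_score(customer_id)

bank = BankFacade(SalesforceLeads(), StubCredit())
print(bank.onboard("Alice", "C-1"))
```

Swapping a provider then means writing one new adapter class, while the facade and everything behind it stays untouched.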