One of the difficulties of IRI’s data delivery to its customers was that many of the monthly files were very large. In those days most electronic transfers were slow and often unreliable, especially with very large files. So most of our deliveries were on tape cartridges, sent to our customers.
Although this had been working for some years, it was fraught with problems. The tape creation process was finicky, and often had to be restarted. Sometimes delivered tapes couldn’t be read by the customer and we had to create a new set, resulting in an additional 2-day delivery delay.
In particular, we had very large monthly deliveries to our offices overseas — the U.K., Italy, France, etc. These had become especially problematic. No one understood why, but most of the time one or more tapes in a delivery set couldn’t be read. They suspected airport x-ray screenings, so they actually started having someone hand-carry them to Europe! However, even this didn’t solve the problem.
Much effort had been spent on the mainframe side (where the tapes were generated), yet they could not determine the cause of the problems. So they next tried using FTP to move the files electronically. However, in those days, FTP had file-size restrictions, and it was also unreliable and non-recoverable — meaning that if the transfer failed two hours into a three-hour transfer, it had to be restarted from the beginning.
I was approached to see if I could help. After some thought, I proposed a software solution that I would engineer and program with the help of one of my staff. There was some skepticism (especially from the mainframe folks), but I was given the green light to try.
I had three main objectives: (1) transfer any size file with 100% reliability, (2) maintain security (data lines were not very secure, and much of the data was considered proprietary), and (3) engineer a solution that our clients’ IT departments would be willing to install on their systems.
The third of those requirements steered me towards developing the entire system using only Unix scripting rather than any compiled language, and in a way that did not require special system privileges. As an IT manager, I knew that would be the only way to overcome any client objections.
I designed a system based loosely on package deliveries by the likes of FedEx. I defined each delivery as a package, with an accompanying manifest. Much as a large shipment of goods is divided into smaller packages, my large files would be split into “chunks”, each of which would be sent separately and re-assembled at the receiving end based on the manifest. For reliability, if any chunk failed to deliver (verified by a mathematical checksum), the system would proceed with the other chunks and only later re-deliver the chunks that were missed. That way, an unreliable line would never force a restart; it would simply require one or two small chunks of the delivery to be resent.
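The split/manifest/checksum idea can be sketched with nothing but standard Unix tools, in the spirit of the scripting-only constraint described above. This is not the actual EDT code — the file names, chunk size, and layout here are hypothetical — just a minimal illustration of how a manifest of checksums lets the receiver verify each chunk independently and request only the failed ones:

```shell
#!/bin/sh
# Sketch of the chunk-and-manifest scheme using standard Unix tools.
# Hypothetical names: payload "bigfile.dat", chunk prefix "chunk.".
set -e

seq 1 50000 > bigfile.dat                 # stand-in for a large monthly file

# --- Sender side: split into chunks and write a manifest of checksums ---
split -b 64K bigfile.dat chunk.           # break into 64 KB chunks
cksum chunk.* > MANIFEST                  # one "checksum size name" line per chunk

# --- Receiver side: verify each chunk against the manifest ---
# A chunk whose checksum does not match is queued for re-delivery;
# good chunks are kept, so a bad line never forces a full restart.
: > RESEND
while read sum size name; do
    actual=$(cksum "$name" | awk '{print $1}')
    if [ "$actual" != "$sum" ]; then
        echo "$name" >> RESEND            # request only this chunk again
    fi
done < MANIFEST

# --- Once every chunk verifies, reassemble in manifest order ---
if [ ! -s RESEND ]; then
    awk '{print $3}' MANIFEST | xargs cat > bigfile.dat.restored
fi
```

Because `split` names the chunks in sorted order, the manifest doubles as the reassembly order; in a real delivery the manifest would travel with the package so the receiver knows exactly what to expect.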
Obviously there was a lot more to this, but that’s the nutshell version. The EDT system (as I called it) became quite a success story. Clients were thrilled because it cut a day or more off their delivery times; the European offices saw a 4-5 day reduction. Even more importantly, once in place there were zero errors!
I’ve attached a number of email images, and a full slide show that I presented live to Frito-Lay in Plano, Texas.

The following are clips from emails (bragging rights 🙂):




