I think of Fetch as a reliable FTP client.
For twenty years we've tuned and tweaked its code to handle new situations, and users regularly tell us that Fetch has worked when other clients didn't. Nonetheless, from time to time we have received a particular and troubling sort of user report.
The Quarry
The user would be uploading a big file, or a bunch of files, and somewhere in the process the upload would stall, or fail with an error. It didn't happen every time, or always in the same place, and my colleagues and I could never reproduce it ourselves. Most users would never see this issue — out of the hundreds of thousands of Fetch users, we were only getting two or three reports a month. But we knew that for every user who took the time to contact us there were probably several more who didn't.
Reliable Unreliability
I was itching to go after this problem; writing some code to make a user’s life a little better is the best part about being a programmer. But first I needed to get a good look at it. If I couldn’t reproduce the problem, my attempted solutions would be shots in the dark, and I’d never be sure that I’d actually fixed it. I needed a reliable way to make Fetch behave unreliably.
10,000 Files To Fetch On A Wall
We never saw this problem with the server we used for much of our Fetch testing, which (like our website) is hosted at Pair Networks. For the first time I cursed Pair's excellent reliability. Our support staff started asking users who reported these problems where they were hosting their sites, and I bought accounts at each of those companies. We are now the proud owners of accounts at some of the worst hosting providers around. Since the problem didn't happen all the time, or even most of the time, I wrote scripts to upload 10,000 files to each of our many test accounts, one after another, hoping that the problem would appear (preferably before the Comcast bandwidth police appeared at my door).
Hosting From Hell
After some false alarms I finally hit the jackpot — a hosting service that randomly but reliably failed every time I tried our standard 10,000 file upload. I tried uploading with other FTP clients, and they all failed as well. The best part was that it failed in completely unpredictable ways: sometimes during a transfer, sometimes setting up the transfer, sometimes getting a file list, sometimes deleting files. I never knew how it would break, but I knew that if I tried to upload 10,000 files it was sure to fail in some way. It was a hosting service you wouldn’t wish on your worst enemy, and I was thrilled to find it.
Step by step I refined Fetch’s error handling to keep it going in the face of the demonic server’s errors. Several times I was sure that I’d fixed the last remaining issue, only to have another appear. Our QA engineer, Doug Grinbergs, and I varied our test routine, uploading lots of small files, lots of empty folders and very deep hierarchies of 100,000 folders.
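The post doesn't describe Fetch's actual error-handling code, but the general shape of "keep going in the face of the server's errors" is a retry loop with backoff. Here's an illustrative sketch — the retry budget, delays, and exception handling are assumptions, not Fetch's implementation:

```python
# Illustrative retry wrapper: re-run an operation a few times on
# transient failures so one flaky response doesn't kill a long batch.
import time

def with_retries(operation, attempts=3, delay=1.0):
    """Call `operation`; on a socket-level failure, wait a bit and
    try again, up to `attempts` times, then re-raise the last error."""
    last_error = None
    for attempt in range(attempts):
        try:
            return operation()
        except OSError as exc:  # covers most network/socket failures
            last_error = exc
            time.sleep(delay * (attempt + 1))  # back off a little more each time
    raise last_error
```

The hard part, of course, isn't the loop — it's deciding which of a misbehaving server's many failure modes are safe to retry, which is exactly what all that varied testing was for.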
Sigmas
At last I had a Fetch version that would reliably upload 10,000 files to all of our test servers, including the cursed one. In fact I ran tests until I saw one million straight uploads to that server without an error. Big businesses like Motorola and GE talk about reducing the rate of defects to under 3.4 per million (they call it Six Sigma™). When those million uploads were done I felt that we'd earned our sigmas.
But all this work was based on a hypothesis: that the solution to uploading problems with our cursed test server was also the solution to the more elusive problems seen by some of our users. We started sending a pre-release Fetch version to every user who reported a similar-sounding problem, and asking them to try it. At first we found a few more issues, which we were able to address. And then ... nothing but positive feedback. We've now been distributing these special versions for over a year. In all that time we've yet to come across a user whose random upload problems weren't solved by the new code.
Today we’re releasing Fetch 5.5, and we’ll find out if we can keep that streak going with many, many more users. I can’t wait.
I had that problem just last week… with downloading 1,000 one to five MB files off our company webserver… hosted at a big, supposedly super-reliable server farm in Texas. It’ll be interesting to see if 5.5 can handle it. Thanks.
John,
Let us know how it goes. I should note that the problem we addressed in 5.5 deals with uploads (from the Mac to the server). If you’re seeing a download issue, that may be something for us to address in our next release….
I’ve been using Fetch since 1997.
At first ’cause I didn’t know much better. Later – more out of habit. And these days because it has been good and reliable about 99.9% of the time. I also really like Fetch for the single-pane interface, as opposed to other apps that show a “desktop browser pane” in addition to the server. I already have access to the desktop and am looking at the files I want to upload – I don’t need to see them again.
For that last .1% of the time I end up switching over to [previously Macromedia’s and now] Adobe’s Dreamweaver for some FTP stuff, since I’m already running it to build a site or some web project. I prefer simply grabbing a folder on the desktop and dragging and dropping it into a window [set to a file location on a server] – but the DW solution works in a pinch.
I also tend to do larger file transfers of compressed files [ZIP – SITX – DMG], with just one or a very few files transferred at a time. That is where I have most often had stuff crap out, even with the “Keep Connection Alive” option checked and Passive Mode active.
Didn’t know about the Beta test. Didn’t know about the site re-design – nice by the by. Nice to see life still going strong in the company. And I look forward to the latest.
Regards.
It was Network Solutions, wasn’t it? I haven’t been able to get reliable upload to them with ANY mac FTP client. Miserable host. I don’t have the client who used that host anymore, but if I did I’d give fetch another shot.
I’d rather not say, to protect the guilty :).
We’ve had ‘Fetch’ quit on us every so often over the past year as well. We knew it had something to do with Fetch because we’d give the same task to ‘Transmit’ and it would complete with no errors. It usually happens during laaarge uploads/downloads and that led me to assume some “keep-alive” command in one of the subroutines was failing to do its job, thus allowing the connection to terminate after 1 – 2 hours of activity. Glad to hear the problem’s been taken care of – fingers crossed :-). I sure appreciate all the hard work FetchSoftWorks put into tracking down the issue. Much kudos!