Myth: Cloud computing is too constrained by network and storage bottlenecks to be useful.
Well, yes and no. The answer depends entirely on the scale and scope of what you have in the cloud, how I/O-bound it is, how scalable the processes involved are, and how you implement all of it.
I/O into and out of the cloud is still, by and large, the main choke point, and it will remain that way until we all have terabit-fiber-to-the-curb connections -- which at this rate will arrive shortly after Godot finally shows up. In the meantime, the smart thing to do is to build your cloud architecture -- its front and back ends -- so that you move only the data that needs to be moved. Failing that, you can move only the work that needs to be done on the data, which, while computationally and programmatically more challenging, is considerably more flexible than moving all the data.
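The "move the work, not the data" idea can be sketched in a few lines. This is a toy illustration, not any particular cloud API: the in-memory `RECORDS` list stands in for remote storage, and `run_remote` is a hypothetical stand-in for whatever RPC or job-submission mechanism your platform actually provides.

```python
# Toy contrast between moving the data and moving the work.
# RECORDS stands in for a dataset living in the cloud; run_remote is a
# hypothetical stand-in for a real remote-execution mechanism.

RECORDS = [{"id": i, "size_mb": i % 500} for i in range(10_000)]

def fetch_all():
    """Naive approach: move all the data to the client, filter locally."""
    return list(RECORDS)  # simulates a 10,000-record transfer

def run_remote(predicate):
    """Move the work instead: evaluate the predicate where the data
    lives, and transfer only the matching records."""
    return [r for r in RECORDS if predicate(r)]

# Client-side filtering moves everything; server-side moves a fraction.
local = [r for r in fetch_all() if r["size_mb"] > 495]
remote = run_remote(lambda r: r["size_mb"] > 495)
assert local == remote
print(len(RECORDS), "records stored;", len(remote), "transferred")
```

Both paths produce the same answer; the difference is that one of them drags ten thousand records across the wire to keep eighty of them.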
One side of this issue that hasn't been discussed as much is how network speeds constrain the back end as well as the client-to-server connection. Bandwidth on the back end (and the backbone) isn't as limited as it is to the client, but data volumes are growing faster than network speeds -- we're talking petabytes of data that may need to be synchronized among multiple hosts.
What's needed to offset this is not a bandwidth breakthrough but smarter use of existing networks, via innovations such as low-bandwidth file systems. The most important thing is to assume nobody else will do this for you: you have to make it happen, and set an example for others.
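The core trick behind low-bandwidth file systems is simple to demonstrate: before sending a file, exchange chunk hashes and transmit only the chunks the other side doesn't already have. Real systems such as LBFS and rsync use content-defined or rolling chunk boundaries so that insertions don't shift every subsequent chunk; the sketch below cheats with fixed-size chunks purely to stay short.

```python
# Sketch of hash-based chunk deduplication: send only chunks the
# receiver doesn't already hold. Fixed-size chunking is a deliberate
# simplification; production systems use content-defined boundaries.
import hashlib
import os

CHUNK = 4096

def chunks(data: bytes):
    """Split data into fixed-size chunks."""
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

def sync(new: bytes, receiver_has: dict) -> tuple:
    """Return (chunks_total, chunks_sent); receiver_has maps
    sha256 hex digests to chunk bytes already on the far side."""
    sent = 0
    parts = chunks(new)
    for c in parts:
        h = hashlib.sha256(c).hexdigest()
        if h not in receiver_has:
            receiver_has[h] = c   # only this chunk crosses the network
            sent += 1
    return len(parts), sent

store = {}
v1 = os.urandom(16_384)              # four chunks of fresh data
total, sent = sync(v1, store)        # first sync: everything is new
v2 = v1 + os.urandom(4_096)          # append one chunk's worth
total2, sent2 = sync(v2, store)      # second sync: only the new chunk moves
print(sent, "of", total, "then", sent2, "of", total2)  # → 4 of 4 then 1 of 5
```

After the first full transfer, re-synchronizing the grown file costs one chunk instead of five -- which is exactly the kind of economy that makes petabyte-scale replication survivable on today's networks.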
If nothing else, these issues should provoke more conversation about how to use the network most efficiently, and about which sorts of clients are worth pairing with the cloud as a server and back end.