According to the Wall Street Journal story the Department of Defense knew that the video feeds were being sent unencrypted:
The U.S. government has known about the flaw since the U.S. campaign in Bosnia in the 1990s, current and former officials said. But the Pentagon assumed local adversaries wouldn't know how to exploit it, the officials said.
It's a cliché in the IT security community, and it's true: you can't rely on security by obscurity and argue that a system is secure. And you don't ever want to underestimate the skills of an IT adversary. But that's exactly what the Pentagon seems to have done if they "assumed adversaries wouldn't know how to exploit it."
Besides, what is the exploit, really? Grabbing live video streams broadcast unencrypted over the air?
If you want a system to be secure, a good path is to make it inherently secure. So maybe the video feed service from the UAVs should have been threat-modeled years ago, when the system was being designed. If that had been done properly, threats would have been identified and the ones deemed important would have been mitigated.
Another lesson: it tends to be considerably more expensive to retrofit security into a system than to design it to be secure from the start. Now the government (through its contractor) has to find a way to encrypt the video feeds retroactively. Perhaps that will require a hardware update for all of the UAVs. It will also likely require updates to all of the vehicles that use these feeds, as well as to the portable troop receivers. None of this is cheap to add after the fact.
The encryption will also require some form of key management. To be of any use, those keys will have to be quickly replaceable in the event the enemy gets hold of them. In fact, they'll probably have to be rotated frequently, because the safest assumption is that keys are breached from time to time.
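One common pattern for that kind of frequent rotation is to derive short-lived session keys from a long-term master key, so that a key captured in the field goes stale on its own schedule. Here is a minimal sketch in Python; the key sizes, rotation interval, label string, and function names are all illustrative assumptions, not anything from the actual UAV system:

```python
import hmac
import hashlib
import time

EPOCH_SECONDS = 3600  # hypothetical rotation interval: a fresh key every hour


def session_key(master_key: bytes, epoch: int) -> bytes:
    """Derive the session key for a given epoch via HMAC-SHA256.

    Every receiver that holds the master key can derive the same epoch key
    independently, so rotation doesn't require redistributing keys each hour.
    """
    return hmac.new(master_key, b"feed-key|" + str(epoch).encode(),
                    hashlib.sha256).digest()


def current_epoch(now=None) -> int:
    """Map wall-clock time to an epoch number."""
    return int((time.time() if now is None else now) // EPOCH_SECONDS)


# Sender and receiver derive matching keys for the current epoch.
master = b"\x00" * 32  # placeholder; in practice a securely provisioned random key
k_sender = session_key(master, current_epoch())
k_receiver = session_key(master, current_epoch())
assert k_sender == k_receiver

# A session key captured in one epoch is useless in the next.
assert session_key(master, 100) != session_key(master, 101)
```

Note the limitation: epoch-based derivation only limits the damage from a leaked session key. If the master key itself is compromised, the system still needs a separate re-keying channel, which is exactly the kind of requirement that is painful to bolt on after deployment.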
All of this would probably have been easier to design and build in from the start. Which brings us to another lesson in IT security from this incident.
Building secure systems is not fast or convenient. In the rush to get some capability out on the street, no one wants to hear that the system has to wait for a few hours, days, weeks, or months while it is being "threat-modeled" and security controls are built in.
The usual response goes like this: "Wait? What? We ain't got time to wait. We'll deal with those problems later."
The problem is that "later" is also usually when the problem is exponentially more expensive to fix, and you've already been hacked.
It's true of software, and apparently true of UAVs, too.