The idea behind hardware-based trust is that trust must start somewhere. Whether it begins in a preloaded bundle of certificates from various certificate authorities (as SSL does in browsers) or from a web of trust manually established at Pretty Good Privacy key-signing parties, cryptographic algorithms are only as strong as the trust they're built on.
If you want a secure transaction with someone, at some point in the past you must have verified their identity, or had someone you trust verify it, as SSL certificate authorities claim to do. Trust relationships then carry that verification forward to later transactions. Assuming there are no bugs in the crypto itself, this chain of trust turns one initial act of verification into convenience for every transaction that follows.
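The chain-of-trust idea can be sketched in a few lines. The sketch below is illustrative only (the names `Cert`, `chain_is_trusted`, and `trusted_roots` are invented for this example, not any real PKI library), and it omits the signature checks a real verifier would perform at each link:

```python
# Simplified chain-of-trust walk: each certificate names its issuer,
# and trust flows down from a preloaded set of root authorities.
# Illustrative sketch only; real verification also checks signatures,
# validity periods, and revocation at every link.

from dataclasses import dataclass

@dataclass
class Cert:
    subject: str
    issuer: str  # subject of the certificate that vouches for this one

def chain_is_trusted(cert: Cert, certs_by_subject: dict, trusted_roots: set) -> bool:
    """Walk issuer links until we reach a preloaded trusted root."""
    seen = set()
    current = cert
    while current.subject not in seen:
        seen.add(current.subject)
        if current.issuer in trusted_roots:
            return True
        current = certs_by_subject.get(current.issuer)
        if current is None:
            return False  # chain ends without reaching a root
    return False  # cycle detected without reaching a root

roots = {"RootCA"}
store = {"IntermediateCA": Cert("IntermediateCA", "RootCA")}
leaf = Cert("example.com", "IntermediateCA")
print(chain_is_trusted(leaf, store, roots))  # True
```

The preloaded `roots` set plays the role of the browser's bundled certificate authorities: trust has to start somewhere, and everything else is derived from it.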
The TPM gets its internal key from the manufacturer, and the manufacturer theoretically cannot track those keys. The TPM never releases that key to anything outside of itself, so it becomes its own root of trust.
Of course, bugs do crop up. The cryptographic math is fairly well-understood at this point, but implementations often leave something to be desired.
Take, for example, the recent weakness in the Nintendo Wii's code-signing and verification mechanism, which was meant to ensure that only authorized applications would run on the game console. Implementation flaws allowed third parties to bypass the protection. Datel, which makes video-game peripherals and cheat systems, released a commercial product based on the flaw, and a small group of Wii hackers had also been using it to explore the console. Nintendo has since fixed the problem.
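One widely reported root cause of the Wii flaw was that the signature check compared hashes with a C string function, which stops at the first zero byte, so a forged signature whose hash happens to begin with 0x00 compares equal to a valid one. The sketch below (illustrative only, not Nintendo's actual code; `strncmp_like` is an invented helper that mimics C's `strncmp`) shows why string comparison is the wrong tool for binary hashes:

```python
def strncmp_like(a: bytes, b: bytes, n: int) -> int:
    """Mimic C strncmp: comparison stops at the first NUL (zero) byte."""
    for i in range(n):
        ca = a[i] if i < len(a) else 0
        cb = b[i] if i < len(b) else 0
        if ca != cb or ca == 0:
            return ca - cb
    return 0

# Two different 20-byte hashes that both start with a zero byte
# (an attacker can brute-force content until its hash starts with 0x00).
expected = bytes.fromhex("00" + "ab" * 19)
forged = bytes.fromhex("00" + "cd" * 19)

# Buggy check: treats binary hashes as C strings, stops at the NUL byte.
print(strncmp_like(expected, forged, 20) == 0)  # True  -> forgery accepted

# Correct check: full fixed-length byte comparison.
print(expected == forged)  # False -> forgery rejected
```

The cryptographic primitive itself was sound; the break came entirely from how its output was compared, which is exactly the kind of implementation gap the paragraph above describes.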
A Tipping Point For The Trusted Platform Module?