Oracle's massive pile of patches this week complicates the already onerous process of updating the database and other apps

Dark Reading Staff, Dark Reading

October 13, 2010

5 Min Read

As Oracle prepares to dump a passel of 81 security fixes on its user base -- including seven for its database product -- many database administrators are getting ready to patch their Oracle database platforms accordingly. But if recent numbers from the Independent Oracle Users Group (IOUG) annual security survey are an accurate barometer, plenty of others will sit on the critical patch updates (CPUs) due out next week for a year or longer. Security experts believe organizations need to improve those numbers by instituting patching best practices for their databases.

"I find it funny that there are patches everywhere else that are applied on a regular basis to machines like desktops and so on, but it is still not a general practice for the databases," says Michelle Malcher, director of education for IOUG and a DBA and team lead at a Chicago-based financial firm.

According to a recent survey of IOUG members, only 37 percent of organizations patch their systems within the same three-month cycle in which CPUs are released. Approximately 28 percent either take a year or more to patch, have never applied a CPU, or don't know how long it takes them to patch their databases.

Malcher believes there are a number of systematic steps organizations can take to improve their processes. She recommends garnering executive buy-in with the cooperation of DBAs and the security team: Many DBAs are up against the wall with shrinking maintenance windows and uptime demands from management and application owners that make it nearly impossible to meet those demands and still apply patches on schedule. She suggests DBAs can make the case for more breathing room with the help of a security team member prepared to give these line-of-business leaders the lowdown on how much financial risk the company takes on if it forgoes regular patching.

"Something like that would be very helpful," Malcher says. "Then you have buy-in across the board. Because when you're a DBA looking at 2,000 databases you have to patch, that's a pretty big task to take on, and if you had all the people in the room -- security and management -- saying, 'OK, we may not need to do quarterly patches, but we've decided to do semiannual patches or have each quarterly patch applied by the next month,' that's a good start."

And wouldn't it be nice if you didn't always have to apply a patch every time a CPU is released? If you configure your databases properly, you don't. By uninstalling database components your organization doesn't use, you're not only reducing your attack surface for future threats, but also lowering the number of moving parts that need fixing every time the Oracle team finds a vulnerability.

"Honestly, the first step is not to necessarily install all of the components of the Oracle database if you're only using specific components," Malcher says.

She says the only good patching is regular patching, and without some sort of plan it is inevitable that databases will fall through the cracks. Organizations should choose a patch window they're comfortable with based on their appetite for risk and their resources, then set out procedures in advance that they can stick with once patches are released in order to meet that timeline.

The plan should include how and when patches will be tested, how the organization will choose which patches are rolled out first, and how it will deal with patches that cause problems. "When you have that process planned out, it is fairly straightforward for you to run through when CPUs are released," Malcher explains.

Testing patches before going live is critical in the database environments organizations depend on. DBAs should test not only how a patch behaves when it is deployed in a test environment, but also what happens when that patch is rolled back. That gives the organization a way out if a live deployment doesn't work and more troubleshooting is needed.

"There should be a simple back-out plan for CPUs," Malcher says. "You should have a good tested backup and recovery plan for the databases and should test your roll-back plan during your testing of that patch so if something goes wrong you have a quick way to roll that patch back and continue on."

Malcher recommends leveraging Oracle's documentation to prioritize patches, too. There was a time when DBAs had real reason to disdain Oracle patches, back before Oracle instituted its no-nonsense CPUs, which patch only critical security flaws and don't monkey with anything else in the database. Now organizations can be assured they're truly fixing things that need to be patched.

But even within a CPU, some fixes may be more important than others, depending on the installation and the business environment. Malcher says Oracle does a good job of thoroughly documenting the risk level of each vulnerability fixed. "The documentation provided from Oracle with the vulnerability scores basically shows which pieces of the database are affected," she says.

Organizations should prepare themselves by reading through that documentation and using it to prioritize their patches, she adds.
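
As a rough illustration of that kind of triage, a few lines of scripting can rank a quarter's fixes by severity and by whether the affected component is even installed. The advisory entries below are invented; a real list would come from the risk matrices in Oracle's CPU advisory and from the site's own component inventory.

# Sketch: rank a quarter's database fixes by CVSS score, keeping only those
# that touch components actually installed. The advisory entries are invented;
# real scores come from the risk matrices in Oracle's CPU advisory.
advisories = [
    {"cve": "CVE-XXXX-0001", "component": "Oracle Net", "cvss": 7.5},
    {"cve": "CVE-XXXX-0002", "component": "Spatial", "cvss": 5.0},
    {"cve": "CVE-XXXX-0003", "component": "Core RDBMS", "cvss": 6.5},
]
installed = {"Oracle Net", "Core RDBMS"}  # from the site's own component inventory

relevant = [a for a in advisories if a["component"] in installed]
for a in sorted(relevant, key=lambda item: item["cvss"], reverse=True):
    print(a["cvss"], a["component"], a["cve"])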


