Chris Murphy
Commentary
2/1/2010 01:25 PM

Global CIO: CIOs Bet Big On Data Center Strategies

Cloud sounds good, but it's still often a brick-and-mortar decision.

Part of the irresistible appeal of the term "cloud computing" is the imagery of computing power as light and floating. For most CIOs, nothing is so immovable as the data center.

The whopper data center bets they must make still pivot around brick and mortar. Here are a few companies that made very different decisions to meet their data center needs in the past year--insource, sell, and build.

(For additional insight into data center strategy, see also Bob Evans' Global CIO column, "Data Centers Behaving Boldly: Meet Tech's New Rock Stars")

Insource A Data Center

Whitney National Bank decided to insource a data center, under the leadership of Scott Erlichman, senior VP of technology infrastructure for the regional bank. The bank's three-year lease on co-location space it used for disaster recovery was up, and it found rates had risen 50% or more since 2006 because demand for such space is high. At the same time, the bank was already planning construction at a site it owned in Alabama for back-end work such as check processing. That made insourcing an intriguing option.

The project turned out to be more complex and costly than initially expected, says Erlichman. The building itself, and the heating and cooling infrastructure in particular, weren't up to the standard needed for a modern data center, which added to the projected costs. That pushed out the expected return on investment, but the numbers still added up, so the project went ahead.
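To make that kind of calculation concrete, here is a minimal payback-period sketch with purely hypothetical figures (the article doesn't disclose Whitney's actual costs). It shows the dynamic Erlichman describes: higher-than-expected retrofit costs push the payback date out without necessarily breaking the business case.

# Illustrative payback-period sketch; all dollar figures are hypothetical,
# not Whitney's actual numbers.

colo_annual_cost = 600_000        # hypothetical: leased DR space after a ~50% rate increase
retrofit_capital = 2_500_000      # hypothetical: building, power, and cooling upgrades
insourced_annual_cost = 250_000   # hypothetical: power, cooling, remote management

annual_savings = colo_annual_cost - insourced_annual_cost
payback_years = retrofit_capital / annual_savings
print(f"Estimated payback: {payback_years:.1f} years")

# Raising retrofit_capital lengthens payback_years -- the ROI gets pushed out --
# but as long as annual_savings stays positive, the project can still pencil out.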

Since October, Whitney Bank has been running the Alabama data center as a lights-out facility, with the computing infrastructure managed entirely remotely, mostly by staff in New Orleans with some oversight from its Dallas and Houston offices. Someone on-site monitors heating and power as needed, but that person was already required for the back-office processing, so the move didn't mean additional staff. No IT people work there, though there's staff within a few hours' drive if needed.

Whitney Bank didn't treat the insourcing move as a blank technology slate; in fact, it basically forklifted the IT infrastructure, including the storage used to support specific apps at the outsource facility, and moved it to Alabama. Since that involved confidential banking information, it was no small matter--think secure transportation and police escort. Over the next two years, however, the bank will do some application redesign that will let IT bring in more new technology. The bank's also looking into new application monitoring technology, so IT has more insight into the performance measures the business units value--say, the time it takes to complete a transaction, rather than the availability of a server or network. It has already largely tapped out the gains to be had from virtualizing servers, but it's looking at increased use of blade servers for further efficiency gains in management.
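As a rough illustration of that shift in perspective (a sketch, not the bank's actual tooling), a transaction-level monitor times the operation the business cares about end to end, rather than merely checking that a server answers a ping. The submit_wire_transfer() call below is a hypothetical stand-in for whatever transaction matters to the business unit.

# Minimal sketch of transaction-level monitoring; submit_wire_transfer is a
# hypothetical placeholder for a real business transaction.
import time
import statistics

def timed_transaction(fn, *args, **kwargs):
    """Run a business transaction and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

def submit_wire_transfer(amount):
    # Stand-in for the real end-to-end transaction the business cares about.
    time.sleep(0.05)
    return "ok"

samples = [timed_transaction(submit_wire_transfer, 100)[1] for _ in range(20)]
print(f"median transaction time: {statistics.median(samples) * 1000:.0f} ms")
print(f"worst observed: {max(samples) * 1000:.0f} ms")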

Even as it insourced, Whitney kept its core data center with an outsourcer, AT&T. That's a high-performance data center, and the bank didn't feel it had the operations expertise to run such a facility itself. However, it did use virtualization to reduce its footprint at that facility by 40%, which cut its costs.


As companies strain against data center capacity, they face a choice: build, retrofit, or outsource, says SunGard Availability Services data center consultant Mickey Zandi, who worked on the Whitney Bank project. Zandi says he sees more companies looking at "smart retrofit" projects like this one, pairing an in-house data center with an outsourced one, each backing up the other. They want "flexible infrastructure."

Zandi says Whitney did three big things right. First, it had an integrated project management office for all functions under the CIO's office--covering construction management, data center design and relocation, commissioning and testing, and planning of heating and cooling systems. Second, it had top executive oversight and support, including Erlichman's. Third, the IT team understood its infrastructure even though it had been outsourced, so it was clear on the interdependencies of apps, systems, and networks. "If something happens, then one needs to know exactly who is going to be impacted, what line of business, what end user client, and what is going to be the impact on the business," Zandi says. Most companies, he says, don't understand those interdependencies.
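One way to picture that kind of interdependency knowledge (a minimal sketch with hypothetical component names, not Whitney's actual mapping) is a dependency graph that can be walked from a failed component up to the business services it affects.

# Sketch of an impact-analysis walk over a dependency graph.
# All component and service names are hypothetical examples.
from collections import defaultdict

# "X depends on Y" edges: each service and the components beneath it.
depends_on = {
    "online_banking": ["web_farm", "core_banking_app"],
    "check_processing": ["core_banking_app", "batch_scheduler"],
    "core_banking_app": ["primary_san", "wan_link_alabama"],
    "web_farm": ["wan_link_alabama"],
}

# Invert the edges so we can ask: if this component fails, what is impacted?
impacted_by = defaultdict(set)
for service, deps in depends_on.items():
    for dep in deps:
        impacted_by[dep].add(service)

def impact(component, seen=None):
    """Return every service directly or indirectly affected by a failure."""
    seen = seen if seen is not None else set()
    for svc in impacted_by.get(component, ()):
        if svc not in seen:
            seen.add(svc)
            impact(svc, seen)
    return seen

print(sorted(impact("wan_link_alabama")))
# e.g. ['check_processing', 'core_banking_app', 'online_banking', 'web_farm']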
