
Building a Telecom Data Center

1.   Telecom data center
The room or facility housing the critical telecommunication servers and switches, along with associated equipment, is called a telecom data center. Being critical to the organization's operation, a telecom data center has special provisions for its environment, power supply system, data communication connections and security.
 
The dot-com era brought a boom in data centers. An Internet presence, a must for companies, required fast Internet connectivity and reliable 24/7 operations. This was a very expensive proposition for smaller companies with limited resources. To overcome this limitation, large-scale facilities called Internet data centers were built, housing large numbers of servers and equipment, and parts of these facilities were hired out to whoever required an Internet presence. The evolution of this new data center business brought new innovations in the design, construction and management of large-scale data center facilities.
 

2.   How critical is a telecom data center?

 
A telecom data center houses critical equipment such as the OSS (Operations Support System), BSS (Business Support System) and MSC (Mobile Switching Center). The equipment installed in a telecom data center forms the backbone of an operator's business, and 24/7 service requires uninterrupted operation of the installed systems. The security of data is also very important in today's competitive market. All this requires the telecom data center to work to high standards, ensuring data integrity, safety and availability, which in turn requires sound design and redundancy of power, equipment and paths.
 

3.   Telecom Data center classification

 
As per the Telecommunications Industry Association (TIA) standard TIA-942 (see the Data Center Standards Overview [1]), data centers can be classified into four types, from Tier 1 to Tier 4, based on their availability. A Tier 1 data center is essentially a normal computer room with some basic systems installed. Tier 4 is a data center built to stringent specifications for housing mission-critical systems, with full redundancy and strict security measures.
 
As per TIA, these tiers have been defined based on information from the Uptime Institute, a consortium dedicated to providing its members with best practices and benchmark comparisons for improving the design and management of data centers.
 
Table 1 below gives the comparative requirements for all four tiers of data centers:
 

Table 1 - Comparison of data center types

Tier I - Basic
  • Availability: 99.671%
  • Activity planning: Susceptible to disruptions from both planned and unplanned activity. Must be shut down completely to perform preventive maintenance.
  • Power systems: Single path for power and cooling distribution, no redundant components (N).
  • Infrastructure requirements: May or may not have a raised floor, UPS, or generator.
  • Time to implement: 3 months
  • Maximum permissible annual downtime: 28.8 hours

Tier II - Redundant Components
  • Availability: 99.741%
  • Activity planning: Less susceptible to disruption from both planned and unplanned activity. Maintenance of the power path and other parts of the infrastructure requires a processing shutdown.
  • Power systems: Single path for power and cooling distribution, includes redundant components (N+1).
  • Infrastructure requirements: Includes raised floor, UPS, and generator.
  • Time to implement: 3 to 6 months
  • Maximum permissible annual downtime: 22.0 hours

Tier III - Concurrently Maintainable
  • Availability: 99.982%
  • Activity planning: Enables planned activity without disrupting computer hardware operation, but unplanned events will still cause disruption.
  • Power systems: Multiple power and cooling distribution paths, but with only one path active; includes redundant components (N+1).
  • Infrastructure requirements: Includes raised floor and sufficient capacity and distribution to carry the load on one path while performing maintenance on the other.
  • Time to implement: 15 to 20 months
  • Maximum permissible annual downtime: 1.6 hours

Tier IV - Fault Tolerant
  • Availability: 99.995%
  • Activity planning: Planned activity does not disrupt the critical load, and the data center can sustain at least one worst-case unplanned event with no impact on the critical load.
  • Power systems: Multiple active power and cooling distribution paths; includes redundant components (2(N+1), i.e. two UPS systems, each with N+1 redundancy).
  • Infrastructure requirements: Includes raised floor and sufficient capacity and distribution to carry the load on one path while performing maintenance on the other.
  • Time to implement: 15 to 20 months
  • Maximum permissible annual downtime: 0.4 hours

 

The classification of data centers into four tiers, based on reliability, allows designers as well as users to objectively compare one data center to another.
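The maximum permissible annual downtime figures in Table 1 follow directly from the availability percentages: downtime is roughly (1 - availability) x 8760 hours per year. The short Python sketch below (illustrative only) reproduces that arithmetic; note that the commonly quoted Tier II figure of 22.0 hours is slightly lower than what the straight calculation gives (about 22.7 hours).

    # Illustrative only: convert the tier availability figures from Table 1
    # into approximate maximum annual downtime (hours per year).
    HOURS_PER_YEAR = 24 * 365  # 8760 hours

    tier_availability = {
        "Tier I": 0.99671,
        "Tier II": 0.99741,
        "Tier III": 0.99982,
        "Tier IV": 0.99995,
    }

    for tier, availability in tier_availability.items():
        downtime_hours = (1 - availability) * HOURS_PER_YEAR
        print(f"{tier}: {availability:.3%} available "
              f"=> about {downtime_hours:.1f} hours of downtime per year")

    # Approximate results: Tier I 28.8 h, Tier II 22.7 h (Table 1 quotes 22.0 h),
    # Tier III 1.6 h, Tier IV 0.4 h.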

 

 

 

4.   Standards for building and managing telecom data centers

 

The Telecommunications Industry Association (TIA) is the leading trade association representing the global information and communications technology (ICT) industries. TIA is accredited by ANSI. TIA's product-oriented divisions – User Premises Equipment, Wireless Communications, Fiber Optics, Network, and Satellite Communications – address the legislative and regulatory concerns of product manufacturers. TIA-sponsored committees of experts prepare standards dealing with performance testing and compatibility [4].

 

TIA-942 is one of the first standards written specifically for data centers. A number of standards relating to safety, cabling, environment, power, fire protection, etc. are applicable to data centers. The main standards bodies are listed below:

American Standards

ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers)
ANSI (American National Standards Institute)
ANSI/TIA/EIA
BICSI (Building Industry Consulting Service International)
NFPA (National Fire Protection Association)
IEEE (Institute of Electrical and Electronics Engineers)

International Standards

ISO (International Organization for Standardization)
IEC (International Electrotechnical Commission)

British Standards

BSI (British Standards Institution)

European Standards

CENELEC (European Committee for Electrotechnical Standardization)
EN (European Standards)

 

5.   Data Center Planning & Design

 

5.1      Design Philosophy

The following points should be kept in mind while designing a data center. They include Sun Microsystems' recommended design guidelines [2] for data center implementation.

5.1.1      Keep the Design as Simple as Possible 

A simple design will make the systems easy to understand and manage.
 

5.1.2      Design for Flexibility

The design should be flexible enough to accommodate:

·         future changes in requirements,

·         technology upgrades that are likely to come, and

·         cost-effective implementation of the changes required.

5.1.3      Design for Scalability

 

The data center design should be scalable to accommodate any future expansion required.

 

5.1.4      Use a Modular Design

A modular design makes a complex system like a data center easier to understand and manage. The design should consist of small modular units that can be added together to build a bigger system.

 

5.1.5      Plan ahead

 

Proper planning in advance prevents slippage against deadlines and agreements.

5.1.6      Use RLUs, not square feet

Move away from using square footage to determine capacity. Instead, use RLUs (rack location units) to define capacity and make the data center scalable.
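As an illustration of the RLU concept, the sketch below (with hypothetical per-rack and room figures, not taken from any standard) treats an RLU as the bundle of power, cooling, weight and network connections that one rack location must be provisioned for, and works out how many such RLUs a given room budget can support.

    # Hypothetical sketch of RLU (rack location unit) based capacity planning.
    # The per-rack profile and room budget below are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class RLU:
        power_kw: float      # electrical load per rack location
        cooling_kw: float    # heat to be removed per rack location
        weight_kg: float     # loaded rack weight
        network_ports: int   # cabling drops needed

    @dataclass
    class RoomBudget:
        power_kw: float
        cooling_kw: float
        floor_load_kg: float
        network_ports: int

    def max_rlus(rlu: RLU, room: RoomBudget) -> int:
        """Number of identical RLUs the room can support; the tightest budget wins."""
        return int(min(
            room.power_kw / rlu.power_kw,
            room.cooling_kw / rlu.cooling_kw,
            room.floor_load_kg / rlu.weight_kg,
            room.network_ports / rlu.network_ports,
        ))

    # Example: a 5 kW rack profile in a room with 200 kW of power and 180 kW of cooling.
    dense_rack = RLU(power_kw=5.0, cooling_kw=5.0, weight_kg=900, network_ports=48)
    room = RoomBudget(power_kw=200, cooling_kw=180, floor_load_kg=40_000, network_ports=2000)
    print(max_rlus(dense_rack, room))  # -> 36, limited here by cooling

The point of the RLU approach shows up in the example: the room runs out of cooling capacity well before it runs out of floor loading or cabling, which square footage alone would not reveal.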

5.1.7      Worry about weight.

 

Servers and storage equipment for data centers are getting denser and heavier every day. Make sure the load rating for all supporting structures, particularly for raised floors and ramps, is adequate for current and future loads.
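As a rough illustration (the figures are invented; actual ratings must come from the structural engineer and the floor vendor), the distributed load of the heaviest planned rack can be compared against the raised-floor rating:

    # Rough raised-floor loading check; all numbers below are illustrative only.
    def floor_load_ok(rack_weight_kg: float,
                      footprint_m2: float,
                      floor_rating_kg_per_m2: float) -> bool:
        """True if the rack's distributed load stays within the floor rating."""
        distributed_load = rack_weight_kg / footprint_m2
        return distributed_load <= floor_rating_kg_per_m2

    # A 900 kg rack on a 0.6 m x 1.2 m footprint (about 1250 kg/m^2 distributed load).
    print(floor_load_ok(900, 0.6 * 1.2, 1200))  # False: exceeds a 1200 kg/m^2 floor
    print(floor_load_ok(900, 0.6 * 1.2, 1500))  # True: within a 1500 kg/m^2 floor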

5.1.8      Use aluminum tiles in the raised floor system.

 

Cast aluminum tiles are strong and will handle increasing weight load requirements better than tiles made of other materials. Even perforated and grated aluminum tiles maintain their strength while allowing the passage of cold air to the machines.

 

5.1.9      Label everything.

 

Labeling, particularly of cabling, is important. The time spent on labeling is time gained when you don't have to pull up the raised floor system to trace the end of a single cable. And you will have to trace bad cables!
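One practical convention (a hypothetical naming scheme, not something mandated by TIA-942) is to generate both ends of a cable label from the same record, so that the label at either end also identifies the far end:

    # Hypothetical cable-labelling scheme: each end of a cable carries both
    # its own location and the far end, e.g. "MDA01-RK03-P24 / HDA02-RK11-P07".
    def port_id(area: str, rack: int, port: int) -> str:
        """Build a location identifier: functional area, rack number, port number."""
        return f"{area}-RK{rack:02d}-P{port:02d}"

    def cable_labels(end_a: str, end_b: str) -> tuple[str, str]:
        """Label text to print at each end of the same cable."""
        return f"{end_a} / {end_b}", f"{end_b} / {end_a}"

    a = port_id("MDA01", 3, 24)   # "MDA01-RK03-P24"
    b = port_id("HDA02", 11, 7)   # "HDA02-RK11-P07"
    print(cable_labels(a, b))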

5.1.10  Keep things covered, or bundled, and out of sight.

 

If it can't be seen, it can't be messed with.

5.1.11  Hope for the best, plan for the worst.

 

That way, you're never surprised.

5.2      Data Center Site Selection

5.2.1      Geographic Location

The location of a data center is very important. The site selection should take into consideration the following factors:

 

5.2.1.1            Natural Hazards

The site selected should be away from hazards like tornadoes, flooding, hurricanes, and seismic disruptions such as earthquakes and volcanic activity. If the data center must be located in a natural-disaster-prone area, adequate protection should be provided against the known hazard, e.g. selecting or building an earthquake-proof building in an earthquake-prone area.

5.2.1.2            Man-Made Hazards

The data center location should be away from man-made hazards such as:

  • Pollution, which can disrupt the data center environment.
  • Vibration caused by factories, airports, railways, tunnels, etc., which can disrupt the working of the data center.
  • EMI/RFI: electromagnetic or radio-frequency interference should be absent; if there is no alternative location, proper shielding needs to be provided to address the problem.

5.2.1.3            Emergency Services and Vehicle Access

The data center should be located so that emergency services, such as fire tenders, can reach it quickly in an emergency. The data center may also need heavy equipment, so there should be access for the large trucks that carry it.

5.2.1.4            Utilities

Power, water, gas, Internet connectivity and any other necessary utilities should be available in the area.

5.2.1.5            Human resources

 

As far as possible, the data center should be located in an area where the required human resources are available. The availability of locally trained manpower can substantially reduce the running cost of a data center.

5.2.2      Structural Considerations

The following should be considered while selecting the space for a data center.
 

5.2.2.1            Location of data center within building


The data center should be located where there is no chance of flooding, away from damp locations, and where adequate protection from natural or man-made disasters is available.

The space housing the equipment should have sufficient height to accommodate the false ceiling, air-conditioning ducts, light fixtures, fire suppression systems, racks, etc.

The building should be able to take the load of the heavy data center equipment.

The aisles, staircases and lifts should have enough width and load capacity for transporting heavy racks and equipment.

Space should be available for housing UPS systems, generators (if required), HVAC systems, etc.

There should be space for expansion, if required.

 

5.3      Data Center space and layout

 

Proper space allocation is important for data center operations as well as future expansion. A balance must be struck between the initial deployment and the anticipated future space requirement to economize on cost. The design should have plenty of flexible space to take care of future requirements. According to TIA-942, a data center should include the following key functional areas:

 

5.3.1      One or More Entrance Rooms

 

This room houses access provider equipment and acts as the demarcation point as well as the interface with campus cabling systems. For better security, the room should be just outside the room housing the data processing equipment. Large data centers may have multiple entrance rooms.

 

5.3.2      Main Distribution Area (MDA)

 

The MDA is located centrally and houses the main cross-connect as well as the core routers and switches for the LAN and SAN infrastructures. The horizontal cross-connect for a nearby equipment distribution area may also be part of the MDA. The TIA standard requires at least one MDA and also requires separate racks for fibre, UTP, and coaxial cable in this location.

 

5.3.3      One or More Horizontal Distribution Areas (HDA)

 

The HDA contains the distribution point for horizontal cabling and houses the cross-connects and active equipment for distributing cable to the equipment distribution area. Separate racks for fibre, UTP, and coaxial cable should be installed in the HDA. Switches and patch panels should be located to minimize patch cord lengths and facilitate cable management. An HDA is limited to 2,000 connections; more connections require an additional HDA.

 

5.3.4      Equipment Distribution Area (EDA)

 

The horizontal cables are terminated with patch panels in the Equipment Distribution Area. The racks and cabinets should be installed in an alternating pattern to create "hot" and "cold" aisles. This is done to effectively dissipate the heat generated by the equipment.
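In the alternating pattern, rack fronts face rack fronts across a cold aisle fed with chilled air, and rack backs face rack backs across a hot aisle from which exhaust air is returned. The small sketch below merely prints that pattern for a given number of rows; it is an illustration, not a layout tool.

    # Illustration of the alternating hot/cold aisle pattern in an EDA:
    # rack fronts share cold aisles, rack backs share hot aisles.
    def aisle_layout(rows: int) -> list[str]:
        layout = ["COLD AISLE (chilled air supply)"]
        for row in range(rows):
            if row % 2 == 0:
                layout.append(f"Row {row + 1}: rack fronts face the cold aisle above")
                layout.append("HOT AISLE (exhaust air return)")
            else:
                layout.append(f"Row {row + 1}: rack fronts face the cold aisle below")
                layout.append("COLD AISLE (chilled air supply)")
        return layout

    print("\n".join(aisle_layout(4)))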

 

5.3.5      Zone Distribution Area (ZDA)

 

The Zone Distribution Area is an optional interconnection point in the horizontal cabling between the HDA and EDA. The ZDA is used as a consolidation point for reconfiguration flexibility, or for housing freestanding equipment like mainframes and servers that cannot accept patch panels. Only one ZDA, with a maximum of 288 connections, is allowed within a horizontal cabling run, and it cannot contain any cross-connects or active equipment.
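As a simple illustration of how the connection limits above (2,000 connections per HDA; one ZDA of at most 288 connections per horizontal run, with no cross-connects or active equipment) might be checked during planning, the hypothetical sketch below validates a proposed horizontal run; the data structure is invented for illustration only.

    # Hypothetical planning check against the TIA-942 limits described above.
    # The run description below is invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class HorizontalRun:
        hda_connections: int               # connections terminating in the HDA
        zda_count: int = 0                 # number of ZDAs in this run
        zda_connections: int = 0           # connections consolidated in the ZDA
        zda_has_active_equipment: bool = False

    def check_run(run: HorizontalRun) -> list[str]:
        problems = []
        if run.hda_connections > 2000:
            problems.append("HDA exceeds 2,000 connections; an additional HDA is needed")
        if run.zda_count > 1:
            problems.append("only one ZDA is allowed within a horizontal cabling run")
        if run.zda_connections > 288:
            problems.append("ZDA exceeds 288 connections")
        if run.zda_has_active_equipment:
            problems.append("ZDA must not contain cross-connects or active equipment")
        return problems

    run = HorizontalRun(hda_connections=2400, zda_count=1, zda_connections=300)
    print(check_run(run))  # -> flags the oversized HDA and the oversized ZDA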

 

References

 

[1].   TIA-942 Data Center Standards Overview - 102264AE

[2].   Data Center Design Philosophy (Sun Microsystems)

[3].   Data Center Projects: Establishing a Floor Plan

[4].   http://www.tiaonline.org/about/