Open Compute Project
As enterprises move their compute requirements out to the cloud, and telecommunication operators convert their central offices and telephone exchanges to data centers through initiatives such as the Telecom Infra Project (TIP) and the OCP Telco project, operators of colocation facilities such as Kao Data have understood that, to thrive as a business and support the new applications being developed, such as for IoT, they will need to provide edge, metro and centralised cloud computing. It is envisioned that all of these forms of cloud computing will run on Open Compute hardware designs, enabling colocation facilities and their tenants to meet the increasing demand for their services at scale, at the lowest possible CAPEX and OPEX.
To assist colocation facilities and their tenants around the world in understanding the facility requirements needed for a smooth, trouble-free deployment of Open Racks, a sub-project of the OCP Data Center Facility Project was formed and tasked with producing a colocation facility guidelines and quality checklist document for Open Racks.
‘OCP Ready’
During the life of the project, interest has grown from two directions: from operators of colocation facilities, who see it as an aid to transforming their data centers into ones capable of handling the next generation of cloud computing, and from enterprises that want to be sure the colo facilities into which they plan to deploy their OCP IT gear are ‘OCP Ready’.
The initial project work was to create a guidelines and checklist document, which has now been published and is available from the Data Center Facility Project wiki. It focuses on defining the data center sub-system requirements that a European colocation facility would need to provide to accommodate the latest Version 2 design of the Open Rack, which when populated can weigh up to 500 kg and have a maximum IT load of 6,6 kW. Although the Open Rack design can be deployed in all regions of the world and support a much higher IT load (e.g. 36 kW) and up to 1400 kg in weight, it was decided that, to create a minimum viable product (MVP) document as quickly as possible, it would be best to restrict the checklist objectives to this less complex case. The project team also considered that setting the minimum ‘must-have’ requirement at this lower level would allow up to 80% of existing colo facilities in Europe to accommodate an Open Rack, and therefore aid the adoption of OCP.
Classification headings
Within the checklist, the attributes of each data center sub-system have been assessed and listed in rows under one of the classification headings ‘must-have’, ‘nice-to-have’ or ‘considerations’. The parameters of each attribute have then been placed in one of two columns, headed ‘acceptable’ or ‘optimum’. The ‘must-have’/ ‘acceptable’ attributes are considered by the project team to be the minimum a colo must provide to accommodate an Open Rack V2, which weighs a maximum of 500 kg when populated and has a maximum IT load of 6,6 kW.
The ‘nice-to-have’ attributes are viewed as not essential for a deployment, but could be beneficial in particular scenarios. The attributes under the ‘considerations’ heading are usually tenant-specific requirements. The checklist also provides guidance on when an attribute’s parameter should be considered optimum; if implemented by the colo or tenant, these parameters would enable the full benefits of the Open Rack design to be achieved.
Segments
The checklist has been segmented into the sub system areas below for consideration by the colo facility or tenant:
- Architectural
- Data Center Access
- White Space
- Electrical Systems
- Cooling
- Mechanical
- Climate
- Telecommunication Cabling, Infrastructure, Pathways and Spaces
- Cabling Pathways & Spaces
- Cabling
- Network Infrastructure
Architectural / Data Center Access
This section of the checklist considers the requirements for bringing a fully packaged/crated rack into the data center: from off-loading from the delivery vehicle, through the loading bay or dock, to the goods-in area. The many attributes considered range from a ‘must-have’/ ‘acceptable’ parameter of delivery at road level with no step and a threshold-free route, to a ‘must-have’/ ‘optimum’ of a loading dock with an integral lift that allows packaged racks on pallets to be transported directly from truck level to the data center goods-in area.
The ‘must-have’/ ‘acceptable’ parameter for the delivery pathway is 2,7 m high x 1,2 m wide, as this provides sufficient height and width clearance in the doorways leading to the goods-in and unboxing locations. Ramps are also commonly found in colo facilities, so it is important that the gradient of any ramp in the delivery pathway is known: a fully populated Open Rack weighing 1500 kg would prove very difficult to move up a ramp steeper than a 1:12 incline.
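As a rough illustration of why the ramp gradient matters, the force needed to push a rack up an incline can be estimated from the rack weight and the slope. The sketch below is a simplification that assumes a 1500 kg rack and ignores rolling resistance and castor friction; it is not part of the OCP checklist.

```python
import math

def push_force_kgf(rack_mass_kg: float, gradient_ratio: float) -> float:
    """Approximate force (in kgf) needed to push a rack up a ramp.

    gradient_ratio is rise over run, e.g. 1/12 for a 1:12 incline.
    Rolling resistance and castor friction are ignored (assumption).
    """
    angle = math.atan(gradient_ratio)          # ramp angle in radians
    return rack_mass_kg * math.sin(angle)      # weight component along the ramp

# A fully populated 1500 kg Open Rack on the checklist's maximum 1:12 incline
print(f"1:12 ramp: ~{push_force_kgf(1500, 1/12):.0f} kgf")   # ~125 kgf
# A steeper 1:8 ramp for comparison
print(f"1:8 ramp:  ~{push_force_kgf(1500, 1/8):.0f} kgf")    # ~186 kgf
```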
Other ‘must-have’ attributes that have found their way onto the list, and that can be very important for a smooth deployment, include specifications for the delivery pathway within the data center, such as the height and width of door openings in corridors and the maximum weight a lift can carry.
Architectural/ White Space
In the checklist, a number of structural attributes for a data center have been considered, many of them classed as ‘must-have’. Open Racks are heavy, and many traditional colos built even as recently as 10 years ago were not designed to accommodate pods of 24 racks with each rack weighing between 500 kg and 1500 kg. A ‘must-have’/ ‘acceptable’ parameter for the access floor uniform load to support a 500 kg rack is therefore 732 kg/m2 (150 lb/ft2, 7,17 kN/m2).
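The three floor-loading figures quoted above are the same limit expressed in different units. A quick conversion sketch, using standard constants, shows how they relate; small differences from the checklist figures come from rounding.

```python
LB_TO_KG = 0.45359237      # kilograms per pound
FT_TO_M = 0.3048           # metres per foot
G = 9.80665                # standard gravity, m/s^2

uniform_load_lb_ft2 = 150                                  # checklist figure in lb/ft2
kg_m2 = uniform_load_lb_ft2 * LB_TO_KG / (FT_TO_M ** 2)    # convert to kg/m2
kn_m2 = kg_m2 * G / 1000                                   # convert to kN/m2

print(f"{uniform_load_lb_ft2} lb/ft2 = {kg_m2:.0f} kg/m2 = {kn_m2:.2f} kN/m2")
# 150 lb/ft2 = 732 kg/m2 = 7.18 kN/m2 -- consistent with the checklist values
```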
Electrical Systems
The IT gear within an Open Rack is powered by one or two rack-mounted power shelves containing AC to DC rectifiers, which distribute 12 V or 48 V to the equipment via busbars in the back of the rack. The power shelf can also contain lithium-ion batteries acting as the battery backup unit (BBU), which removes the need for the colo to provide a centralised upstream UPS supply.
For a colo in the EU to accommodate an Open Rack with an IT load of 6,6 kW, a ‘must-have’/ ‘acceptable’ requirement is a rack supply, fed by a central upstream UPS, with a capacity of 3-phase 16 A and a receptacle compatible with IEC 60309-2 5-wire. The ‘nice-to-have’ attribute, categorised as ‘optimum’ within the checklist because it offers an opportunity to be more energy efficient and resilient, is for the colo to supply the rack not from the central upstream UPS but from the UPS input distribution board. A consideration for the colo and tenant, where the racks rely on the power shelf’s battery backup unit (BBU) as the UPS, is the generator start-up time, to ensure there is sufficient autonomy to keep the IT gear functioning until the generator set comes online.
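To see why a 3-phase 16 A feed comfortably covers a 6,6 kW rack, and what the BBU autonomy question looks like in numbers, here is a minimal sketch. It assumes a 400 V line-to-line European supply and a power factor close to 1; the BBU autonomy and generator start-up figures are placeholders, not OCP requirements.

```python
import math

# Assumed European supply: 400 V line-to-line, 16 A per phase, power factor ~1
V_LL, I_PHASE, PF = 400.0, 16.0, 1.0

feed_capacity_kw = math.sqrt(3) * V_LL * I_PHASE * PF / 1000
rack_it_load_kw = 6.6

print(f"3-phase 16 A feed capacity: ~{feed_capacity_kw:.1f} kW")   # ~11.1 kW
print(f"Headroom over a 6,6 kW rack: ~{feed_capacity_kw - rack_it_load_kw:.1f} kW")

# If the power-shelf BBU is the only ride-through, its autonomy must exceed
# the generator start-up and load-acceptance time (placeholder values below).
bbu_autonomy_s = 90          # hypothetical BBU autonomy at full rack load
generator_start_s = 45       # hypothetical generator start + transfer time
print("BBU covers generator start:", bbu_autonomy_s > generator_start_s)
```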
Cooling
One of the many advantages of the Open Rack design is that all servicing and cabling of the equipment in the rack is carried out at the front, so if the racks are arranged with a contained hot aisle, maintenance personnel need never enter that space, which is normally very uncomfortable to work in. A hot aisle containment system has therefore been considered a ‘nice-to-have’/ ‘optimum’ arrangement. The ‘must-have’ attributes in this section of the checklist include either hot aisle or cold aisle containment, front-to-back airflow, and inlet temperature and humidity within the ASHRAE recommended limits.
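As a simple illustration of the inlet-condition check, the sketch below compares a measured inlet temperature and relative humidity against the widely published ASHRAE recommended envelope of roughly 18 to 27 °C dry bulb. The humidity bounds used here are a simplification (the real envelope is partly defined in terms of dew point), so the current ASHRAE guidance should always be consulted directly.

```python
def within_recommended(inlet_temp_c: float, rel_humidity_pct: float) -> bool:
    """Rough check against the ASHRAE recommended envelope.

    The 18-27 C dry-bulb band is the commonly quoted recommended range;
    the 20-60% RH band used here is a simplified stand-in for the real
    envelope, which is partly defined by dew point limits.
    """
    temp_ok = 18.0 <= inlet_temp_c <= 27.0
    rh_ok = 20.0 <= rel_humidity_pct <= 60.0
    return temp_ok and rh_ok

print(within_recommended(24.0, 45.0))   # True  - typical cold-aisle supply
print(within_recommended(29.0, 45.0))   # False - inlet too warm
```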
Telecommunication Cabling, Infrastructure, Pathways and Spaces
The ‘must-have’/ ‘acceptable’ arrangement for routing network cabling into an Open Rack is either top or bottom entry, to the front of the rack. The ‘nice-to-have’/ ‘optimum’ parameter is for cabling to be fed from the top of the rack only, and to the front.
Network Infrastructure
In this section of the checklist only ‘considerations’ are listed, as this aspect of the design is very much specific to the tenant’s use case. Attributes to be considered by the tenant include the maximum link distance between Spine and Leaf network switches, the transmission speeds of Top of Rack (TOR) switches, and the media type for TOR-to-Leaf and Leaf-to-Spine connectivity.
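For the link-distance consideration, a useful sanity check is to compare planned TOR-to-Leaf and Leaf-to-Spine cable runs against the nominal reach of each media type. The sketch below uses commonly cited reach figures for a few 100G options as illustrative assumptions; the actual limits depend on the optics and cabling the tenant deploys, so vendor and IEEE specifications should be checked.

```python
# Illustrative nominal reach figures (metres) for common 100G media options.
# Treat these as assumptions to be confirmed against vendor/IEEE specs.
NOMINAL_REACH_M = {
    "100G DAC (passive copper)": 3,
    "100GBASE-SR4 over OM3": 70,
    "100GBASE-SR4 over OM4": 100,
    "100GBASE-LR4 (single-mode)": 10_000,
}

def feasible_media(link_distance_m: float) -> list[str]:
    """Return media options whose nominal reach covers the planned link."""
    return [media for media, reach in NOMINAL_REACH_M.items() if reach >= link_distance_m]

# Example: a 55 m Leaf-to-Spine run across the white space
print(feasible_media(55))
# ['100GBASE-SR4 over OM3', '100GBASE-SR4 over OM4', '100GBASE-LR4 (single-mode)']
```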
Becoming ’OCP Ready’
Following on from the sub-project to create a suitability guide and quality checklist document, a new program has been developed for colocation facility operators who are interested in having their data center branded ‘OCP Ready’.
To start the process, the colo operator reaches out to the Data Center Facility (DCF) Project lead Brevan Reyher (Rackspace) or DCF team member Mark Dansie (InflectionTech). The colo operator is then asked to join the mailing list if they have not already subscribed.
The DCF project lead or DCF team member assigns a scorecard/checklist to the colocation facility operator to complete. Once the scorecard/checklist has been completed and is ready for review by the community, the DCF Project lead will arrange for the colocation facility operator to attend a monthly DCF project call, to present their data center to the community.
During the call, the operator presents the results of the checklist along with supporting evidence (drawings, commissioning data, etc.), and the community can ask clarifying questions. Once the DCF project members and DCF project lead are satisfied, the process moves up to the OCP Incubation Committee (IC) for further review.
The colocation facility operator presents again to the Incubation Committee, and if there are no issues, the process is complete and the operator can start using the term ‘OCP Ready’ on their website and in other marketing material.
At this point there is also an optional opportunity for the operator to pay the yearly fee to OCP to join the community as a corporate member (if not already a member), and an additional yearly fee to have their facility listed on the OCP marketplace.
More information
If you would like to know more, here are some useful links.
- Visit the OCP website: http://www.opencompute.org/
- Data Center Facility Project Wiki https://www.opencompute.org/wiki/Data_Center_Facility
- Data Center Facility Project Mailing List http://lists.opencompute.org/mailman/listinfo/opencompute-datacenter
Or ask Mark Dansie at InflectionTech, who is an OCP data centre facility subject matter expert: mark.dansie@inflectiontech.net
@markdansie
@inflectiontech