Content at the speed of your imagination
In the past, one port of 10GbE was enough to support the bandwidth needs of 4K DPX; three ports could drive 8K formats, and four ports could drive 4K-Full EXR.
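The arithmetic behind these port counts is easy to check. Here is a minimal sketch for the 4K DPX case, assuming typical uncompressed DPX parameters (4096×2160 frames, 10-bit RGB packed into 32 bits per pixel, 24 frames per second); these figures are illustrative assumptions, not numbers from the article:

```python
# Estimate the network bandwidth of one uncompressed 4K DPX stream.
# Assumed parameters (typical DPX, not from the article):
WIDTH, HEIGHT = 4096, 2160   # 4K DPX frame dimensions
BYTES_PER_PIXEL = 4          # 3 x 10-bit RGB channels packed into 32 bits
FPS = 24                     # cinema frame rate

bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL
bits_per_second = bytes_per_frame * FPS * 8
print(f"{bits_per_second / 1e9:.1f} Gb/s")  # ~6.8 Gb/s, within one 10GbE port
```

Under these assumptions a single stream needs roughly 6.8 Gb/s, which is why one 10GbE port sufficed, while higher resolutions and deeper formats quickly multiply that figure past a single port.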
Yet Mellanox Technologies, a leading supplier of high-performance, end-to-end intelligent interconnect solutions that are distributed across 20 African countries by Networks Unlimited, says recent developments in the media and entertainment industry demand ever-higher resolutions.
"This trend continues to drive the need for networking technologies that can stream more bits per second in real-time," says Motti Beck, director of marketing: EDC market segment at Mellanox. "However, this number of ports can drive only one stream of data. New film or video productions today include special effects that necessitate support for multiple streams simultaneously in real-time."
He explains that this creates a major "data size" challenge for studios and post-production shops, as 10GbE interconnects have been maxed out and can no longer provide an efficient solution that can handle the ever-growing workload demands.
"This is why IT managers should consider using the new emerging Ethernet speeds of 25, 50 and 100GbE," continues Beck. "These speeds have been established as the new industry standard, driven by a consortium of companies that includes Google, Microsoft, Mellanox, Arista, and Broadcom, and recently adopted by the IEEE as well."
An example of the efficiency that higher speed enables, he points out, is the Mellanox ConnectX-4 100GbE NIC, which has been deployed in Netflix's new data centre. This solution now provides the highest-quality viewing experience for as many as 100K concurrent streams out of a single server.
Beck says another important parameter that IT managers must take into account when building media and entertainment data centres is the latency involved in streaming the data. "Running multiple streams over the heavy and CPU-hungry TCP/IP protocol results in poor CPU utilisation, as a significant percentage of CPU cycles is spent running the data communication protocol rather than the workload itself, which reduces the effective bandwidth that the real workload can use," he says.
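The CPU cost Beck describes can be roughed out with the long-standing rule of thumb that TCP/IP processing consumes about 1 Hz of CPU clock per 1 bit/s of throughput. The sketch below uses that folklore estimate and an assumed 3 GHz core; none of the figures are measurements from the article:

```python
# Rough TCP/IP CPU cost using the classic "1 GHz per 1 Gb/s" rule of thumb.
# All figures here are illustrative assumptions, not measured values.
CPU_HZ_PER_BPS = 1.0    # ~1 Hz of CPU clock per bit/s of TCP throughput
CORE_CLOCK_HZ = 3.0e9   # an assumed 3 GHz core
STREAM_BPS = 25e9       # one 25GbE link saturated with TCP traffic

cores_for_tcp = STREAM_BPS * CPU_HZ_PER_BPS / CORE_CLOCK_HZ
print(f"~{cores_for_tcp:.1f} cores consumed by protocol processing alone")
```

By this rough estimate, a single saturated 25GbE link could tie up around eight 3 GHz cores in protocol processing; those are the cycles that RDMA-based transfers aim to hand back to the workload.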
"This is why IT managers should consider deploying RoCE, which is remote direct memory access (RDMA) over converged Ethernet."
Beck highlights that RDMA makes data transfers more efficient and enables fast data movement between servers and storage without involving the server's CPU. Throughput is increased, latency reduced, and CPU power freed up for video editing, compositing, and rendering work.
RDMA technology is already widely used for efficient data transfer in both render farms and large cloud deployments such as Microsoft Azure, and can accelerate video editing, encoding/transcoding, and playback.
According to ATTO Technology, RoCE utilises advances in Ethernet to enable more efficient implementations of RDMA over Ethernet, allowing widespread deployment of RDMA technologies in mainstream data centre applications. RoCE-based network management is the same as for any Ethernet network, eliminating the need for IT managers to learn new technologies. Using RoCE can double efficiency, since it supports twice the number of streams compared with running over standard Ethernet.
Designing data centres that can serve the needs of the media and entertainment industry has traditionally been a complicated task, one that has often led to slow streams and storage performance bottlenecks, and in many cases has required very expensive systems that delivered lower-than-expected efficiency gains. "Using high-performance networking that supports higher bandwidth and low latency guarantees hassle-free operation and enables extreme scalability and higher ROI for any industry-standard resolution and any content imaginable," concludes Beck.