Key takeaways from SRG’s Open RU/DU testing

Testing the various components of Open Radio Access Networks—the open control unit, distributed unit and radio unit, or O-CU, O-DU and O-RU—has been the focus of multiple plugfests and other extensive testing as operators, vendors and test companies work out the intricacies of interoperability, specification conformance and various features in multi-vendor O-RAN environments.

Signals Research Group recently took Open RUs and DUs to the lab and tested their performance under different simulated radio conditions, using test tools from Spirent Communications.

“The scope of this study was particularly unique in the context of Open RAN testing, which has historically focused on compliance or in some cases energy efficiency,” SRG said in an accompanying report based on the testing. “It is all fine and good to have multi-vendor tests to demonstrate different radio components can ‘talk’ with each other and even establish a voice/data call. However, that capability is table stakes in today’s environment. Mobile operators want networks that not only work but which work well by delivering good user data speeds with realistic/challenging conditions and which can scale by supporting a large amount of data traffic in the most efficient means possible.”

“To get to the next phase [of Open RAN], we need to focus on a new set of metrics, to really prove that the technology is ready for scale, and wide-scale adoption,” said Spirent’s Anil Kollipara, VP of product management for test and automation, during a session at Open RAN Global Forum discussing the DU test results with SRG President Michael Thelander.

What did SRG test?

The company tested two Open RAN distributed units (O-DUs) and accompanying control units (CUs) at a Spirent facility in Texas, and three Open RAN RU reference platforms at Spirent’s facilities in New Jersey.

What type of tests were conducted?

The tests included emulation of multiple UEs interacting with the DUs and RUs, with uplink and downlink testing under ideal/static radio conditions, as well as uplink receive sensitivity tests and a "multitude" of 3GPP-specified fading channel scenarios in the uplink and downlink, including simulation of UE movement at speeds ranging from walking pace to highway driving.
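In fading scenarios of this kind, UE speed is typically reflected as a maximum Doppler shift applied to the emulated channel. As a rough illustration of that mapping (the article does not name the specific 3GPP profiles, carrier frequency or speeds used, so the values below are assumptions), here is a short Python sketch:

```python
# Illustrative only: the article does not name the 3GPP fading profiles or
# carrier frequency used in the testing, so these values are assumptions.
# Maximum Doppler shift f_d = v * f_c / c links UE speed to how fast the channel fades.

C = 3.0e8  # speed of light, m/s

def max_doppler_hz(speed_kmh: float, carrier_hz: float) -> float:
    """Maximum Doppler shift for a UE moving at speed_kmh on carrier carrier_hz."""
    speed_ms = speed_kmh / 3.6
    return speed_ms * carrier_hz / C

carrier = 3.5e9  # hypothetical mid-band NR carrier (3.5 GHz)
for label, speed in [("walking", 3), ("urban driving", 30), ("highway", 120)]:
    print(f"{label:>14}: {speed:4d} km/h -> ~{max_doppler_hz(speed, carrier):.0f} Hz Doppler")
```

At a 3.5 GHz carrier, walking pace corresponds to a Doppler shift of only a few hertz, while highway speeds push it to a few hundred hertz, which is what makes the higher-speed fading scenarios so much more demanding on the receiver.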

“We want to really understand how these different vendor implementations perform under these real-world conditions,” said Kollipara.

What were the results?

Under favorable RF conditions, SRG said, “there weren’t any meaningful differences in performance.” That changed, however, with more challenging radio conditions.

SRG’s results indicate some stark performance differences between vendor implementations of Open RAN RUs and DUs (and companion CUs). But the differences were not a simple matter of one vendor outperforming another across the board; rather, each vendor performed better under specific network conditions.

For example: While the specific vendors weren’t named, SRG found a 27-28% difference in “goodput” (measured throughput adjusted for the associated bit error rate) between the two DU platforms, with each one performing better under different conditions. In near-cell conditions with little path loss, Platform A performed 28% better than Platform B; at the cell edge, Platform B performed 27% better than Platform A, including better maintaining coverage and connections with the UEs despite the greater path loss, according to Thelander. That suggests the first platform is more optimized for enhanced mobile broadband use, Thelander said, while the second is arguably more optimized for an IoT-type deployment. Those would be important considerations for any operator weighing its Open RAN options.
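The “goodput” framing matters: a platform with high raw throughput but frequent block errors effectively delivers less usable data. The sketch below shows one way such a comparison could be computed; SRG’s exact formula and the underlying throughput and error-rate figures were not published, so the numbers are purely hypothetical.

```python
# A minimal sketch of the kind of "goodput" comparison described above.
# Assumption: goodput is approximated as measured throughput scaled by the
# fraction of correctly received blocks; SRG's exact formula and the figures
# below are not published, so the numbers are hypothetical.

def goodput_mbps(throughput_mbps: float, block_error_rate: float) -> float:
    """Throughput discounted by the share of blocks received in error."""
    return throughput_mbps * (1.0 - block_error_rate)

def pct_diff(a: float, b: float) -> float:
    """Percentage by which a exceeds b."""
    return 100.0 * (a - b) / b

# Hypothetical near-cell and cell-edge results for two DU platforms
near_a, near_b = goodput_mbps(640, 0.01), goodput_mbps(500, 0.01)
edge_a, edge_b = goodput_mbps(100, 0.10), goodput_mbps(127, 0.10)

print(f"Near cell: Platform A leads by {pct_diff(near_a, near_b):.0f}%")
print(f"Cell edge: Platform B leads by {pct_diff(edge_b, edge_a):.0f}%")
```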

In the RU testing, meanwhile, SRG found that the differences in receive sensitivity among the three RUs ranged from high single digits to low double digits of dB, while differences in “goodput” ranged from the mid-to-high double digits, or even triple digits, across the various fading scenarios. That has direct implications for deployment. “This difference in sensitivity could translate to double the cell radius or four times the cell size,” SRG pointed out, adding that the outcome also “indicates there are RUs on the market today that are Open RAN compliant but by no means worthy of deploying in a commercial network, even if they are given away.”
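The cell-radius claim follows from standard link-budget arithmetic: under a log-distance path loss model, extra receive sensitivity translates into extra allowable path loss, and therefore extra range. A back-of-the-envelope sketch, assuming a free-space-like path loss exponent and an illustrative 6 dB sensitivity delta (neither value is reported by SRG):

```python
# Back-of-the-envelope check of the "double the radius / four times the area"
# claim, assuming a log-distance path loss model PL = PL0 + 10*n*log10(d/d0).
# The path loss exponent n and the 6 dB delta are illustrative assumptions,
# not values reported by SRG.

def radius_ratio(sensitivity_gain_db: float, path_loss_exponent: float) -> float:
    """How much farther a UE can be, given extra link budget in dB."""
    return 10 ** (sensitivity_gain_db / (10 * path_loss_exponent))

gain_db, n = 6.0, 2.0  # 6 dB better receive sensitivity, free-space-like exponent
r = radius_ratio(gain_db, n)
print(f"Radius grows by ~{r:.1f}x, covered area by ~{r**2:.1f}x")
# -> Radius grows by ~2.0x, covered area by ~4.0x
```

In denser propagation environments with higher path loss exponents, the same dB advantage buys less additional range, but the qualitative point stands: receive sensitivity differences compound directly into how many sites an operator needs.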

What are the implications?

SRG noted in its report that sizable performance differences are to be expected across the Open RAN ecosystem, which was meant to put new network equipment and software vendors in play. So it shouldn’t be a surprise that testing can reveal differences among such vendors, or between new players and established NEMs that have also embraced Open RAN.

As SRG pointed out, the performance and coverage metrics that an Open RAN platform can achieve determine the cell size an operator can reliably plan for, and they heavily influence the customer experience. If available Open RAN platforms can deliver good performance only under optimal conditions or in the lab, but falter in real-world performance testing, that’s an existential problem for Open RAN.

“At the end of the day, what really matters is, can you deploy a solution that delivers on its promises?” Thelander said in the Open RAN Global Forum session. What SRG’s testing illuminated, then, were the “stark differences” in what Open RAN RUs and DUs deliver under the same network scenarios.

More information is available from Signals Research Group here, and you can watch the Open RAN Global Forum session on-demand here.
