Open RAN fronthaul testing considerations—automation is key

End-to-end fronthaul testing, along with isolation testing, provides both the big-picture and the granular data needed to drive Open RAN adoption

One of the primary goals of Open RAN is architectural flexibility: breaking up the centralized unit (CU), distributed unit (DU) and radio unit (RU), and opening up the interfaces that connect the different parts of the radio system. It’s not surprising, then, that fronthaul interfaces have been a focal point from the beginning. And in the 5G era, when massive MIMO radios became key to optimizing capacity and coverage in mid-band spectrum, fronthaul became even more vital, given that those radios integrate the radio, the antenna and some of the intelligence usually associated with the baseband into a single unit.

The work has come along, driven by the O-RAN Alliance, albeit with some compromises along the way that arguably run contrary to the idea of Open RAN as a vector for decreasing RAN capex. That said, RAN vendors have generally aligned around the 7.2x fronthaul split specifications: Category A (Cat A) for simpler radios with up to eight transmitters and receivers, and Category B (Cat B), which is geared toward massive MIMO radios. Within Cat B there are two sub-options: operation mode A puts uplink functions like the equalizer in the RU, whereas operation mode B keeps the equalizer in the DU.
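
To make that taxonomy concrete, here is a minimal sketch in Python of how a test plan might model those radio categories. The class and field names are our own illustration, not identifiers from the O-RAN specifications.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

# Illustrative model of the O-RAN 7.2x radio categories described above.
# Names are hypothetical; they do not come from the O-RAN specifications.

class RuCategory(Enum):
    CAT_A = "Cat A"  # simpler radios; precoding stays in the DU
    CAT_B = "Cat B"  # massive MIMO radios; precoding in the RU

class UplinkMode(Enum):
    MODE_A = "A"  # uplink equalizer in the RU
    MODE_B = "B"  # uplink equalizer kept in the DU

@dataclass
class RadioUnitProfile:
    tx_chains: int
    rx_chains: int
    category: RuCategory
    uplink_mode: Optional[UplinkMode] = None  # only meaningful for Cat B

    def validate(self) -> None:
        if self.category is RuCategory.CAT_A and max(self.tx_chains, self.rx_chains) > 8:
            raise ValueError("Cat A targets radios of 8T8R or fewer")
        if self.category is RuCategory.CAT_B and self.uplink_mode is None:
            raise ValueError("Cat B requires picking operation mode A or B")

# A 64T64R massive MIMO radio with the equalizer kept in the DU (mode B):
mmimo_ru = RadioUnitProfile(64, 64, RuCategory.CAT_B, UplinkMode.MODE_B)
mmimo_ru.validate()
```

A test harness can then branch its uplink checks on the operation mode, since mode A moves the equalizer, and with it part of the uplink measurement chain, into the RU under test.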

Okay. So the specifications are there, there are plenty of real-world examples of these radios working in commercial networks, and the long-term outlook for Open RAN is positive. But fronthaul testing is still tricky work, in part because the optionality within the specifications maps directly to complexity in testing multi-vendor Open RAN. During the recent Open RAN Global Forum, experts from test specialists Litepoint and VIAVI talked through key fronthaul testing considerations, particularly as they relate to massive MIMO.

Looking at Open RAN from a macro perspective, Litepoint Director of Marketing Adam Smith said big issues around cost, integration complexity and power consumption are continuously improving. But, he added, “We’re basically deploying a new technology, new capability, new architecture, in a very mature market at this point…If we look at where we are on the capabilities of products that support Open RAN architecture, we’re still a bit on the steep part of the curve.”

With regard to the longer-term push from single-vendor Open RAN to true multi-vendor Open RAN, he said our present state (single-vendor) isn’t surprising. “It makes a lot of sense,” from the operator’s perspective. But, “The risk in that, of course, is that we don’t head towards a single-vendor O-RAN; you might just call that RAN 2.0…I think for the ecosystem, we need to get to a multi-vendor ecosystem to get to what the goal was for O-RAN…I do think we’ll get there. We’re just going to be on different timelines.” 

The Litepoint point of view, Smith said, is about focusing on real-world RU performance in a way that balances end-to-end system testing and isolated component-level testing. He acknowledged that the end-to-end view helps you “see the network as the user sees it. But what do you do when you see problems?…One thing we’re focused on is actually performing isolated testing.”

This involves a test process and solution that emulates the DU side of the fronthaul link, including vector signal generators and analyzers, the additional elements that come with MIMO radios, and the software that glues the system together. “It’s a relatively complex setup,” Smith said, “but I think most importantly there’s a huge domain expertise problem in this picture.” He sees a disconnect between the maturity of the 3GPP specifications on the air interface side and the lack of uniformity on the fronthaul side. RUs, Smith said, are the “perfect case” of needing combined RF and Ethernet domain expertise, a gap Litepoint aims to close through automation and simplification.
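
For a flavor of what sits at the bottom of that emulated DU, the sketch below packs the four-byte eCPRI common header that O-RAN 7.2x fronthaul messages ride on (message type 0 carries U-plane IQ data; type 2 carries real-time C-plane control). It is a minimal illustration of the Ethernet side of the problem, not Litepoint’s implementation, and the payload here is an empty placeholder.

```python
import struct

# Minimal sketch: the 4-byte eCPRI common header used by O-RAN 7.2x fronthaul.
# Byte 0: protocol revision (4 bits), reserved (3 bits), concatenation bit.
# Byte 1: message type. Bytes 2-3: payload size in octets (big-endian).

ECPRI_REVISION = 1
MSG_TYPE_IQ_DATA = 0     # U-plane IQ samples
MSG_TYPE_RT_CONTROL = 2  # real-time C-plane control

def ecpri_header(msg_type: int, payload: bytes, concatenated: bool = False) -> bytes:
    first_byte = (ECPRI_REVISION << 4) | (1 if concatenated else 0)
    return struct.pack("!BBH", first_byte, msg_type, len(payload))

# Placeholder payload: a real DU emulator would build an O-RAN section header
# plus compressed IQ samples addressed to the RU under test.
fake_iq_payload = bytes(64)
frame = ecpri_header(MSG_TYPE_IQ_DATA, fake_iq_payload) + fake_iq_payload
assert frame[:4] == b"\x10\x00\x00\x40"
```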

From a solution perspective, Litepoint has integrated fronthaul uplink and downlink communication and synchronization, MIMO signal generation and analysis, and C/U/S/M-plane fronthaul conformance, all wrapped up with automation. By taming complexity through simplification, Smith said, the resulting test setup is reliable, repeatable and gives granular control of multiple variables.
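
What “granular control of multiple variables” can look like in practice is a parameterized test matrix. The sketch below sweeps bandwidth, modulation and layer count; run_ru_measurement is a hypothetical stand-in for whatever drives the signal generator and analyzer, while the EVM limits are the familiar 3GPP TS 38.104 figures.

```python
import itertools
import pytest

BANDWIDTHS_MHZ = [20, 100]
MODULATIONS = ["64QAM", "256QAM"]
LAYERS = [2, 4]

def run_ru_measurement(bw_mhz: int, modulation: str, layers: int) -> float:
    """Hypothetical stand-in for the instrument driver; returns EVM in percent."""
    return 2.5  # placeholder value so the sketch runs end to end

@pytest.mark.parametrize(
    "bw_mhz,modulation,layers",
    list(itertools.product(BANDWIDTHS_MHZ, MODULATIONS, LAYERS)),
)
def test_downlink_evm(bw_mhz, modulation, layers):
    evm = run_ru_measurement(bw_mhz, modulation, layers)
    # 3GPP TS 38.104 allows roughly 8% EVM for 64QAM and 3.5% for 256QAM.
    limit = 3.5 if modulation == "256QAM" else 8.0
    assert evm <= limit, f"{bw_mhz} MHz / {modulation} / {layers} layers failed"
```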

VIAVI’s Ammar Khalid called fronthaul an absolutely “critical element…of the O-RAN infrastructure.” However, “The development work has taken a lot of shortcuts or workarounds” that put implementations outside of compliance with O-RAN specifications, with vendors shaping commercial reality by getting well ahead of the specification work. In sum, he said, this creates a good deal of inconsistency that has to be accounted for in the test process.

“Because of that development variability and compliance gaps,” Khalid said, a third challenge emerges: interoperability, which surfaces when O-RU and O-DU solutions from different vendors are brought together under test. Those variations and gaps, he said, lead to “a significantly higher risk of failures.”

In terms of a path forward, Khalid laid out the following four layers of testing; a rough sketch of how they might chain together follows the list:

  • Functional testing and extensive feature validation and test scenarios beyond conformance standards.
  • Conformance and certification with rigorous testing for O-RAN WG4 and 3GPP compliance.
  • Performance testing, including dynamic load testing with advanced SU-MIMO, MU-MIMO, and 256 QAM configurations.
  • End-to-end (E2E) testing that validates seamless multi-vendor fronthaul integration.
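
As a rough illustration only: the stage names below mirror Khalid’s list, each stage gates the next, and the check callables are placeholders for real test suites, not any vendor’s framework.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical pipeline mirroring the four layers above; each stage gates the next.

@dataclass
class Stage:
    name: str
    checks: List[Callable[[], bool]] = field(default_factory=list)

def run_pipeline(stages: List[Stage]) -> Dict[str, bool]:
    results: Dict[str, bool] = {}
    for stage in stages:
        passed = all(check() for check in stage.checks)
        results[stage.name] = passed
        if not passed:  # fail fast: no point load-testing a non-conformant RU
            break
    return results

pipeline = [
    Stage("functional"),   # feature validation beyond conformance
    Stage("conformance"),  # O-RAN WG4 and 3GPP compliance
    Stage("performance"),  # SU-MIMO/MU-MIMO load, 256 QAM
    Stage("end_to_end"),   # multi-vendor integration
]
print(run_pipeline(pipeline))  # all stages pass trivially with empty check lists
```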

To do all of that, Khalid said, it has to be a function of automation. “We need to ensure that there is less manual intervention and the testing is as credible as possible. The crux to this solution would be starting from a very streamlined CI/CD workflow.” Beyond that, he called for single-click automated test suites to minimize human error, and zero-touch automation with API-driven test scheduling, deployment, configuration, execution and reporting. Comprehensive reporting and software management provide consistent, automated verification and database management. And, from the end-to-end perspective, a single interface for managing the complete setup, with dashboards providing real-time insights, adds efficiency, as does decoupling scripts to ensure test portability across different system test configurations.
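
A minimal sketch of the zero-touch idea: one API call schedules a run, and everything downstream (deployment, configuration, execution, reporting) hangs off the returned run ID. The endpoint and payload schema are hypothetical, not from any vendor’s product.

```python
import json
import urllib.request

TEST_API = "https://lab.example.com/api/v1/runs"  # hypothetical endpoint

def schedule_run(suite: str, testbed: str) -> str:
    """Kick off a fully automated run; no human touches configs or instruments."""
    payload = json.dumps({"suite": suite, "testbed": testbed}).encode()
    req = urllib.request.Request(
        TEST_API, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["run_id"]  # poll this ID for status and reports

if __name__ == "__main__":
    run_id = schedule_run(suite="wg4_conformance", testbed="mmimo_bench_1")
    print(f"scheduled run {run_id}")
```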

For more from the Open RAN Global Forum, register to view the event on demand.
