(Image: close-up of a computer processor. Source: Alamy)

HPE Boasts Algorithm To Cut Open RAN Energy Use

Patented technology developed by HPE can power down central processing units, but only if they are not supporting the baseband software.

This article originally appeared on Light Reading.

Two rival science labs come up with different answers to the same problem. On paper, each claims to have the best solution, but hardly anyone has even fired up the Bunsen burners, let alone begun to experiment. This fairly describes the situation in the nascent open radio access network (RAN) market, where bickering has broken out between two camps over what to do about the baseband.

Also known as Layer 1, this is the most computationally demanding bit of the RAN software stack. Running it on general-purpose processors would check all sorts of boxes to do with virtualization and the cloud. But it wouldn't be very energy efficient. To get around that problem while boosting performance, RAN vendors want to reintroduce some custom chips. Known as accelerators, they would supposedly handle the most cumbersome RAN functions and free up the central processing unit (CPU) for other tasks. But there is no consensus on how acceleration should be done.

Two options have emerged, and each has powerful sponsors. With lookaside, the accelerator supports only some of the baseband software and the CPU continues to do much of the work. Its biggest champions are Intel and Ericsson, both of which have been highly critical of inline, the alternative. This would take the CPU entirely out of the Layer 1 mix, moving all the baseband functions onto the accelerator. Nokia and a contingent of chipmakers producing custom RAN silicon are inline's main backers.

Each camp insists its technology is more power efficient, and yet the current immaturity of the open RAN market means there is a dearth of evidence to support either's case. "Everyone is showing their solution on paper," said Mark Atkinson, the head of RAN for Nokia. "The real comparisons are going to happen in the second half of 2024 when the leaders have pulled their solutions together and put them into networks. Then we'll see what really performs well and we are really confident in the position we've taken."

Layer Cake

His confidence may now receive a boost from HPE, one of the world's biggest manufacturers of the servers that would host baseband software in open RAN deployments. Thanks to an internally developed and patented algorithm, HPE says it has been able to cut power consumption by switching off CPU cores, the smaller processors that make up the chip, and even by putting the CPU into a temporarily dormant state when inline accelerators are in use. The same energy savings were not available through lookaside, said Geetha Ram, HPE's head of telco compute.

"If the Layer 1 is in the CPU, the CPU has to be active all the time," she told Light Reading, explaining that Layer 1 itself must permanently be awake. "That Layer 1 software has to be actively listening to the fronthaul traffic no matter what, because a 911 call can come in and that is the first point of entry."

Essentially, the tight integration of the lookaside accelerator with the CPU meant HPE could never turn that CPU off during lab trials. The difficulty may have been compounded by Intel's move to put the lookaside accelerator on the same die as the CPU. Inline accelerators, by contrast, tend to be provided on separate PCIe cards that can be slotted into any server offering support for the PCIe standard.

"In the lookaside architecture we couldn't tell what part of the CPU is actually processing the Layer 1 and therefore we couldn't put the whole CPU into a dormant state because that would kill the whole thing," said Ram. The best HPE could do was reserve some CPU cores for Layer 1 and power the remainder down.

"However, the fundamental issue still remains that some number of CPU cores have to be active and that means it consumes power even when there is no traffic just to make sure that a call doesn't get lost," said Ram. The same considerations do not apply to Layers 2 and 3 of the RAN software stack because they gobble far less power and do not have to be permanently engaged.

What difference HPE's algorithm could make to the lookaside-versus-inline debate is not clear at this stage. Although HPE is not currently able to share percentage figures about the savings a telco could realize, Ram said energy consumption might drop significantly if the algorithm were combined with a very low-power card. One developed by Qualcomm, she said, runs at "only 40 watts or so."

Secret Sauce

Despite all this, she harbors doubts about inline, and reckons lookaside could be more suitable in some conditions. One of her complaints is about the high prices she has seen attached to PCIe cards. HPE's own breakdown and examination of components found no reason why costs should be so high, she told Light Reading during a previous interview.

Intel and Ericsson continue to argue that lookaside is superior on the energy front, as well as in other respects. In a white paper co-authored with Verizon last year, Ericsson said the need with inline acceleration for a separate PCIe card "creates considerably more power consumption than a standard network interface card." Lookaside, it added, brings "an opportunity for application design to use a larger pool of available CPU cores efficiently."

Energy is not the only factor, however. Joe Madden, the founder and president of an analyst firm called Mobile Experts, believes Ericsson's software may simply have provided a better fit with lookaside architecture. "Ericsson has a lot of algorithms, proprietary secret sauce that they have developed over 40 years, tweaks to the scheduler and things like that," he previously told Light Reading. "I believe lookaside allows them to do some things with their algorithms better than with an inline approach."

Nokia, meanwhile, says its inline cards use the same Layer 1 software and silicon – based on the blueprints of Arm, a UK-based company – as its traditional RAN products, arguing that lookaside would at least require different software. "This leads to parallel development and challenges in reaching feature parity with different product releases, ultimately leading to cumbersome rollout constraints with inconsistent end-user experience and a higher total cost of ownership," it said in its own white paper.

Madden remains skeptical of many claims, pointing out that vendors tend to view their latest breakthroughs alongside things their rivals did in the past. "Everybody is comparing what they see in the lab to what somebody else announced a year ago," he said. But his research so far does suggest one of the two camps has an edge. "Based on the data I have, I believe that inline is going to be more efficient in the end."
