Oppo claimed to have become the first company to implement a so-called Mixture of Experts (MoE) AI architecture on device, a move it stated improves processing efficiency and lays the foundations for deeper integration between the technology and mobile hardware.
The Chinese vendor stated device-based MoE represents a “significant breakthrough” as it activates specialised sub-models to handle specific tasks, “thereby significantly improving processing efficiency and cutting down computing and data transfer consumption”.
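Oppo did not disclose technical details, but the general idea it describes is that of a sparsely-gated MoE layer, in which a lightweight router activates only a few specialised expert sub-networks for each input rather than running the full model. The minimal PyTorch sketch below illustrates that generic pattern; the dimensions, expert count and top-k value are illustrative assumptions and the code is not Oppo's implementation.

```python
# A minimal sketch of a sparsely-gated Mixture of Experts layer (generic
# illustration of the concept, not Oppo's on-device implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoE(nn.Module):
    def __init__(self, dim=256, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Each expert is a small feed-forward sub-model.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        # The router scores every expert for each input token.
        self.router = nn.Linear(dim, num_experts)

    def forward(self, x):                      # x: (tokens, dim)
        scores = self.router(x)                # (tokens, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalise over the chosen experts
        out = torch.zeros_like(x)
        # Only the selected experts run, so compute scales with top_k
        # rather than with the total number of experts.
        for k in range(self.top_k):
            for e in range(len(self.experts)):
                mask = indices[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * self.experts[e](x[mask])
        return out


tokens = torch.randn(16, 256)
print(SparseMoE()(tokens).shape)               # torch.Size([16, 256])
```

Because only two of the eight experts process any given token in this sketch, the per-token compute is a fraction of that of an equivalently sized dense model, which is the efficiency property Oppo is pointing to.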
It cited laboratory tests indicating the on-device MoE architecture processes AI tasks around 40 per cent faster, while reducing resource demands and improving energy efficiency.
This results in faster AI responses, longer battery life and enhanced privacy, as more tasks are handled “locally” on the device, explained Oppo.
The vendor collaborated with leading chipset providers to achieve the feat, addressing the challenge that large AI models demand substantial computing power, which can hamper performance, particularly on devices with limited hardware resources.
“By lowering AI’s computational costs, MoE allows more devices, ranging from flagship to affordable devices, to perform complex AI tasks, accelerating AI’s adoption across the industry,” added Oppo.
Oppo stated it remains committed to advancing AI technology and making it more accessible to users, pointing to more than 5,860 patent applications in the field.
Indeed, the battle is heating up among handset manufacturers to bring viable on-device AI systems to the market.
Notably, Oppo rival Honor made a big push in the area earlier this year, while chipmaker Qualcomm has also outlined why on-device AI is preferable to a cloud-based approach.