Hewlett Packard Enterprise (HPE) has beefed up its Aruba Networking Central management platform with internally developed generative AI (GenAI) large language models (LLMs), aimed at improving network performance, accuracy and security for customers.

Instead of using OpenAI’s ChatGPT or other public LLMs, HPE Aruba is creating and training LLMs from its own internally sourced data. The self-contained LLMs are designed with guardrails “to improve user experience and operational efficiency, with a focus on search response times, accuracy and data privacy”.

The company said HPE Aruba Networking has collected telemetry from nearly 4 million managed network devices and more than 1 billion unique customer endpoints, data that also powers its machine learning (ML) models for predictive analytics and recommendations.

The new GenAI LLM capabilities will be blended into Aruba Networking Central's AI Search feature "to provide deeper insights, better analytics and more proactive capabilities".

The vendor explained that HPE Aruba Networking Central is a cloud-based network management platform, introduced in 2014, used to configure, manage, monitor and troubleshoot wired and wireless LAN, WAN and IoT networks via a SaaS model.

To protect the data, the LLMs are "sandboxed" within the Networking Central platform running on the HPE GreenLake cloud. HPE stated that it ensures customer data security with proprietary, purpose-built LLMs that strip out personal and customer identifiable information (PII/CII).
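HPE has not published the details of how its PII/CII scrubbing works. As a rough, hypothetical sketch only, a pipeline of this kind might redact obvious identifiers such as email addresses, IP addresses and MAC addresses from telemetry records before they ever reach the model:

```python
import re

# Hypothetical illustration only; not HPE's actual implementation.
# Shows the general idea of scrubbing PII/CII from telemetry text
# before it is used to train or query a self-contained LLM.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "MAC": re.compile(r"\b(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}\b"),
}

def redact(record: str) -> str:
    """Replace recognizable identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        record = pattern.sub(f"<{label}>", record)
    return record

if __name__ == "__main__":
    sample = "AP-7 (aa:bb:cc:dd:ee:ff) at 10.1.2.3 reported by admin@example.com"
    print(redact(sample))
    # -> "AP-7 (<MAC>) at <IPV4> reported by <EMAIL>"
```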

The new GenAI LLM-based search engine will be available later this year.

Separately, HPE announced that Verizon Business is adding HPE Aruba Networking Central to its managed services portfolio.