Dell Technologies has sharpened its focus on enterprise AI deployments by unveiling powerful new Nvidia-powered servers, storage devices and AI laptops last week at its annual customer event in Las Vegas.
The range of products shown off by the long-time PC vendor also reflects its recent transformation into a comprehensive enterprise technology provider – one that is clearly targeting the infrastructure needed for AI.
Tech analyst David Vellante, founder of theCube and a recognised voice in the technology sector, said the developments highlight Dell’s strong market position.
“Every aspect of the tech stack is changing: the compute stack is being parallelised, storage is being disaggregated, and networking is moving to low-latency benchmarks,” he pointed out on the sidelines of Dell Technologies World 2025.
This shift towards extreme parallel processing defines the AI era. Furthermore, increasing enterprise concerns around data sovereignty and privacy are driving a significant resurgence in on-premises cloud initiatives.
This is a trend Dell appears well-equipped to capitalise on, given its foundational strength in hardware infrastructure, now augmented by Nvidia graphics processing units (GPUs) and a sophisticated software stack.

Dell’s ecosystem of technology partners, including companies like enterprise search firm Glean, further enhances its ability to deliver a wide array of enterprise AI applications, said Vellante.
To seize this opportunity, Dell is rolling out its AI Factory approach. The strategy offers compelling cost efficiencies, with Dell claiming that large language model (LLM) inferencing can be between 60 and 62 per cent more cost-effective than public cloud alternatives. It also promises enhanced security and ease of deployment.
The Dell AI Factory with Nvidia, initially announced at Nvidia’s GTC event in March, is an integrated solution aimed at streamlining AI adoption.
It combines an enterprise’s data and specific use cases with Nvidia processors, a curated software stack (from both Nvidia and third-party providers), and comprehensive infrastructure from Dell and Nvidia.
Notably, Dell AI Factory server hardware will also support AMD and Intel variants, ensuring broader compatibility for diverse enterprise environments.
Enterprises seeking to accelerate their AI projects can choose between tailored integrated capabilities or pre-validated services for specific applications like digital assistants.
Michael Dell, the company’s chief executive officer, is upbeat about the AI business, noting that customer AI projects have made a significant shift from proof of concept into production over the past 12 months. He revealed that the initial rollout of AI Factory has seen over 3,000 projects deployed across various sectors.
It is in Asia-Pacific that Dell is seeing rapid growth in GenAI spending. Thirty-eight per cent of AI spending in this region is focussed on GenAI, compared to 33 per cent worldwide and only 29 per cent in North America.
Its research, coupled with insights from IDC, also showed that AI will contribute over US$5 trillion to Asia’s economy by 2030, accounting for 3.5 per cent of GDP, said Peter Marrs, Dell’s president for the Asia-Pacific, Japan and Greater China region, in a briefing for the region’s media.

Beyond the traditional confines of IT firms and hyperscalers, the region’s AI market has broadened significantly to include sectors such as banking and financial services, manufacturing, healthcare, retail, and energy.
These industries are actively leveraging AI for diverse, high-impact applications. In finance, think of enhanced fraud detection and anti-money laundering measures. For manufacturers, AI is helping to optimise supply chains, improve quality control, and refine demand forecasting.
Energy providers use AI to enable predictive maintenance of power grids, while healthcare organisations turn to AI to advance diagnostics, predictive health analytics, and real-time patient monitoring.
In the region, Dell is also supporting educational institutions, enhancing their internal capabilities for research and entrepreneurship.
A prime example is a collaboration in South Korea, where the education platform Elice has successfully implemented affordable GPU services powered by the Dell AI Factory with Nvidia.
Jaewon Kim, CEO of Elice, said this initiative directly contributes to upskilling the workforce in AI technology by providing scalable, affordable, and secure data centres specifically for machine learning services.
Elice also hosts digital textbooks for the Korean Ministry of Education, reaching five million students.
New product offerings to boost AI deployments
At Dell Technologies World, the company unveiled several key products, including:
- PowerEdge XE9680L: This new server model supports eight Nvidia Blackwell Ultra chips, with direct liquid cooling to address the intensive thermal management requirements of high-density GPU deployments. The servers will significantly accelerate AI model training, offering up to four times faster performance than previous generations, and are available in both air-cooled and liquid-cooled configurations. At rack scale, configurations can support up to 256 Nvidia Blackwell Ultra chips.
- PowerScale F910: Optimised for AI workloads, this all-flash file storage system offers improved performance and density.
- Project Lightning: A forthcoming parallel file system software framework intended for integration into PowerScale. This development aims to substantially enhance file storage performance, particularly for high-performance computing and AI training.
- PowerStore Prime: This new PowerStore model delivers higher performance and is aimed at general-purpose storage needs within an AI-driven enterprise.
- Dell Pro Max Plus laptop: Equipped with a Qualcomm AI-100 NPU featuring 32 AI cores, it enables AI engineers and data scientists to run large AI models directly on the device, reducing reliance on cloud services for specific inference tasks. This capability is designed to support edge AI inferencing and facilitates local processing of models with up to 109 billion parameters, such as Meta’s Llama 4. The platform’s overall design supports data ingestion and processing from the edge to the data centre or cloud.