LEAP26

Don’t be lazy: Stop treating NVIDIA as a chip company.

Mo Salah


The “NVIDIA is a chip company” take is the laziest analysis of this tech cycle.

Yes, NVIDIA designs chips. In the same way Apple makes phones. Technically correct, strategically useless.

Jensen Huang has been saying this out loud. “We stopped thinking of ourselves as a chip company long ago,” he told shareholders on June 25, 2025.

NVIDIA is a full-stack computing platform that turns data centers into AI factories. It sells the hardware, the networking fabric, the software stack, the deployment blueprint, and increasingly, the operational playbook that enables AI to work in production. That is why it keeps escaping whatever box people try to put it in.


The numbers explain the strategy.

Let’s anchor this in reality.

As of January 21, 2026, NVIDIA is roughly a $4.45T company. Its latest reported quarter delivered $57.0B in revenue, $31.9B in net income, and a 73.4% gross margin. NVIDIA guided the next quarter to about $65.0B in revenue.

Those are not “we sell components” numbers. Those are “we are the platform that the entire industry is standardizing on” numbers.

And that is the point: NVIDIA is moving up the stack, from selling parts to selling outcomes.


“Full stack” is not a buzzword here.

When people say “full stack,” they usually mean “hardware plus some libraries.” That is not what is happening.

NVIDIA’s full stack looks more like this:

  • Compute: GPUs and accelerators that define the frontier for training and inference at scale.
  • Systems: reference architectures and integrated machines that reduce deployment friction.
  • Networking: fabrics that make clusters behave like one computer, not a pile of servers.
  • Software: CUDA, libraries, compilers, runtimes, optimization, and enterprise-grade tooling.

If you are building serious AI, you are not buying a GPU. You are buying time to value. NVIDIA is selling the shortest path from “we have data” to “this runs reliably, fast, and at scale.”

That is why it wins even when competitors show up with a faster widget in a benchmark. Benchmarks do not run businesses. Systems do.


CUDA is still the most underestimated moat in tech

CUDA is not “a programming toolkit.” CUDA is the language layer that binds developers, frameworks, research, and production deployments to NVIDIA’s architecture.

A modern AI organization has years of CUDA-optimized infrastructure, kernels, profiling, training code, inference optimization, and deployment pipelines. Replatforming is not a shopping decision. It is an organizational trauma.

This is why “the next GPU competitor” narrative keeps failing. The hard part is not building silicon. The hard part is building a software ecosystem that developers actually live in, and making it the default across every major AI workflow.

CUDA is the reason NVIDIA is not competing on chips. It is competing on gravity.


The real product is the “AI factory”.

If you want the correct mental model, stop thinking “chip vendor” and start thinking “AI factory builder.”

An AI factory is a data center designed to produce intelligence. The input is data plus electricity. The output is trained models, fast inference, and deployed AI products that keep improving.

NVIDIA’s systems, like DGX and HGX, became the default infrastructure for serious AI work because they compress complexity. Enterprises do not want to assemble 15 different vendors into something that might work. They want something that works, now, and scales later.

This is also why NVIDIA keeps investing in reference designs, deployment stacks, and enterprise tooling. It is deliberately making AI deployment feel less like research and more like operations.

If you are an enterprise, your question is not “Which chip is faster?” Your question is “How quickly can I go from zero to production, and how painful is it to maintain?”

NVIDIA’s answer is: faster, and less painful.


Networking is the weapon nobody talks about.

Most people still talk about NVIDIA as if it were compute only, even as NVIDIA just invested $1B in Nokia, a move that, funny enough, barely registered in the media.

Modern AI is not a single-GPU problem. It is a cluster scaling problem. Once you are training or serving at serious scale, your networking fabric becomes your system performance.

NVIDIA’s networking push is not a side business. It is the second act of the platform.

If compute is the engine, networking is the drivetrain. Without it, your expensive cluster behaves like rush-hour traffic.

This matters because it makes NVIDIA harder to displace. Replacing a component is annoying. Replacing the fabric your entire AI factory runs on is existential.


Top clients and customers:

NVIDIA’s customers include basically every major cloud hyperscaler (Microsoft, Amazon, Google, Oracle), every leading AI lab (OpenAI, Anthropic, xAI), and much of the enterprise and infrastructure ecosystem (Dell, HP, Lenovo, Accenture, Red Hat), with automotive and mobility leaders (Mercedes, Geely, Lucid, Stellantis, Uber) steadily joining the list.

If we learned anything from the cloud cycle, it is that the hyperscalers plus the frontier labs will lead the way, setting de facto standards for the entire industry. Enterprises will then follow, and NVIDIA will be there to capture most of the benefits.


My 2 cents:

NVIDIA is poised to dominate the next phase of AI, even in a future where GPUs are less of a monopoly.

Why? Because the competition is shifting from “whose chip is faster” to “whose system ships outcomes.”

NVIDIA is not trying to be a supplier. It is trying to be a stakeholder in the AI economy. It wants to own the architecture of AI factories, the operating substrate of inference, the fabric that connects clusters, and the developer ecosystem that makes it all usable.

Once you build your AI operation on top of that, you are not just buying hardware. You are buying a way of building.


That is why calling NVIDIA a chip company is like calling Apple a phone company.

It misses what is actually being built.
