Hype Or History? Nvidia And The “Big Bang” of AI

These days, few topics are as hotly debated as the rise of artificial intelligence (AI) and its likely effects. Its tendrils reach into countless industries and ways of life, with tech bros and regular Joes alike proclaiming it the future and the downfall of mankind in equal measure. Did you know, though, that one company claims to be responsible for the “Big Bang” in AI? After all, it reckons that it created the technological advance upon which this entire industry is built. Is this claim accurate? And is AI truly the future, or just another tech-related flash in the pan?

Jensen Huang

In tech circles, Jensen Huang is a legend. The Taiwanese-born, leather jacket-wearing AI pioneer is worth a truly staggering $70 billion. He grew up in Oregon and studied electrical engineering at Oregon State University.

After graduation, he began plying his trade in Silicon Valley as a microchip designer, before forming his own company, Nvidia, in 1993. Inspired by invidia, the Latin word for “envy,” it turned out to be a very apt moniker: three decades later, Nvidia would be the envy of almost every other tech firm in the world.

The “Big Bang” of AI

In a 2020 Barron’s magazine podcast, Huang made the claim which sent ripples through the tech industry. While talking about his company’s revolutionary graphics processing unit (GPU) microchips, he stated his position.

He argued, “The combination of this algorithm, and this processor that we were making, sped up that algorithm from months to days, [and] made it possible for this approach to even be viable. We kind of created the Big Bang of artificial intelligence.”

Outside of tech circles, who has even heard of Nvidia?

To the layman, this mightn’t make much sense, and it sounds like an outlandish claim, to boot. After all, who has even heard of Nvidia anyway? In truth, though, the general public’s lack of knowledge about Nvidia is to be expected. 

You see, the company doesn’t make flashy tech items we use every day, like Apple with its iPhones or Microsoft with the Xbox gaming system. Instead, Nvidia’s GPU chips can be found inside countless computer products made today by a host of companies, powering their main functions.

It’s one of the four biggest corporations in the world

In reality, the omnipresence of Nvidia’s GPUs — and their vital importance to the alarmingly swift development of AI — have led to the company’s value skyrocketing in recent years. In 2019 Nvidia was worth around $100 billion; for context, the entire AI market worldwide was valued at $200 billion in 2023.

And as of March 2024 Nvidia has hit the stratosphere: it is now worth more than $2 trillion. It’s a status only three other companies can claim: the aforementioned Apple and Microsoft, as well as oil company Saudi Aramco. 

ChatGPT

What exactly boosted Nvidia’s worth so dramatically, though? Well, in May 2023 it was revealed that ChatGPT — the chatbot that had become the talk of the town on the internet, even in non-tech circles — had been trained on an Nvidia supercomputer.

Then industry insiders remembered Huang telling his investors several months earlier that Nvidia had supplied half of America’s top 100 companies with comparable supercomputers. Suddenly, the company’s stock shot through the roof, with its value jumping by a mind-boggling $200 billion in just one day.

“The only arms dealer in the AI war”

By the end of that fateful day — May 25, to be exact — Nvidia had become the sixth-biggest corporation in the world. To the disbelief of many observers, it was now worth more than ExxonMobil and Walmart combined, and it had essentially achieved that status overnight.

This delighted many but was also a cause for concern to others. For instance, because of its ubiquity in the artificial intelligence arena, one Wall Street business analyst claimed, “There’s a war going on out there in AI, and Nvidia is the only arms dealer.”

Questions, questions

Does Huang’s claim of starting the “Big Bang” in AI stand up to scrutiny, though? Can Nvidia stay at the top of the food chain in that arena, or will its competitors make up ground? And for a company whose rise was so meteoric, will it be able to sustain that success and keep evolving, or will its fall be just as swift?

On top of all this, what are the implications of AI going forward, and how does Huang feel about people who caution against its use? These are all questions we need to address in this article, and to do so we’ll need to look at the firm’s origins.

Nvidia started in gaming

Way back when Nvidia was founded, its initial focus was on the gaming industry. Its first graphics-card chip enabled the first convincing 3D images to be shown on computer screens. As the company made better, more technically advanced chips — which became known as GPUs — on-screen images could be rendered faster and faster.

This made the chips vital both for gaming and for editing videos. In 1999 — when the company went public — its future depended on more computer applications coming along that required 3D visuals.

A missed opportunity

As we now know, the videogame industry went from strength to strength in the subsequent three decades, with the incredible graphics of each new generation of games requiring more advanced GPUs. Along the way, though, Huang realized that Nvidia couldn’t rely on gaming alone, so it tried to diversify its portfolio.

In 2008 the company made new “Tegra” chips aimed at being used in “tiny PCs” — or, as we know them, cell phones. But this didn’t pan out: Apple uses its own proprietary chips, and Android phones use chips made by different companies.

Shoulda, woulda, CUDA 

Yet one diversification did work, and it turned out to be vital to Nvidia’s future in AI. In 2006 the company released a programming platform it dubbed CUDA, which enabled its GPUs to be applied to general computing tasks instead of just graphics.

Basically, Nvidia had discovered that its chips could carry out huge numbers of calculations in parallel far more efficiently than central processing unit (CPU) chips could. Calculation-heavy processes such as data mining and machine learning were now within the GPU’s remit.
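To give a flavor of what that general-purpose GPU programming looks like, here is a minimal, hypothetical CUDA sketch of our own (the kernel and variable names are illustrative, not Nvidia sample code). It launches roughly a million lightweight threads, each of which processes one element of an array; that kind of simple, massively repeated arithmetic is exactly what machine-learning workloads are built from.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Illustrative kernel: each GPU thread scales and offsets one array element.
// Many thousands of these threads run concurrently, which is why GPUs suit
// repetitive numeric work such as training neural networks.
__global__ void scaleAndShift(const float* in, float* out,
                              float scale, float shift, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {
        out[i] = in[i] * scale + shift;
    }
}

int main() {
    const int n = 1 << 20;                 // one million elements
    size_t bytes = n * sizeof(float);

    // Allocate and fill host (CPU) memory.
    float* hostIn  = (float*)malloc(bytes);
    float* hostOut = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) hostIn[i] = (float)i;

    // Allocate device (GPU) memory and copy the input over.
    float *devIn, *devOut;
    cudaMalloc(&devIn, bytes);
    cudaMalloc(&devOut, bytes);
    cudaMemcpy(devIn, hostIn, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover every element.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scaleAndShift<<<blocks, threads>>>(devIn, devOut, 2.0f, 1.0f, n);

    // Copy the result back and spot-check one value.
    cudaMemcpy(hostOut, devOut, bytes, cudaMemcpyDeviceToHost);
    printf("out[42] = %f\n", hostOut[42]);  // expect 42 * 2 + 1 = 85

    cudaFree(devIn); cudaFree(devOut);
    free(hostIn); free(hostOut);
    return 0;
}
```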

Neural networks

Fast-forward a few years, and a huge breakthrough in “neural networks” would convince Huang that AI was the wave of the future. In essence, a neural network is a computer process that mimics how a human brain works and, up to that point, there hadn’t been much success in getting one to operate effectively.
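As a loose illustration of the idea (our own sketch, not code from Hinton’s lab or Nvidia), an artificial neuron is little more than a weighted sum of its inputs pushed through a simple threshold, and a GPU can evaluate an entire layer of such neurons at once, one neuron per thread. All of the names and numbers below are made up for the example.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one artificial neuron: a weighted sum of the shared
// inputs plus a bias, passed through a ReLU threshold. A whole "layer" of
// neurons is evaluated in parallel, one neuron per GPU thread.
__global__ void denseLayer(const float* inputs, const float* weights,
                           const float* biases, float* outputs,
                           int numInputs, int numNeurons) {
    int neuron = blockIdx.x * blockDim.x + threadIdx.x;
    if (neuron >= numNeurons) return;

    float sum = biases[neuron];
    for (int j = 0; j < numInputs; ++j) {
        sum += weights[neuron * numInputs + j] * inputs[j];
    }
    outputs[neuron] = sum > 0.0f ? sum : 0.0f;  // ReLU activation
}

int main() {
    const int numInputs = 3, numNeurons = 2;
    float hostIn[numInputs] = { 1.0f, 2.0f, 3.0f };
    float hostW[numNeurons * numInputs] = { 0.5f, -0.25f, 0.1f,   // neuron 0
                                            -1.0f, 0.75f, 0.2f }; // neuron 1
    float hostB[numNeurons] = { 0.05f, -0.1f };
    float hostOut[numNeurons];

    // Copy the toy inputs, weights, and biases to the GPU.
    float *dIn, *dW, *dB, *dOut;
    cudaMalloc(&dIn, sizeof(hostIn));  cudaMalloc(&dW, sizeof(hostW));
    cudaMalloc(&dB, sizeof(hostB));    cudaMalloc(&dOut, sizeof(hostOut));
    cudaMemcpy(dIn, hostIn, sizeof(hostIn), cudaMemcpyHostToDevice);
    cudaMemcpy(dW, hostW, sizeof(hostW), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hostB, sizeof(hostB), cudaMemcpyHostToDevice);

    denseLayer<<<1, numNeurons>>>(dIn, dW, dB, dOut, numInputs, numNeurons);
    cudaMemcpy(hostOut, dOut, sizeof(hostOut), cudaMemcpyDeviceToHost);

    printf("neuron outputs: %f %f\n", hostOut[0], hostOut[1]);

    cudaFree(dIn); cudaFree(dW); cudaFree(dB); cudaFree(dOut);
    return 0;
}
```

Real networks simply stack millions of these neurons in many layers and adjust the weights during training, which is where the enormous demand for computation comes from.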

Bryan Catanzaro, Nvidia’s chief deep learning researcher, told The New Yorker, “I was discouraged by my advisers from working on neural nets because, at the time, they were considered to be outdated, and they didn’t work.”

Geoffrey Hinton and AlexNet

But a University of Toronto professor named Geoffrey Hinton persisted, and in 2009 he used CUDA and an Nvidia GPU to program a neural network to recognize the human voice. He then showed off his findings at a conference.

The professor subsequently emailed Nvidia to say, “Look, I just told a thousand machine-learning researchers they should go and buy Nvidia cards. Can you send me a free one?” But the company refused! It would take another few years of research, and a neural network dubbed “AlexNet” — which Hinton used to recognize images — before Huang would become interested.

Huang finally becomes interested in deep learning

By 2013 neural networks running on Nvidia GPUs could recognize images with 96 percent accuracy — outperforming human beings in tests. Huang told The New Yorker, “The fact that they can solve computer vision, which is completely unstructured, leads to the question: ‘What else can you teach it?’”

By this point, Huang was becoming more and more convinced by deep learning, so he turned to Catanzaro, his resident expert. He told Fast Company, “All of a sudden Jensen started caring a lot. It seemed too good to be true.”

Was it time to pivot the company to AI?

Huang was toying with the idea of pivoting Nvidia to AI, but a decade ago — before others even thought it was viable — the chances of the pivot being profitable seemed low. Catanzaro admitted, “I didn’t think that it was actually even possible to focus Nvidia on something like this because, at the time, it was a different company.”

Huang was ready to throw caution to the wind, though. He now believed neural networks could help society and revolutionize computing. Oh, and let’s not forget that he knew CUDA meant he held all the cards in the AI realm.

Nvidia’s GPUs were ideal for deep learning

In 2016 Huang told Forbes, “Deep learning is almost like the brain. It’s unreasonably effective. You can teach it to do almost anything. But it had a huge handicap: It needs a massive amount of computation.”

He added, “And [yet] there we were with the GPU, a computing model almost ideal for deep learning.” The knowledge that Nvidia was already producing a chip seemingly tailor-made for AI purposes sealed the deal, and Huang made the biggest call of his professional life.

Putting all his eggs in the AI basket

Nvidia vice president Greg Estes told The New Yorker that Huang “sent out an email on Friday evening saying everything is going to deep learning, and that we were no longer a graphics company. By Monday morning, we were an AI company. Literally, it was that fast.”

The passionate CEO was now a man on a mission: he told Catanzaro that he had free rein to build a new deep-learning team. In essence, he could pick any of the company’s 8,000 employees to help him shape the future.

Every AI startup was working off Nvidia’s tech

By 2016 — only three years after Huang pivoted Nvidia to AI — there were 3,000 AI startups all seeking venture capital investment. And they were all using Nvidia’s GPUs as the basis of their research.

Investor Marc Andreessen told Forbes, “We've been investing in a lot of startups applying deep learning to many areas, and every single one effectively comes in building on Nvidia's platform. It’s like when people were all building on Windows in the ’90s or all building on the iPhone in the late 2000s.”

OpenAI and the GPT

It was in 2016 that Nvidia debuted the DGX-1 — its first AI supercomputer — and sold it to an OpenAI research group. At that point, the chairman of OpenAI was none other than Tesla supremo Elon Musk; he personally opened the package to reveal the computer while Huang looked on!

By 2017, OpenAI had created the first generative pre-trained transformer (GPT) — a new approach to building and training neural networks — and this GPT used an Nvidia supercomputer as its basis.

Demand skyrockets, and Nvidia charges accordingly

ChatGPT eventually emerged from this research in 2022 and, as previously mentioned, when the tech world got wind of the fact that it ran on Nvidia’s chips, all bets were off. Suddenly Nvidia couldn’t even keep up with the demand for its GPUs and supercomputers.

Naturally, given the demand levels and the capitalist society in which we live, the models Nvidia is able to push out the door come with eye-watering price tags. For example, its current DGX H100 training module costs a cool $500,000 and runs five times faster than the computer that trained ChatGPT.

Data centers abound

These supercomputers are now commonly massed together at enormous data centers, stacked on top of one another like books in a library or shipping containers at a dock. In places like this, with tens of millions of dollars’ worth of computing power, there’s no theoretical ceiling to what AI can do.

Expert Ilya Sutskever told The New Yorker, “If you allow yourself to believe that an artificial neuron is like a biological neuron, then it’s like you’re training brains. They should do everything we can do.”

We already use AI more than we think

In many ways, this AI “future” has already arrived: in our day-to-day lives, we use AI more than we might think. Social-media apps such as X, Facebook, and Instagram feature AI functions, while Slack recently introduced the ability to use AI to compile summaries and recaps of lengthy chat threads.

The cloud-computing services that companies such as Netflix and Tesla use all run on Nvidia GPUs, while many laptops contain them too. Heck, even games consoles like the Nintendo Switch are powered by Nvidia’s chips.

What can AI do now, and what will it learn to do?

Consider this: GPT-4 — the next-generation model behind ChatGPT — is capable of turning a sketch hastily scribbled on a napkin into a fully functional website. As time goes on, Nvidia’s AI hardware will be able to teach other pieces of hardware how to accomplish tasks.

As The New Yorker’s Stephen Witt suggested, “Some will manage investment portfolios; some will fly drones. Some will steal your likeness and reproduce it; some will mimic the voices of the dead… Some will write music; some will write poetry.”

Storm clouds on the horizon?

Now, even though Nvidia has experienced incredible success, there are a few potential storm clouds on the horizon. For one thing, the company’s sky-high prices haven’t sat well with everyone in the industry. While it can command up to $40,000 for a chip, competitors like AMD sell similar products for $10,000 to $15,000.

The Wall Street Journal recently suggested Nvidia charges what it does simply because it can: the corporation knows how integral its chips are to all of its consumers’ work, and its customers are scared Nvidia will punish them if they look elsewhere.

Nvidia is facing competition from its own customers

The potential problem here is twofold. Even though companies like Amazon, Microsoft, and Google buy from Nvidia, they are also trying to cut it out by developing their own chips. Financial analyst Gil Luria told Vox, “The biggest challenge for Nvidia is that their customers want to compete with them.”

On top of this, in the long term, some companies simply can’t afford Nvidia’s prices. As Luria pointed out, Microsoft “went from spending less than 10 percent of their capital expenditure on Nvidia to spending nearly 40 percent. That’s not sustainable.”

The FTC was forced to step in

Nvidia was even the subject of an antitrust investigation when it tried to buy Arm Limited, a designer of chip architecture. The Federal Trade Commission sued to block the deal, which ultimately collapsed, because it would have been another brick in the wall of an Nvidia monopoly.

Erik Peinert of the American Economic Liberties Project cautioned Vox, “That acquisition was pretty clearly intended to get control over a software architecture that most of the industry relied on. The fact that they have so much pricing power, and that they’re not facing any effective competition, is a real concern.”

Will hype lead to disillusionment?

Some observers also believe that AI could turn out to be a fad, one simply driven into the mainstream consciousness by ChatGPT’s prominence on social media. In this sense, it could be seen by many as little more than a novelty, rather than the wave of the future that industry insiders believe it to be.

Luria argued, “Every big technology goes through an adoption cycle. As it comes into consciousness, you build this huge hype. Then at some point, the hype gets too big, and then you get past it and get into the trough of disillusionment.”

How much more can be done with a microchip?

Vox writer Whizy Kim wondered whether microchip technology might soon reach a point where further advancements proved impossible. She wrote, “It has moved at a blistering pace in the last several decades.”

Yet she warned, “But there are signs that the pace at which more transistors can be fitted onto a microchip — making them smaller and more powerful — is slowing down.” If Nvidia reaches the limits of chip tech and can’t keep demonstrating incredible leaps in AI, will interest in it begin to die off?

Huang wants to improve society

For someone like Huang, though, the idea that AI is a passing fad or simply a way to make stacks of cash is offensive: he has dedicated over a decade of his life and his entire business to it.

He once told Forbes, “There has to be a connection between the work you do and benefits to society. The work we do has to benefit society at some level that’s almost science fiction. We want to be able to advance the discovery for the cure for cancer. That sounds incredible.”

Back to where it all began

When Huang sat down with The New Yorker in late 2023, he made sure to do it at the very San Jose Denny’s restaurant he ate in while putting together the paperwork to form Nvidia in 1993.

He revealed that he’d once worked at the restaurant as a young dreamer with no idea how far his ideas could take him. He joked with a waitress, “You know, I used to be a dishwasher here. But I worked hard! Like, really hard. So, I got to be a busboy.”

Nvidia changed the principles of digital computing

Huang extolled the virtues of what his GPUs are capable of today and then extrapolated it out to what they will be capable of in the future. He claimed that, up until Nvidia came along, the basic principles of digital computing used by every tech company were virtually the same as the ones used by IBM in the ‘60s.

But with Nvidia’s investment in deep learning, things are rapidly changing — and he won’t stop trying to evolve. He insisted, “I do everything I can not to go out of business. I do everything I can not to fail.”

Processing data, or preparing to replace human beings?

Witt tried to rein Huang in by telling him about watching a video of a robot — running Nvidia’s deep-learning software — which categorized some color blocks. What worried Witt, though, was how the robot gazed at its own hands before it began. As Witt wrote, “The video had given me chills; the obsolescence of my species seemed near.”

Huang totally laughed off this fear, though, saying, “All it’s doing is processing data. There are so many other things to worry about.” He even claimed, “It’s no different than how microwaves work.”

Huang doesn’t buy it

In fact, Huang doesn’t have much time for anyone who voices worries about how AI will affect the world going forward. When hundreds of tech industry players signed a statement likening the potential impact of rampant AI to that of nuclear war, he refused to add his signature.

When people pointed out that the Industrial Revolution actually lowered horse populations all over the world, and wondered if the same could happen to human beings after introducing AI, he scoffed, “Horses have limited career options. For example, horses can’t type!”

No AI should be able to learn “without human input”

Continuing on this theme, when Huang was asked at a conference about AI leading to an apocalyptic scenario for the world, he was similarly dismissive. He joked, “There’s the doomsday AIs.”

He continued, “The AI that somehow jumped out of the computer and consumes tons and tons of information and learns all by itself, reshaping its attitude and sensibility, and starts making decisions on its own, including pressing buttons of all kinds.” He shook his head and clarified, “No AI should be able to learn without a human in the loop.”

Hype or history?

Overall, the question of whether Nvidia’s pioneering work in AI is history in the making or hype that will soon die out isn’t easy to answer. The topic is so technical, with advances being made at such a dizzying rate, that the future is difficult to predict.

There is also so much fear of the unknown involved that a judgment not clouded by emotion can be hard to come by. In theory, AI should be a tool like any other, but it seemingly has the potential to change everything.

Meet “Diane”

We’ll leave you with Witt and his encounter with “Diane,” a digital avatar created by Nvidia. The company utilized AI to comb through millions of video clips of real people in order to create Diane — an uncannily human-like face complete with almost invisible hairs on her upper lip and blackheads spotting her nose.

As Witt noted, the only indicator that the face wasn’t human was a strange shimmer in her eyes, but Nvidia’s specialist assured him, “We’re working on that.” This did little to quell Witt’s worries!

Will we soon be able to create entire worlds with AI?

Diane is indicative of where Huang sees AI going. As Witt wrote, “Image-generation AIs will soon be so sophisticated that they will be able to render three-dimensional, inhabitable worlds and populate them with realistic-seeming people. At the same time, language-processing AIs will be able to interpret voice commands immediately.”

Once these elements are combined with “ray tracing” — which reproduces how light bounces off objects — people will theoretically be able to create entire video games, movies, and shows with simple voice commands.
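For the curious, the arithmetic at the heart of that “bounce” is surprisingly plain. The sketch below is our own minimal, hypothetical CUDA illustration (not Nvidia’s RTX code): it reflects a single ray of light off a surface using the standard formula r = d - 2(d·n)n, the step a real ray tracer repeats for millions of rays every frame.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A tiny 3D vector type for this illustration.
struct Vec3 { float x, y, z; };

__host__ __device__ float dot(Vec3 a, Vec3 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Reflect each incoming ray direction d off a surface with unit normal n:
// r = d - 2 * (d . n) * n. This is the basic "light bouncing off objects"
// step, and a GPU evaluates it for huge batches of rays in parallel.
__global__ void reflectRays(const Vec3* dirs, const Vec3* normals,
                            Vec3* out, int count) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;
    Vec3 d = dirs[i], n = normals[i];
    float k = 2.0f * dot(d, n);
    Vec3 r = { d.x - k * n.x, d.y - k * n.y, d.z - k * n.z };
    out[i] = r;
}

int main() {
    // One ray heading diagonally down onto a flat floor (normal points up).
    Vec3 hostDir = { 1.0f, -1.0f, 0.0f };
    Vec3 hostNormal = { 0.0f, 1.0f, 0.0f };
    Vec3 hostOut;

    Vec3 *devDir, *devNormal, *devOut;
    cudaMalloc(&devDir, sizeof(Vec3));
    cudaMalloc(&devNormal, sizeof(Vec3));
    cudaMalloc(&devOut, sizeof(Vec3));
    cudaMemcpy(devDir, &hostDir, sizeof(Vec3), cudaMemcpyHostToDevice);
    cudaMemcpy(devNormal, &hostNormal, sizeof(Vec3), cudaMemcpyHostToDevice);

    reflectRays<<<1, 1>>>(devDir, devNormal, devOut, 1);
    cudaMemcpy(&hostOut, devOut, sizeof(Vec3), cudaMemcpyDeviceToHost);

    // The downward ray bounces back up: expect (1, 1, 0).
    printf("reflected ray: (%f, %f, %f)\n", hostOut.x, hostOut.y, hostOut.z);

    cudaFree(devDir); cudaFree(devNormal); cudaFree(devOut);
    return 0;
}
```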

VR and the “Omniverse”

Huang sees these digital creations being able to train robots to accomplish tasks, and he hopes his AI will be integral to the future of self-driving vehicles. He also wants to create the “Omniverse” — a digital recreation of the real world that users access with virtual-reality headsets.

Witt was shown a fully digital ramen shop, and marveled, “As the demo cycled through different points of view, light reflected off the metal counter and steam rose from a bubbling pot of broth. There was nothing to indicate that it wasn’t real.”

“As if I were questioning the utility of the washing machine”

For Witt, it was all a bit much. He revealed, “I felt dizzy leaving the product demo. I thought of science fiction; I thought of the Book of Genesis. I sat on a triangular couch with the corners trimmed, and struggled to imagine the future that my daughter will inhabit.”

But when he once again shared his worries with the Nvidia scientists about AI rendering artists obsolete and potentially even killing a human being, he admitted, “They looked at me as if I were questioning the utility of the washing machine.”

Could an AI ever truly become sentient?

No doubt inspired by the apocalyptic Hollywood scenarios put forth in movies like the Terminator franchise, Witt also couldn’t help invoking the spirit of Skynet. He asked Huang if an AI could ever truly become sentient and begin thinking on its own, completely separate from human command.

The Nvidia visionary responded, “In order for you to be a creature, you have to be conscious. You have to have some knowledge of self, right? I don’t know where that could happen.” The world will just have to take him at his word — for now at least.