Tech stocks are getting slammed right now, and Nvidia may be one of Wall Street’s biggest losers in the sell-off that began last month and continued into this week. Nvidia’s stock has swung from a 30-day high of $292 to a whiplash low of $176 – a roughly 40% plunge in a matter of four weeks. Today, it stands at $197.60.
Economic indicators and earnings from tech companies have not exactly warranted this reaction from the market. Fears that the semiconductor industry is slowing down, based on Advanced Micro Devices’ earnings report, were negated when Intel reported strong Q3 earnings. And while Apple may be stalling near its $1 trillion market cap due to possible iPhone saturation, Nvidia’s growth trajectory looks quite the opposite. The market may remain volatile, but patient Nvidia investors will be rewarded thanks to competitive advantages in GPU-powered cloud performance and developer adoption of Nvidia’s platform.
Brief Overview of Nvidia’s Revenue Segments
To summarize, gaming claims the majority of Nvidia’s revenue at $1.81 billion, up 52% YoY. Gaming should get a nice boost in 6-12 months from the new GeForce RTX 2070, RTX 2080 and RTX 2080 Ti chips, which introduce the possibility of hybrid rendering through real-time ray tracing. In layman’s terms, ray tracing mimics how light behaves in the real world by tracing the paths of individual light rays through a 3D scene, producing far more realistic imagery. Electronic Arts released the first ray-tracing game today (November 14th), whereas six months ago the gaming industry did not think real-time ray tracing would even be possible. Companies that have signed up for the new Turing architecture include Adobe, Pixar, Siemens, Blackmagic Design, Weta Digital, Epic Games (maker of Fortnite) and Autodesk.
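To make the idea concrete, here is a minimal, illustrative sketch in Python (my simplification, not Nvidia’s actual RTX pipeline) of the core operation a ray tracer repeats millions of times per frame: testing whether a single ray strikes an object in the scene.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t (a quadratic in t).
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0 else None  # nearest hit in front of the ray's origin

# One ray from a camera at the origin, aimed down the z-axis,
# toward a small sphere centered two units away.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 2), 0.5))  # prints 1.5
```

A real renderer fires one or more such rays per pixel, then spawns secondary rays for shadows and reflections, which is why dedicated RT hardware on the Turing chips matters.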
Data center revenue has been picking up speed at 83% YoY, or $760 million, as GPU chips power more of the cloud for machine learning and artificial intelligence applications. Data center revenue, once a small blip, now claims 24% of the company’s total sales. It should continue to grow steadily for the foreseeable future due to the computing power and flexibility GPUs provide over CPUs (Intel’s core product) and over TPUs and FPGAs, the custom machine-learning chips built by Google and deployed by Microsoft, respectively, which are too specific to one platform for widespread adoption – more on these points below.
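The reason GPUs beat CPUs on this workload comes down to data parallelism: machine-learning math applies the same arithmetic across huge arrays of numbers, and a GPU executes that across thousands of cores at once. A rough CPU-side analogy, using NumPy (my illustration, not from Nvidia), contrasts element-at-a-time work with a single whole-array operation:

```python
import numpy as np

x = np.arange(1_000_000, dtype=np.float64)

# Serial style: one element at a time, the way a single CPU core works.
serial = [v * 2.0 + 1.0 for v in x]

# Data-parallel style: one operation over the entire array at once,
# the access pattern GPU hardware is built to exploit.
parallel = x * 2.0 + 1.0

print(np.allclose(serial, parallel))  # prints True: same result, very different throughput
```

The vectorized form mirrors what CUDA-backed frameworks hand to the GPU; the serial loop is what a general-purpose core does naturally, which is why per-watt throughput favors the GPU on these workloads.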
Nvidia’s smaller segments include professional visualization and automotive, which grew to $281 million and $161 million, respectively, up 20% and 13% year over year.
Two Impenetrable Moats: GPU-Cloud and Developer Adoption
Revenue segments are the staple of typical Nvidia stock coverage. The harder questions: can Nvidia take market share from Intel? Will Google, Microsoft, Facebook and Apple design their own custom chips to compete with Nvidia? These are the questions investors need to answer for themselves, especially if we continue into correction territory.
Regarding Intel, the cloud is too competitive to forego the performance and efficiency that Nvidia delivers. Recently, the Turing T4 GPU became the fastest-adopted server GPU of all time, within two months of hitting the market. Even prior to the T4’s release, Nvidia’s data center revenue was growing roughly three times as fast as Intel’s: 83% YoY versus Intel’s 26%. However, Nvidia’s data center revenue is still only about one-eighth of Intel’s, at $760 million vs. $6.1 billion. This segment will continue to grow as the GPU-powered cloud is built out. Unfortunately for Intel, GPUs are the better choice for cloud customers, whose usage patterns are constantly in flux and demand a wide variety of models and software frameworks. Intel’s Xeon CPUs cannot compete with the performance-per-watt Nvidia offers in the cloud. Per the September 13th, 2018 announcement, Microsoft, Google, Cisco, Dell EMC, Fujitsu, HPE, IBM, Oracle and Supermicro plan to release servers with Nvidia’s T4 GPU on board.
Google and Microsoft have both put their own chips in their data centers. Microsoft adopted the field-programmable gate array (FPGA), which it uses for AI applications, and Google built a custom chip, the Tensor Processing Unit (TPU), for its TensorFlow deep learning framework. Competing, customized chips may become the new norm as tech giants prefer proprietary technology. But the biggest weakness these custom chips face, whether Google’s TPUs or the FPGAs Microsoft uses, is that they may be too specialized for developers to adopt. The drawbacks will continue to be price and difficulty, as few engineers have expertise in FPGA programming. The same goes for Google Cloud Platform (GCP): Google will have to get developers to adopt GCP and keep them locked into TensorFlow. Even so, alternate frameworks such as Facebook’s PyTorch further fragment the developer landscape. And even if Google uses TPUs for inferencing, it may still use Nvidia’s GPUs for training neural networks.
Let’s use mobile application development as an example. One reason mobile is a duopoly between Android and iOS is that developers can only learn so many tools and development environments before the process becomes inefficient. To justify truly mastering a platform, developers need it to be near-universal. For instance, when Microsoft launched the Windows Phone, it was met with resistance: developers did not care to learn a new operating system that could not prove itself with user adoption, and in turn, mobile users did not buy the Windows Phone because their favorite applications were not available to download. The iPhone’s success was built on iOS developers who learned tools like Xcode to create applications, while Android became the competing universal platform for the remaining manufacturers, such as Samsung, LG, Sony, Google’s Pixel and so on. The next wave of AI applications and machine learning inference will follow the same path of limited competition due to development bandwidth: developers will self-regulate the number of competing processing platforms because they need a universal one that supports all frameworks.
Here’s a quote from Marc Andreessen of Andreessen Horowitz, one of Silicon Valley’s most successful venture capitalists:
“We’ve been investing in a lot of startups applying deep learning to many areas, and every single one effectively comes in building on Nvidia’s platform. It’s like when people were all building on Windows in the ’90s or all building on the iPhone in the late 2000s.”
There is an even greater need to simplify artificial intelligence and machine learning than there ever was for mobile standards. Thousands of variants emerge each year in AI as neural networks evolve and expand in depth, complexity and architecture. Multiple frameworks are supported by major industry players, and Nvidia’s GPUs are flexible enough to accelerate all of these frameworks and workflows, including Caffe2, Cognitive Toolkit, Kaldi, MXNet, PaddlePaddle, PyTorch and TensorFlow.
In addition, AI happens beyond the cloud: Nvidia’s GPUs run in edge devices such as self-driving cars, as well as in desktops, workstations and data centers, and across all major cloud providers.
Nvidia is already the universal platform for development, but this won’t become obvious until innovation in artificial intelligence matures. Developers are programming the future of artificial intelligence applications on Nvidia because GPUs are easier and more flexible than customized TPU chips from Google or the FPGA chips used by Microsoft. Meanwhile, Intel’s CPUs will struggle to compete as artificial intelligence applications and machine learning inferencing move to the cloud. Intel is trying to catch up, but Nvidia continues to release more powerful GPUs, and cloud providers such as Amazon, Microsoft and Google cannot risk losing the competitive advantage that comes with Nvidia’s technology.
The Turing T4 GPU should start to show up in earnings soon, and the real-time ray-tracing RTX chips will keep gaming revenue strong as adoption builds over the next 6-12 months. Nvidia has a history of big earnings beats, topping estimates by an average of 33.35 percent over the last four quarters. Data center revenue stands at 24% of total sales and is growing rapidly; when artificial intelligence matures, expect it to become Nvidia’s top revenue segment. Despite the corrections we’ve seen in the technology sector, and in Nvidia stock specifically, investors who remain patient will see a sizeable return in the future.
Sign up for Analysis on the Best Tech Stocks
I’m an industry insider who writes free in-depth analysis on public tech companies. In the last 12 months, I predicted Facebook’s Q2 crash, Roku’s meteoric rise, Uber’s IPO flop, Zoom’s IPO success, Google’s revenue miss and more. Be industry-specific. Know more than the broader markets. Sign up now. I look forward to staying connected.
If you are a more serious investor, we have a premium service that offers institutional-level research and entry/exit options. This membership offers a competitive edge in identifying growth opportunities and reducing risk in the tech sector. Learn more here.
Join 3,003 other tech investors who receive weekly stock tips:
Ms. Kindig, Beth, I admire your work. The clarity, the confidence, the insight, and your erudite treatment of the important issues. I could go on and on, but I’ll fight the compulsion to lavish the praise I think you very much deserve. I regret I hadn’t found you sooner, but I plan on studying your articles and look forward to the future. Long NVDA, and now more confident myself than ever.
Thank you for the encouragement. It’s readers like you that keep me going. Much appreciated.
I could not agree more, and that, too, wholeheartedly! I have come across Beth’s work only recently and have been thoroughly impressed with whatever little I have had a chance to peruse in a short amount of time. Since I have been a technology person and investor for over two decades, I took the liberty to make a quick, superficial assessment, and I find so much depth in the analysis! Kudos to you, Beth! Please keep it up; I plan on studying your work more and more as time allows. Thanks a bunch!
Nice article, Beth. I admire your clarity, your confidence, and your style. I’m looking forward to learning a lot from you.
Dr. Joe Haluska
Thank you, Joe! I appreciate the comment and encouragement.
I liked your insight into Nvidia very much! I’m hoping in the future you can address autonomous autos, robotics, and the new tech darling Xilinx, and why many think they are a direct threat to Nvidia. Hopefully Nvidia can overcome the crypto inventory issue, and the poor decision to offer an expensive video card with very few game titles supporting ray tracing.
Hi Steve, Thanks for the comment! Apologies for the delay. I was at a conference last week. I met with Xilinx briefly at MWC last week and am going to cover them next month. I’m keeping a close eye on FPGAs and how they compare to GPUs.
I will be looking forward to said articles. Perhaps you might have some insight on how Intel is progressing in implementing FPGAs since buying Altera and Mobileye?
I like the simplicity of your article and how it stays right on point. I like your focus and your realistic, fact-based predictions. Looking forward to reading more of your findings. I really do appreciate your work.
Hi Beth – Can you comment on the recent controversy surrounding AMD, which competes with INTC and NVDA? I invested in AMD recently based on its new chips and the savvy strategies of its CEO since 2014. However, Lisa Su seems to have partnered with Chinese tech companies, mainly Sugon, in a way that at least transfers important IP aiding China’s push to become a/the leader in semiconductors, et al. Even if it weren’t a national security issue, AMD’s business practice of giving IP to a competitor is highly irregular and maybe illegal. What was Su thinking? Or do I have it all wrong . . . Will the tiff be good for INTC and NVDA? Thanks.