Nvidia fell off a cliff last October, from a high of $290 to a low of $130. Meanwhile, the challenger Xilinx emerged unharmed from the tech rout despite unfavorable macro conditions. Nvidia popularized GPUs in 1999 and Xilinx invented FPGAs in 1985, and both make chips that will define the computationally intensive future.
GPUs originated from the advanced computations required in gaming, while FPGAs originated from electronics engineering. Both have strengths and weaknesses; however, these are the two chip architectures that will power the artificial intelligence and machine learning-driven economy. That AI and ML economy is expected to reach $15 trillion by 2030, up from $2 trillion this year.
Keep in mind that long before technologies go public, they incubate across the startup ecosystem. By the time AI and ML companies reach the public markets, the technology powering and developing this wave of companies will already have been decided in the years prior. We are in those critical years, where startups must quickly design and develop AI if they want first-mover advantage. This is creating a battle between FPGAs and GPUs.
Below, I break down the differences between Xilinx’s FPGAs and Nvidia’s GPUs before analyzing the financials and theories on how the two will perform in the future.
Note: Previously, I discussed how Nvidia stock has two impenetrable moats: the developer ecosystem and the GPU-powered cloud. That analysis was written during the height of the panic sell-off, which I argued was overly pessimistic given Nvidia’s strong fundamentals.
AI and Machine Learning
On many technical levels, FPGAs (Xilinx) are considered superior to GPUs (Nvidia). They offer more on-chip cache memory, which reduces bottlenecks from external memory, and they are flexible enough to be reconfigured for various data types, such as binary, ternary, and custom types, whereas GPUs must be modified at the vendor level.
FPGAs are also known for power efficiency, often testing at 10x better power consumption than GPUs, and at 4x better than GPUs for general-purpose compute. Reconfigurability also extends this efficiency beyond deep learning to a large number of end applications and workloads. The FPGA architecture is highly adaptable: a user can address all of a workload’s needs with the resources an FPGA provides, for example by reconfiguring the data path at run time or through partial reconfiguration. GPUs, by contrast, are restricted by their Single Instruction Multiple Thread (SIMT) architecture, which provides an advantage over CPUs but can result in lower performance efficiency when enough parallelism cannot be found while mapping the workload.
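The SIMT penalty described above can be illustrated with a toy model (a deliberate simplification for illustration, not a real GPU simulator): when threads in a warp disagree on a branch, the hardware executes each taken path serially, masking off the inactive lanes, so a divergent warp burns extra cycles.

```python
# Toy model of SIMT branch divergence (simplified assumption: each branch
# path costs one "cycle", and divergent paths are serialized per warp).

def simt_cycles(lane_takes_branch):
    """Cycles for a 32-lane warp to execute an if/else where each lane
    either takes the if-path or the else-path (one instruction each)."""
    any_if = any(lane_takes_branch)          # at least one lane takes "if"
    any_else = not all(lane_takes_branch)    # at least one lane takes "else"
    # Divergence: the warp runs every taken path once, lanes masked off.
    return int(any_if) + int(any_else)

uniform = [True] * 32                          # all lanes agree
divergent = [i % 2 == 0 for i in range(32)]    # lanes split 50/50
print(simt_cycles(uniform), simt_cycles(divergent))  # → 1 2
```

In this model a fully divergent warp takes twice as long as a uniform one, which is the "lower performance efficiency" the paragraph refers to when a workload does not map cleanly onto parallel lanes.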
As pointed out in my previous analysis of Nvidia, software developers prefer GPUs because their frameworks are easier to develop on. Nvidia’s CUDA architecture, for instance, does not require an in-depth understanding of the underlying hardware. FPGAs require knowledge of machine learning algorithms at the hardware level, in addition to the software development, and this has been a barrier to entry. An FPGA is a reconfigurable integrated circuit (hence the strength in being easily reconfigured), which requires specifying a hardware circuit, whereas GPUs are configured via software.
“Nvidia, thanks to the CUDA software stack (which AMD cannot match), has a much more unassailable position than does Intel with Xeon CPUs (where an X86 application just runs on either a Xeon or an Epyc).”
– software developer on Reddit
Section takeaway: FPGAs deliver faster and more efficient compute but, because they require hardware-circuit configuration, are harder to program than GPUs, which are more universal for machine learning and require fewer engineering resources.
Nvidia and Xilinx power more than data centers, of course. Nvidia’s top revenue segment is gaming, the origin of GPUs, which drives about $1 billion per quarter in revenue. Xilinx’s top segment is Communications, with many investors using Xilinx as a global bet on 5G; communications revenue increased 41% year-over-year in the most recent quarter. Xilinx was also less affected by crypto: its Broadcast, Consumer & Automotive category was 17% of revenue, up from 15% in the same quarter a year earlier. (Xilinx classifies crypto under consumer in its 10-K.)
Xilinx has a direct competitor in Intel, which acquired Altera for $16.7 billion. Intel is keen to solve the development-uptake issues of FPGAs with its Stratix 10 hardware, which adds a software layer to simplify development. Microsoft Azure is partnered with both Xilinx and Intel/Altera on FPGAs, although there is some indication that Microsoft is leaning toward Xilinx in the near future after announcing it will replace Intel chips with Xilinx in over half of its servers.
Developers favor Xilinx over Intel as a brand, and Microsoft is doing quite a bit to court developers right now, including the acquisition of GitHub – read more tech stock analysis here. Therefore, the shift toward Xilinx was not unexpected.
While Xilinx reported double-digit increases, Nvidia reported double-digit declines in fiscal Q4: revenue down 24 percent, earnings per share down 48 percent to $0.92, and operating income down a shocking 73 percent year-over-year. The annual numbers ended on a better note, with revenue increasing 21 percent to $11.72 billion and GAAP earnings per share increasing 38 percent to $6.63. Of Nvidia’s revenue segments, gaming was hit the hardest, as the crypto bust flooded the market with GPUs and, in turn, reduced unit shipments overall. In addition, the new Turing architecture and real-time ray tracing, while impressive from a graphics perspective, are ahead of their time and seeing slow adoption. (At release, I originally projected Q3 2019 for these to find early adopters, and that timing still looks accurate, or perhaps Q4.)
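As a quick sanity check on those annual figures, the prior-year numbers can be backed out from the stated growth rates (using only the figures reported above; rounding is approximate):

```python
# Back out Nvidia's prior-year annual figures from the reported growth
# rates. Inputs are the article's numbers; outputs are implied values.
fy2019_revenue = 11.72   # $ billions, up 21% year-over-year
fy2019_eps = 6.63        # GAAP earnings per share, up 38%

prior_revenue = fy2019_revenue / 1.21   # implied prior-year revenue
prior_eps = fy2019_eps / 1.38           # implied prior-year EPS

print(round(prior_revenue, 2), round(prior_eps, 2))  # → 9.69 4.8
```

The implied prior-year base of roughly $9.7 billion in revenue and $4.80 in EPS is consistent with the growth story the article describes.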
This upcoming quarter is not likely to be the comeback quarter for Nvidia, with guidance of $2.20 billion, flat from last quarter and representing a 31 percent decline year-over-year. As you’ll see in the takeaway paragraph below, I am very bullish on Nvidia in the long term, as the temporary GPU saturation caused by crypto offered an opportunity to enter the stock below its value.
Gaming is a foundation for Nvidia, but it is most certainly not the growth story. The GPU-powered cloud is the future, thanks to AI and ML. If you can get Nvidia below a $100 billion market cap, my prediction is that you will be resting easy by 2022 or 2023 with a stellar return, as its understated presence across cloud data centers and AI applications should give it a firm hold on the market.
Xilinx’s revenue growth was 34% year-over-year in Q3 2019, with 63% growth in operating income and 42% growth in net income in the same quarter. It is important to note that Xilinx is a small fish in a big pond: that 34% and 42% quarterly growth equals roughly $200 million added to the top line and less than $100 million to the bottom line. Meanwhile, Xilinx commands a PE ratio of 38 at the time of writing.
Guidance for the upcoming quarter is revenue of $815 to $835 million, compared to $800 million in the previous quarter. One reason Xilinx’s stock price continued to climb while Nvidia fell off a cliff is that the smaller fish did not have enough market share for the bust to register a big impact, whereas Nvidia’s crypto business alone (at around $500 million per quarter) exceeded Xilinx’s net income for the entire year. In addition, one year ago Xilinx posted negative net income of $12 million, but it has now posted net income in the range of $200-$250 million in each of the last two quarters.
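The "small fish" arithmetic above can be checked directly from the article's own figures (assuming a current-quarter revenue of roughly $800 million, the base the guidance compares against):

```python
# Rough check of the "small fish" point: ~34% YoY revenue growth on a
# ~$800M quarter. The $800M base is an assumption taken from the
# article's guidance comparison, not a reported exact figure.
current_q_revenue = 800.0   # $ millions (assumed)
growth_yoy = 0.34           # 34% year-over-year growth

prior_year_q = current_q_revenue / (1 + growth_yoy)   # implied year-ago quarter
top_line_added = current_q_revenue - prior_year_q     # dollars of growth

print(round(top_line_added))  # → 203
```

Roughly $200 million of added top line per quarter, which matches the article's point that double-digit percentage growth still translates into small absolute dollars for Xilinx.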
In other words, Xilinx is more of a trout than a tuna, but it is a pure-play option likely to see very solid returns as the AI economy is built out. (This is why I don’t invest in Intel; I prefer pure plays when possible.)
Nvidia is one of my favorite companies from a fundamental standpoint, and it is worth repeating that I was not fair-weather during the crypto bust; rather, I encouraged readers to look at the developer moat and the GPU-powered cloud as future drivers of growth. As I stated to a reader over email two days before the Mellanox acquisition: “Can Xilinx’s FPGA disrupt Nvidia GPU’s at 4x faster? My best guess (and it’s only a guess) is that Nvidia will continue to release the right chips that the market demands.” In this case, Nvidia is acquiring the right company that the market demands. You can read my analysis of the Mellanox acquisition, published on FATRADER, here.
I want to point out that Xilinx would make a solid investment as well. Xilinx is priced a minimum of 25-30% higher than Nvidia on PE ratio, price-to-sales, and EPS. Quarter-over-quarter growth for Xilinx is currently in the single digits, and for this reason, I’d like to see Xilinx priced 20% cheaper before I build a position, or I’d like to see more than single-digit QoQ revenue growth to justify a 30+ PE ratio in a highly competitive market. Due to Nvidia’s upcoming flat quarter (per guidance), Nvidia is also likely to trade sideways for a quarter or two. I bought Nvidia in 2017, cost-averaged down to $160, and am comfortable here for the long term.
I am a technology analyst who writes fundamental tech analysis for FATrader. This is not investment advice.