A scaling problem with SRAM is driving up the cost of graphics cards and processors.
SRAM is used as fast on-chip memory in GPUs, CPUs and many other processors. This type of memory is proving difficult to scale: in the transition to 5- and 3-nanometer nodes, SRAM barely shrinks, while logic circuits keep getting denser. A larger area per die means fewer dies per wafer, which drives up the cost of the chips.
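The cost effect can be made concrete with a back-of-the-envelope calculation. The sketch below uses hypothetical numbers (wafer price, die sizes and the simple area-division model are all assumptions for illustration, not real foundry figures): when SRAM stops shrinking, the die stays larger, fewer dies fit on a wafer, and the cost per die rises.

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Rough dies-per-wafer estimate (ignores edge loss and defect yield)."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

def cost_per_die(wafer_cost_usd: float, wafer_diameter_mm: float,
                 die_area_mm2: float) -> float:
    """Wafer cost divided over the dies that fit on it."""
    return wafer_cost_usd / dies_per_wafer(wafer_diameter_mm, die_area_mm2)

# Hypothetical: a 300 mm wafer costing $17,000.
shrunk = cost_per_die(17_000, 300, 100)  # die where SRAM also shrank: 100 mm2
stuck = cost_per_die(17_000, 300, 120)   # die where SRAM did not shrink: 120 mm2
print(f"${shrunk:.2f} vs ${stuck:.2f} per die")
```

Even this crude model shows why a component that refuses to shrink translates directly into a more expensive chip.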
Both TSMC and Intel have to deal with this problem. In the transition from its 5nm to its 3nm process, TSMC's density gains are concentrated in the logic circuits of the chips. Intel, meanwhile, is facing the same problem. WikiChip notes that developers have hit a hard wall when it comes to scaling SRAM memory.
Chiplets as an intermediate solution
In the meantime, a solution is being sought so that SRAM memory doesn't hinder the transition to smaller process nodes or hurt wafer yields. One such solution already exists: chiplets. We know that term from AMD's Ryzen processors and Radeon GPUs, which use chiplet designs.
In such a chiplet design, certain parts of the chip are separated from the logic. At AMD, among other things, the IO die is split off from the other components. The manufacturer saves money by having components that don't benefit from a smaller node fabricated on a larger, cheaper node; smaller nodes offer few benefits for components such as SRAM and IO controllers.
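The saving works because only part of the chip pays the premium for the newest node. A minimal sketch, with assumed per-mm² wafer costs and an assumed packaging overhead (none of these figures come from the article):

```python
# Hypothetical per-mm2 costs; real foundry pricing is not public.
COST_MM2_NEW_NODE = 0.25  # cutting-edge node (e.g. 3nm-class)
COST_MM2_OLD_NODE = 0.10  # mature, cheaper node (e.g. 6/7nm-class)

def monolithic_cost(logic_mm2: float, io_sram_mm2: float) -> float:
    """Everything, including SRAM and IO, fabricated on the new node."""
    return (logic_mm2 + io_sram_mm2) * COST_MM2_NEW_NODE

def chiplet_cost(logic_mm2: float, io_sram_mm2: float,
                 packaging_overhead: float = 5.0) -> float:
    """Logic on the new node, SRAM/IO on the old node, plus packaging cost."""
    return (logic_mm2 * COST_MM2_NEW_NODE
            + io_sram_mm2 * COST_MM2_OLD_NODE
            + packaging_overhead)

# 80 mm2 of logic plus 60 mm2 of SRAM/IO:
print(monolithic_cost(80, 60))  # monolithic design
print(chiplet_cost(80, 60))     # split chiplet design
```

As long as the cheaper node's discount on the SRAM/IO area exceeds the extra packaging cost, the chiplet approach wins, which is exactly the trade-off the manufacturer is making.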
Such a chiplet then no longer takes up space on the main die of the CPU or GPU, but chiplets have limits of their own. Because parts of the processor have to communicate across chiplet boundaries, a chiplet design can cost performance. AMD tries to address this in its chips by connecting the different chiplets with its Infinity Fabric interconnect. It remains to be seen whether such an interconnect is fast enough to keep the cost savings from being undone by reduced performance.