The Perfect Risk
There is no such thing as a risk-free design, but how does a company balance risk against the threat of commoditization? The equation is changing and nobody has the answer yet.
The development of semiconductors is an act of risk management. Put simply, taking on too much risk can lead to product failure or a missed market window, both of which can cost millions of dollars. For a company that produces only one or two products a year, that can spell total disaster.
Take on too little risk, and you probably will not end up with a competitive product, one that has moved far enough ahead of your previous generation, or added enough new features, to make someone want to trade up. That is the path to commoditization.
Successful companies are the ones that manage to balance those two. But what creates risk?
For many products, the risk was moving to a new, untested production node every 18 months. That added uncertainty from new design rules, potential yield issues, and largely unproven EDA tools. Along with the new node came double the number of transistors. While some of those would be gobbled up by additional memory, which added little risk, more functionality meant more places for bugs to hide. At the same time, more gates meant slower simulation.
To balance this, most of these designs were incremental in nature, minimizing the number of changes between generations and increasing confidence in everything that had been reused. The adoption of third-party IP was another way to reduce risk, although in the early days it was often questioned whether third-party IP was better quality than in-house design. Having blocks used and verified in multiple designs meant they had to be robust.
Every now and then there was a bump in risk. One example came when the industry stopped relying on a single core and transitioned to multi-core, adding a layer of complexity brought about by concurrency. Verification tools largely stayed the same, while concurrency added huge burdens to functional verification and broke many of the methodology assumptions that were in place. Many of those assumptions still have not been fixed, although changes are coming.
But the industry is getting to a very interesting place where the risk equation is changing. For those who continue to chase the cutting edge – good luck, because things are getting hairier at a faster pace than ever before and design costs will rocket.
Others may choose to remain on the cutting edge in a different way, using new packaging technologies that allow them to stack multiple chips in a package. Risk levels are reduced because each die can be verified individually, and those die may become another market for third-party products, but there are many new packaging challenges and added verification complexity.
For everyone else, those who decide that hanging back a node or two, or sticking with one that is adequate to the task, is the right path: where is your risk going to come from? You will be playing with the same number of transistors that you had before and the same amount of memory. If you take that level of risk away, you have to take on some other risk or you will quickly become a commodity.
I just saw a worrying figure from Semico. They predict that SoC design costs will rise by only 1.7% by 2023. If that is true, then I hope engineers have suddenly become a lot more productive, because that figure suggests to me that the market is not yet looking at the future aggressively enough.
The universities have already caught on to this. They are now producing fairly complex chips for tens of thousands of dollars, each designed by a small group of students. They are successfully producing market-acceptable IP. Just look at the rate of adoption for RISC-V, which is displacing Arm in certain application areas. Now that they have tasted success, do you think they are going to stop?
So, competition may now come from other companies or from universities. What are you going to do differently? Perhaps look at new architectures and re-examine assumptions that were made decades ago. Perhaps leave less guard-banding in the design and be more aggressive about optimizing performance or power. Perhaps investigate newer EDA tools that enable designs to be completed faster or optimized in different ways.
This is an opportunity for new EDA companies that can solve problems in better ways than those currently on offer. Verification costs have been rising steadily, and the problem will not be solved by faster execution engines alone.
This is an issue that every semiconductor company is going to have to tackle. If it feels like the pressure has been taken off your next design, it may be time to freshen up the resume.
It is always an exciting time when things change, especially when it is not clear how companies will decide to increase risk.