
April 20, 2020

We're gonna be fine

Should Developers Fear the End of Moore’s Law?

Here's why the computer chip changed the world.


Doug Neale, Senior Software Engineer

Computers have revolutionised the modern world. But which has contributed more: hardware or software? As much as I’d like to claim the glory for my own field, it is the computer chip that has changed the world.

For the past fifty years, the silicon chip has improved at an exponential rate. This trend is known as Moore’s Law, after Gordon Moore, a co-founder of Intel, who correctly predicted in 1975 that the number of transistors on a computer chip would double every two years. This doubling effect has given us faster, cheaper and more power-efficient computer chips. It’s because of Moore’s Law that we have all our favourite modern tech, including personal computers, laptops and smartphones.

However, as hardware got faster, software got slower. A big, new toolshed was built for us, so we naturally crammed it full. We added new features. We built computationally expensive graphics. We created easier-to-use programming languages so we could ship more, sooner. Software has been slowing down, but no one notices because the hardware keeps up with the demand.

But there’s a problem: it can’t keep up much longer. Computer chips only get faster if we can make transistors smaller, and sometime this decade we will hit a physical limit. Transistors are approaching the atomic scale, and until we see a breakthrough in other transistor technologies, we’ll be stuck with the speeds we’ve got.

This means we need to rethink the way we make software, and the MIT Technology Review believes we are not prepared. Our society depends on technological advancement, and we need software to get better now that hardware no longer can. Does this mean we all have to labour over 20th-century-style coding, meticulously optimising every line? Perhaps the cushy ride is over for developers.

Entrepreneur Marc Andreessen is not so worried. In his interview “Why should I be optimistic about the future”, he reassured us that we are prepared. 

To begin with, we’ve got cloud computing at our disposal. Unlike decades ago, we can now scale an application across many servers automatically. Instead of focusing on the output of one chip, we can focus on getting “good at using lots of chips to do things”, as Andreessen puts it. He points to the AI and cryptocurrency worlds as places where cloud computing is already being used this way, suggesting that more and more use cases will depend on distributed processing architectures.
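As a toy sketch of that idea (mine, not from Andreessen’s interview), here is what “using lots of chips” looks like in miniature with Python’s standard multiprocessing module. A cloud platform applies the same fan-out-and-combine pattern across whole servers rather than local processes; the function names here are illustrative only.

```python
# Instead of waiting on one faster chip, split the work across many workers.
from multiprocessing import Pool

def simulate(task_id: int) -> int:
    # Stand-in for a CPU-heavy unit of work (rendering, training, hashing...).
    return sum(i * i for i in range(task_id * 1000))

def run_distributed(num_tasks: int, num_workers: int = 4) -> int:
    # Fan the tasks out to a pool of worker processes, then combine the results.
    with Pool(processes=num_workers) as pool:
        results = pool.map(simulate, range(num_tasks))
    return sum(results)

if __name__ == "__main__":
    print(run_distributed(8))
```

The result is identical to running the tasks one after another on a single core; the gain comes purely from doing them side by side, which is why this style of scaling doesn’t depend on any one chip getting faster.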

This aligns with the expectation that mobile phone processing will shift to the cloud once network technologies like WiFi 6 and 5G reduce latencies. Phones would become “thin client” devices, where most of the computing happens not on the device but on a server.

While we may not find ourselves returning to soul-crushing, low-level codebases, the next generation of developers will still need to adapt. Technologies like neural networks and blockchains may become commonplace in architecture diagrams, and these techniques will continue to drive progress even without the transistor doubling effect.

And so we, just like Andreessen, should remain confident that with these approaches “we’ve got decades of advances ahead, which aren’t purely dependent on classic Moore’s law.”