Will the End of Moore’s Law Halt Computing’s Exponential Rise?


This is the first in a four-part series looking at the big ideas in Ray Kurzweil's book The Singularity Is Near.

“A common challenge to the ideas presented in this book is that these exponential trends must reach a limit, as exponential trends commonly do.” –Ray Kurzweil, The Singularity Is Near

Much of the future we envision today depends on the exponential progress of information technology, most popularly illustrated by Moore’s Law. Thanks to shrinking processors, computers have gone from plodding, room-sized monoliths to the quick devices in our pockets or on our wrists. Looking back, this accelerating progress is hard to miss—it’s been amazingly consistent for over five decades.

But how long will it continue?

This post will explore Moore’s Law, the five paradigms of computing (as described by Ray Kurzweil), and the reason many are convinced that exponential trends in computing will not end anytime soon.

What Is Moore's Law?

“In brief, Moore’s Law predicts that computing chips will shrink by half in size and cost every 18 to 24 months. For the past 50 years it has been astoundingly correct.” –Kevin Kelly, What Technology Wants


Gordon Moore's chart plotting the early progress of integrated circuits. (Image credit: Intel)

In 1965, Gordon Moore of Fairchild Semiconductor (later a cofounder of Intel) had been closely watching early integrated circuits. He realized that as components shrank, the number that could be crammed onto a chip was rising steadily, and processing power along with it.

Based on just five data points dating back to 1959, Moore estimated the time it took to double the number of computing elements per chip was 12 months (a number he later revised to 24 months), and that this steady exponential trend would result in far more power for less cost.

Soon it became clear Moore was right, but amazingly, this doubling didn’t taper off in the mid-70s—chip manufacturing has largely kept the pace ever since. Today, affordable computer chips pack a billion or more transistors spaced nanometers apart. 
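To get a feel for how quickly a 24-month doubling compounds, here is a rough back-of-the-envelope sketch (the 1971 baseline of roughly 2,300 transistors, the Intel 4004, is a commonly cited reference point, not a figure from this article):

```python
# Project transistor counts under a fixed 24-month doubling period.
# The 1971 baseline (~2,300 transistors, the Intel 4004) is an
# illustrative, commonly cited reference point.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Transistor count projected from a fixed doubling period."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

for year in (1971, 1991, 2011):
    print(year, f"{transistors(year):,.0f} transistors")
```

Twenty years of doubling multiplies the count roughly a thousandfold, and forty years roughly a millionfold, which is how a 2,300-transistor chip grows into a billion-transistor one.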


Moore's Law has been solid as a rock for decades, but the core technology's ascent won't last forever. Many believe the trend is losing steam, and it's unclear what comes next.

Experts, including Gordon Moore, have noted Moore’s Law is less a law and more a self-fulfilling prophecy, driven by businesses spending billions to match the expected exponential pace. Since 1991, the semiconductor industry has regularly produced a technology roadmap to coordinate their efforts and spot problems early.

In recent years, the chipmaking process has become increasingly complex and costly. After processor speeds leveled off in 2004 because chips were overheating, multiple-core processors took the baton. But now, as feature sizes approach near-atomic scales, quantum effects are expected to render chips too unreliable to operate dependably.
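The benefit of adding cores is bounded by how much of a workload can actually run in parallel, a relationship known as Amdahl's law. A minimal sketch:

```python
# Amdahl's law: overall speedup from N cores when a fraction p
# of the workload is parallelizable and the rest stays serial.
def amdahl_speedup(parallel_fraction, cores):
    serial = 1 - parallel_fraction
    return 1 / (serial + parallel_fraction / cores)

# A task that is only 50% parallelizable gains less than 2x
# no matter how many cores you add.
for p in (0.5, 0.9, 0.99):
    print(f"p={p}: 16 cores -> {amdahl_speedup(p, 16):.2f}x")
```

Even a perfectly engineered 16-core chip speeds up a half-serial task by less than a factor of two, which is why multiple cores complement, rather than replace, faster individual processors.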

This year, for the first time, the semiconductor industry roadmap will no longer use Moore's Law as a benchmark, focusing instead on other attributes, like efficiency and connectivity, demanded by smartphones, wearables, and beyond.

As the industry shifts focus, and Moore's Law appears to be approaching a limit, is this the end of exponential progress in computing—or might it continue awhile longer?

Moore's Law Is the Latest Example of a Larger Trend

“Moore’s Law is actually not the first paradigm in computational systems. You can see this if you plot the price-performance—measured by instructions per second per thousand constant dollars—of forty-nine famous computational systems and computers spanning the twentieth century.” –Ray Kurzweil, The Singularity Is Near

While exponential growth in recent decades has been in integrated circuits, a larger trend is at play, one identified by Ray Kurzweil in his book, The Singularity Is Near. Because the chief outcome of Moore’s Law is more powerful computers at lower cost, Kurzweil tracked computational speed per $1,000 over time.

This measure accounts for all the “levels of ‘cleverness’” baked into every chip—such as different industrial processes, materials, and designs—and allows us to compare other computing technologies from history. The result is surprising.
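As a sketch of how such a comparison works, price-performance can be computed as operations per second per constant $1,000. The machines and figures below are loose, hypothetical illustrations for the sake of the calculation, not Kurzweil's actual data:

```python
# Price-performance: operations per second per constant $1,000,
# the measure Kurzweil plots across computing paradigms.
# All machines and figures below are hypothetical illustrations.
def price_performance(ops_per_second, cost_dollars):
    """Operations per second bought by $1,000 (constant dollars)."""
    return ops_per_second / (cost_dollars / 1000)

machines = [
    # (era, ops/sec, inflation-adjusted cost in dollars)
    ("relay machine, 1940s",       10,    5_000_000),
    ("vacuum-tube machine, 1950s", 1e3,   5_000_000),
    ("transistor machine, 1960s",  1e5,   1_000_000),
    ("microprocessor PC, 1980s",   1e6,   5_000),
    ("smartphone, 2010s",          1e10,  500),
]

for era, ops, cost in machines:
    print(f"{era:28s} {price_performance(ops, cost):.3g} ops/sec per $1,000")
```

On this metric the trend spans many orders of magnitude across technologies, which is the point of plotting price-performance rather than any single technology's feature size.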

The exponential trend in computing began well before Moore noticed it in integrated circuits or the industry began collaborating on a roadmap. According to Kurzweil, Moore's Law is the fifth computing paradigm. The first four were computers built on electromechanical, relay, vacuum tube, and discrete-transistor computing elements.


There May Be 'Moore' to Come

“When Moore’s Law reaches the end of its S-curve, now expected before 2020, the exponential growth will continue with three-dimensional molecular computing, which will constitute the sixth paradigm.” –Ray Kurzweil, The Singularity Is Near

While the death of Moore's Law has often been predicted, today's integrated circuits do appear to be nearing physical limits that will be challenging to overcome, and many believe silicon chips will level off in the next decade. So, will exponential progress in computing end too? Not necessarily, according to Kurzweil.

The integrated circuits described by Moore’s Law, he says, are just the latest technology in a larger, longer exponential trend in computing—one he thinks will continue. Kurzweil suggests integrated circuits will be followed by a new 3D molecular computing paradigm (the sixth) whose technologies are now being developed. (We'll explore candidates for potential successor technologies to Moore's Law in future posts.)

Further, it should be noted that Kurzweil isn’t predicting that exponential growth in computing will continue forever—it will inevitably hit a ceiling. Perhaps his most audacious idea is that the ceiling is much further away than we realize.

How Does This Affect Our Lives?

Computing is already a driving force in modern life, and its influence will only increase. Artificial intelligence, automation, robotics, virtual reality, unraveling the human genome—these are a few world-shaking advances computing enables.

If we’re better able to anticipate this powerful trend, we can plan for its promise and peril, and instead of being taken by surprise, we can make the most of the future.

Kevin Kelly puts it best in What Technology Wants:

“Imagine it is 1965. You’ve seen the curves Gordon Moore discovered. What if you believed the story they were trying to tell us…You would have needed no other prophecies, no other predictions, no other details to optimize the coming benefits. As a society, if we just believed that single trajectory of Moore’s, and none other, we would have educated differently, invested differently, prepared more wisely to grasp the amazing powers it would sprout.”

To learn more about the exponential pace of technology and Ray Kurzweil's predictions, read his 2001 essay "The Law of Accelerating Returns" and his book, The Singularity Is Near.

Image credit: Shutterstock.com (banner), Intel (Gordon Moore's 1965 integrated circuit chart), Ray Kurzweil and Kurzweil Technologies, Inc/Wikimedia Commons/CC BY

Jason Dorrier

Jason is managing editor of Singularity Hub. He cut his teeth doing research and writing about finance and economics before moving on to science, technology, and the future. He is curious about pretty much everything, and sad he'll only ever know a tiny fraction of it all.

Discussion — 39 Responses

  • JordanV March 8, 2016 on 10:47 am

    “After processor speeds leveled off in 2004 because chips were overheating, multiple-core processors took the baton.”

    At least for x86, it was more the shift away from Intel’s Netburst architecture back to the more efficient P6 design that allowed Intel to overcome thermal limits and continue the growth in processing power. The Pentium D, introduced in 2005 as the multiple-core version of the Pentium 4, ran significantly hotter than its single core predecessors.

    Although multiple-core designs can increase processing power, their benefit is limited to computing tasks that can be made parallel whereas nearly all computing tasks benefit from higher single-core performance. As a function of IPC and maximum clocks under ambient cooling, performance has been comparatively stagnant since 2006.

  • Jon Roland March 8, 2016 on 12:57 pm

    This trend is for integrated circuits, which are ultimately limited by a molecular or atomic scale. But at that scale we get quantum computing, which is good for several more orders of magnitude, but the “chips” won’t keep getting smaller. We will then have massively parallel processors (and memory) that will start growing in size.

  • Rami Molander March 8, 2016 on 1:48 pm

    Moore’s Law was supposed to end 20 years ago. And 10 years ago. Why would this time be different?

  • dobermanmacleod March 8, 2016 on 2:06 pm

    The Singularity Feedback Loop is the paradigm: Intelligence creates technology, Technology improves intelligence. You can see this everywhere in our high technology society, where a new piece of technology is introduced, and leads to a score of new insights, that in turn lead to even better technology. Particularly when you consider that technology is actually the science of improved techniques, and isn’t just electronic devices.

    By the way, I am convinced that a main brake on our current speed of technological improvement is psychological, since the cultural baggage caused by being stuck in previous, obsolete paradigms is very stubborn and inefficient. Hopefully, AI will substitute for human psychology, and the pace of technological improvement can be further optimized. Furthermore, humans integrating with their technology will probably be a gigantic booster.

    Let me give one example which is particularly galling to me: cold fusion. Our scientific establishment is very bureaucratic, and people are very slow to change their minds concerning a mistaken notion. This is very, very expensive when it comes to an energy production technology that gives around a 10^5 boost in energy density over burning fossil fuel, is non-polluting, and neither emits radiation nor creates radioactive waste. Two weeks ago an open source, fully transparent scientific organization, the Martin Fleischmann Memorial Project (MFMP), published a recipe for cold fusion. This is fully replicable, and thus there ought to be absolutely no controversy.


    Instead, due to human psychology, there is a stubborn refusal to adopt and investigate it. This is a prime example of human psychology being a primary obstacle to technological improvement.

  • bobdc10 March 8, 2016 on 2:19 pm

    Since large advancements must be market driven and processor speed is no longer the bottleneck that it once was, the demand for faster processors for the average user has shrunk. We average users are waiting for faster internet speeds, not faster processors.

    Cold fusion? I’m still waiting for my CF generator. Does Sears carry them?

  • PhiloWork March 8, 2016 on 3:52 pm

    My prediction is that the stall in computing power will last 10 years, with economic stagnation looming in the meantime. In January 2016 I wrote an article about the impact on our lives, Intel, Kurzweil, and politics.
    Read more: http://philosophyworkout.blogspot.de/2016/01/a-decade-of-economic-stagnation-looms.html

  • DSM March 8, 2016 on 4:23 pm

    Given that minds, computers, and even life are all just forms of self-interacting patterns, the laws of physics leave us with enormous room for improvement, and the simple mechanistic view of computers as machines limited by the characteristics of atoms should have been thrown out the moment quantum computing showed that so much more was possible.

    The limits may be even further away than even Ray Kurzweil suggests.

    So how small can computers be? Well, much of the storage and computation can be done at a subatomic level.

    e.g. The simplest computer is just a loop of memory cells where the data and address lines alternate so that the data output from one is the address to select the data from the next.

    So how small can that be? Well, that depends on how small you can make an electron delay line with spin-encoded data and how many electrons you can fit into it. Whatever the exact value, you are definitely talking about a data density of greater than 1 bit per 0.05 nanometers.

  • shin March 9, 2016 on 2:59 pm

    What’s going to happen:
    First, transistors will reach the state where quantum effects create problems. That’s the wall. While we will invent something new after that, years down the line, probably based on optics, genetics, or quantum theory, there will be a span of many years where the wall is in place.
    Second, during the period of the wall (which can also be experienced whenever a natural disaster destroys research or chip labs) the technology gradually evolves back into its 20th century interpretation of giant computers covering walls and filling rooms. But,
    Third, we have learned something thanks to cellphones: The ever shrinking computer is not always useful. There’s a limit to the human interface size, just like there’s an optimal range to view theater screen sizes. A wrist watch, for instance, can’t have fonts too small to read. Elderly people typically need even larger fonts. But pocket computers, smart phones, etc., need to still fit into pockets and hands. HOWEVER,
    Fourth: That’s not always a problem. There’s a cut-off point for minimum processing requirements for various tasks. After that, there’s a clarity and efficiency issue, but then that also reaches a peak. Word processing and the power to play music or video, for instance, are features that have already reached something of a pinnacle. With electronics like the Raspberry Pi, people now make their own portable console video game platforms based on models from before they were born. That’s important to Moore’s law, because like the transition from Pong and Atari to Nintendo and PSPs, people eventually realized there’s a maximum graphics density for a portable device, and extra graphics power doesn’t improve the experience (which is why 16-32 bit games are typically the most popular on screens smaller than 4 square inches, and chibi graphics will never die).

    But all of that’s OK, so the portable devices are eventually going to saturate all the core entertainment processing requirement slots: TV, Videos, Audio, Radio, Phones, Video Conferencing, Video Games, and Multiuser Video games.

    Virtual Reality will have ever increasing demands for processing, but those will become dedicated server rooms, not much different from today. They may become as large as the old mainframe arrays of the 1950s and 60s, particularly for cooling. These though, will be tied through cloud computing and torrent relationships through Wifi and Lifi and Fiber Optics whenever someone needs really powerful processing (or just want more bitcoins).

    But then the wall will eventually be torn down by an entirely new technology, and that will be the processors we need to make replicators less like 1980s Hasbro/Mattel toy shops, and more like Captain Picard’s daily request for “Tea, Earl Grey, Hot”.

    • DSM shin March 9, 2016 on 3:52 pm

      There is no wall, and transistors have never been required to make computers; they were just the most economical way of mass-producing logic circuits at the time they were adopted.

      For examples of genuine alternatives see:

      Recent Intel work: http://arxiv.org/pdf/0711.2246.pdf

      Work going back almost two decades! http://science.sciencemag.org/content/277/5328/928.short

      Specialised computers were already operating at speeds of tens of gigahertz in the 1980s, but they required cryogenic cooling.

      One problem is that it becomes meaningless to apply Moore’s Law widely, and its relationship to cloud servers vs commodity devices is very different too. Even the nature of the computation dictates what computing architecture is most appropriate, and this area is becoming more and more diverse.

  • sheera March 11, 2016 on 1:51 pm

    Frankly I think the Kurzweilian view of this article is ridiculously over-optimistic.

    It ignores how incredibly fast an exponential trend with the doubling time of Moore’s law is, compared to the pace of technology development in general. That’s been achieved based on one thing and one thing only over the past century as shown in Kurzweil’s charts — miniaturization. There is a talk somewhere by an Intel guy that shows that *all* of the architectural improvements since the inception of the integrated circuit have resulted in maybe a 50x increase in performance over the entire 50+ year period (IIRC, he might have quoted an even lower number, but I don’t remember exactly). Everything else has come from making smaller and smaller components. We are reaching the limit of that process, and the reality is that there is nothing “waiting in the wings” that would replace miniaturization to create another comparable exponential trend. Making things smaller is the only trick we’ve ever known to create such a rapid doubling time.

    Stacking chips into 3D structures might be good for another order of magnitude improvement or so, just by reducing the length of interconnects, but again that’s not really miniaturization anymore. Quantum computing only provides a speedup for a limited set of problems, and there’s no indication that it will ever get much faster than conventional computers in general, since, again, where is the speedup going to come from if we can’t miniaturize any further? Carbon nanotubes, graphene, etc, all the same deal; maybe we can get a couple more orders of magnitude over the next 10-20 years, but that’s it. Building larger and larger data centers will make faster computers, but only linearly (or even sub-linearly) with respect to cost and time, not exponentially.

    Moore’s law is done, folks. It’s true that many technological trends are exponential, but the scale factor of the trend tends to be very, very slow. We’ve been spoiled by the “low-hanging fruit” of making smaller transistors, but that was literally a once-in-a-civilization event. Welcome to the new normal. 🙂

    That said, we can still have a singularity, if we manage to get human-level AI. That’s pretty much the only technology that could enable such a thing, by removing the rate limits of a finite human population of scientists and engineers from the equation. That still won’t necessarily get us much faster computers, but it can certainly make our linear or very-slow-exponential progress in technology in general a lot faster. Of course, there’s no indication that progress in AI is following an exponential trend at all, so you can’t really forecast if/when that will happen.

    • DSM sheera March 11, 2016 on 11:06 pm

      Perhaps you are underestimating the usefulness of quantum computing? Imagine a neural network where the nodes are cells of qubits; wouldn’t that have its computational speed limited only by propagation delays through the network? Also, if you can build this in a 1000 x 1000 array, then make it 3D and add 1000 layers, that is 3 orders of magnitude improvement, and a billion “q-neurons”.

      If, within 10 years, each q-neuron can take up the size of a transistor in a current Intel CPU, then with those 1000 layers I have a neural chip with the complexity of 100 human brains, in the area of your current desktop CPU.


      It is very “current”, but people are looking at the combination of the two ideas:

      Posner computing: a quantum neural network model


      Quantum perceptron over a field and neural network architecture selection in a quantum computer


      • sheera DSM March 12, 2016 on 1:59 pm

        I might have sounded more down on quantum computing than I actually am — I don’t doubt that it will provide some great speed ups, I just question whether it can constitute a “sixth paradigm” that will continue our current exponential trend significantly into the future.

        Obviously two or three orders of magnitude of improvement will be a huge deal. What’s new is that it will probably take us significantly longer to achieve it than it has historically, and we’ll have more and more diminishing returns as these improvements happen. So we’ll fall off of that nice exponential trend Kurzweil likes to plot permanently, but we’ll still be making progress.

        Progress in making faster computers isn’t likely to ever really stop, it will just look a lot more like progress in other areas of technology instead of the downright surreal speed of Moore’s law.

        • sheera sheera March 12, 2016 on 2:10 pm

          One thing to point out though is that AFAIK there is nothing about quantum computing that makes its speed any less limited by sources of traditional integrated circuit slowness than current technology. It’s not magic, it just enables some algorithmic speedups on specific problems (that set is called BQP, and it isn’t believed to contain the NP-hard problems, so those are expected to stay hard even on a quantum computer). I think you still need some sort of clock for a general purpose quantum computer, and even analog circuits aren’t infinitely fast. I’m actually not sure what the benefit of implementing a neural network on quantum hardware would be versus conventional hardware, though some guys at Google seem to think there would be some benefit.

          Of course I’m hardly an expert on quantum computing so take what I say with a grain of salt, but my main point, which I’m 100% confident of, is that quantum computing is actually a quite special purpose technology that speeds up solving a specific set of problems, and definitely not some magical thing that will make everything super-fast. It has a lot of potential in certain scientific areas (molecular dynamics in particular I think), as well as cryptography, but in general I think people tend to overhype it because they’re thrilled by the quantum spookiness.

          • DSM sheera March 12, 2016 on 2:33 pm

            But a QNN could achieve cognition, be able to host an AGI that would be able to compute symbolically and therefore not be limited by the constraints dealt with in traditional computational complexity theory.

            How is building a mathematical genius, with the combined knowledge of every mathematician that came before it, not going to create an accelerating advance in the speed of problem solving? This would be the conceptual or knowledge facet of a cultural singularity, a point we cannot extrapolate beyond because the concepts we have now cannot encapsulate what the AGI mathematical genius will discover. Furthermore, advances in all other sciences flow from the expansion of mathematical theorems and understanding, therefore it is this new mathematical knowledge that will mark the exact threshold of the singularity.

            • sheera DSM March 12, 2016 on 2:51 pm

              I agree that creating an AGI would likely eventually lead to a singularity. As I said in my original post, we have a finite population of scientists and engineers, limited learning and information sharing capabilities, coordination and organization difficulties, etc. Being able to create a large population of AGIs would greatly reduce those problems, creating a huge speedup in scientific progress.

              However, as far as I know there hasn’t been any serious suggestion that quantum computing would lead to AGI. Quantum computing and AGI are two pretty much unrelated fields of research as far as I can tell. Fortunately, I don’t think quantum computing is required for AGI either, just lots and lots of basic research with traditional computing and perhaps neuroscience. I’m sure we’ll get there, but it’s very hard to say when and there is no real exponential trend in this field to help us predict it.

              As far as computational complexity theory goes, there is no getting around those constraints, ever. They are as fundamental as pure mathematics, because they *are* pure mathematics. Our own brains, computers, quantum computers, and the universe itself are all limited by computational complexity. Computing symbolically is just as subject to computational complexity as anything else.

              • DSM sheera March 12, 2016 on 2:55 pm

                You dismiss my observations without offering an ounce of proof or a single reference, in fact you ignore the references I have provided and actually contradict what they suggest is possible, why?

                • sheera DSM March 12, 2016 on 3:07 pm

                  What? Your references are about quantum neural networks. I didn’t say quantum neural networks aren’t possible, just that they aren’t AGI, and I don’t see why they would bring us closer to AGI than regular neural networks.

                  I’m not sure what kind of proof or references you’re looking for from me… all I’m saying is that (1) quantum computing isn’t magic and only speeds up certain problems, (2) nobody has any idea yet how to build AGI so there isn’t any evidence that quantum effects would help, and (3) you can’t get around computational complexity.

                  These aren’t exactly bold claims I’m making here; you can come to the same conclusion by learning just the very basics of computational complexity, quantum computing, and where we currently are on AGI (pretty much nowhere, as far as I can tell).

                  • DSM sheera March 12, 2016 on 3:17 pm

                    1) a QNN speeds up all problem solving by being able to compute at the mathematical proof level. There is a big difference between working with proofs and calculating the results from a function that has been proven to be valid.

                    2) Plenty of people know “How to Create a Mind,” and one way may involve QNNs because of a phenomenon in human brains that one of the references provided actually talks about!

                    3) See point 1, you confuse computation with cognition. The fact that even human mathematicians can provide new insights years before formal proofs confirm their validity indicates that minds are not so strictly constrained by mechanistic limits as you suggest.

                    • sheera DSM March 12, 2016 on 4:02 pm

                      I think we’re kind of talking past each other here.

                      1) You’re still confusing QNNs with AGI — I don’t see anything in your references that suggests a QNN can “compute at the mathematical proof level” the way an AGI might be able to. They certainly can’t today, and maybe they will be able to with some new breakthroughs, and maybe they won’t, and maybe plain old neural networks will be able to compute this way eventually! It’s just wild speculation regardless.

                      3), and this goes with point 1, the entire point of AGI is that cognition is grounded in computation. If it weren’t, AGI would be impossible. Just because an AGI runs an algorithm that creates cognition doesn’t mean that the underlying algorithm is suddenly magically exempt from computational complexity constraints! It doesn’t matter how smart or clever you are, the traveling salesman problem will always be NP-hard, and unless P=NP, there will be no way to compute it in polynomial time. That’s a property of the *problem itself*, not the strategy you use to solve it. That’s literally the whole point of the entire field of computational complexity.

                      2) We’ll have to agree to disagree on this one. I can’t really prove a negative, but if people know how to create a mind, why aren’t they doing it? The computational capacity should be there already, according to Kurzweil’s argument…

                    • DSM sheera March 12, 2016 on 4:15 pm

                      Why do you avoid key points that I make and just suggest that I am confused? It is you who are confused if you do not understand the quantum nature of natural intelligence and how it can be emulated in silico.

                      You don’t seem to understand what a mind really is and what it can do that a function set cannot. A mind can decide on a course of action before it has a complete result from any computation. The mind can leap from peak to peak without having to traverse computational valleys between. It is not limited by the constraints that are key to your argument, and the fact that we can act on a mathematical insight before we can confirm its validity with a mathematical proof is the perfect example of this. We can transcend logic, without being illogical. This is profound and must be understood if you wish to understand humans and the future of true thinking machines. If you can’t do this you can’t know what the future holds at all.

                    • sheera DSM March 12, 2016 on 4:04 pm

                      Wow, this stylesheet is awful for deep nested comments. Is there a way to view comments without squishing them against the right edge of the screen?

        • DSM sheera March 12, 2016 on 2:12 pm

          The more I research the different technologies and computing logics that have been explored in the last 50 years the more I am confident that as a general principle Moore’s Law will remain a valid indicator until such time as the entire idea becomes irrelevant and that will happen within another 50 years. The entire idea of using general purpose Von Neumann architecture in every application is already starting to be superseded which undermines the sanity of applying Moore’s observation so widely.

          The thing people just don’t get is that the nucleus of every cell in your body is already a massively parallel computing engine. This engine evolved to serve the needs of evolution, but there is nothing to stop humans from consciously programming it to serve their higher level computational needs, which would create a form of meta-evolution.

          To make the point memorable, your turds (specifically the bacterial cells they contain) have more computational potential than every computing device ever constructed by humans combined.

          • sheera DSM March 12, 2016 on 6:21 pm

            Cells are not general purpose computers. An inert lump of rock is also a “massively parallel computing engine” in exactly the same way — after all, the behavior of all the subatomic particles in there needs to be computed in order for it to sit there and be a rock. That doesn’t mean much for Moore’s law, and if we’re talking about the next 50 years I’d offer good odds on a bet that biology isn’t going to help us much as far as improving our collective processing power goes.

            • DSM sheera March 12, 2016 on 11:35 pm

              You have no idea what you are talking about, the mechanisms in the cell nucleus are Turing complete. Life is a Turing Complete process.

              One of many papers from about 20 years ago: http://link.springer.com/article/10.1007/s002240000112

              N.B. “Turing Complete”

              Your rock comment is idiotic, so much so that I am tempted to call you out for being nothing more than a troll.

              • sheera DSM March 13, 2016 on 12:10 am

                I’m honestly not sure if you’re being intentionally obtuse here or what. You seem to be sincerely debating with me on this, but you keep jumping to unrelated topics.

                Who cares if you can construct a Turing-complete model from a DNA-related mechanism? We’re talking about whether or not Moore’s law is going to continue over the next 50 years, remember? Turing-complete doesn’t mean tractable or in any way comparable in performance to our current hardware.

                My point is that cells are not general purpose computers. I’m sure you can *construct* a general-purpose computer from components of cells, but it will be incredibly inefficient and will not help us continue Moore’s law. So saying that cells are “massively parallel computing engines” is just as irrelevant to Moore’s law as saying that rocks are “massively parallel computing engines”. No, I’m not trolling.

                • DSM sheera March 13, 2016 on 1:23 pm

                  “but you keep jumping to unrelated topics.” = “this I will not consider because it does not suit me”. Well, that is the sort of strategy a fool or a troll would use.

                  Performance means “operations per second per joule”, so how many more or fewer of them do we get from an Intel CPU? Because that is all that matters, given that any Turing-complete process can emulate said Intel CPU and its peripherals.

                  You honestly don’t know, do you? Go and do the maths. How many ops per cell nucleus per second per joule per turd? How many levels of abstraction are there between the DNA processes, which are Turing complete, and the Intel instruction set, and how much does this reduce the overall performance? Work that out and you can in fact make a direct comparison of performance.

                  If you aren’t a fool you are a Troll, so if you continue to dismiss the concept of computational equivalence, which are you? I say you are a troll.
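The comparison DSM demands can at least be framed numerically. A back-of-envelope sketch in Python; every constant below is an assumed round figure (per-cell "operation" rates and metabolic power are not settled quantities), so treat the output as an illustration of the units involved, not a measured result.

```python
# Rough ops-per-joule framing of the cells-vs-CPU argument.
# All four constants are loose assumptions for illustration only.

CELL_OPS_PER_SEC = 2e6    # assumed biochemical "operations"/s in one bacterium
CELL_POWER_W = 1e-12      # assumed metabolic power of one bacterium (~1 pW)
CPU_OPS_PER_SEC = 1e11    # assumed instructions/s for a modern multicore CPU
CPU_POWER_W = 65.0        # assumed CPU package power

cell_ops_per_joule = CELL_OPS_PER_SEC / CELL_POWER_W
cpu_ops_per_joule = CPU_OPS_PER_SEC / CPU_POWER_W

print(f"cell: {cell_ops_per_joule:.1e} ops/J")
print(f"cpu:  {cpu_ops_per_joule:.1e} ops/J")
```

Under these assumptions the raw biochemistry wins on energy efficiency by many orders of magnitude, which is DSM's point; sheera's counterpoint is that the "levels of abstraction" needed to emulate a CPU instruction set on that substrate would consume most or all of that advantage.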

      • sheera DSM March 12, 2016 on 4:42 pm

        Gonna reply here so that my comment isn’t squished.

        DSM: “Why do you avoid key points that I make and just suggest that I am confused. It is you who are confused if you do not understand the quantum nature of natural intelligence and how it can be emulated in silico.

        You don’t seem to understand what a mind really is and what it can do that a function set cannot. A mind can decide on a course of action before it has a complete result from any computation. The mind can leap from peak to peak without having to traverse computational valleys between. It is not limited by the constraints that are key to your argument, and the fact that we can act on a mathematical insight before we can confirm its validity with a mathematical proof is the perfect example of this. We can transcend logic, without being illogical. This is profound and must be understood if you wish to understand humans and the future of true thinking machines. If you can’t do this you can’t know what the future holds at all.”

        Sorry, but this strikes me as pure “quantum woo”. My background is in computer science and I’m very interested in AI, so I have at least a little more than a pure layman’s knowledge of this stuff. Your assertions about the power of minds are pretty much an appeal to magic. No, we cannot “transcend logic”, and yes, we are absolutely limited by those constraints, just like everything else in the universe. Acting on mathematical insights without formal proof does not enable us to bypass computational complexity issues. But I’ve never been able to convince anyone with these kinds of beliefs that they shouldn’t believe in magic, so I’m not going to try here. Take some theoretical computer science courses online, I think you’d find them interesting.

        • DSM sheera March 12, 2016 on 6:23 pm

          It isn’t “quantum woo” when you realise that there is no difference between a mathematical or conceptual distance and a physical one, and then remember that spooky action at a distance is in fact a demonstrated reality: it is how the universe works. Traditional maths and logic are powerful tools, but they are incomplete in their ability to describe reality; a mind’s substrate can be described by them even if they cannot be used to prove the limits of what the mind is capable of.

          Absolutely no mysticism is required (nor are your insults justified). Your ability to understand the implications of what is already known is inadequate, but you refuse to even consider this in the light of the evidence. How are you not a fool for behaving that way?

          I can formulate an understanding or insight and use that as a tool to allow me to traverse a conceptual landscape, without ever having to formally check that the insight I use is logically valid. I can move across that landscape at a speed that is faster than a logical path permits; therefore you cannot use logic to define what my conceptual speed limit is.

          All your Comp Sci studies have achieved is to reduce your mind down to the level of a machine, rather than teach you how to raise a machine up to the level of a mind. I doubt you even understand computing machines well enough to design a CPU anyway; assuming I cannot would be a mistake. In fact, if you were able to recognise the pattern in the logo that accompanies my posts, you would respect my knowledge far more than you evidently do.

          • sheera DSM March 13, 2016 on 12:18 am

            Ok friend, I think I’m done with our conversation. I’m prepared to have a discussion of the prospects for Moore’s law or AGI, but this has gone way off the rails and I don’t see anything rational in what you’re saying anymore. Have a good night (or day).

            • DSM sheera March 13, 2016 on 1:24 pm

              Good riddance little troll.

              • Matthew DSM March 20, 2016 on 4:00 pm

                You sound like a dilettante hippie who only half understands what you’re talking about, and you seem to be projecting that mistake onto others. On top of your convoluted understanding and inability to communicate with people, the insults are just ludicrous. And nobody cares what your little icon means when you’re arguing with such disdain. How enlightening! And actually, people do get that biology harbors inconceivable levels of computation. You didn’t invent this idea. I know you know that, but when you say people don’t get it, it comes off as intellectually elitist. I get it, and I’m no genius.

                • DSM Matthew March 20, 2016 on 5:40 pm

                  You just made a fool of yourself, see my other comment in reply to you today.

                  Furthermore, using the bully term “nobody cares” is idiotic because it suggests that you have delusions of grandeur and are a self-appointed representative for “everybody”.

                  “you didn’t invent this idea.” Nor did I imply that I did, but I did provide references that prove the point. So tell me: should I view your comment as simply foolish or deliberately deceitful? Either way, you seriously devalued yourself in a comment where you are also trying to pose as an authority and pass judgement on others. Oh, the irony.

                  If you are not “a genius” it is only because you waste time trying to judge others when you are not qualified or knowledgeable enough to do so, when you should be trying to improve yourself instead.

                  • Matthew DSM March 31, 2016 on 8:58 am

                    Not sure what you’re rambling about in such a condescending manner, but I was just calling you out on using the word troll, basically. And yes, you said people don’t get that biology contains computation. They actually do; you’re not the only one aware of this. That is what I was implying when I made that slightly hyperbolic statement that you didn’t invent the idea. And I am pleased that you can admit that. (Just kidding. I really don’t care. But hey, why don’t you comment and explain to me why I am so disinterested in you? Please enlighten me.) Oh, and I am so worried about “seriously devaluing myself” though, lol. Thanks for all the insults, and good luck learning to communicate more effectively. Call me a fool a few more times; that’s constructive.

    • ahier sheera October 9, 2016 on 1:34 am

      I think you might be missing part of the point here: Humans are going to live longer and longer and that will allow greater and greater opportunity for technological advancement.

      Perhaps you have seen this?


      It is sourced by a recent article in Nature:


  • Matthew March 20, 2016 on 3:36 pm

    “The integrated circuits described by Moore’s Law, he says, are just the latest technology in a larger, longer exponential trend in computing—one he thinks will continue. Kurzweil suggests integrated circuits will be followed by a new 3D molecular computing paradigm (the sixth) whose technologies are now being developed.”

    I am kinda annoyed with the tone of this article. It, like many mainstream media outlets, paints Kurzweil as this philosophical quasi-spiritualist who “believes” in or “says” or “suggests” things. Kurzweil is a scientist and inventor, and he has a historical record of accurately reciting (not really predicting…) when certain technological innovations will take off, based on developing fields of science and technology, economics, and other data that lead to extremely educated guesses, not fortune telling. The Singularity Is Near is one of the most well-cited books I have ever seen in my life. Every other sentence is linked to a footnote that explains, in the back of the book, the peer-reviewed scientific paper or statistical data to which he is referring.

    He does not just sit there wishing things for fun and hoping he’s right. He lists, as you said, already-in-progress technology. You briefly listed one (of many) examples of the silicon chip’s successors: 3D molecular computing. Off the top of my head, from having read the book 8 years ago, I recall a few more examples, already in development, and I have read news articles on their development since. In addition to 3D molecular computing, there is light computing and computing with DNA solutions. (I am no computer scientist, but from what I remember this kind of computing is apparently inapplicable for certain calculations; my theory is that perhaps somehow in the future, the computational power of DNA computing solutions could be harnessed to translate to those other computational operations that currently seem inapplicable, perhaps with some new programming language.)

    He also talked about computing with brain cells, which we have been doing for about 15 years now. Kurzweil also talked about latent computation existing in inanimate objects like stones or water. I remember one part of his book measuring the level of computation in a single stone, and it’s astronomical, and dynamic. It’s not just an object sitting there frozen in stasis; it is brimming with dynamic energy and exchanges of information on a molecular and quantum level. Perhaps someday we could tap into the virtually infinitely complex mathematics of systems in nature to do our computing. And then there is quantum computing: whatever substrate incarnates this level of computing will blow everything else combined out of the water.

    I will say that I disagree with Kurzweil on the ceiling of computation. Like I said, I’m no computer scientist or quantum physicist, but it seems to me that whether the universe is digital or analogue, it is definitely mathematical and hence intrinsically computational, and if infinite parallel universes exist then I don’t understand how there could be any limit … to anything.

    • DSM Matthew March 20, 2016 on 5:00 pm

      Talking about the computational potential of inanimate objects is a little silly because it causes fools to think that biochemical life processes are just like rocks, that they merely have potential, when really they are already powerful computational engines that are Turing complete. See one of the other commentators here for an example. 🙁

      However, if you are going to just talk about the potential of things to be arranged into self-interacting patterns (which is what a computer essentially is), then the limit is the patterns of interactions in the virtual particles that constitute the quantum foam, or perhaps even lower at the “string” level, in which case the entire universe may already be a computer (in the way that DNA mechanisms already are) and is just waiting for us to work out how to run our own programs on it.

      This is the point at which the universe itself becomes sentient, in the same way that biochemistry did with the advent of hominid brains defined by DNA.

  • Matthew March 20, 2016 on 3:47 pm

    It seems to me the question of whether or not there is a limit to computation is like asking whether or not infinite parallel universes exist in the same space and if it’s possible to travel between them. If there is a limit to computation, it would be all the matter in our universe: an already inconceivable amount of data, that we may never even need.

  • cat1092 May 10, 2016 on 4:24 pm

    Back to CPUs’ power, as measured by GHz, dropping between 2004 and 2012: what happened? It was appearing that 4GHz CPUs would be the norm for everyone, and I know that Intel (as well as AMD) has the technology for this and to make these chips run cooler while consuming less wattage per GHz.

    For example, back in 2006, I had upgraded a Dell Dimension 2400 to its max: a 3.06GHz CPU (Northwood) that included H/T technology, among the first group of Intel CPUs to have this on a wide scale. The only problem with the chip (other than that it uses nearly as much power as an i7-4770 & doubled as a space heater) was that the L2 cache was limited to 0.5MB. There were mobile CPUs at the time that had a 2MB cache, so there was little reason not to bump this up on the Intel 3.06GHz desktop chip. It would have been a sure-fire game changer, and the chips would still hold value today.

    Moving forward a few years to 2008-09, the Intel Core 2 Quad Q9650 (3.0GHz) with a 12MB L2 cache was a dang fast CPU & still is. I don’t care what is published in articles; what I do know is that I can fire up the PC and open two Web browsers while the AV+AM apps are updating & running a startup (or missed scheduled) scan, and Secunia PSI is auto-updating other software, such as Adobe Flash & 3rd party browsers. I have never had it freeze due to load & have never used overclocking to enhance performance.

    Fast forward a couple more years: many of the 1st gen Intel Core ‘i’ series CPUs cannot keep up with the Q9650, especially the dual-core models; with many apps running or open, the entire PC will freeze.

    Moving on to the 3rd gen ‘i’ series, there were a few very powerful quads, even in notebooks, of which I have one that runs at 2.4GHz with up to 3.4GHz Turbo Boost. Heat is an issue though, and some OEMs may have addressed the issue better than Samsung, which by chance is no longer building Windows computers; they’re making enough from SSD & OEM RAM sales to have left this behind.

    Now for the 4th gen ‘i’ series (Haswell): like the 3rd gen Ivy Bridge, it has a great lineup of high-powered desktop CPUs (though this is where the watering down of the notebook CPUs really began showing), and among the quad core desktop chips, I have the cream of the crop in the i7-4790K, which is a temporary replacement in my XPS 8700 that shipped with an i7-4770. Until I can save for the rest of the needed components for a future build, it’s being broken in. One major issue Intel cheaped out on was the internal cooling of these chips themselves (3rd & 4th gen): they used thermal paste rather than fluxless solder underneath the cap where the consumer places a drop of thermal solution, taking the cheap route out, while many loyal Intel enthusiasts would have paid $15 more for the proper solder. If either of my 3rd/4th gen CPUs doesn’t run after 10 full years, they’ve forever lost a customer in me, as I have CPUs still running that are 10, 12 & 13 years old.

    Plus, had I known Intel was doing this, I would never have bothered with the i7-4790K.

    Now for the main issue. Intel can build 4GHz chips (as we see in ads of major retailers for desktop PCs); they also did so in the 6th gen with the i7-6700K. So why are they building puny i7 dual cores for notebooks that don’t break the 2.0GHz mark? I’d expect this for an i3, and maybe an i5, yet surely not an i7, and on a meager 20W of power? My 1st gen i7-620M can outperform these low-power chips, and uses only 15W more in doing so. The only obvious difference may be graphics quality, yet I have only one 1st gen notebook without a discrete nVidia card. Plus, Intel HD graphics ranks very low on the PassMark scale anyway (under 500), so most any discrete card (even 5+ year old ones in desktop PCs) will beat out the onboard graphics that Intel’s CPUs offer. It’s just there so that Intel can claim they provide graphics, just as AMD does with their APUs, yet the bottom line is a CPU is supposed to be just that. For years, PC & MB OEMs included their own onboard graphics, & a few of the latter still do. The tradeoff resulted in lost CPU performance, where maybe up to 1GHz was lost.
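The clock-speed complaints above can be framed as a crude efficiency comparison. A minimal sketch in Python; the base clock, core count, and TDP figures are approximate public spec-sheet values, used here as assumptions rather than benchmarks, and "GHz-cores per watt" is only a rough proxy for real performance per watt.

```python
# Rough "base GHz x cores per watt" comparison for the chips discussed.
# Figures are approximate spec-sheet values (assumptions): (GHz, cores, TDP W).

chips = {
    "Pentium 4 3.06GHz (Northwood)": (3.06, 1, 82.0),
    "Core 2 Quad Q9650":             (3.00, 4, 95.0),
    "Core i7-620M":                  (2.66, 2, 35.0),
    "Core i7-4790K":                 (4.00, 4, 88.0),
}

def ghz_cores_per_watt(ghz: float, cores: int, tdp_w: float) -> float:
    """Crude efficiency proxy: aggregate base clock divided by rated TDP."""
    return ghz * cores / tdp_w

for name, spec in chips.items():
    print(f"{name}: {ghz_cores_per_watt(*spec):.3f} GHz-cores/W")
```

Even by this blunt metric the newer parts deliver far more aggregate clock per watt than the Northwood-era chips, which is the trade Intel made: peak GHz stalled while efficiency kept climbing.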

    Moore’s Law has already passed in some ways, and the notebook (or mobile) area is the one most affected. When it appeared there were finally going to be mobile 3.0-3.5GHz chips as stock equipment for most all notebooks, Intel backed off. If power savings or extended battery life were the issue, then there were other ways to address it: build more 12-cell high-capacity batteries, or use the optical drive slot to install an optional (or reserve) battery; I had one of these in my old (11 year old) Dell Latitude D610. Or better yet, consumers could take responsibility for their own needs & purchase one of the many portable rechargeable battery packs offered at Newegg, Amazon, Costco & other sites, some of which will run for as long as the one built into the notebook. And when I say ‘built into’, many of today’s aren’t quickly swappable with a like-type battery; these are sealed in the case. While replaceable by one who knows what they’re doing, this means the end of low-cost replacement batteries (many rated higher than the OEM-supplied unit) that one could carry as a spare.

    We’ve sacrificed power for longer battery life, when many normally keep the notebook plugged in most of the time as a desktop replacement, using the battery only while out of the office, out to lunch, or sitting outdoors on a nice day performing work. Few are going to be away from an AC outlet (or DC adapter) long enough to drain one battery, let alone a spare or two.

    Also, there have been no real price reductions since 2010, while there were massive reductions between 2005-09, so the bottom line is that we’re paying more for less. Moore’s Law is already vanishing before our eyes & many have already missed its passing.