A Conversation With Tracy Atkins, Author Of ‘Aeternum Ray,’ A Book About Humanity’s Future


Tracy Atkins has seen the future, and wants us all to know that we should feel pretty good about it.

His just-released book, Aeternum Ray, is a formidable sweep of all the major technological advances that have gotten us to where we are today, and an educated guess as to where we’ll be in the future – 204 years from now, to be exact. With a career in IT, Singularity Hub member Tracy Atkins is someone who has not only been passionate about technology his entire life but has also been an active participant in it. By fifteen he had already written and sold software and, years before the internet, run a bulletin board system connecting the residents of his small hometown of Madison, West Virginia.

Aeternum Ray is a collection of letters written by William Samuel Babington to his son Benjamin. But you can think of William as a resident of the year 2216 sending messages to the world of 2012 about what life will be like 10, 50, 200 years from now. Of course, it wouldn’t be a singularity book if artificial intelligence didn’t figure prominently. Like Kurzweil, Atkins places the technological singularity – when AI matches human intelligence and promptly leaves it in the dust – around 2050. But that was Atkins’ own projection, a timeline he’d come up with based on his own research; he told me that he didn’t read Kurzweil’s The Singularity Is Near until after writing the book.

Tracy Atkins, author of Aeternum Ray.

Why should we feel good about the future? The prominent AI in Atkins’ book, a robot named Ray, tells us so on December 31, 2049. He tells us that he’s here to solve all of humanity’s problems, to cure all diseases, to wipe out hunger and poverty. He does this, but later in the book we find out that even seemingly unlimited intelligence can’t solve all of our problems. Ray changes our technology overnight; meanwhile, we stubbornly remain human.

Aeternum Ray is a marked contrast to the numerous future depictions in popular media in which AI turns out to be the bane, rather than boon, of our existence. Atkins doesn’t buy into Terminator. “There’s a lot of talk about existential risk, AIs becoming evil or bad. There’s a lot of talk about AIs becoming uninterested in humanity altogether. But if you look at the optimistic view that we’re going to build AIs that are helpful and are going to take an interest in humanity…that’s the ultimate goal and I think that future is more likely.”

Aside from selling more tickets at the box office, Atkins thinks our penchant for creating doomsday futures is human nature. “I think a lot of the negativity comes from self-reflection. It’s easy to point out all the flaws we have in ourselves, and we seem to want to transpose those onto our artificial intelligence creations. I wanted to craft, not necessarily a best case scenario even but, say, an optimistic and hopeful look toward the future.”

Aeternum Ray (‘aeternum’ derives from the Latin word for eternity) has all the fixings of a good singularitarian book: AI, exponential progress, virtual realities, uploading minds into computers. Atkins is passionate about technology and the singularity, and he wanted to write a book that appeals both to people like himself and to others who, for whatever odd reason, don’t read websites like Singularity Hub.

“I don’t think the public has a good grasp about what’s just about ready to happen. Augmented reality, virtual reality, artificial intelligence, all of these things are happening rapidly. There are thousands of academics working on this. And then you’ve got a small community of people that are interested in the singularity, maybe three million worldwide. You look at 99.97 percent of the population, they haven’t got a clue.”
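That 99.97 percent figure is easy to sanity-check. Taking the “maybe three million” community at face value against a 2012 world population of roughly 7 billion, the share of people outside it works out to about 99.96 percent – in the same ballpark as the number Atkins cites. Below is a minimal back-of-envelope sketch; the three-million and seven-billion inputs are Atkins’ guess and a rough population estimate, not exact data.

```python
# Back-of-envelope check of the quote above.
# Assumptions: ~3 million people interested in the singularity (Atkins' estimate)
# and a 2012 world population of roughly 7 billion.
aware = 3_000_000
world_population = 7_000_000_000

share_aware = aware / world_population   # ~0.0004, i.e. about 0.04%
share_unaware = 1 - share_aware          # ~0.9996, i.e. about 99.96%

print(f"Aware of the singularity:   {share_aware:.4%}")
print(f"Unaware of the singularity: {share_unaware:.2%}")
```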

Discussion — 11 Responses

  • Matthew Price November 24, 2012 on 9:53 am

Picked up the book last week free from the Kindle borrowing library. I’m not terribly surprised he did not read “The Singularity Is Near” until after writing the book. Honestly, it reads as though he discovered the idea of a singularity and got so excited about it that he wrote a book before learning anything about what that singularity would entail.

Such puzzling anachronisms as people working middle-management jobs in a world full of AI, or an uploaded man who discovers his wife has just died and uploaded, yet has to wait on travel arrangements to reach her. Most damning is that no one is any smarter in any way. A whole chapter is spent dealing with “dark-hearted” people who are assholes to everyone else in VR… even though people can ignore them, block them, make them invisible, and cannot be harmed in any way whatsoever. Right, I guess they’ll continue to glower at people forever! Let’s consider permanently murdering them!

Seriously, in a virtual world so far in the future, with unlimited processing power, people are not allowed to augment themselves in any way. No intelligence augmentation, no genetic variation allowed (what would happen to your soul???), and even the AIs (the all-powerful Ray included) seem no smarter than your average human 1.0.

    The book isn’t believable. Not because it’s too futuristic, but because the author doesn’t seem to understand the implications of his futurism.

    • Steve Pender Matthew Price November 28, 2012 on 7:02 pm

Ah, imperfect parallel progressions of technology ruin sci-fi for me. It’s like a world with teleportation, and they use it to reach their hand into the next room to advance a vinyl record to the next track. The series H+ on YouTube screwed up by having a world with both implanted, computer-integrated eye-screens and manual parking in a parking deck. I think an understanding of human action and Austrian economics would solve these mistakes, the basis being that technology trickles up from solving the easiest problems to the most difficult. Every easy problem would be targeted and eradicated first, so, like you said, a world with AR/VR would also be a world in which you could block those who abuse it. Problems would be solved in proportion to how many people are annoyed by them and how easy they are to solve.

  • Matthew Price November 24, 2012 on 9:57 am

    Sorry for the double post, couldn’t find a way to edit my previous post.

My biggest complaint with the book is that nothing – absolutely nothing – happens between now and the singularity. Smartphones become more wearable and then become a patch… but the whole of the rest of technology seems to be completely blank and stagnant for decades until Ray shows up and suddenly there is infinite progress. The most incredibly exciting decades in the history of the world are about to unfold, but in the book they are glossed over as “more of the same.”

    • Homer Matthew Price November 24, 2012 on 3:05 pm

I haven’t read the book yet, but I can understand your criticism. That seems to be the belief of most folks who hear about the singularity — everything will be more or less stagnant, and then in 2045, suddenly… bam!

Another comment I’ll make is that Ray Kurzweil doesn’t set 2045 as the time of human-level AI. He predicts this will be achieved in 2029, ushering in the singularity 16 years later. Presumably during the intervening years, progress will accelerate until normal humans are unable to keep up. Personally, I don’t think it will even require 16 years. I’d say five years after the Turing test is passed, we will enter absolute science-fiction territory.

  • Robert Schreib November 24, 2012 on 1:11 pm

Yes, but does the book mention the possibility that we might eliminate aging in the near future? Or that such physical immortality might lead to a global class war of the mega-rich old 1%ers suppressing the dirt-poor 99%ers? That is, if nobody’s dying of old age, the next generation isn’t inheriting your estates, etc. If you live forever, you can amass vast wealth, but only at the cost of a majority of the Earth’s population having zilch!

    • Matthew Price Robert Schreib November 25, 2012 on 8:41 am

If you were to instantiate an immortality technology that was sufficiently expensive, and then prohibit technological progress in every other way (forever), then yes, what you suggest might come to pass.

That’s not how it’s going to happen, however.

    • Steve Pender Robert Schreib November 28, 2012 on 7:10 pm

      Resources are zero-sum, but not “wealth”. There is far more wealth on the planet today than 10,000 years ago, but identical elements. Nobody can amass wealth without either trading superior value to be compensated with monetary units, or forcibly confiscating it. The realistic implication of anti-aging tech is ending poverty by ending the need to reproduce, allowing for longer spans of life to solve one’s problems. Poverty begins at birth, where we are a cost to parents, producing nothing. Thus, each new baby is an economic burden for at least 18 years. The problem is that much of that burden is not paid entirely by the parents in modern times, but by others primarily through confiscatory taxation and, more rarely now, voluntary charity. The solution to poverty is increasing individual self-sufficiency. Adults are easily more self-sufficient than children, so it would be advantageous to the wealthy to cheaply propagate anti-aging tech while coupling it with contraception.

  • Jan-Willem Bats November 24, 2012 on 3:24 pm

    Where does the three million figure come from?

    After well over a decade of talks from Kurzweil around the world, you’d think more people would know.

    If that’s not the case, then singularity awareness needs some exponential acceleration.

  • ceresian November 25, 2012 on 2:39 am

    I think he’s right about the 99.97 percent thing. I hope to help him start telling people that big things are coming.

    Looking forward to the book!

    -joe

  • Nick Klamecki November 30, 2012 on 3:42 pm

Twelve years ago, a friend and I both saw a singularity event coming around 2050 just by following the trends. This was before we knew anything about Kurzweil. It’s apparent things are changing. But I don’t think humanity’s problems will be solved. Times change, but life stays the same.