The duo described how their companies are in the market for self-driving cars but have been unable to find anything as smart as the average human, and are in the process of building a machine of equivalent intelligence. Alibaba’s Ma then argued that, within a few decades, self-driving cars will be much cheaper to build than human-level machine intelligence, and are thus a superior solution to our current state of affairs.

With all of this talk about AI, it’s perhaps worth exploring other topics and areas. In a section titled “How To Deal with Deep-Mind,” Andy Weir turns to a topic about which many people are already deeply concerned: the use of artificial intelligence to manipulate the behavior of humans. While exploring the potential for the technology to be used to manipulate people to harmful ends, Weir also touches more broadly on the ethical, social, and political implications.

Though this is one of the more intriguing sections in Weir’s book, it still feels like an afterthought. Worse, he appears more interested in describing science and technology in the style of science fiction than in actually exploring the many possible uses of AI. While I agree that there are surely many more uses than simply manipulating human actions and decisions, I would argue that Weir’s focus produces a misleading reading of what is most important about current human-level AI.

What is most important about technology is our ability to create new and better ways to use existing infrastructure, and new ways for our current systems and architectures to work together. This is also where the potential for disaster presents itself: there is a vast difference between using systems the way we want them to perform and using them to do something that harms us. This is an argument I have made repeatedly in my articles and writings: we need an ethical framework for technology that takes the potential for harm in this very dangerous undertaking seriously. We cannot build such a framework until we accept that the current use of AI and machines is incredibly dangerous and unethical.

“It is easy to be optimistic,” he concludes, “but optimism can only lead us so far. With the right policies and the right support we will be able to make the changes we need.” I would argue that we do need the policies and support necessary for this kind of change to occur, but that we also need to return to the old ways, and not be afraid to make big changes where necessary to preserve what we have (including creating a more humanized tech environment).

So, while I agree with Weir that we need to be very critical of current AI and its potential uses, I do not believe his “more humanized tech environment” is the answer. Instead, I think our next step should be an ethical framework that takes the potential benefits of the technology (as defined here) very seriously while also respecting the possibility of catastrophic disasters.
