Trend Analysis: "Artificial Intelligence (Part 3)": Will We Humans Become the Pawns of Our Computers?
Perhaps you still remember our trend analysis from back in 2013. During the run-up to Germany's parliamentary elections that year, our trend research institute, the 2b AHEAD ThinkTank, in an open letter that came to be known as the "Wolfsburg Declaration," challenged the top candidates of all parties to engage with the questions that really matter for our futures. Signatories included not only numerous executives and innovation managers, but also recipients of the Federal Cross of Merit and science journalists such as former ARD moderator Jean Pütz.
Together, we presented politicians with a catalog of the most important questions for the future. Not a single politician from any party responded. Too bad - because had there been a dialogue back then, many would not be so surprised at the developments we see today.
One of the most important challenges given in our catalog from 2013 was to require all AI research projects to include a "veto function" that would guarantee that the superintelligent computers of tomorrow would always make their decisions in the best moral interests of human beings. Otherwise, we predicted back then, incalculable dangers could arise for humankind.
And now, three years later, this is exactly the tenor of the international debate on the future of artificial intelligence. Top managers and public figures from Bill Gates to Elon Musk and Stephen Hawking have all warned against the unchecked development of superintelligences. The top research scene has started looking into feasible strategies. Politicians, unfortunately, continue to remain silent.
For this reason, we would like to continue our little series of trend analyses on the "Future of Intelligence" with our third and final section. While the first two parts discussed why 2016 will mark a new breakthrough for artificial intelligence and whether computers will take away our jobs, today's analysis will deal with the core issue: Will we humans be able to control the superhuman AIs of tomorrow, or is our very existence at risk?
Or to put it more constructively: How can we seize the opportunities offered by computers with superhuman intelligence without risking the existential foundation of our children and grandchildren? Admittedly, even among scientists you will find more shallow assumptions than solid answers. The determinists believe that the Singularity will mark the beginning of a merging of humans and machines, which will mean a wonderful new stage in the development of humanity. The constructivists aren't so sure. They say: It depends on who is writing the code - and how fast. And on whether this entity deems it necessary to program a veto mechanism as part of the package...
In this trend analysis, we have compiled the forecasts prevalent in the discussion of superintelligence today. As you will soon read, on many points we must first find the right questions before we can turn to answering them. So much, however, seems clear: The answers that we will find during the next 20-30 years will determine nothing less than the continuing existence of the human race. It will be worth it to start now.

I wish you a sunny start to summer, and an inspiring read!