Can legal regulations prevent Judgment Day?

  • Reviewed by: Matt Riley
    First, let me clear up the clickbait headline. When I say “Judgment Day,” I’m talking about the idea, most prominently popularized in the Terminator movies, that machines will rise up and enslave or annihilate the human race. We’ll leave the Revelation stuff alone for now — and, at least as far as this blog is concerned, probably for all time.

    The luminaries Stephen Hawking, Elon Musk, and Bill Gates have all raised the alarm about the implications of Artificial Intelligence. In the not-too-distant future, it seems quite likely that technology will progress to the point that computers will be able to, at least in some respects, think more deeply and understand better than we do. This raises the prospect of what to do with, and especially how much autonomy to grant, machines that can, if used properly, enrich human lives and address previously intractable problems.

    The issues that arise were addressed as far back as 1950 in Isaac Asimov’s groundbreaking I, Robot. (Stick to the book, not the near-fatal 2004 savaging dealt this literary classic by director Alex Proyas and actor Will Smith.) In that book, Asimov laid out three simple yet compelling Laws of Robotics and explored their implications.

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    Spoiler: these laws do not represent a foolproof solution to the problems outlined above. The worst trouble seems to stem from the part about how a robot mustn’t allow a human to be harmed.
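    The precedence baked into the laws (First over Second over Third) can be sketched as a toy rule check. This is purely a hypothetical illustration — the `Action` fields and the `permitted` function below are invented for this sketch, not anything from Asimov — but it shows how each law only applies when the higher-priority laws are satisfied, and how the inaction clause makes the First Law an open-ended duty rather than a simple prohibition:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Hypothetical description of a candidate robot action."""
    harms_human: bool = False              # First Law, active harm
    allows_harm_by_inaction: bool = False  # First Law, the troublesome clause
    disobeys_order: bool = False           # Second Law
    endangers_self: bool = False           # Third Law
    ordered_by_human: bool = False         # lets an order override self-preservation

def permitted(action: Action) -> bool:
    """Toy precedence check over the Three Laws (illustrative only)."""
    # First Law dominates: no harm by action OR by inaction.
    if action.harms_human or action.allows_harm_by_inaction:
        return False
    # Second Law: obey human orders (orders that would break the
    # First Law are already ruled out by the check above).
    if action.disobeys_order:
        return False
    # Third Law: self-preservation, but it yields to a human order.
    if action.endangers_self and not action.ordered_by_human:
        return False
    return True
```

    Even in this cartoon version, the weak point the post flags is visible: almost any real-world inaction could be framed as “allowing a human to come to harm,” so a literal-minded robot would find nearly everything forbidden — or feel compelled to intervene in everything.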

    While these or similar principles might be part of the overall regime necessary to address the threat, they’re far too simple to apply reasonably to our big, rowdy, messy world. Fully developed analyses and proposals for how we would go about such a daunting task are not hard to find, but they are varied and discordant about what’s important and how to address it.

    Without venturing into the weeds of these works (and turning this into a 5,000-word essay), it’s worth noting some of the concerns they purport to address. First, finding a definition of Artificial Intelligence is a preliminary hurdle, and one that hasn’t been cleared yet. It may be that our definition evolves as the tech evolves and makes older thinking outmoded. How will regulation account for that? What kinds of limitations will be effective, and how do we implement them without stifling promising innovation? How do we address the ever-accelerating superfluousness of human workers in the age of the robot? How do we build up and use the infrastructure to enforce regulations? How do we make sure a developer can’t just cross into a country that hasn’t adopted the regulations and set up shop there? Another question, probably more for ethicists, scientists, and futurists than for lawyers and government entities, is what research, if any, should be forbidden outright. By analogy, cloning technology exists today, but human cloning is widely prohibited.

    These things are interesting to think about, and the wonderful and scary part about the dawn of AI is that our imaginations really are the limit, and, once computers can think, their more powerful imaginations might prove to be the real limit. Feel free to drop a comment below with your thoughts on this issue.

    Written by: Blueprint LSAT Instructor
