Phil 4.12.19

9:00 – 5:00 ASRC TL

  • Finished the BAA white paper(?) and asked for hours to write the full paper for the Symposium on Technologies for Homeland Security
  • These papers are appropriate:
    • Meaningful Human Control over Autonomous Systems: A Philosophical Account
      • In this paper, we provide an analysis of the sort of control humans need to have over (semi)autonomous systems such that unreasonable risks are avoided, that human responsibility will not evaporate, and that there is a place to turn to in case of untoward outcomes. We argue that higher levels of autonomy of systems can and should be combined with human control and responsibility. We apply the notion of guidance control that has been developed by Fischer and Ravizza (1998) in the philosophical debate about moral responsibility and free will, and we adapt it so as to cover actions mediated by the use of (semi)autonomous robotic systems. As we will show, this analysis can be fruitfully applied in the context of autonomous weapon systems as well as of autonomous systems more generally. We think we herewith provide a first full-fledged philosophical account of “meaningful human control over autonomous systems.”
    • Human Side of Tesla Autopilot: Exploration of Functional Vigilance in Real-World Human-Machine Collaboration
      • Preprint of a paper on driver functional vigilance during Tesla Autopilot-assisted driving, part of the MIT-AVT large-scale naturalistic driving study
    • What I Learned from a Year of ChinAI
      • Finally, Chinese thinkers are engaged with broader issues of AI ethics, including the risks of human-level machine intelligence and beyond. Zhao Tingyang, an influential philosopher at the Chinese Academy of Social Sciences, has written a long essay on near-term and long-term AI safety issues, including the prospect of superintelligence. Professor Zhihua Zhou, who leads an impressive lab at Nanjing University, argued in an article for the China Computer Federation that even if strong AI is possible, it is something that AI researchers should stay away from.
  • And so ends a long, hectic, but satisfying week.
