Human-Centered AI
Ensuring Human Control While Increasing Automation
Ben A. Shneiderman
Professor Emeritus, Computer Science
Founding Director, Human-Computer Interaction Lab (Ret.)
University of Maryland
About the Lecture
The emerging synthesis of artificial intelligence (AI) technologies with human-computer interaction (HCI) approaches to produce Human-Centered AI (HCAI) is gaining widespread acceptance. Advocates of this new synthesis pursue research that amplifies, augments, and enhances human abilities, so as to empower people, build their self-efficacy, support creativity, recognize responsibility, and promote social connections.
HCAI researchers build on AI-driven technologies, including generative AI, to design products and services that make life better for users.
The passionate community of HCAI researchers is devoted to new metaphors and visions, including supertools, tele-bots, control centers, and active appliances, that make life better for people. These human-centered products and services enable people to better care for and learn from each other, build sustainable communities, and restore the environment.
About the Speaker
Ben Shneiderman is Emeritus Distinguished University Professor in the Department of Computer Science at the University of Maryland. He is also the Founding Director of the Human-Computer Interaction Laboratory at UMd.
He has made many widely used contributions to computer science, information visualization, and the design of human-computer interfaces, including clickable highlighted web links, high-precision touchscreen keyboards for mobile devices, tagging for photos, dynamic query sliders for Spotfire, treemaps for viewing hierarchical data, novel network visualizations for NodeXL, and event sequence analysis for electronic health records.
Ben is an author of several hundred technical publications and the author or lead author of several books, including Designing the User Interface: Strategies for Effective Human-Computer Interaction, Readings in Information Visualization: Using Vision to Think, Analyzing Social Media Networks with NodeXL, Leonardo’s Laptop, The New ABCs of Research: Achieving Breakthrough Collaborations, and Human-Centered AI. Among many awards for his publications, Leonardo’s Laptop won the IEEE book award for Distinguished Literary Contribution, and Human-Centered AI won the Association of American Publishers award for Computer and Information Systems.
Among his honors and awards, Ben is a Fellow of the AAAS, ACM, IEEE, NAI, and the Visualization Academy, and a member of the U.S. National Academy of Engineering. He has received six honorary doctorates in recognition of his pioneering contributions to human-computer interaction and information visualization.
Ben earned a BS in Mathematics and Physics at the City College of New York and an MS and PhD in Computer Science at the State University of New York at Stony Brook.
Social Media
Personal website: http://www.cs.umd.edu/~ben
Human-Computer Interaction Lab website: http://www.cs.umd.edu/hcil
Wikipedia Entry: https://en.wikipedia.org/wiki/Ben_Shneiderman
Twitter: @benbendc
Linkedin: https://www.linkedin.com/in/ben-shneiderman-68004010/
Google Scholar: https://scholar.google.com/citations?user=h4i4fh8AAAAJ&hl=en
ORCID: https://orcid.org/0000-0002-8298-1097
DBLP: https://dblp.org/pid/s/BShneiderman.html
Minutes
On June 28, 2024, in the Powell Auditorium of the Cosmos Club in Washington, D.C., President Larry Millstein called the 2,499th meeting of the Society to order at 8:04 p.m. ET. He began by welcoming attendees, thanking sponsors for their support, and announcing new members. Scott Mathews then read the minutes of the previous meeting, which included the lecture by Mike Griffin, titled “Returning Humans to the Moon: How the United States can actually get there instead of watching China do it”. The minutes were approved as read.
President Millstein then introduced the speaker for the evening, Ben Shneiderman, of the University of Maryland. His lecture was titled “Human-Centered AI: Ensuring Human Control While Increasing Automation”.
The speaker began by presenting an optimistic view in which “human needs drive the design” of AI. He showed examples of advances in human-computer interfaces that increased the effectiveness and ease of use of common computer applications. These included touch screens, image tagging, colored hyperlinks, and data visualization. He then introduced and defined the term “Human-Centered AI”: a set of processes related to human-computer interaction and a set of guidelines for designing computer-based products. He stated that the goal of these guidelines is to “Amplify, Augment, Empower & Enhance People”.
Shneiderman then indicated that the rest of the talk would be divided into three main topics: HCAI Framework, Design Metaphors, and Generative AI.
While discussing the HCAI Framework, he introduced the concept of balancing automation with human control. He indicated that this balancing actually constitutes a two-dimensional space, with the degree of computer automation on one axis and the degree of human control on the other. He gave examples of systems which occupy various regions of this 2D space: a music box and a landmine (low human control and low computer automation), an elevator and a digital camera (high human control and high computer automation), a bicycle and a piano (high human control and low computer automation), and a pacemaker and an airbag (low human control and high computer automation).
The speaker then discussed the transition from older, rationalist design metaphors to more modern design metaphors, which he listed as Super Powers, Tele-Actions, Control Centers, and Active Appliances. As examples of “Super Powers”, he discussed digital camera controls and image manipulation, GPS navigation and navigation choices, text and search autocompletion, and spelling correction. As examples of “Tele-Actions”, he discussed the Mars rovers and the da Vinci Surgical System. As examples of “Control Centers”, he discussed air traffic control centers, hospital control centers, and terrorism control centers. As examples of “Active Appliances”, he discussed Google Nest, the iRobot Roomba, cardiac pacemakers, and modern appliances such as washing machines, dryers, and dishwashers.
Shneiderman then presented a discussion of generative AI. He noted that many generative AIs refer to themselves as “I”, as though they were people. Shneiderman feels this is a mistake and that AIs should be designed to be perceived as tools, not people. He showed several examples of artwork and “photographs” generated by AI and indicated that many of these AI-generated images required considerable human input. The remainder of his discussion of generative AI concentrated on risks, which he listed as fake images and voices, undermining democracy, massive surveillance, and cyber-crime. He argued that job loss was not a realistic risk associated with AI.
The speaker concluded his talk by saying that human centered AI, properly designed, should reflect human values, help achieve the goals of individuals, and remain safe and trustworthy.
The lecture was followed by a Question and Answer session:
A member asked about the use of AI in the medical profession, specifically about anomalies vs. general trends. Shneiderman responded that AI can be used to look for anomalies, provided the anomalies can be defined. He said that the current use of AI in medicine seeks to process large amounts of data and manage medical care “at scale”.
A member claimed that AI was more dangerous than nuclear weapons and asked when to expect a change in AI that would lead to the destruction of humanity. Shneiderman responded that he did not agree with that assessment, but that he recognized three real threats from AI: malicious actors, biased training data, and flawed software. He said he would rather spend his time thinking about and designing AIs with positive outcomes.
A member asked about wage reduction associated with AI. Shneiderman responded that AI will vastly expand markets and increase demand. He remarked that AI will likely reduce the number of jobs, and therefore the wages, in specific fields, but that increases in jobs and wages in other fields will greatly outweigh the losses.
A guest asked what projects or future work Shneiderman was most excited about in human centered AI. Shneiderman responded that he is heavily involved in policy issues and government oversight. He said that he was fascinated by collaborations to ensure the safety of AI and the role of governments in this process.
A viewer on the livestream asked about humans providing goals and AIs handling the details. Shneiderman objected to the idea of letting the AI handle the details. He gave the example of digital photography and digital image manipulation, stating that it is the human who handles these details and that the human always has the ability to override the choices made by the AI.
A member stated that he believed that in the future, AI would be able to provide him with affection, attention, encouragement, and love. Shneiderman responded that he believed that AI would not be effective in providing emotional or social support.
After the question and answer period, President Millstein thanked the speaker and presented him with a PSW rosette, a signed copy of the announcement of his talk, and a signed copy of Volume 1 of the PSW Bulletin. He then announced speakers of upcoming lectures, made a number of housekeeping announcements, and invited guests to join the Society. He adjourned the 2,499th meeting of the Society at 10:01 p.m. ET.
Temperature in Washington, DC: 23.9° Celsius
Weather: Partly cloudy
Audience in the Powell auditorium: 79
Viewers on the live stream: 27, for a total of 106 live viewers
Views of the video in the first two weeks: 246
Respectfully submitted, Scott Mathews: Recording Secretary