TR Daily
News
Tuesday, June 25, 2019

Senators Question Google Denial of Persuasive Technology Use

A Google LLC executive’s assertion at a Senate hearing today that the company does not use persuasive technology triggered skeptical responses from both sides of the aisle.

During a hearing before the Senate communications, technology, innovation, and the Internet subcommittee, Maggie Stanphill, director–Google User Experience, responded “no” when subcommittee Chairman John Thune (R., S.D.) asked her if Google uses persuasive technology.

Later, Sen. Brian Schatz (D., Hawaii) asked, apparently incredulously, “Did you say Google doesn’t use persuasive technology?”

Ms. Stanphill said that yes, she had.

Sen. Schatz appealed to another witness, Tristan Harris, co-founder and executive director of the Center for Humane Technology. “Is that true?”

Mr. Harris said that it depends in part on how one defines persuasive technology. He added that he didn’t think the hearing was “about accusing particular companies.”

Sen. Schatz sought clarification from Ms. Stanphill as to whether she was limiting the scope of her answer to some particular operations of Google and its affiliates. “You’re not talking about YouTube?”

Ms. Stanphill said that “the dark patterns in persuasive technology are not core to how we design our systems.” She added, “We build our products for the privacy of our users.”

Rashida Richardson, director–policy research at AI Now Institute, suggested that “there’s a business incentive to take a more narrow view of persuasive technology” that excludes content optimization.

Sen. Richard Blumenthal (D., Conn.) also expressed doubt about Ms. Stanphill’s “contention that Google does not build systems with persuasive technology involved. … Your business is to hold on to the eyeballs.” He added that “YouTube’s recommendation system has a notorious history” of recommending content in a way that effectively means it is “acting as a shepherd for pedophiles.”

He asked her for the “specific steps” the company has taken to end the practice of recommending content that sexualizes children.

Ms. Stanphill said that such content is “now classified as borderline content.”

Sen. Blumenthal observed, “I don’t think ‘trust me’ can work anymore.”

Although she didn’t want to question her on the issue, Sen. Marsha Blackburn (R., Tenn.) said, “I would just like to say to Ms. Stanphill that the evasiveness in her response to Sen. Blumenthal is inadequate.”

In his opening remarks, Chairman Thune told the witnesses, “Your participation in this hearing is appreciated especially as this committee continues to work on privacy legislation.”

He noted that he is working on legislation “to require Internet platforms to give users the opportunity to engage with a platform without having their interaction shaped by an algorithm.” He added, “Platforms should provide more transparency on how the content we view is being filtered.”

In his opening statement, ranking minority member Schatz said, “Something is wrong here. … Companies are letting algorithms go wild and only using humans to clean up the mess.” He added, “Companies need to be more accountable for the outcome they produce.”

During his prepared testimony, Mr. Harris said that the issues raised by the senators are “happening not by accident but by design” as platforms pursue their goal of user engagement from a position of “increasing asymmetry of power.”

Platforms design the experience to encourage continued use. “Pull-to-refresh has the same kind of addictive [effect] as pulling the handle on a slot machine,” he said, while an “infinitely scrolling” page has the same effect as a server automatically refilling glasses, because it “removes stopping cues,” he added.

Designing in the ability to accumulate followers “addicts you to getting attention from other people,” Mr. Harris said.

He also used the analogy of a “voodoo doll,” an avatar of the user created by the platform, into which it “sticks pins” as it tries to predict which content will keep the user engaged longer.

In her testimony, Ms. Stanphill emphasized the Google Wellbeing initiative, which includes efforts such as a dashboard that lets users see how much time they’ve used a device, app timers to limit usage of specific apps, and a do-not-disturb setting to keep the device from presenting alerts and notifications to get the user’s attention during certain times.

Stephen Wolfram, founder and chief executive officer of Wolfram Research, emphasized that modern AI (artificial intelligence) systems write their own programs, “and if you open them up and look, there’s embarrassingly little you can understand. … If you insist on explainability, you can’t get the full benefit” of modern AI.

“You can write computational contracts that limit what an AI can do,” he said.

He suggested “using technology to set up market-based solutions,” such as third-party providers competing to be selected by users to provide final-ranking of content or constraints on content.

In her testimony, Ms. Richardson said that algorithms present three types of harms: “(1) harm from biased training data, algorithms, or other system flaws that tend to reproduce historical and existing social inequities; (2) harm from optimization systems that prioritizes technology companies’ interests often at the expense of broader societal interests; and (3) the use of ‘black box’ technologies that prevent public transparency, accountability, and oversight.”

AI Now offered seven policy recommendations for Congress: (1) “require technology companies to waive trade secrecy and other legal claims that hinder oversight and accountability mechanisms”; (2) “require public disclosure of technologies that are involved in any decisions about consumers by name and vendor”; (3) “empower consumer protection agencies to apply ‘truth in advertising laws’ to algorithmic technology providers”; (4) “revitalize the congressional office of technology assessment to perform pre-market review and post-market monitoring of technologies”; (5) “enhanced whistleblower protections for technology company employees that identify unethical or unlawful uses of AI or algorithms”; (6) “require any transparency or accountability mechanism to include a detailed account and reporting of the ‘full stack supply chain’”; and (7) “require companies to perform and publish algorithmic impact assessments prior to public use of products and services.”

In response to a question from Chairman Thune about what companies can predict about users based on the data they collect, Mr. Harris noted that there is a connection between privacy and prediction. Without any of a user’s data, “just looking at your mouse movement and clicks,” an algorithm can predict a user’s “major personality characteristics,” he said.

Sen. Blackburn asked, “Is moving toward transparency a worthy goal, or should we be looking at something else?”

Mr. Wolfram said, “It depends on what you mean by transparency.” He contrasted wanting to understand how the algorithm is working with wanting to know what the results are.

Sen. Jon Tester (D., Mont.) asked whether YouTube uses personal data in shaping recommendations for users.

Ms. Stanphill said that she doesn’t know about those issues, emphasizing that her focus is on user experience.

Sen. Tester said, “I think you guys could sit down at your board meeting and decide who will be the next president.”

Ms. Richardson said, “I think your concerns are real,” but she added that she didn’t think “they’re talking about the risks you’re concerned about.” She suggested that different parts of these companies don’t talk to each other about such issues.

Sen. Tester asked Ms. Stanphill, “Are conversations siloed?”

Ms. Stanphill said no.

“So how come you can’t answer these questions?” Sen. Tester asked.

Ms. Stanphill said that she heads up user engagement issues as they cut across the company.

Sen. Jacky Rosen (D., Nev.), a former software developer, said to Mr. Wolfram, “I know having written algorithms that there is a goal written into it,” adding that even if a human isn’t writing the actual code, a human should be providing the constraints for what the algorithm is trying to achieve.

Mr. Wolfram said, “The question is how you describe the constraints.”

In response to a question from Sen. Tom Udall (D., N.M.) about the “radicalizing effect” of social media platforms on children, Mr. Harris said that algorithms that are trying to increase engagement duration “are always going to send you to Crazytown” — content that houses conspiracy theories, for example — rather than “toward Walter Cronkite.”

Ms. Stanphill said that he was wrong and that the systems have changed. —Lynn Stanton, [email protected]

