Content provided by The EPAM Continuum Podcast Network and EPAM Continuum. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by The EPAM Continuum Podcast Network and EPAM Continuum or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ro.player.fm/legal.
The Resonance Test 90: Responsible AI with David Goodis and Martin Lopatka

32:48
Manage episode 414432519 series 3215634
Responsible AI isn’t about laying down the law. Creating responsible AI systems and policies is necessarily an iterative, longitudinal endeavor. Doing it right requires constant conversation among people with diverse kinds of expertise, experience, and attitudes. Which is exactly what today’s episode of *The Resonance Test* embodies. We bring to the virtual table David Goodis, Partner at INQ Law, and Martin Lopatka, Managing Principal of AI Consulting at EPAM, and ask them to lay down their cards. Turns out, they are holding insights as sharp as diamonds.

This well-balanced pair begins by talking about definitions. Goodis mentions the recent Canadian draft legislation to regulate AI, which asks “What is harm?” because, he says, “What we're trying to do is minimize harm or avoid harm.” The legislation casts harm as physical or psychological harm, damage to a person's property (“Suppose that could include intellectual property,” Goodis says), and any economic loss to a person.

This leads Lopatka to wonder whether there should be “a differentiation in the way that we legislate fully autonomous systems that are just part of automated pipelines.” What happens, he wonders, when there is an inherently symbiotic system between AI and humans, where “the design is intended to augment human reasoning or activities in any way”? Goodis is comforted when a human is looped in and isn’t merely saying: “Hey, AI system, go ahead and make that decision about David, can he get the bank loan, yes or no?” This nudges Lopatka to respond: “The inverse is, I would say, true for myself. I feel like putting a human in the loop can often be a way to shunt off responsibility for inherent choices that are made in the way that AI systems are designed.” He wonders if more scrutiny is needed in designing the systems that present results to human decision-makers.
We also need to examine how those systems operate, says Goodis, pointing out that while an AI system might not be “really making the decision,” it might be “*steering* that decision or influencing that decision in a way that maybe we're not comfortable with.” This episode will prepare you to think about informed consent (“It's impossible to expect that people have actually even read, let alone *comprehended,* the terms of services that they are supposedly accepting,” says Lopatka), the role of corporate oversight, the need to educate users about risk, and the shared obligation involved in building responsible AI.

One fascinating exchange centered on the topic of autonomy, toward which Lopatka suggests that a user might have mixed feelings. “Maybe I will object to one use [of personal data] but not another and subscribe to the value proposition that by allowing an organization to process my data in a particular way, there is an upside for me in terms of things like personalized services or efficiency gains for myself. But I may have a conscientious objection to [other] things.” To which Goodis reasonably asks: “I like your idea, but how do you implement that?” There is no final answer, obviously, but at one point, Goodis suggests a reasonable starting point: “Maybe it is a combination of consent versus ensuring organizations act in an ethical manner.”

This is a conversation for everyone to hear. So listen, and join Goodis and Lopatka in this important dialogue.

Host: Alison Kotin
Engineer: Kyp Pilalas
Producer: Ken Gordon

165 episodes

