
Sunday, November 1, 2020

The best artificial intelligence intentions are hitting corporate walls - ZDNet


Artificial intelligence and machine learning have come a long way in recent years, with solid business cases, powerful algorithms, vast compute resources, and rich data sets now the norm for many enterprises. However, AI managers and specialists are still grappling with seemingly insurmountable organizational and ethical issues that are hamstringing their efforts, or even sending things down the wrong path.

Photo: Joe McKendrick

That's the conclusion of a recent in-depth analysis that looked at the pressures and compromises faced by today's AI teams. The researchers, Bogdana Rakova (Accenture and Partnership on AI), Jingying Yang (Partnership on AI), Henriette Cramer (Spotify), and Rumman Chowdhury (Accenture), found that most commonly, "practitioners have to grapple with lack of accountability, ill-informed performance trade-offs and misalignment of incentives within decision-making structures that are only reactive to external pressure."

What most AI initiatives still need to achieve accountability is greater use of organization-level frameworks and metrics, structural support, and proactive evaluation and mitigation of issues as they arise.

AI teams not only need to have the skillsets to build, test and refine AI models and applications, but they also need to step up as transformational leaders, Rakova and her co-authors advocate. "Industry professionals, who are increasingly tasked with developing accountable and responsible AI processes, need to grapple with inherent dualities in their role as both agents for change, but also workers with careers in an organization with potentially misaligned incentives that may not reward or welcome change." This is new ground for most as well: "practitioners have to navigate the interplay of their organizational structures and algorithmic responsibility efforts with relatively little guidance." The researchers call this ability to balance organizational requirements with responsible and accountable AI "fair-ML."

The four leading issues the researchers found impeding responsible and accountable AI adoption are the following:

  • How and when do we act? "Reactive. Organizations act only when pushed by external forces (e.g. media, regulatory pressure)"
  • How do we measure success? "Performance trade-offs: Organizational-level conversations about fair-ML dominated by ill-informed performance trade-offs." (A brief sketch after this list illustrates what putting numbers on such a trade-off might look like.)
  • What are the internal structures we rely on? "Lack of accountability: Fair-ML work falls through the cracks due to role uncertainty."
  • How do we resolve tensions? "Fragmented: Misalignment between individual, team, and organizational level incentives and mission statements within their organization."   
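
On the performance trade-off point, conversations stay "ill-informed" when nobody actually quantifies what is being traded. As a purely illustrative sketch (the paper does not prescribe any particular metric, and the synthetic data, thresholds, and demographic-parity measure below are assumptions made for this example), a team might report accuracy alongside a fairness gap at several decision thresholds, so the trade-off is discussed with numbers rather than intuitions:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # Hypothetical model scores, labels, and a protected-group flag (0/1).
    # The group shifts the scores slightly so a parity gap actually appears.
    group = rng.integers(0, 2, size=n)
    scores = np.clip(rng.uniform(size=n) + 0.15 * group, 0, 1)
    labels = (scores + rng.normal(0, 0.2, size=n) > 0.6).astype(int)

    def evaluate(threshold):
        preds = (scores >= threshold).astype(int)
        accuracy = (preds == labels).mean()
        # Demographic parity gap: difference in positive-prediction rates
        # between the two groups.
        parity_gap = abs(preds[group == 0].mean() - preds[group == 1].mean())
        return accuracy, parity_gap

    for t in (0.4, 0.5, 0.6, 0.7):
        acc, gap = evaluate(t)
        print(f"threshold={t:.1f}  accuracy={acc:.3f}  parity_gap={gap:.3f}")

The particular numbers here don't matter; the point is that an informed trade-off conversation has something concrete like this on the table.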

Rakova and her team make the following recommendations for striking a better balance between AI technological advancement and organizational adoption:

Educate the C-suite and board: Business leaders need to "understand, support, and engage deeply with fair-ML concerns, which are contextualized within their organizational context. Fair-ML would be prioritized as part of the high-level organizational mission and then translated into actionable goals down at the individual levels through established processes."

Educate employees at all levels: Everyone in the organization needs to "understand risk"; in this vision, "teams would have a collective understanding of risk, while organizational leadership would talk about risk publicly, admit when failures happen."

Open communication channels: The spread of information on AI goals and initiatives should "go through well-established channels so that people know where to look and how to share information. With those processes in place, finding a solution or best practice in one team or department would lead to rapid scaling via existing organizational protocols and internal infrastructure for communications, training, and compliance."

Consider a new advocacy role: Fair-ML reviews and reports should be required prior to release of new features, the researchers state. "New ML operations roles would be created as part of fair-ML audit teams. Currently, this work falls within ML engineering, but respondents identified the need for new organizational structures that would ensure that fair-ML concerns are being addressed while allowing ML engineers to be creative and experiment."
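
The researchers don't specify how such a pre-release review would be enforced, but one way to make it non-optional is to wire it into the release pipeline. The following sketch is hypothetical (the artifact file names, the audits/ directory, and the sign-off format are invented for illustration); it simply refuses to let a release proceed unless the fair-ML audit artifacts exist and the audit team has approved:

    import json
    import pathlib
    import sys

    # Invented artifact names for this example; a real audit team would
    # define its own required reports and sign-off format.
    REQUIRED_ARTIFACTS = ("fairness_report.json", "signoff.json")

    def release_is_allowed(audit_dir: str) -> bool:
        path = pathlib.Path(audit_dir)
        # Block the release if any required audit artifact is missing.
        for name in REQUIRED_ARTIFACTS:
            if not (path / name).exists():
                print(f"blocked: missing {name}")
                return False
        # Block the release unless the fair-ML audit team has signed off.
        signoff = json.loads((path / "signoff.json").read_text())
        if not signoff.get("approved", False):
            print("blocked: fair-ML audit team has not approved this release")
            return False
        return True

    if __name__ == "__main__":
        # Exit non-zero so a CI job running this script fails the release.
        sys.exit(0 if release_is_allowed("audits/new-feature") else 1)

Run as a required CI step, a check along these lines gives the proposed audit role real leverage: the review can't be quietly skipped under deadline pressure.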

Assert veto power: Study participants mentioned that "it is crucial to ask whether an ML system is appropriate in the first place. It may not be due to risks of harm, or the problem may not need an ML solution. Crucially, if the answer is negative, then work must stop." The best approach, the researchers conclude, is "designing a veto power that is available to people and committees across many different levels, from individual employees via whistleblower protections, to internal multidisciplinary oversight committees to external investors and board members."



