Are your algorithms transparent to those they impact? Is your technology reinforcing or amplifying existing bias?

In January 2019, Gartner reported that AI adoption tripled in the last year alone, with an estimated 37% of firms now implementing AI in some form. Based on responses from over 3,000 CIOs across a wide range of industries, the survey clearly showed how AI is becoming a crucial component of enterprise strategies, regardless of industry.

AI is one of the most disruptive technologies of the modern era. Enterprises are constantly exploring and finding new ways to harness data and identify business opportunities using AI. But as AI becomes more commonplace in the enterprise, IT leaders need to consider the ethical implications.

Since 2017, more than two dozen national governments have released AI strategies or plans to develop ethics standards, policies, and regulations. Organizations, universities, and governments are calling on companies using AI to discuss and outline ethical principles to govern their use of the technology.

Today, major technology companies such as Google and Microsoft have published their own internal principles and are dedicating new teams to addressing ethical issues such as bias and lack of transparency. Other initiatives by the world’s largest tech companies include:

* A new partnership between Facebook and the Technical University of Munich (TUM) to form The Institute for Ethics in Artificial Intelligence, with an initial investment of $7.5 million.
* Amazon and the National Science Foundation recently earmarked $10 million for AI fairness research.
* Salesforce announced it is tackling bias in AI with new additions to its Trailhead developer education platform.

In a recent Deloitte survey, one-third of executives named ethical risks as one of the top three potential concerns related to AI.
However, organizing an ethics committee or research center is a costly undertaking. Companies can take a more practical approach first, outlining the mission of their AI-related work and forming their ethics principles around that. One way business leaders can jumpstart this conversation is with the Ethical OS Toolkit.

The Ethical OS Toolkit

The Ethical OS Toolkit, available to download for free, was developed by the Institute of the Future, a Palo Alto-based think tank, and the Tech and Society Solutions Lab, an initiative from the impact investment firm Omidyar Network. The toolkit provides technologists with information about new risks related to the technology they are building, as well as guidelines for how they can keep their customers, communities, and society safe.

The toolkit outlines seven ‘future-proofing’ strategies that help technologists prioritize identified risks, determine their biggest and hardest-to-address threats, and develop strategies to mitigate those risks. More than 20 technology companies, schools, and startups, including Techstars and Mozilla, are already using the toolkit to ensure ethical technology initiatives.

Questions enterprises must address to ensure ethical AI

For business leaders and developers seeking a practical way to ensure their AI efforts are in line with their company mission, the Ethical OS Toolkit outlines several questions to consider:

- Does this technology make use of deep data sets and machine learning? If so, are there gaps or historical biases in the data that might bias the technology? How could these have been prevented or mitigated?
- Have you seen instances of personal or individual bias enter into your product’s algorithms?
- Is the technology reinforcing or amplifying existing bias?
- Is there a lack of diversity in the people responsible for the design of the technology?
- Who is responsible for developing the algorithm?
- How will you push back against a blind preference for automation (the assumption that AI-based systems and decisions are correct and don’t need to be verified or audited)?
- Is there any recourse for people who feel they have been incorrectly or unfairly assessed?
- Are your algorithms transparent to the people impacted by them?

Taking the right steps toward ethical AI

There are plenty of stories about AI ethics boards and committees falling apart almost as quickly as they were put together. And though many of these initiatives may be flawed, in the absence of regulatory and industry AI standards they are still a move in the right direction.

Enterprises must consider the ethical implications of the AI products and services they are building, and a good first step is for IT leaders and their teams to discuss the questions above openly and honestly together. As Stephen Hawking said, “The short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
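One lightweight way for a team to start answering the data-bias questions above is to measure whether their system's outcomes are skewed across demographic groups. The sketch below is a minimal, hypothetical illustration in plain Python: the dataset, group labels, and the 80% threshold (a common rule of thumb borrowed from US hiring guidance, sometimes called the "four-fifths rule") are all assumptions for demonstration, not part of the Ethical OS Toolkit itself.

```python
# Minimal, illustrative bias check: compare positive-outcome rates
# across demographic groups. All data and thresholds are hypothetical.
from collections import defaultdict

def selection_rates(records):
    """Return the positive-outcome rate for each group in (group, outcome) pairs."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(records):
    """Ratio of the lowest group rate to the highest (1.0 means parity)."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

# Hypothetical loan-approval outcomes: (group, approved?)
data = [("a", 1), ("a", 1), ("a", 0), ("a", 1),
        ("b", 1), ("b", 0), ("b", 0), ("b", 0)]

ratio = disparate_impact(data)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # "four-fifths rule" threshold, used here only as an example
    print("warning: outcomes are noticeably skewed across groups")
```

A check like this is only a starting point; a skewed ratio signals that the team should investigate the training data and model, not that fairness is a solved problem once the ratio looks acceptable.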