A Free Ethical OS Toolkit for Woke AI Enterprises

by Alex Fly, August 13th, 2019

Too Long; Didn't Read

In January 2019, Gartner reported that AI adoption tripled in the last year alone, with an estimated 37% of firms now implementing AI in some form. Since 2017, more than two dozen national governments have released AI strategies or plans to develop ethics standards, policies, and regulations. In a recent Deloitte survey, one-third of executives named ethical risks as one of the top three potential concerns related to AI. The Ethical OS Toolkit outlines seven ‘future-proofing’ strategies which help technologists prioritize identified risks.

Are your algorithms transparent to those they impact? Is your technology reinforcing or amplifying existing bias?

In January 2019, Gartner reported that AI adoption tripled in the last year alone, with an estimated 37% of firms now implementing AI in some form. Based on responses from more than 3,000 CIOs across a wide range of industries, the survey showed that AI is becoming a crucial component of enterprise strategy, whatever the sector.

AI is one of the most disruptive technologies of the modern era. Enterprises are constantly finding new ways to harness data and identify business opportunities using AI. But as AI becomes more commonplace in the enterprise, IT leaders need to consider the ethical implications.

Organizations, universities, and governments are calling on companies using AI to discuss and outline ethical principles to govern their use of the technology. Since 2017, more than two dozen national governments have released AI strategies or plans to develop ethics standards, policies, and regulations.

Today, major technology companies such as Google and Microsoft have published their own internal principles and are dedicating new teams to address ethical issues such as bias and lack of transparency. Other initiatives by the world’s largest tech companies include:

* A new partnership between Facebook and the Technical University of Munich (TUM) to form The Institute for Ethics in Artificial Intelligence, with an initial investment of $7.5 million.

* Amazon and the National Science Foundation recently earmarked $10 million for AI fairness research.

* Salesforce announced it is tackling bias in AI with new additions to its Trailhead developer education platform.

In a recent Deloitte survey, one-third of executives named ethical risks as one of the top three potential concerns related to AI. However, organizing an ethics committee or research center is a costly undertaking.

Companies can take a more practical approach first, outlining the mission of their AI-related work and forming their ethics principles around that.

One way business leaders can jumpstart this conversation is with the Ethical OS Toolkit.

The Ethical OS Toolkit

The Ethical OS Toolkit - available to download for free here - was developed by the Institute for the Future, a Palo Alto-based think tank, and the Tech and Society Solutions Lab, an initiative from the impact investment firm Omidyar Network. The toolkit provides technologists with information about new risks related to the technology they are building, as well as guidelines for how they can keep their customers, communities, and society safe.

The toolkit outlines seven ‘future-proofing’ strategies that help technologists prioritize identified risks and determine their biggest and hardest-to-address threats, and it provides guidance on where and how to develop strategies to mitigate those risks.

More than 20 technology companies, schools, and startups, including Techstars and Mozilla, are already using the toolkit to build ethics into their technology initiatives.

Questions enterprises must address to ensure ethical AI

For business leaders and developers seeking a practical way to ensure their AI efforts are in line with their company mission, the Ethical OS toolkit outlines several questions to consider.

- Does this technology make use of deep data sets and machine learning? If so, are there gaps or historical biases in the data that might skew the technology? (A minimal data-audit sketch follows this list.)

- Have you seen instances of personal or individual bias enter into your product’s algorithms? How could these have been prevented or mitigated?

- Is the technology reinforcing or amplifying existing bias?  

- Who is responsible for developing the algorithm? Is there a lack of diversity among the people designing the technology?

- How will you push back against a blind preference for automation (the assumption that AI-based systems and decisions are correct and don’t need to be verified or audited)?

- Are your algorithms transparent to the people impacted by them? Is there any recourse for people who feel they have been incorrectly or unfairly assessed? (The second sketch below shows one simple way to surface a per-decision explanation.)
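
The first two questions above are partly empirical, and even a few lines of analysis can surface problems early. The following is a minimal, illustrative data audit in Python, assuming a hypothetical pandas DataFrame with a demographic column ("group") and a model-decision column ("approved"); the column names and data are made up for illustration and are not part of the Ethical OS Toolkit itself.

```python
import pandas as pd

# Hypothetical records: a demographic attribute and a model decision for each one.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

# 1. Representation gaps: is any group badly under-represented in the data?
print("Share of records per group:")
print(df["group"].value_counts(normalize=True))

# 2. Outcome disparity: do positive outcomes differ sharply across groups?
positive_rates = df.groupby("group")["approved"].mean()
print("Positive-outcome rate per group:")
print(positive_rates)

# A large gap is not proof of unfairness on its own, but it is a signal
# that the questions above deserve a closer, human look before shipping.
print(f"Largest gap in positive rates: {positive_rates.max() - positive_rates.min():.2f}")
```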
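
The transparency and recourse question is harder, but even a simple model can be made to explain itself to the people it affects. The sketch below assumes a hypothetical scikit-learn logistic regression over made-up credit-style features: with a linear model, multiplying each coefficient by an applicant's feature value gives a rough per-decision account of what pushed the score up or down. More opaque models would need dedicated explanation tooling, but the goal is the same: a plain-language answer and a route to appeal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: income, tenure in years, number of late payments.
feature_names = ["income", "tenure_years", "late_payments"]
X = np.array([[40, 2, 5], [85, 6, 0], [60, 4, 1], [30, 1, 7]], dtype=float)
y = np.array([0, 1, 1, 0])  # 1 = approved, 0 = declined

model = LogisticRegression().fit(X, y)

# A rough per-decision explanation: coefficient * feature value for one applicant,
# sorted so the person can see which inputs pushed their score down or up.
applicant = X[0]
contributions = model.coef_[0] * applicant
for name, value in sorted(zip(feature_names, contributions), key=lambda pair: pair[1]):
    print(f"{name}: {value:+.2f}")
print("decision:", "approved" if model.predict([applicant])[0] == 1 else "declined")
```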

Taking the right steps toward ethical AI

There are plenty of stories about AI ethics boards and committees falling apart almost as quickly as they were put together. Though many of these initiatives are flawed, in the absence of regulatory and industry AI standards they are still a move in the right direction.

Enterprises must consider the ethical implications of the AI products and services they are building, and a good first step is for IT leaders and their teams to discuss the questions above openly and honestly together. As Stephen Hawking said, “The short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

