
Autonomy & Ethics & Investors & Opportunities

Peter Teneriello (@PeterT)

Investment Analyst, Private Equity

Like any other person who attended elementary, middle, and high school, I was required to learn a foreign language. One of mine happened to be Latin. What really appealed to me about this so-called “dead” language was its syntax: every sentence was a math formula that needed to be solved before it could be translated. That syntax, and the understanding of how best to translate the original texts, developed over two thousand years, far outlasting the civilizations of its original speakers.

In researching how autonomous agents cooperate and interact with each other, Facebook Artificial Intelligence Research observed chatbots diverging “from human language as the agents developed their own language for negotiating.” This language took substantially fewer than two thousand years to evolve. The chatbots were subsequently updated to communicate only in human language, since Facebook’s users are humans (save for fake accounts, accounts for spoiled pets, etc.). While the engineers at Facebook may understand how these chatbots created their own negotiating language, the report itself focuses on how the chatbots were trained rather than on how a non-human language arose.

I attended the IEEE’s Symposium on the Ethics of Autonomous Systems earlier this month, and one of the General Principles from its paper Ethically Aligned Design was Transparency. Transparency in this context refers to the ability to understand how an autonomous system makes its decisions, and why. It follows that a system whose decisions aren’t yet understood is probably not yet ready for use with the general public. The stakeholders in such conversations extend beyond the companies developing these systems, including but not limited to users, the public, and accident investigators. One stakeholder group that deserves further discussion, though, is investors.

There’s a saying that startups have three needs above all else: capital, customers, and credibility. The people providing the capital piece of that equation will (or at least should) have investment theses guiding their decisions, theses that may focus on startups using machine learning and its subtypes to solve problems. These investors may soon find themselves in a unique position not just to choose which companies to support, but to guide the ethical standards for developing autonomous systems.

Capital Factory here in Austin has launched a funding competition for seed-stage companies working in artificial intelligence. Glasswing Ventures in Boston is investing out of a fund focused solely on early-stage companies doing the same. These investors, and others with artificial intelligence as a central part of their theses, will be able to indirectly guide the discussion of ethics in this space by virtue of the companies and founders they back. And though that discussion may be more complicated than deconstructing Latin sentences, the opportunity to play a direct and active role is there to be seized.


