For Ethics in AI, Keep Humans in the Loop (until further notice)

As discussions around the advancement of AI evolve, so too do the concerns. Machines are getting smarter, able to complete complex tasks and process information incredibly fast. But what happens when developments in AI move too quickly and we lose sight of human ethics?


19 Apr, 2019


This is one of the biggest challenges that leaders will have to address — or at least begin productively addressing — in the near future. By productively I mean taking measurable action, not simply recognizing it as an issue and saying, “we’ll address that next year.”

Here are the questions we need to ask when reflecting on our ethical principles in robotics, AI, and beyond…

Q: Should we only build technology that advances humankind?
A: Yes!


It seems obvious, right? But we always need to remind ourselves of this point, especially because the "cool factor" of AI and robotics is pretty high. Unfortunately, flashiness is not a good reason to innovate. It wastes time and money, and, worst of all, it likely serves only a very small segment of the human population, if anyone at all.

Q: Who's to blame when a machine makes an unethical decision?
A: This is a tough one.


How can a machine make an unethical decision? Consider this scenario: you're in an autonomous vehicle, and the car can either crash into a tree, hurting you or your passenger, or hit a pedestrian on the street. What does the car do? Someone has to be accountable for the ways in which our machines process information and learn. This applies to robots, drones, autonomous vehicles, and the many other machines that will soon operate in most spheres of society. In these scenarios, we need to keep humans in the loop of the robot's decision-making process, which is especially important when every available choice is either "bad" or "very bad."
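To make the idea concrete, here is a minimal, purely illustrative sketch of such an escalation policy. All names, the harm scores, and the threshold are hypothetical assumptions for illustration; real autonomous systems are vastly more complex and would not reduce ethics to a single number.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Option:
    """One possible action the machine could take (illustrative only)."""
    name: str
    estimated_harm: float  # 0.0 = harmless, 1.0 = severe harm (assumed scale)

# Hypothetical cutoff: above this, an option counts as "bad".
HARM_THRESHOLD = 0.3

def choose_action(options: List[Option],
                  ask_human: Callable[[List[Option]], Option]) -> Option:
    """Decide autonomously in routine cases, but escalate to a human
    operator when every option exceeds the harm threshold -- that is,
    when the choice is between "bad" and "very bad"."""
    safe = [o for o in options if o.estimated_harm <= HARM_THRESHOLD]
    if safe:
        # Routine case: the machine picks the least harmful safe option.
        return min(safe, key=lambda o: o.estimated_harm)
    # Ethical dilemma: keep the human in the loop.
    return ask_human(options)
```

The design point is the branch, not the numbers: the machine handles the routine cases, and a human is pulled into the loop precisely when no acceptable option exists.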

Q: At this point, what's the biggest differentiator between the top-thinking "machines" and the people using them?
A: Interpretation of behaviors.


In the previous scenario, I outlined a situation in which it would be very hard for a human to choose. Two people would process that situation entirely differently from one another, because all humans interpret behavior through their own lived experience, which is unique to every individual. We cannot program context and lived experience into a robot (yet, anyway!).

Q: Is it possible to find a foolproof "solution" to the issues of ethics?
A: No.


Unfortunately, this can't be fully "solved." Many of these are esoteric concepts; however, I can confidently argue that technology should always serve the human cause in some way. If we strive toward that goal, and keep humans in the loop when ethical dilemmas are likely to arise, we shouldn't run into too many ethical issues.



Technology and education being my main passions, I have consistently worked with and developed teams that promote leading-edge technologies or the means for people to develop the latest innovations. I pride myself on my ability to make meaningful and productive connections betwee...
