Who should get to decide what’s ethical as technology advances?
Technology is rife with ethical dilemmas. New tech usually comes with more power and more advanced capabilities; we might be able to reshape the world in new, innovative ways, or we might expose the human mind to conditions it's never experienced before.
Obviously, this opens the door to ethical challenges, such as deciding whether it's right to edit the human genome, or how to program self-driving cars to behave in ways aligned with our morals.
I could write an article with thousands of ethical questions we still have to answer, covering artificial intelligence (AI), virtual reality (VR), medical breakthroughs, and the internet of things (IoT). But there's one bigger question that affects all the others, and we aren't spending enough time addressing it: who gets to decide the answers to these questions?
The high-level challenges
There are some high-level challenges we have to consider here:
1. Balancing ethics and innovation. Our legislative process is intentionally slow, designed to ensure that each new law is considered carefully before it's passed. Similarly, it often takes years—if not decades—of scientific research to fully understand a topic. If every tech company waited for scientists and regulators to make an ethical decision, innovation would come to a halt, so we have to find a way to balance speed and thoroughness.
2. Keeping power balanced. We also need to be careful not to tip the scales of power. If one class of people, or one country, gets access to an extremely powerful or advanced technology, it could result in inhumane levels of inequality, or war. If one authority is allowed to make all ethical decisions about tech, those decisions could unfairly work in its favor, at the expense of everyone else involved.
3. Making educated decisions. Ethics are subjective, but ethical decisions shouldn't be based on gut reactions or feelings about a given subject. The issues should be thoroughly researched and understood before a decision is made; in other words, these decisions shouldn't be made by non-experts or by anyone uneducated in the matter.
4. Considering multiple areas. We also have to consider consequences in multiple areas. This isn’t just about safeguarding human life, but also human health, human psychology, and the wellbeing of our planet.
The options
So who could we trust to make ethical decisions about our technology?
Scientists. We could trust scientists, who by nature are objective truth-seekers. The problem is that research takes years, if not decades, to complete, and even then, infighting could bring the process to a halt.
Inventors and entrepreneurs. We could trust the inventors and distributors of technology to protect us, and there are plenty of examples of companies doing their best to protect consumers and "do no evil," but the pursuit of profit and glory keeps some in the industry from behaving ethically.
Regulators. Historically, we've trusted politicians and lawmakers to protect the public and make large-scale ethical decisions. However, lawmakers are rarely experts on the subject, and they may struggle to pass legislation as quickly as we need it, or in a way that satisfies all parties.
The general public. We could trust major ethical decisions to the general public, through a democratic system or through basic consumer decision-making. However, the average member of the public is not an expert in tech ethics and can't be expected to make the most informed decision.
External organizations. Finally, we could delegate power to an external body that specializes in tech ethics, appointing or training experts to oversee tech company operations and make ethical decisions for us. This is more balanced than the other options on this list, but it raises the question: who decides who's in charge of these organizations?
The knowledge factor
I contend that none of the above options works as an ultimate authority for making ethical decisions about tech; each has some glaring weakness that prevents it from being a suitable candidate, though the neutral, external organization is certainly promising.
So rather than designating an authority to make these decisions, we should instead focus our efforts on making the decisions themselves easier, regardless of who's making them. The only way to do that is to uncover more information about the technologies we're creating and using, and to make that knowledge more publicly available. Here are three ways to do that:
1. Understanding the consequences. First, we need to work harder to understand the consequences of the technologies we're already using (and those to be released in the future). Cigarettes were smoked for decades before we realized their true health ramifications; we don't want a similar level of ignorance to blind us to the repercussions of technologies that are far more widespread, with far graver potential consequences for all of mankind. The answer here is almost always more unbiased research. For example, some scientific studies have explored the possibility of unintended mutations when using CRISPR-Cas9 to edit genes in vivo.
2. Informing the public. We can't simply work to gain new information; we have to distribute that information to the public. This introduces the public as a powerful check on any organizations that might otherwise control the narrative, and it ensures that people can make educated decisions about the technology they use, even before regulators act to protect them. Public information also encourages collective action when it's needed, such as petitioning for legal changes. For example, the Future of Life Institute was founded in part to promote public education about issues of AI safety and the potential impact of technologically advanced weaponry.
3. Dedicating resources. We also need companies to dedicate internal and external resources to improving their understanding of their own technology. For example, Google's AI project DeepMind has its own dedicated ethics board, working to keep the system operating as ethically as possible. Ethics boards should be built into the majority of tech companies, and where they aren't feasible, companies should work to form neutral, third-party organizations meant to discover more about how their products are used, and keep things in careful balance.
No single person or group can overcome the challenge of deciding what’s ethical in tech, but with enough transparent knowledge, we can all decide for ourselves. To keep innovation moving forward without compromising our safety, health, or balance in society, we have to hold our companies and each other accountable to these basic principles, and keep pushing for more information and education.
I think Scott Pruitt should be making the decisions about what is ethical...
I doubt he could decide what's ethical without looking through a religious lens. Poor choice there.
I was being sarcastic...
Ah, I see. My sarc meter must be broken. Thanks for clarifying.
On the article's second point, about keeping power balanced: I suspect we are already there. We have been scratching our heads and finger-pointing over why pilots are experiencing laser interference around China, and why US diplomats are experiencing mysterious sonic-related sickness in Cuba and elsewhere.
The Cuba thing may be mass hysteria. Medical exams have found nothing.
The laser isn't exactly a new invention. The episodes may say something about China's willingness to rumble, though...
Twenty-six people are now claiming to be affected by whatever is going on in Cuba, and apparently similar cases are being reported in China as well.
As far as lasers go, I can't imagine how a laser on a boat can target a pilot in a plane thousands of feet above and miles away, unless it is part of a fairly sophisticated weapon system. I understand we have some advanced laser tech in the US, but I wouldn't think it was developed for that type of use.
There are already rules of ethics set down by various technology groups: the IEEE's standards, for example.
OK, I am going to be grim about this.
As long as there is some sort of drive to invent (the Manhattan Project) or to profit... scientific ethics goes out the window.
We have over 100 years of cautionary tales about AI, and yet we embrace it. We invite machines into our homes that have the capacity to listen in on our private discussions. If you want to know what "ethics" we have, ask about our scientists' profit margins.
That’s where we're heading if nothing is done.
What should be done, and who should be doing the doing?
Refer to my post to see what is being done. I posted IEEE's standards. You know, the Institute of Electrical and Electronics Engineers, one of the MAIN international societies that sets the rules for technology. READ THEM!
I don't know the topic well enough to have a worthwhile opinion.
One needn't be an expert to see the coming-soon crises in biology/medicine, with designer babies (for the rich), eternal life (for the rich), ...
AI is coming, even if we don't yet know what it will look like.
There will be trouble...
Yeah, but I tend to think that after watching the Terminator franchise, we are now smart enough to put an "off" switch on all AI platforms. You know, for 'just in case.'