My first semester as an MIT undergraduate nearly 15 years ago, I enrolled in a course in its Program in Science, Technology, and Society. The course dealt with the ethics and political controversy surrounding major scientific advancements, such as the invention and ultimate use of the first atomic bomb. Before then I didn’t know that many of the scientists associated with the bomb’s creation, including Albert Einstein, became strong advocates against nuclear proliferation after seeing the destruction wrought by the weapons they helped bring into being. As a wide-eyed freshman, I was fascinated by the complex role science, and scientists, play in our society.
As the pace of technological development has accelerated, so too has the need for scientists to engage meaningfully with policymakers. This is happening in real time today, as scientists and physicians provide federal and state governments with guidance on the deployment of COVID-19 vaccines and other strategies for managing the pandemic. However, to make ethical decisions, government officials must weigh scientific recommendations against the individual and societal costs associated with them.
Self-driving cars are a slightly different example of where policy and innovation intersect. Tesla’s Autopilot feature has been available in its vehicles since 2014, and last June I wrote about some of the technological advances that are still needed to improve the viability and safety of these vehicles on a broader scale. However, technological innovation alone won’t be enough for self-driving cars to gain acceptance. For that to happen, we need to understand how self-driving cars fit into our society. This includes everything from legal concerns to philosophical, ethical and moral ones about how the technology should be used. Certainly improving the safety of automated vehicles is important, but how safe do they need to be before we let them actually drive for us? Do they need to be better than the average driver? Better than the best driver? If there is a pile-up on the road ahead, does the car choose the option most likely to save its own driver, or most likely to avoid harming others?
Now, as a professor who focuses on improving human interaction with autonomous systems, I believe that just as it’s important to study how a person interacts with a machine on an individual level, we also need to be mindful of how a machine “interacts” with humanity more broadly. We need to create space for multiple perspectives and discussions, and incorporate experts from various fields in the process of a technology’s development, before innovation outpaces the essential questions surrounding it.
Ethics in STEM Education
In a recent column, I highlighted that focusing solely on energy-efficient technologies minimizes the critical role people must play in fighting the climate crisis. Instead, we need to better understand human behavior as it relates to energy consumption and conservation, and then use that information to ensure that these technologies, coupled with human behavior, achieve desired outcomes. To achieve this goal, engineers and scientists need to think beyond their technical disciplines, not just after a technology is developed, but during the R&D process itself.
That begins in classrooms like mine. Across all levels of education, students in STEM are taught the technical aspects of their fields, but schools and universities should look to broaden their curricula to include the ethical as well. This certainly isn’t a new idea; the Accreditation Board for Engineering and Technology (ABET) has recognized the importance of this, calling for academic institutions to make “a commitment to building ethical STEM graduates” because it’s “an investment that will benefit us all.”
This means expanding the curriculum for engineering students beyond what is traditionally taught in STEM education. This could involve requiring engineers to take dedicated ethics-of-technology courses rather than squeezing ethics in on the margins of the technical portions of their education. Otherwise, students will enter the workforce oriented toward advancing technology for the sake of advancement, rather than with a full picture of its potential impact, both positive and negative. As ABET explains, “These are topics that shouldn’t be left for our technology companies to figure out, but that every STEM program today should be actively exploring with their students.”
Bringing in Other Stakeholders
The purpose of emphasizing a well-rounded engineering program isn’t to create all-knowing experts on everything related to a new technology and its deployment in society. Coming up with a groundbreaking innovation is hard enough. Too often, though, the development of a technology takes place in isolation. Instead, the objective is to help innovators recognize their blind spots and find areas where they should engage with stakeholders in other realms, such as policy, who can help make a new technology and its application as successful as possible.
A February 2020 Pew Research Center survey found that 60% of Americans believe scientific experts should take a more active role in our policy debates. The COVID-19 pandemic has underscored some of the benefits of doing so over the past year. But just as policymakers should look to scientists and engineers when making related decisions, we should also be extending a similar invitation to them as we design and innovate technical solutions for tomorrow. The consequences of not doing so may give rise to technologies deemed too controversial before being fully understood, or released into the marketplace prematurely. Instead, through education and engagement with science policy and ethics experts, we can better understand and assess the impact of a given technology, raise flags about concerns and identify barriers to bringing it to fruition.