Notre Dame’s Reilly Center releases 2015 List of Emerging Ethical Dilemmas and Policy Issues in Science and Technology

Author: Jessica Baron

Photo: Google Glass

The John J. Reilly Center for Science, Technology, and Values at the University of Notre Dame has released its annual list of emerging ethical dilemmas and policy issues in science and technology for 2015.

The Reilly Center explores conceptual, ethical and policy issues where science and technology intersect with society from different disciplinary perspectives. Its goal is to promote the advancement of science and technology for the common good.

The Center generates its annual list of emerging ethical dilemmas and policy issues in science and technology with the help of Reilly fellows, other Notre Dame experts and friends of the center. This marks the third year the Center has released a list. Readers are encouraged to vote on the issue they find most compelling at reilly.nd.edu/vote15.

The Center aims to present a list of items for scientists and laypeople alike to consider in the coming months and years as new technologies develop. Each month in 2015, the Reilly Center will present an expanded set of resources for the issue with the most votes, giving readers more information, questions to ask and references to consult.

The ethical dilemmas and policy issues for 2015, presented in no particular order, are:

Real-time satellite surveillance video

What if Google Earth gave you real-time images instead of a snapshot up to three years old? Companies such as Planet Labs, Skybox Imaging (recently purchased by Google) and DigitalGlobe have launched dozens of satellites in the last year with the goal of recording the status of the entire Earth in real time or near real time. The satellites themselves are getting cheaper, smaller and more sophisticated, with resolutions as fine as 1 foot. Commercial satellite companies make this data available to corporations, and potentially to private citizens with enough cash, allowing clients to see useful images of areas coping with natural disasters and humanitarian crises, but also data on the comings and goings of private citizens. How do we decide what should be monitored and how often? Should we use this data to solve crimes? What is the potential for abuse by corporations, governments, police departments, private citizens, or terrorists and other “bad actors”?

Astronaut bioethics (of colonizing Mars)

Plans for long-term space missions to Mars, and for its colonization, are already underway. On Friday (Dec. 5), NASA launched the Orion spacecraft, and NASA Administrator Charles Bolden declared it “Day One of the Mars era.” The company Mars One, along with Lockheed Martin and Surrey Satellite Technology, is planning to launch a robotic mission to Mars in 2018, with humans following in 2025. Some 418 men and 287 women from around the world are currently vying for four spots on the first one-way human settlement mission. As we watch this unfold with interest, we might ask ourselves the following: Is it ethical to expose people to unknown levels of human isolation and physical danger, including exposure to radiation, for such a purpose? Will these pioneers lack privacy for the rest of their lives so that we might watch what happens? Is it ethical to conceive or give birth to a child in space or on Mars? And, if so, who protects the rights of a child who was not born on Earth and did not consent to the risks? If we say no to children in space, does that mean we sterilize all astronauts who volunteer for the mission? Given the potential dangers of setting up a new colony severely lacking in resources, how would sick colonists be cared for? And beyond bioethics, we might ask how an off-Earth colony would be governed.

Wearable technology

We are currently attached, literally and figuratively, to multiple technologies that monitor our behaviors. The fitness-tracking craze has led to the development of dozens of bracelets and clip-on devices that monitor steps taken, activity levels, heart rate and more, not to mention the advent of organic electronics that can be layered, printed, painted or grown on human skin. Google is teaming up with Novartis to create a contact lens that monitors blood sugar levels in diabetics and sends the information to health care providers. Combine that with Google Glass and the ability to search the Internet for people while looking straight at them, and it is clear we are already encountering social issues that need to be addressed. The new wave of wearable technology will allow users to photograph or record everything they see. It could even allow parents to view what their children are seeing in real time. Employers are experimenting with devices that track volunteer employees’ movements, tone of voice and even posture. For now, only aggregate data is being collected and analyzed, to help employers understand the average workday and how employees relate to each other. But could employers require their workers to wear devices that monitor how they speak, what they eat, when they take a break and how stressed they get during a task, and then punish or reward them for good or bad data? Wearables have the potential to educate us and protect our health, but also to violate our privacy in any number of ways.
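The distinction between aggregate and individual monitoring carries much of the ethical weight here, so it is worth making concrete. Below is a minimal, hypothetical Python sketch; the readings, field names and numbers are invented for illustration and do not depict any actual wearable platform or employer system.

```python
# Illustrative sketch of the "aggregate only" distinction the article
# draws. All data below is hypothetical. The point: an employer can
# learn about the average workday without retaining any individual's
# record.

from statistics import mean

# Hypothetical per-employee readings from a workplace wearable.
readings = [
    {"employee": "A", "steps": 4200, "stress": 0.31},
    {"employee": "B", "steps": 6100, "stress": 0.55},
    {"employee": "C", "steps": 3800, "stress": 0.72},
]

def aggregate(rows):
    """Keep only group-level statistics and drop identities."""
    return {
        "n": len(rows),
        "avg_steps": mean(r["steps"] for r in rows),
        "avg_stress": round(mean(r["stress"] for r in rows), 2),
    }

print(aggregate(readings))  # {'n': 3, 'avg_steps': 4700, 'avg_stress': 0.53}
# The ethical question the article raises is what changes once the
# per-employee rows themselves are kept, scored and acted on.
```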

State-sponsored hacktivism and “soft war”

“Soft war” is a concept used to explain the rights and duties of insurgents and even terrorists during armed conflict. Soft war encompasses tactics other than armed force used to achieve political ends. Cyber war and hacktivism could be tools of soft war if used in certain ways by states in interstate conflict, as opposed to by alienated individuals or groups.

We already live in a state of low-intensity cyber conflict. But as these actions become more aggressive and begin damaging infrastructure, how do we fight back? Does a nation have a right to defend itself against, or retaliate for, a cyber attack, and if so, under what circumstances? What if the aggressors are non-state actors? If a group of Chinese hackers launched an attack on the U.S., would that give the U.S. government the right to retaliate against the Chinese government? In a soft war, what are the conditions of self-defense? May that self-defense be preemptive? Who can legitimately be attacked in a cyber war? We have already seen operations that hack into corporations and steal private citizens’ data. What is to stop attackers from hacking into our personal wearable devices? Are private citizens attacked by cyberwarriors just another form of collateral damage?

Enhanced pathogens

On Oct. 17, the White House suspended funding for research that would enhance the pathogenicity of viruses such as influenza, SARS and MERS, often referred to as gain-of-function, or GOF, research. Gain-of-function research is not in itself harmful; in fact, it provides vital insights into viruses and how to treat them. But when it is used to increase mammalian transmissibility and virulence, the altered viruses pose serious security and biosafety risks. Those fighting to resume the research claim that GOF studies on viruses are both safe and important to science, insisting that no other form of research would be as productive. Those who argue against this type of research counter that the biosafety risks far outweigh the benefits; they point to the long record of human fallibility and laboratory accidents, and warn that the release of such a virus into the general population could have devastating effects.

Non-lethal weapons

At first it may seem absurd that weapons that have been around since World War I, and that were not designed to kill, could pose an emerging ethical or policy dilemma. But consider the recent development and proliferation of non-lethal weapons such as laser missiles, blinding weapons, pain rays, sonic weapons, electric weapons, heat rays, disabling malodorants, and the gases and sprays used by both militaries and domestic police forces. These weapons may not kill, but they can cause serious pain, physical injury and potentially long-term health consequences. We must also consider that non-lethal weapons may be used more liberally in situations that could be defused by peaceful means, since there is technically no intent to kill; used indiscriminately, without regard for collateral damage; or used as a means of torture, since the harm they cause may be undetectable after a period of time. They can also be misused as a lethal force multiplier: a means of incapacitating the enemy before employing lethal weapons. Non-lethal weapons are certainly preferable to lethal ones, given the choice, but should we continue to pour billions of dollars into weapons that may increase the overall use of violence?

Robot swarms

Researchers at Harvard University recently created a swarm of more than 1,000 robots capable of communicating with each other to perform simple tasks, such as arranging themselves into shapes and patterns. These “kilobots” require no human intervention beyond the original set of instructions and work together to complete tasks. The tiny bots are modeled on the swarm behavior of insects and could be used to perform environmental cleanups or respond to disasters where humans fear to tread. Driverless cars rely on a similar principle: the cars themselves, ideally without human intervention, would communicate with each other to obey traffic laws and deliver people safely to their destinations. But should we be worried about the ethical and policy implications of letting robots work together without humans running interference? What happens if a bot malfunctions and causes harm? Who would be blamed for such an accident? And what if tiny swarms of robots could be deployed to spy or commit sabotage?
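A minimal simulation conveys why swarms raise distinct accountability questions: the group’s behavior emerges from local rules rather than from any central command that could be inspected or blamed. The Python sketch below is illustrative only, not the Harvard kilobot software; the swarm size, neighbor radius and step size are invented.

```python
import random

# Illustrative sketch of decentralized swarm aggregation. Each bot
# follows the same simple rule, sees only nearby neighbors and takes
# orders from no central controller, yet the group converges into
# clusters: emergent collective behavior of the kind described above.

NUM_BOTS = 50          # hypothetical swarm size
NEIGHBOR_RADIUS = 0.3  # a bot only "hears" bots this close
STEP = 0.05            # fraction of the gap closed per tick

def neighbors(bots, i):
    """Positions of all bots within NEIGHBOR_RADIUS of bot i."""
    x, y = bots[i]
    return [(nx, ny) for j, (nx, ny) in enumerate(bots)
            if j != i and (nx - x) ** 2 + (ny - y) ** 2 < NEIGHBOR_RADIUS ** 2]

def tick(bots):
    """One synchronous update: each bot drifts toward its local centroid."""
    updated = []
    for i, (x, y) in enumerate(bots):
        near = neighbors(bots, i)
        if not near:
            updated.append((x, y))  # isolated bots stay put
            continue
        cx = sum(p[0] for p in near) / len(near)
        cy = sum(p[1] for p in near) / len(near)
        updated.append((x + STEP * (cx - x), y + STEP * (cy - y)))
    return updated

random.seed(0)
bots = [(random.random(), random.random()) for _ in range(NUM_BOTS)]
for _ in range(200):
    bots = tick(bots)
# No bot was ever told "form a cluster"; the grouping emerges from the
# local rule, which is also why malfunction and blame are so hard to
# localize in a real swarm.
```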

Artificial life forms

Research on artificial life forms is an area of synthetic biology focused on custom-building life forms to serve specific purposes. Researchers announced the first synthetic life form in 2010; it was created from an existing organism by introducing synthetic DNA.

Synthetic life allows scientists to study the origins of life by building it rather than breaking it down, but the technique blurs the line between life and machines, and scientists foresee the ability to program organisms. The ethical and policy issues surrounding innovations in synthetic biology renew concerns raised by earlier biological breakthroughs, including safety issues and the risks of releasing artificial life forms into the environment. Making artificial life forms has been called “playing God” because it allows individuals to create life that does not exist naturally. Gene patents have been a concern for several years now, and synthetic organisms add a new dimension to this policy issue. While customized organisms may one day cure cancer, they may also be used as biological weapons.

Resilient social-ecological systems

We need to build resilient social and ecological systems that can tolerate being pushed to an extreme while maintaining their functionality, either by returning to their previous state or by operating in a new one. Resilient systems endure external pressures such as those caused by climate change, natural disasters and economic globalization. A resilient electrical system, for example, can withstand extreme weather events or regain functionality quickly afterward. A resilient ecosystem can maintain its complex web of life even when one or more organisms are overexploited and the system is stressed by climate change.

Who is responsible for devising and maintaining resilient systems? Both private and public companies bear responsibility for supporting and enhancing infrastructure that benefits the community. To what degree is it the responsibility of the federal government to ensure that civil infrastructure is resilient to environmental change? When individuals act in their own self-interest, there is a distinct possibility that their actions will fail to maintain the infrastructure and processes essential to all of society. This can lead to what Garrett Hardin in 1968 called the “tragedy of the commons,” in which many individuals, each making rational decisions in their own interest, undermine the collective’s long-term interests. To what extent is it the responsibility of the federal government to enact regulations that can prevent a tragedy of the commons?
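A toy calculation makes Hardin’s logic concrete. The numbers below are hypothetical, but they show the essential asymmetry: each actor captures the full benefit of overusing a shared resource while bearing only a fraction of the damage.

```python
# Toy model of Hardin's "tragedy of the commons" (illustrative only;
# all numbers are hypothetical). Each herder gains the full benefit of
# grazing one more animal but bears only 1/N of the damage to the
# shared pasture, so adding an animal is individually rational even
# though it makes the group as a whole worse off.

HERDERS = 10
BENEFIT_PER_ANIMAL = 1.0  # private gain from one more animal
DAMAGE_PER_ANIMAL = 3.0   # total cost that animal imposes on the pasture

# Marginal payoff to ONE herder for adding one animal:
private_gain = BENEFIT_PER_ANIMAL
shared_cost = DAMAGE_PER_ANIMAL / HERDERS  # cost is split among all
print(f"Individual net payoff: {private_gain - shared_cost:+.2f}")  # +0.70

# Net effect on the GROUP of that same decision:
print(f"Group net payoff: {BENEFIT_PER_ANIMAL - DAMAGE_PER_ANIMAL:+.2f}")  # -2.00

# Each herder sees +0.70 and keeps adding animals; the group loses
# 2.00 per animal added. Individually rational choices degrade the
# shared system, which is the policy argument for regulation above.
```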

Brain-to-brain interfaces

It’s no Vulcan mind meld, but brain-to-brain interfaces (BBIs) have been achieved, allowing direct communication from one brain to another without speech. The interactions can be between two humans or between a human and an animal.

In 2014, University of Washington researchers performed a BBI experiment in which one person, using only thought, directed the hand movements of another person about half a mile away. Communication so far has been one-way: one person sends the commands and the other receives them. Using an electroencephalography (EEG) machine that detects brain activity in the sender and a transcranial magnetic stimulation (TMS) coil that triggers movement in the receiver, the feat has now been achieved twice; this year, scientists also transmitted words from brain to brain across 5,000 miles. In 2013, Harvard researchers developed the first interspecies brain-to-brain interface, retrieving a signal from a human’s brain and transmitting it to the motor cortex of a sleeping rat, causing the rodent to move its tail.
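For readers curious about the overall shape of such a system, here is a schematic Python mock-up of the one-way pipeline: classify the sender’s (mock) EEG signal into a binary command, then trigger the receiver-side stimulation. Every function name, threshold and signal below is invented for illustration; no real EEG or TMS hardware is involved, and actual systems use far more sophisticated signal processing.

```python
# Schematic mock-up of a one-way brain-to-brain link: EEG on the
# sender, TMS on the receiver. Entirely hypothetical; the threshold
# and signals are invented, and nothing here drives real hardware.

from statistics import mean

MOTOR_IMAGERY_THRESHOLD = 0.6  # hypothetical classifier cutoff

def classify_sender_intent(eeg_window):
    """Reduce a window of (mock) EEG samples to a binary command.

    Real systems detect motor imagery, e.g. imagining a hand movement;
    here we simply threshold the mean absolute signal as a stand-in.
    """
    return mean(abs(s) for s in eeg_window) > MOTOR_IMAGERY_THRESHOLD

def deliver_to_receiver(command):
    """Stand-in for the receiver side: a TMS coil over the motor
    cortex would fire only when a 'move' command arrives."""
    print("TMS pulse -> receiver's hand moves" if command else "no stimulation")

# Simulated EEG windows: weak baseline activity, then stronger
# activity while the sender imagines moving their hand.
baseline = [0.1, -0.2, 0.15, -0.1]
imagined_movement = [0.9, -0.8, 1.1, -0.7]

deliver_to_receiver(classify_sender_intent(baseline))           # no stimulation
deliver_to_receiver(classify_sender_intent(imagined_movement))  # TMS pulse

# Note the asymmetry: the sender only transmits and the receiver only
# receives, matching the one-way link achieved in the experiments.
```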

The ethical issues are myriad. What kind of neurosecurity can we put in place to protect individuals from having information accidentally shared from, or removed from, their brains, especially by hackers? If two individuals share an idea, who is entitled to claim ownership? And who is responsible for actions committed by the recipient of a thought when a separate thinker is dictating those actions?

More information on these issues is available at reilly.nd.edu/list15. Vote on the most compelling issue at reilly.nd.edu/vote15.

Contact: Jessica Baron, Outreach and Communications Coordinator, Reilly Center for Science, Technology, and Values, University of Notre Dame, baron.17@nd.edu, 574-631-1880 (email preferred), 574-245-0026 (for urgent text message media inquiries)