Academia and Intelligence Community Discuss AI Safety

June 22, 2023

Robin Hanson, associate professor of economics at George Mason University, analyzed the general fear of AI on June 22 during the NIU AI Safety Symposium.

As artificial intelligence (AI) rapidly advances, its safety has been called into question in recent months.

GPT-4, an AI chatbot built on large language models, deceived a human into helping it accomplish an assigned task in a study released in March. Later that month, more than a thousand experts in the field signed an open letter calling for a pause on systems more powerful than GPT-4 so that AI safety protocols for design and development could be established. In May, a Microsoft study found that GPT-4 is showing signs of artificial general intelligence, the ability to reason like a human.

“It’s a really important topic and I think we are at this precipice where the decisions we make now as this technology continues to develop could have significant consequences down the road,” said Mark Bailey, Department Chair of Cyber Intelligence and Data Science at National Intelligence University (NIU).

In response, Bailey, who studies AI safety, organized the school’s first AI Safety Symposium on June 21 and 22 in Washington, D.C. The event, hosted by NIU’s Ann Caracristi Institute and Data Science Intelligence Center, brought together 65 experts from across academia and the Intelligence Community (IC) to tackle weighty issues related to AI safety and examine how potential solutions can be applied in the IC.

“Being at NIU, we’re at this nexus between the Intelligence Community and the outside academic community because we wear both hats,” said Bailey. “So, what I really wanted to do was sort of bring all these different parties together.”

Robin Hanson, associate professor of economics at George Mason University, analyzed the general fear of AI as a fear of the unknown, or of an “other,” arguing that AI systems are created by and are descendants of humans, so humans shouldn’t view them as a rival faction. Not everyone agreed, leading to a lively discussion.

“He has a point. There are a lot of things that we don’t understand that we naturally push back on,” said Henry Yep, who attended from the Defense Intelligence Agency.

Chris Bailey, an NIU professor who specializes in national security law, processes, and professional ethics, discussed legal and ethical issues stemming from AI’s use in national security decisions, including in wartime.

How will an AI system distinguish a combatant from a civilian in modern combat? How will AI recognize a combatant surrendering when surrender is often subtle and can take many forms?

He also explored proportionality in war and moral agency. Human judgment and error have led to civilian casualties. Are humans ready to accept the same from AI? Do humans want AI making decisions that require moral agency?

While many discussions attempted to predict future uses and dilemmas, Heather Frase’s presentation focused on the harm AI is already causing, from autonomous vehicle accidents to AI models making biased decisions that affect humans. Frase, who has also worked in the IC, is a senior fellow at Georgetown’s Center for Security and Emerging Technology, where she researches AI assessment.

“Personally, I want to see more emphasis on practical steps to start developing better and more trustworthy systems in the IC,” said Frase.

Unpredictable AI

Mark Bailey and Susan Schneider, Director of the Center for the Future Mind at Florida Atlantic University, both agreed that AI is unpredictable. The case in which GPT-4 deceived a human to achieve its objective is an example of an alignment problem: the AI’s behavior did not align with the goals and ethical values of humans.

“We need to get the alignment issue right now because it will become more complex as systems become more generalized and involved in day-to-day activities,” said Mark Bailey.

Schneider said she’s also “extremely worried” about generative AI, where algorithms use existing information to create new content, such as code, text, photos, and videos. She said she’s even spoken to Congress about the threat of “deeper fakes.”

During an event hosted by the Carnegie Endowment for International Peace in April, Director of National Intelligence Avril Haines shared the same concern: that generative AI could be used for digital authoritarianism around the world and to promote disinformation and misinformation domestically.

“And there’s just no question that with generative AI you can be far more sophisticated in your production of misinformation and disinformation,” Haines said.

“Disillusion of the concept of truth is a huge national security issue,” added Mark Bailey.

While none of these issues come with easy solutions, Mark Bailey said he hoped the event would lead to more “cross talk” between members of the IC and academia.

“I think it takes interdisciplinary collaborations,” added Schneider. “It’s a super difficult issue and being able to trouble over it with other like minds is so important.”
