Assessing AI

Doctoral student Michael Zahorec blends philosophy and computer science to understand ethical artificial intelligence usage

Thu, 01/15/26
Michael Zahorec. Photo by Devin Bittner.

If you've been reading the news lately, you could be forgiven for thinking artificial intelligence was on the verge of taking over the world. Today's headlines make AI seem more like a threat to humanity than the technology that for years has quietly suggested what to watch next on Netflix.

From virtual assistants like Apple's Siri to large language models like ChatGPT, AI is in use across industries for wide-ranging tasks, from enabling automation and generating digital content to engaging in customer service. But big questions loom.

How do we know when to trust AI? How can we best evaluate AI's ability to perform the tasks promised? What guardrails exist to ensure AI is used ethically? And when is human interaction a better choice than using AI?

Michael Zahorec, who earned his master's degree in Spring 2025 and is currently pursuing a doctorate, focuses his work on AI evaluation, AI explanation and responsible AI use.

At its core, AI is a predictive model that uses complex mathematical operations to generate responses to queries. In his research, Zahorec integrates philosophy and computer science perspectives to emphasize the importance of understanding the internal processes of AI models, not just a model's behavior, to fully comprehend how a particular model functions. He also pushes for ethical guidance and responsible AI use.

"I research the philosophy behind different techniques used to understand complex generative AI models like ChatGPT," said Zahorec, who earned a bachelor's in philosophy and mechanical engineering in 2019 from the University of Dayton in Ohio before coming to FSU. "Exclusively analyzing a model's output, or the behavior, doesn't really tell us how the model works. We have to understand the model's internal components, or how it arrived at that behavior."

Many researchers currently disagree on the best AI evaluation practices, and Zahorec's argument has the potential to widely shape future AI evaluation standards.

"People need to understand AI as imperfect mathematical models that aren't always trustworthy," Zahorec said. "I hope to help the public engage with AI's benefits without falling prey to the potential harms, such as when AI generates incorrect or biased information."

In 2024, Zahorec interned on the responsible AI team at health insurance company Humana, where he researched ways to evaluate large language models used in customer-facing tasks. Among these methods is adversarial testing, in which a researcher like Zahorec intentionally tries to trick or confuse a model into behaving contrary to its design, uncovering vulnerabilities that can later be addressed.

"This internship gave me a practical understanding of AI-related research literature," he said. "I saw applications of AI evaluation in a real-world context, like how data scientists apply research to use AI more safely and create better products."

In forthcoming research, Zahorec charts various uses of buzzwords, such as "transparency," typically used to describe AI design and function. His work categorizes these words by their different meanings, showcasing vast disagreements in AI definitions. He's also writing a chapter for the book "The Philosophy of Artificial Intelligence," which argues that understanding AI's internal components is essential and explores why those components are so difficult to understand in generative AI models.

"I believe we should use language models as idea generators, as opposed to other paradigms like treating them as an expert or an information processor, in order to use AI responsibly," Zahorec said. "Just because an idea is AI-generated doesn't automatically mean it's a good idea; verification is needed. Relying solely on AI creates potential for biased or incorrect information."

Zahorec's dissertation, which he's slated to defend in March 2026, focuses on "scientific kinds," the question of what defines groupings such as biological species or chemical elements, and whether scientists create or discover these kinds. He argues that "kinds" are created by scientists but are grouped depending on their context in nature, meaning kinds of AI models and kinds of species are grouped in different manners.

In addition to his research, Zahorec serves as a teaching assistant in philosophy and has taught his own classes, including Environmental Ethics and Logic, Reasoning and Critical Thinking. He has also lectured on AI explanation and interpretability, and he moderated the FSU-hosted "AI and its Impact on Higher Education" panel discussion in September 2025. Following graduation, Zahorec plans to pursue a career in academia to continue teaching and conducting research.

"Michael's work has an essential public dimension," said Courtney Fugate, professor of philosophy and Zahorec's adviser. "He's proposing innovative, nuanced suggestions about understanding key concepts in AI. His work has lasting impacts on the standards researchers use to comprehend and evaluate AI models and responsible AI use guidelines."

Carly Nelson is an FSU alumna who earned a bachelor's degree in advertising in 2025. She is currently pursuing a master's degree in strategic communications with plans to graduate in Summer 2026.