Concepts represent generalized abstractions that enable humans to categorize and
reason efficiently, yet it is unclear to what extent Large Language Models (LLMs)
comprehend these semantic relationships. Existing benchmarks typically focus
on factual recall and isolated tasks, failing to evaluate the ability of LLMs to
understand conceptual boundaries. To address this gap, we introduce CK-Arena, a
multi-agent interaction game built upon the Undercover game, designed to evaluate
the capacity of LLMs to reason with concepts in interactive settings. CK-Arena
challenges models to describe, differentiate, and infer conceptual boundaries based
on partial information, encouraging them to explore the commonalities and distinctions
between closely related concepts. By simulating real-world interaction,
CK-Arena provides a scalable and realistic benchmark for assessing conceptual
reasoning in dynamic environments. Experimental results show that LLMs' understanding
of conceptual knowledge varies significantly across different categories
and is not strictly aligned with parameter size or general model capabilities.