I find this really confusing. The field of AI often says its computer systems "know" things, but when it explains how to program a computer to be intelligent, it talks only about "knowledge representation" - e.g., Russell and Norvig's Artificial Intelligence: A Modern Approach.
In part III, for example - a part titled "Knowledge and Reasoning" - the authors talk only about knowledge representation. At the start of the first chapter of part III, for instance: "This chapter introduces knowledge-based agents. The concepts that we discuss - the representation of knowledge and the reasoning processes that bring knowledge to life - are central to the entire field of artificial intelligence" [original emphasis].
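To make the question concrete, here is a minimal sketch of what "knowledge representation" typically amounts to in a program: stored sentences plus a mechanical inference procedure. This is a hypothetical toy, not code from Russell and Norvig; the names (KB, tell_fact, tell_rule, ask) are illustrative only. The machine holds representations and derives new ones - whether that constitutes knowing anything is exactly the question at issue.

```python
# Toy knowledge base: facts and Horn-style rules, queried by forward
# chaining. Everything the system "has" is a stored representation;
# the program itself is purely symbol manipulation.

class KB:
    def __init__(self):
        self.facts = set()   # atomic sentences taken as true
        self.rules = []      # (premises, conclusion) pairs

    def tell_fact(self, fact):
        self.facts.add(fact)

    def tell_rule(self, premises, conclusion):
        self.rules.append((frozenset(premises), conclusion))

    def ask(self, query):
        # Forward chaining: repeatedly apply rules whose premises
        # are all derived, until no new conclusions appear.
        derived = set(self.facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return query in derived

kb = KB()
kb.tell_fact("rain")
kb.tell_rule({"rain"}, "wet_ground")
kb.tell_rule({"wet_ground"}, "slippery")
print(kb.ask("slippery"))  # True: follows from the stored representations
print(kb.ask("sunny"))     # False: nothing represented supports it
```

Nothing in the code is anything but representations and rules for transforming them; the word "knowledge" never has to appear for the program to run.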
Why talk about representation? Why not talk about knowledge per se - that which is represented? Where is the actual thing, knowledge? We seem to know where the representations are - inside the computer. But where is the actual knowledge? Inside the human programmer? Do AI's computer systems really know nothing in themselves?