Matching Posts

In Defense of Incomplete Inference

Scott Fahlman,   July 17, 2008
Categories:  KR Issues    

In this article, I will expand a bit on some comments I made in the previous article about the tension between expressiveness, scalability, and the “general theorem proving” approach to inference – an approach that currently dominates the field of knowledge representation (KR). For a practical and useful knowledge base system (KBS), I think we […]

Read more...

Mini-Nuggets: Knowledge Base Requirements for Human-Like Thought

Scott Fahlman,   June 25, 2008
Categories:  AI, KR Issues    

This is the second mini-nuggets article: a collection of propositions, with some minimal explanation for each, on a given topic. The idea is to sketch out an overall approach or point of view quickly, without getting bogged down in a lot of detail or lengthy justification about each point. I will come back and expand […]

Read more...

Human-Like Memory Capabilities

Scott Fahlman,   June 17, 2008
Categories:  AI, KR Issues    

In an earlier article, “AI: What’s Missing”, I listed several capabilities that are missing from current AI systems – capabilities that must somehow be provided before our systems will be able to exhibit anything approaching general, human-like intelligence. Perhaps the most important of these, because it interacts with several others, is “The ability to assimilate […]

Read more...

A Good Textbook for Knowledge Representation

Scott Fahlman,   March 30, 2008
Categories:  KR Issues    

Since I mentioned Ron Brachman in my last post, I should also mention that he is the co-author, with Hector Levesque, of Knowledge Representation and Reasoning (Morgan Kaufmann, 2004), probably the best available general textbook on this topic. Not surprisingly, the book’s primary focus is on First-Order Logic and its more restrictive relatives. However, there […]

Read more...