


Tuesday, March 7, 2017

The SPGU Tool: A response to current so-called AI



OK. We have to fight back. Enough with the “AI is going to take over the world” stories. Enough with chatbots. Enough with pretending AI is easy. Enough with AI people who barely know the first thing about AI.

I am not discussing machine learning here. If you want to count a lot of words fast, and you can draw some useful conclusions from that, go ahead. I wish you wouldn’t call it AI, but I can’t control that. But I can fight back. Not with words, which I know don’t really convince anyone of anything, but with a new AI tool, one that uses what I know about AI, in other words, what most people who worked in AI in the 60’s, 70’s, and 80’s likely know about AI.

The SPGU Tool is named after the iconic book by Schank and Abelson (1977), Scripts, Plans, Goals, and Understanding.

In that book, we laid out the basis of human understanding of language by invoking a set of scripts, plans, goals, and themes that underlie all human actions. This was used to explain how people understand language. The classic example was the attempt to understand something like “John went into a restaurant. He ordered lobster. He paid the check and left.” This understanding was demonstrated by the computer being able to answer questions such as: What did John eat? Who did he pay? Why did he pay her? In this easy example of AI, SAM (the Script Applier Mechanism we built in 1975) could answer most questions by referring to the scripts it knew about and parsing the questions in relation to those scripts. In this example, given a detailed restaurant script, it could place any new information within that script and make inferences about what else might be true or what might possibly be asked at that point in the script.
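To make the mechanism concrete, here is a minimal sketch in Python of script-based understanding in the spirit of SAM. The restaurant script, its roles and scenes, and the question-answering logic are all illustrative inventions, not the original 1975 implementation.

```python
# A minimal sketch of script-based understanding, loosely in the spirit of SAM.
# The script structure, role names, and inference rule are invented for
# illustration; they are not the original 1975 code.

RESTAURANT_SCRIPT = {
    "roles": {"customer": None, "server": None, "food": None},
    "scenes": ["enter", "order", "eat", "pay", "leave"],
}

def instantiate(script, story_facts, mentioned_scenes):
    """Bind story facts to script roles; infer unstated scenes up to the last one mentioned."""
    bindings = dict(script["roles"])
    bindings.update({k: v for k, v in story_facts.items() if k in script["roles"]})
    last = max(script["scenes"].index(s) for s in mentioned_scenes)
    inferred_scenes = script["scenes"][: last + 1]
    return bindings, inferred_scenes

def answer(question_role, bindings):
    """Answer a wh-question by looking up the role the question asks about."""
    return bindings.get(question_role, "unknown")

bindings, scenes = instantiate(
    RESTAURANT_SCRIPT,
    {"customer": "John", "food": "lobster", "server": "the waitress"},
    mentioned_scenes=["enter", "order", "pay", "leave"],
)
print(answer("food", bindings))    # "What did John eat?"  -> lobster
print(answer("server", bindings))  # "Who did he pay?"     -> the waitress
print("eat" in scenes)             # eating is inferred even though the story never states it
```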

The SPGU Tool (SPGU-T) takes that 1970’s technology and makes it useful in the modern era. People who plan often need help in making their plans succeed. A tool that helps them plan needs to have a detailed representation of the context of that plan: what goals are being satisfied and the well-known obstacles to achieving those goals. Then it can access expert knowledge to assist a planner when the planner is stuck. We used this methodology when we built the Air Campaign Planner for the Department of Defense (in the 90’s). We captured expert knowledge (in the form of short video stories), tracked what the planner was doing within a structured air campaign planning tool, and offered help (in the form of one or more retrieved stories) when the tool saw that help was needed.
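A rough sketch of that retrieval loop, with invented step names and stories standing in for the real Air Campaign Planner content:

```python
# A rough sketch of indexing expert stories by planning context and retrieving
# them when the tool sees trouble. Step names, problems, and stories are
# invented placeholders, not actual Air Campaign Planner content.

STORY_INDEX = [
    {"step": "select_targets", "problem": "stale_intel",
     "story": "video: the strike planned from week-old imagery"},
    {"step": "schedule_sorties", "problem": "weather_window",
     "story": "video: the mission that slipped when the forecast wasn't rechecked"},
]

def retrieve_stories(current_step, observed_problems):
    """Return the stories indexed under the planner's current step and observed problems."""
    return [s["story"] for s in STORY_INDEX
            if s["step"] == current_step and s["problem"] in observed_problems]

print(retrieve_stories("select_targets", {"stale_intel"}))
```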

In a project for a pharmaceutical company, for example, one expert story we captured was called the “Happy Dog Story.” The story was about how the company had found a drug that made dogs very happy and then went into clinical trials with humans very quickly. Some months later, the dogs had all killed each other, but the people who were doing the clinical trials were unaware of this. This story should come up when a planner is planning clinical trials and is relying on data that requires continued tracking. SPGU-T would know this and be able to help if, and only if, all of the planning for the trials was done within SPGU-T’s framework, which detailed the steps in the clinical trials script.

A partner or manager in a consulting firm could use SPGU-T to plan a client engagement. SPGU-T would be able to help with problems and suggest next steps at each stage if it knew the gory details of how engagements work, and if it had stories from experts addressing well-known problems that occur in engagements. SPGU-T could not only answer questions, but it could also anticipate problems, serving as a helpful expert who was always looking over the user’s shoulder.

A Deeper Look at SPGU-T

It is well beyond the state of the art, both now and in the foreseeable future, for a computer system to answer arbitrary questions or, more difficult still, to deeply understand what a person is doing and proactively offer advice. But both of these forms of intelligent assistance are possible today if the person is working to accomplish a well-defined, goal-oriented task using a computer-based tool that structures his or her work. In other words, if we can lay out the underlying script, and we can gather the advice that might be needed at any point in the script, we can understand questions that might be asked or assist when problems occur. That understanding would help us parse the questions and retrieve a video story as advice in response.

This isn’t simple, but neither is it impossible. Advisory stories must be gathered and detailed scripts must be written. We built the needed parser years ago (called D-MAP, for direct memory access parsing).
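D-MAP matched incoming words against phrasal patterns attached directly to structures in memory, so that recognizing a pattern activates the structure it hangs off of. The sketch below is a loose, heavily simplified approximation of that idea, with made-up patterns and structure names, not the original parser.

```python
# A loose, simplified approximation of direct-memory-access-style parsing:
# word patterns are attached to memory structures (here, named script steps
# and scripts), and an input containing a pattern activates the attached
# structure. Patterns and structure names are invented for illustration.

CONCEPT_PATTERNS = {
    ("lag", "time"): "data_request_lag_step",
    ("data", "quality"): "data_quality_check_step",
    ("clinical", "trial"): "clinical_trials_script",
}

def dmap_like_parse(utterance):
    """Activate every memory structure whose pattern words all appear in the input."""
    words = set(utterance.lower().replace("?", "").split())
    return [concept for pattern, concept in CONCEPT_PATTERNS.items()
            if all(w in words for w in pattern)]

print(dmap_like_parse("What is the lag time on the data we requested?"))
# -> ['data_request_lag_step']
```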

SPGU-T helps someone to carry out a plan in a specific domain, be it planning a large-scale data analytics project, a strategy consulting engagement, a construction project, or a military air campaign. It does so by knowing a person’s goals in creating such a plan, the steps involved in plan creation, the nature of a complete and reasonable plan, and the problems that are likely to arise in the planning process.
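Sketched as a data structure, the four kinds of knowledge just listed might look something like this; the field names and the example contents are hypothetical, not a specification of SPGU-T.

```python
from dataclasses import dataclass

# A sketch of the domain knowledge described above: the planner's goals, the
# steps of plan creation, what a complete plan must contain at each step, and
# the problems that commonly arise there. All names and contents are hypothetical.

@dataclass
class PlanningDomain:
    goals: list                  # why the user is building this plan
    steps: list                  # ordered steps of plan creation
    completeness_criteria: dict  # step -> information a reasonable plan must include
    known_problems: dict         # step -> problems that typically arise at that step

ANALYTICS_ENGAGEMENT = PlanningDomain(
    goals=["deliver a data analytics project on time and within the agreed fee"],
    steps=["scope", "identify_datasets", "staff_team", "schedule", "deliver"],
    completeness_criteria={"identify_datasets": ["owners", "lag_times", "format", "quality"]},
    known_problems={"identify_datasets": ["poor data quality", "long acquisition lags"]},
)
```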

Imagine, for example, a version of SPGU-T that is customized for developing and tracking a project plan that a consulting firm will use to successfully complete a complex data analytics project. It knows that its registered user is an engagement manager. Given the usage context, it also knows that the user’s goal is to plan a time-constrained, fee-based project on behalf of a new client. From this starting point, SPGU-T can take him or her through a systematic process for achieving that goal. At any step in the process, SPGU-T will know specifically what the user is trying to accomplish and the nature of the information he or she is expected to add to the plan. For example, in one step, the user will identify datasets required for the project. SPGU-T will expect him or her to identify the owners of those datasets, the likely lag times between data requests and receiving the required data, and any key properties of the data, such as its format and likely quality.
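For the dataset-identification step, the expected inputs might be declared and checked like this; the step name and field names are hypothetical.

```python
# A sketch of how one step of the engagement-planning script might declare the
# information the user is expected to enter, so the tool can notice what is
# missing. The step name and fields are hypothetical.

STEP_EXPECTATIONS = {
    "identify_datasets": ["owner", "request_lag_days", "format", "expected_quality"],
}

def missing_fields(step, dataset_entry):
    """Report which expected pieces of information have not been entered yet."""
    return [f for f in STEP_EXPECTATIONS[step]
            if dataset_entry.get(f) in (None, "")]

entry = {"name": "third-party purchase data", "owner": "vendor X", "format": "CSV"}
print(missing_fields("identify_datasets", entry))
# -> ['request_lag_days', 'expected_quality']
```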

This very specific task context, computer-based interpretation of the semantics of the information being entered, and heuristics to infer reasonable expectations about the input enable the system to accurately interpret questions posed by the user in natural language and to retrieve context-relevant answers from a case base of answers, both video stories and textual information, to a wide range of common questions about planning a data analytics project. For example, the user might ask, “How can I determine the quality of data provided by a commercial data service?” “What is the likely impact of poor data quality on my schedule?” or “What is a reasonable expectation of the lag time between making a data request and receiving data from a market research firm?”
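Here is one way the retrieval might work under those constraints, sketched with an invented case base: candidates are first narrowed to the user's current step, and only then matched against the question.

```python
# A sketch of context-constrained retrieval: answers (video or text) are indexed
# by task step and topic keywords; the user's current step narrows the candidates
# before any matching against the question's words. The case base is invented.

CASE_BASE = [
    {"step": "identify_datasets", "topics": {"quality", "commercial", "data"},
     "answer": "video: assessing a commercial data feed before you commit to it"},
    {"step": "schedule", "topics": {"quality", "schedule", "impact"},
     "answer": "text: how poor data quality typically stretches a project timeline"},
]

def answer_question(question, current_step):
    """Return the in-context case with the most keyword overlap, or None."""
    words = set(question.lower().replace("?", "").split())
    candidates = [c for c in CASE_BASE if c["step"] == current_step]
    if not candidates:
        return None
    best = max(candidates, key=lambda c: len(c["topics"] & words))
    return best["answer"] if best["topics"] & words else None

print(answer_question("How can I determine the quality of data provided by a commercial data service?",
                      "identify_datasets"))
```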

More important, perhaps, are situations in which the user does not recognize that a problem exists and, therefore, does not think to ask a question, e.g., the question above about the likely lag in receiving data. In such situations, SPGU-T can use the same knowledge of task context and the semantics of input information, coupled with heuristics for evaluating the completeness and reasonableness of that information, to proactively offer help and advice. SPGU-T can also carry information forward to a future task, for example, to offer proactive advice about the likely duration of the “data wrangling” step of an analytics project given previously entered information about the formats, quality, and lags in obtaining third-party datasets.
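A sketch of that proactive behavior: simple heuristics over what has been entered flag likely problems without waiting for a question, and earlier entries feed a later estimate. The 30-day threshold and the wrangling rule of thumb are made up for illustration.

```python
# A sketch of proactive checking and of carrying information forward. The
# lag threshold and the data-wrangling estimate are invented heuristics.

def proactive_warnings(dataset_entries):
    """Flag dataset entries that look incomplete or unreasonable."""
    warnings = []
    for d in dataset_entries:
        if "request_lag_days" not in d:
            warnings.append(f"{d['name']}: no request lag entered; acquisition delays are a classic schedule risk")
        elif d["request_lag_days"] > 30:
            warnings.append(f"{d['name']}: a {d['request_lag_days']}-day lag may not fit the current schedule")
    return warnings

def wrangling_estimate_days(dataset_entries):
    """Carry earlier input forward: lower-quality data means a longer wrangling step."""
    base = 5
    return base + sum(3 for d in dataset_entries if d.get("expected_quality") == "low")

datasets = [{"name": "market research panel", "request_lag_days": 45, "expected_quality": "low"}]
print(proactive_warnings(datasets))
print(wrangling_estimate_days(datasets))  # -> 8
```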

That being said, when SPGU-T is proactively offering help and advice, it is essential that it not be wrong if the user’s confidence in the value of such advice is to be maintained. In situations in which SPGU-T recognizes a likely problem with low certainty, it can do one of two things: It can offer a small set of potentially relevant pieces of advice from which the user can select, or it can ask the user a few questions to raise the certainty that specific advice is relevant to the user.
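That policy can be sketched directly; the confidence threshold and the clarifying question below are placeholders.

```python
# A sketch of the two low-certainty behaviors described above: when confidence
# falls below a threshold, either present a short list of candidate advice or
# ask a clarifying question. Threshold and question text are placeholders.

def offer_advice(candidate_stories, certainty, threshold=0.7):
    """Assert one piece of advice when confident; otherwise defer to the user."""
    if certainty >= threshold:
        return {"mode": "assert", "advice": candidate_stories[:1]}
    if len(candidate_stories) <= 3:
        return {"mode": "choose", "advice": candidate_stories}  # let the user pick
    return {"mode": "clarify",
            "question": "Which part of the plan is giving you trouble right now?"}

print(offer_advice(["story A", "story B"], certainty=0.4))
# -> {'mode': 'choose', 'advice': ['story A', 'story B']}
```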

Whether answering a user’s question or proactively offering help and advice, SPGU-T can also answer follow-up questions, using not only the contextual information enumerated previously but also the user’s inferred intent in asking the follow-up question, thus making the retrieved answer all the more relevant.

There will, however, be cases in which SPGU-T cannot answer a question or cannot identify relevant help and advice with reasonable certainty even after interacting with the user to further understand his or her specific context. In such cases, SPGU-T will refer the question or situation to a human expert and promise the user that the expert will address the issue. SPGU-T can extend its case base as a result of capturing such interactions, thus enabling it to answer a wider range of questions and to provide better help and advice to future users.
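The escalation loop might look like this in outline; the class and its methods are a hypothetical sketch, not a description of the system being built.

```python
# A sketch of the escalation path: questions the system cannot answer go to a
# human expert, and the expert's answer is folded back into the case base so
# future users get it directly. Class and method names are hypothetical.

class CaseBase:
    def __init__(self):
        self.cases = []    # answered cases: {"context", "question", "answer"}
        self.pending = []  # questions awaiting a human expert

    def escalate(self, context, question):
        """Hand the question to an expert and promise the user a follow-up."""
        self.pending.append({"context": context, "question": question})
        return "An expert will get back to you on this."

    def record_expert_answer(self, pending_item, answer):
        """Fold the expert's answer back in so the question can be answered next time."""
        self.pending.remove(pending_item)
        self.cases.append({**pending_item, "answer": answer})

cb = CaseBase()
print(cb.escalate({"step": "identify_datasets"},
                  "Can we legally reuse last year's panel data?"))
cb.record_expert_answer(cb.pending[0], "Only if the vendor contract allows redistribution.")
print(len(cb.cases))  # -> 1
```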

We are building SPGU-T now. Watch this space.

