Intelligent Help Systems for UNIX

Typical fragments from the verbal protocols read: "If we can find the right spot in our Unix papers we can actually do something." "Totally stuck on the literature. Consults friends. One explains how to use the em line editor." "End of session three: note of despair or disgust in his voice."

Figure 2. Typical verbal protocols.

Errors which were caused by misconceptions were differentiated from errors caused by mistyping. Mistyping categories were also derived from the error analysis. The following gives additional guidelines to those provided by Damerau for the formulation of the rules indicating that mistyping has occurred; a sketch of these checks in code follows the list.

- Omission of one character: this includes the omission of the space between arguments in the case of an experienced user.
- Repetition of one character.
- Repetition of two characters.
- Inversion of two characters.
- Extra characters at the start: this could be caused by incomplete deletion of incorrect characters.
- Repetition of a previous character instead of the correct one, for example nror for nroff.
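The following is a minimal sketch of how rules of this kind might be checked against a command vocabulary, written in Python rather than the LISP of the original interface; the command set and all names are assumptions for the example, not taken from the system described here.

```python
# Illustrative mistyping checks modelled on the categories listed above.
# The command vocabulary and function names are assumptions for this sketch.

KNOWN_COMMANDS = {"nroff", "ls", "rm", "cp", "mv", "cat", "mkdir"}

def mistype_candidates(typed: str, known=KNOWN_COMMANDS) -> list:
    """Return the known commands that 'typed' could plausibly be a mistyping of."""
    hits = []
    for cmd in known:
        if typed == cmd:
            continue
        # Omission of one character: deleting one character of cmd gives typed.
        omission = any(cmd[:i] + cmd[i + 1:] == typed for i in range(len(cmd)))
        # Repetition of one character: removing a doubled character gives cmd.
        rep1 = any(typed[:i] + typed[i + 1:] == cmd and typed[i] == typed[i - 1]
                   for i in range(1, len(typed)))
        # Repetition of two characters: removing a duplicated pair gives cmd.
        rep2 = any(typed[:i] + typed[i + 2:] == cmd and typed[i:i + 2] == typed[i - 2:i]
                   for i in range(2, len(typed) - 1))
        # Inversion of two adjacent characters.
        inversion = any(typed[:i] + typed[i + 1] + typed[i] + typed[i + 2:] == cmd
                        for i in range(len(typed) - 1))
        # Extra characters at the start (incomplete deletion of a false start).
        extra_start = typed.endswith(cmd)
        # A previous character repeated instead of the correct one.
        prev_rep = (len(typed) == len(cmd)
                    and sum(a != b for a, b in zip(typed, cmd)) == 1
                    and any(t != c and t in typed[:i]
                            for i, (t, c) in enumerate(zip(typed, cmd))))
        if omission or rep1 or rep2 or inversion or extra_start or prev_rep:
            hits.append(cmd)
    return sorted(hits)

print(mistype_candidates("nrof"))   # ['nroff']  (omission of one character)
print(mistype_candidates("lss"))    # ['ls']     (repetition of one character)
print(mistype_candidates("xrm"))    # ['rm']     (extra characters at the start)
```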

Development of an Intelligent Transparent Interface for Unix

Since this was an unusual application, the production of a prototype was an important part of the development process and gave useful indications for the final design. During prototype development the code remained uncompiled and underwent continual modification. The prototype contained an intelligent spell-checker in addition to the components for recognition and remediation of other types of errors.

Rule-based inferencing

A knowledge-based component was developed for inclusion within the prototype interface, and was based on an expert system suggested by Winston and Horn. The knowledge base consists of modules of knowledge in the form of production rule sets, grouped according to function and accessed when required for solving particular parts of a problem.

A production rule expresses knowledge in the form of a conditional part followed by some action. The inference engine matches the conditional parts of the rules against the facts in the database, and causes the required action when a match occurs. Thus deduction is carried out by forward chaining only. Forward chaining has previously been shown to be useful for modelling cognitive processes and for solving problems which require very broad but shallow knowledge. An explanatory facility was developed, similar to that of MYCIN (Shortliffe), which enabled the user to ask for justification: how a fact was derived or why a fact was required.
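As an illustration of this style of inference, here is a minimal forward-chaining sketch in Python (the original component was written in LISP); the rules, fact names and functions are invented for the example and are not drawn from the actual knowledge base.

```python
# Minimal forward chaining over condition/action production rules, with a
# MYCIN-style "how" explanation. All rules and fact names are invented.

RULES = [
    {"name": "R1", "if": {"command unknown", "close match exists"},
     "then": "mistyping suspected"},
    {"name": "R2", "if": {"mistyping suspected", "user is novice"},
     "then": "offer correction"},
]

def forward_chain(facts: set, rules: list):
    """Fire every rule whose conditions hold until no new facts appear."""
    derivations = {}                        # fact -> (rule name, supporting facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule["if"] <= facts and rule["then"] not in facts:
                facts.add(rule["then"])
                derivations[rule["then"]] = (rule["name"], rule["if"])
                changed = True
    return facts, derivations

def explain_how(fact: str, derivations: dict) -> str:
    """Justify a derived fact, as the explanatory facility described above."""
    if fact not in derivations:
        return f"'{fact}' was given, not derived."
    rule, support = derivations[fact]
    return f"'{fact}' follows by {rule} from: " + ", ".join(sorted(support)) + "."

facts, how = forward_chain({"command unknown", "close match exists",
                            "user is novice"}, RULES)
print(explain_how("offer correction", how))
```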

The output to the user was in pseudo natural language, which was aided by coding in LISP so that chosen parts of a rule were directly displayed on the screen. It became clear from the experience of using the experimental KBS that it was more difficult than had been anticipated to decide on the values of confidence factors, and they were therefore omitted from later versions.
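For illustration, the display of chosen rule parts might work as follows; this is a Python stand-in for the LISP mechanism, and the stored descriptions are invented.

```python
# Sketch of the pseudo natural language output: an English description is
# held for each rule element, and selected parts are rendered on demand.

DESCRIPTIONS = {
    "command unknown":     "the command is not known to Unix",
    "close match exists":  "a known command closely matches what was typed",
    "mistyping suspected": "the input was probably mistyped",
}

def render_rule(rule: dict) -> str:
    """Display a production rule in pseudo natural language."""
    conds = " and ".join(DESCRIPTIONS[c] for c in sorted(rule["if"]))
    return f"IF {conds} THEN {DESCRIPTIONS[rule['then']]}"

print(render_rule({"if": {"command unknown", "close match exists"},
                   "then": "mistyping suspected"}))
```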

User models

The user model consists of two components: the generalised or static model and the dynamic or specific model. It is well known that people will attempt to apply their previous knowledge when attempting a new task, and numerous examples are visible in the user logs. However, as the logs indicate, this can sometimes hinder the successful completion of the new task. The record of previous activity is held on property lists: attributes for each input line include original command, deduced command, intended command, original arguments, deduced arguments, and intended arguments.

Sequences which may indicate the occurrence of inefficient use are maintained, and the frequency of usage of commands and files is also recorded. In future versions, as part of the planning operation, the Dynamic User Model should also maintain possible sequences which might indicate a command which is likely to be used next, or a file which is currently being operated upon (example: edit file1; cat file1; lf file1).
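A sketch of how such records might be structured follows; Python structures stand in for the LISP property lists, and the field names simply mirror the attributes listed above.

```python
# Dynamic user model: per-line records plus command/file usage frequencies.
# An illustrative stand-in for the property lists described above.
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class InputLineRecord:
    original_command: str
    deduced_command: str | None = None
    intended_command: str | None = None
    original_arguments: list = field(default_factory=list)
    deduced_arguments: list = field(default_factory=list)
    intended_arguments: list = field(default_factory=list)

@dataclass
class DynamicUserModel:
    history: list = field(default_factory=list)
    command_freq: Counter = field(default_factory=Counter)
    file_freq: Counter = field(default_factory=Counter)

    def record(self, line: InputLineRecord) -> None:
        self.history.append(line)
        self.command_freq[line.deduced_command or line.original_command] += 1
        for name in line.original_arguments:
            self.file_freq[name] += 1

    def current_file(self):
        """Guess the file currently being operated upon from recent lines."""
        recent = [f for rec in self.history[-3:] for f in rec.original_arguments]
        return max(set(recent), key=recent.count) if recent else None

model = DynamicUserModel()
for cmd, args in [("edit", ["file1"]), ("cat", ["file1"]), ("lf", ["file1"])]:
    model.record(InputLineRecord(cmd, original_arguments=args))
print(model.current_file())    # file1
```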

Pre-testing

As soon as the prototype was complete and had been tested to eliminate all apparent errors in the code, pre-testing was carried out in order to design the test-evaluation stage of development. The pre-test subjects answered questions which helped to validate the rules of the knowledge base. The interface with the completed interpretation module was pre-tested on a small group of expert users of Unix (one research associate, two research students). They were not constrained to use any specific operations, but were asked to use it in whatever way they liked. The explanations facility of the KBS was used as a development tool which enabled the subject to ask how each deduction was made and work back through a chain of reasoning.

The rules were shown to the user in a natural language version; a property list holds the English language description of each function. The subjects indicated whether they believed each deduction to be true or false. Pre-testing revealed that additional rules and hypotheses were required: for instance, a rule is required which will choose between two competing interpretations or offer both. Currently the interface attempts to choose the most likely interpretation and rules out all others.

The tester followed a standard sequence of frequently used operations, such as: display a file on-screen, make a hard copy of a file, find which files are present in the current directory, delete a file, make a copy of a file, use the editor, use the text-formatter. By the end of this stage the LISP coding had been compiled in order to improve speed.

Action of interface for the final pre-test

The following is an ordered list of the actions carried out by the intelligent interface for the final pre-test (a sketch of the token checks in steps 4 and 5 follows the list).

1. Production rules and Unix data are read in.
2. The menu, file-list and prompt simulator are output.
3. A command line from the user is read in.
4. Each token is checked for the types it might fit (examples: username, string, filename, file in current directory, valid command, digits, switch).
5. Each token is checked to discover if it could be a mistyped command or a mistyped file in the current directory.
6. The explanations of the KBS are shown to the subject to discover the validity of the rules.
7. The command is passed to Unix to show its response.
8. The subject is asked for comment on the Unix response.
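An illustrative version of the token checks in steps 4 and 5 is sketched below; the regular expression, command set and helper names are assumptions made for the example.

```python
# Sketch of step 4: collect every type a token might fit, deferring the
# choice between them. Step 5 can then run a mistyping check (such as the
# one sketched earlier) over tokens that fit no type cleanly.
import re

VALID_COMMANDS = {"ls", "cat", "rm", "cp", "mv", "mkdir", "nroff"}

def token_types(token: str, cwd_files: set, usernames: set) -> set:
    types = {"string"}                      # any token can be a literal string
    if token in usernames:
        types.add("username")
    if token in VALID_COMMANDS:
        types.add("valid command")
    if token in cwd_files:
        types.add("file in current directory")
    if token.isdigit():
        types.add("digits")
    if token.startswith("-"):
        types.add("switch")
    elif re.fullmatch(r"[\w./]+", token):
        types.add("filename")
    return types

print(token_types("file1", {"file1"}, {"anna"}))
# contains 'string', 'filename' and 'file in current directory'
```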

Modification of the rule base after pre-testing

New rules were easily added to the rule set. Such rules were formulated after pre-testing on the basis of information provided by the user in response to the deductions offered by the interface. Rules were also modified and added after consultation with users during the full test. Ideally, the new rules would be added by the interface itself, but there was insufficient time to automate this activity.

Evaluation of the interface

Although usability studies were not in common use when this research was conducted, the transparent interface was subsequently evaluated by tests with users in order to discover how effective it was for Unix, and also to discover which aspects might be generally applicable to other systems which are difficult for novices to learn and use.

For this between-subjects study, 13 joint honours undergraduates in Mathematics and Computer Science were divided randomly into an experimental group (7 subjects) and a control group (6 subjects). Each subject in the experimental group was provided with a version of the intelligent interface, which was removed as soon as the study was complete. Testing involved completing a standard sequence of operations to be attempted in a given order and involving frequently-used Unix commands, an improved version of that used for the pre-test since, for instance, the pre-test indicated that the operation to find on-line help should be successfully completed before any other command is attempted.

Logs were recorded for subjects and, in addition, a record was made of all the analyses and interpretations which the intelligent interface carried out for each input line, plus the responses made by the subjects to the questions provided by the tutoring module. The following summarises the results; further details are in Jerrams-Smith:
- Fewer mistakes were made by the experimental group.
- The control group made many more repeats of identical errors.
- Most members of the experimental group completed the sequence; most of the control group did not.
- Fewer commands were given by members of the experimental group.
- More misconceptions were found in the control group than in the experimental group.

Protocol analysis was carried out in which the subjects attempted to describe their intentions and activities while they carried out the standard sequence, and this was recorded on tape. However, in most cases there was a greater emphasis on debriefing because many users found it extremely difficult to talk about what they were doing while also trying to learn new and complex tasks.

This proved to be a fruitful exercise because it indicated important aspects of the intelligent interface which were not otherwise obvious. These included:
1. The directory listing provided as part of the intelligent interface provided feedback: it was used to verify that commands had acted as expected ("That works."). One debriefing exchange: "Can you make a comparison between the INFO information and the tutorial comments?" "The tutorial — it aims at the problem more closely."
2. The mode of activity may be different for learners where help is not easily available. They may adopt a more experimental approach and hence use many more commands to effect the same results.
3. There is an indication that misconceptions are corrected by the intelligent interface as soon as they appear and so are not repeated, as tends to occur in the control group.

Recent Developments

Research on adaptive intelligent interfaces, which respond to meet the needs of the individual user, is still in progress (Jerrams-Smith), and a good case has recently been made for the provision of various kinds of adaptive system (Benyon and Murray; Elsom-Cook). In addition, recent research has identified a number of potential problems endemic to hypermedia which could ultimately restrict its usefulness but which might be solved by the provision of adaptivity.

For instance, Waterworth suggests that there remain significant unresolved usability issues, while Whalley discusses a potential problem associated with hypertext when used within an educational context: the fragmented nature of a hypertext document means that the most natural way to study hypertext is by browsing or exploring the various paths of the document.

In some instances, however, the browsing activity may be an inappropriate method of learning, for example, when an author is trying to develop a series of ideas within a particular context or framework. Further problems which need to be addressed were identified by Conklin, who describes two main difficulties associated with reading hypertext documents (which apply equally to hypermedia): disorientation and cognitive overhead. Arguably, disorientation could also exist in traditional linear text documents; however, in this case the reader is limited to searching either earlier or later in the text.

Because hypertext offers more dimensions in which the user can move, the likelihood of a user becoming lost is increased, especially in a large network. The problem of cognitive overhead occurs when the user is presented with a large number of choices about which links to follow. These directional decisions are absent in a traditional linear text document, or in a film or TV programme, where the author has already made the choices for ordering of material. With hypertext (and also with hypermedia), the moment a link is encountered, the reader must decide whether or not to follow the link.

The author is currently engaged in a research programme developing a number of adaptive hypermedia systems in which a flexible and detailed user model is constructed and applied. These systems provide vehicles with which to identify the variables responsible for individual user differences and to adapt the system to accommodate user requirements in terms of attributes such as level of domain knowledge, personality, preferences, information processing styles, goals and tasks, and roles within an organisation.

Work currently in progress indicates that the following variables are important in enhancing the usability of the system. Field dependence-independence (FD-I) refers to a fundamental individual difference in information processing: the field dependent user tends to be guided by externally generated (environmental) cues. In contrast, the field independent user tends to be influenced by internally generated cues and is more discriminating in the use of environmental information.

A further variable which may be of importance to adaptivity is that of locus of control (Rotter). Two types of user can be distinguished: internals and externals. Those with an internal locus of control regard outcomes as the result of their own efforts, whereas those with an external locus of control regard outcomes as the result of factors beyond their influence.

While a system which removes control from the user may be compatible with an external user, its usability is reduced for internal locus users. Indeed, when internals are required to use restrictive systems they often ignore instructions and some ultimately cease to use the system. A variety of further user differences are being investigated as the basis of adaptivity. A more sophisticated form of adaptivity is also to be investigated: the identification of variables of particular relevance to the user and variables which are irrelevant to the user.

The HyperLearner project

Combining hypermedia technology with adaptive tutoring provides a tutoring system which can adapt the sequencing of material and can also adapt the presentation of material to suit the potentially diverse needs and abilities of individual users.

In addition, adaptive hypermedia tutoring systems are well suited to an exploratory approach to learning, and therefore encourage the active learning which has long been advocated (Piaget). The author has recently developed HyperLearner, a prototype hypermedia authoring system which has been used to help tutors to build tutorials of course material.

The long-term aim is the delivery of such a system, incorporating the results of the ongoing investigations. Current trends indicate that in the future, working and learning are likely to become increasingly home-based, and the provision of systems which enable effective distance learning is therefore a topic of considerable importance.

The HyperLearner prototypes learn about the student user and therefore adapt the interaction to suit the individual. The aim of the HyperLearner project is primarily to help the tutor, and thus to help the student to learn about a specific domain, but the prototypes have also been used to help students to learn about some of the fundamental issues connected with intelligent hypermedia tutoring systems (Jerrams-Smith). The Telecare system is designed to form a community network that will provide primary care to the elderly, housebound, disabled or otherwise disadvantaged living in the community.

The Telecare Companion investigates the provision of adaptive support for users to access information stored in hypermedia networks, including the Internet, and communicate using a video-phone and e-mail. It provides weak hypertext linearisation so that users follow an individual and ordered sequence of selected web pages. It combines techniques of user modelling with data modelling in order to select and order pages.
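The following sketch shows one way such a weak linearisation could work; the page metadata, user-model fields and function names are all invented for illustration and are not taken from the Telecare Companion itself.

```python
# Hypothetical weak linearisation: choose the pages a user is ready for and
# emit them in prerequisite order, so each user sees an individual sequence.

PAGES = {
    "intro":   {"prereqs": [],        "level": 1},
    "files":   {"prereqs": ["intro"], "level": 1},
    "pipes":   {"prereqs": ["files"], "level": 2},
    "scripts": {"prereqs": ["pipes"], "level": 3},
}

def linearise(pages: dict, user_level: int, seen: set) -> list:
    """Order unseen pages at or below the user's level, prerequisites first."""
    order, done = [], set(seen)
    progress = True
    while progress:
        progress = False
        for name, meta in pages.items():
            ready = all(p in done for p in meta["prereqs"])
            if name not in done and meta["level"] <= user_level and ready:
                order.append(name)
                done.add(name)
                progress = True
    return order

print(linearise(PAGES, user_level=2, seen={"intro"}))   # ['files', 'pipes']
```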

While a fully adaptive hypermedia system has yet to be developed, the present research programme has already made important inroads into the design and construction of such a system. It should be stressed that the present research is ongoing and the author anticipates that more sophisticated systems will emerge.

References

Bailey, R.
Bayman, P. Communications of the ACM 26(9).
Benyon, D. Artificial Intelligence Review 7.
Bocker, H-D. In Diaper, D. (ed.). North Holland.
Brown, J. International Journal of Man-Machine Studies.
Burton, R. International Journal of Man-Machine Studies 5.
Carey, T. User Differences in Interface Design. IEEE Computer.
Carroll, J. Minimalist Training. Datamation 30(18) et seq.
Clancey, W. Qualitative Student Models, 86-.
Conklin, J. Hypertext: An Introduction and Survey. Computer 20(9): 17-.
Damerau, F. Communications of the Association for Computing Machinery 7(3).
Douglass, R.
DuBoulay, B. International Journal of Man-Machine Studies 14(3).
Edwards, M. In Proceedings of Educational Multimedia and Hypermedia.
Ehrenreich, S. Human Factors.
Student Modelling in Intelligent Tutoring Systems.
Finin, T. Karlsruhe, FRG.
Hanson, S.
Hayes, P. Breaking the Man-Machine Communication Barrier. Institute of Electrical and Electronic Engineers Computer 19.
International Journal of Man-Machine Studies 19(3).
Hendley, R. Hypermedia Generation from Domain Representation. Computer Education 20(1).
Hollnagel, E. International Journal of Man-Machine Studies 18(2).
Innocent, P. Towards Self-Adaptive Interface Systems. International Journal of Man-Machine Studies 16(3).
Irgon, A.
James, E. In Coombs, M. (ed.), Computing Skills and the User Interface.
Jerrams-Smith, J. In Proceedings of Educational Multimedia and Hypermedia.
John, D. Proceedings of the 1st International Symposium on Communication Systems. Sheffield Hallam University.
Johnson, W. Intention-Based Diagnosis of Programming Errors.
Kamouri, A.
Lang, T.
Lamas, D.
Mishra, P. In Salvendy, G. (ed.), Human Computer Interaction.
Nickerson, R. Communications of the Association for Computing Machinery.
Perez, T.
Piaget, J. Memory and Intelligence.
Ramsay, H. Science Applications, Inc.
Relles, N. A User Interface for Online Assistance.
Rotter, J. Psychological Monographs 80.
Scapin, D. Human Factors 23(3).
Self, J. Student Models in Computer Aided Instruction. International Journal of Man-Machine Studies 6.
Shneiderman, B. Institute of Electrical and Electronics Engineers Computer 9.
Shortliffe, E. Elsevier: New York, NY.
Sleeman, D. Intelligent Tutoring Systems and Student Modelling. Exeter, UK. International Journal of Man-Machine Studies.
Sleeman, D. Artificial Intelligence.
Tagg, S.
Waterworth, J. Multimedia Interaction with Computers. Ellis Horwood.
Whalley, P. An Alternative Rhetoric for Hypertext. In McKnight, C. (ed.), Hypertext: A Psychological Perspective.
Wilensky, R. Communications of the ACM 27(6).
Winston, P. Addison-Wesley: Reading, MA.
Witkin, H. Greenwood Press. Review of Educational Research 1-.

Artificial Intelligence Review 23-42.

An empirical study undertaken on a cross-section of UNIX users at an academic site reveals a role for an active form of help system, rather than the more usual passive kind. Sample scripts supporting this view are presented and the kind of aid required for these examples is discussed. It is then proposed that to provide such aid requires the construction and maintenance of an individual model of each user.

Introduction

The aim of this paper is to motivate, through examples and discussion, work on active aid systems for the UNIX file store manipulation domain. Most current help systems are passive, requiring a user to explicitly ask for help on a particular topic; an active aid system would instead intervene of its own accord. Such a system is very difficult to construct, and we discuss a number of the problems and issues that arise if this route is taken.

The remainder of this section introduces a number of challenging examples used to illustrate points made in later sections. Section 2 discusses some advantages of an active help system, while Section 3 outlines the modelling problems such a system may face. In Section 4, we turn to the issues to be dealt with as a system prepares to interrupt a user. In Section 5, we summarize our approach in building a system meant to tackle some of the problems that have been highlighted in this paper, and we also discuss the results of the automatic aid, with respect to the examples introduced in the present section.

In Section 6, we give an account of recent developments in this research.

Motivating examples

This paper arises from an empirical study undertaken on a cross-section of UNIX users at an academic site. Volunteer subjects included faculty members, research staff and postgraduate students. Only the commands typed by the subjects were logged; thus, for example, error messages and the output from commands like ls were missing from the log and had to be reconstructed, as seen below.

The summaries given with these reconstructions are intended to enable a reader to rapidly absorb the content of what happened. For the same reason, we have also made comments next to some commands. These comments start with a hash symbol. These summaries and comments did not arise from interviews with the subjects, and merely present a single interpretation of what occurred.

Even so, it is quite remarkable what a wealth of information about each user lies in these examples, waiting to be extracted by an intelligent aid system.

The poplog example

Anna has a sub-directory tf of her home directory which she wishes to move into a new sub-directory, to be called poplog. This can be achieved by mkdir poplog; mv tf poplog. However, instead of using mv, she tries first to copy tf into poplog (commands 2-5), then checks that the copy worked (commands 6-8), and finally removes the original (commands 9 onwards). Alas, she is satisfied and goes on to remove the still unique original.

This command does not do anything sensible when applied to directories. Unfortunately there is no error message to alert her. The comments accompanying her subsequent commands read:

# Has a new directory been created?
# Gets an error message this time.
# Tries again without success.
# Turns to the manual.

The ht example

Graham wishes to collect all his files from his current working-directory into a new sub-directory called ht. To achieve this he creates the new directory (command 1), then copies everything (including ht!) into it.

As in the poplog example, a directory has been copied as a data-file, but in this case the user is seen to be well aware of the behaviour of cp and rm when applied to directories rather than files.

# A typing mistake.
# Rectification of the previous typo.

The popcode example

Graham has two directories, ccode and popcode, which he believes to contain only unwanted files.

He attempts to delete each directory by entering it, removing all of its files, leaving it, and then removing it. However, in popcode he discovers (we deduce from commands 6-9) that he has a sub-directory to deal with. We note how clear it is that failure of command 6 was not intended in this example, as it was in command 10 of the ht example.

# Removes all the files.

# Gets into the parent directory.
# This time there was a directory in there which was not removed.
# Has a look at the contents of the unremoved directory.
# Removes all the files in it.
# Tries to remove the directory; but the command does not apply to directories.
# The right command to remove the directory.

# Removes the two empty directories.

The perquish example

Anna wishes to collect two existing files, fred and fred1, from her home directory, together in a new directory called perqish, renaming them to iread. However, at command 3 she mis-types perqish as perquish, an error which lies latent until command 6, and uncorrected until command 19!

A version of this example is tackled in a companion paper in this volume (Jones et al.).

# Repeats command 2 without the typo this time.
# But command 11 is not identical to command 6; an oversight?
# She removes the unwanted file.
# Confirms the removal.
# Command 11 was not meant to be different from command 6 after all.

First of all, is an aid system needed at all? In what way could an aid system offer help? Are there any cases where an active help system could be more suitable to help the user than a passive one? Some answers to these questions come straight out of real-life examples like those four illustrated in the previous section.

Not realising help is needed

There are many cases where users type other than what they mean, without any error message being produced to alert them. Detecting such cases requires reasoning that is beyond the scope of UNIX or any other existing operating system, and hence a help system would be needed instead. However, even a passive help system would not have been of much use to a user who had not been aware of a problematic situation, because this user would not have turned to it; how could one expect users to ask for help when they do not even know that they need it?

This can be one of the most important advantages of an aid system. The sample scripts clearly illustrate this situation, where users make errors without being aware of them. This was clearly the case in the poplog example, as well as the perquish example. The consequences of such errors vary, depending on how soon the user realises there has been an error, and how difficult it is to recover from it, if this is at all possible. In the remainder of this subsection, we are going to discuss the consequences of this kind of error, in the way they occurred in the poplog and the perquish examples.

We are also going to point out the fact that inefficient usage of commands may well fall into the same category of problem, where users may not be aware that they possibly need help. The popcode example will be used to illustrate this point. Catastrophic error in the poplog example A simple copy and remove plan in the poplog example ended up in the single catastrophic action remove since the copy part of the plan failed without the user having realised it. The lack of an error message following this command results in command 9, where she attempts to remove a directory, without it having been copied first, contrary to what she believes.

Initially, she only intended to move the directory elsewhere in the file store. This is a very tricky case indeed, where the user continues issuing commands as though a previous goal has already been accomplished. She did not seek help in the manual before typing command 5, which was the cause of the trouble. Perhaps she thought she knew how to handle this. Afterwards, she did not even realize her mistake, and therefore would not have asked for help even if this had been available. This is a typical situation in which a spontaneous intervention of an aid system can save trouble.

In this case, it could save the user from a catastrophic error. Considerable effort wasted in the perquish example The name of a directory called perqish is mistyped as perquish in command 3. As a result, a new file named after the mistyped name is created.

At this point the user is not aware of her error. However, after command 6, she realises that there is a problem, unlike the previous example where she never even realised that something went wrong. Still, the recovery from this error proves quite expensive in terms of the number of commands. As a matter of fact, she issued 23 commands to complete her initial goal, with 10 of them (commands 7, 8, 9, 10, 11, 16, 17, 18, 19 and 23) typed in the context of recovering from the typo and the consequences of command 3.

In this case, almost half of the effort was wasted due to a small typing error. However, it may have been worse. While trying to recover from her mistake, she gets involved in new ones and this goes on for some time. This is a case where the lack of an error message from UNIX is absolutely justified, simply because the error occurs in the name of a directory or file.

When new files are created, users are free to name files and directories as they please. Hence, UNIX cannot possibly complain when the user mistypes a name in a command to create a new file or directory. An aid system, which is supposed to provide this kind of reasoning, could help the user recover quickly from an error like this. Another interesting point about this example is that this sort of typing mistake could have been made by anyone, novice or expert, suggesting that all types of users could benefit somehow from an aid system.

Would he be interested in seeing these very short command sequences, so that he could use them at some later time? Actually, many users probably feel safer using commands they are already familiar with, and do not bother to find out more about the operating system that they are using.


Instead, they try to fit their needs to their existing knowledge. In this respect, the aid system could act as a tutoring system. Again, the individuality of the user would play a significant role in deciding what help the aid system could provide.

Not knowing how to ask for help

Users may not know how to ask for help. It seems that sometimes the existing manual help is not sufficient for a user who turns to it for aid. The wealth of information may confuse the non-expert, who might pick up the wrong command for their case. There are times when users need advice for their individual case, and instead, all they can get is general information.

In the following two subsections we first illustrate the case where the manual proves inadequate, and then we describe the case where the user is looking for the wrong kind of information. Both cases occur in the poplog example.

Failure following the man command in the poplog example

Anna finds out about rmdir from the rm manual entry. In fact, she probably wanted rm -r, but did not manage to find it in the manual.

This is only an example of a case where the user finds it difficult to retrieve information from the manual. Anna was quite lucky to encounter only one failure before she used a plan which worked. There are even worse cases where some users do not even get to find the right keyword for the command that they seek.

Getting help on the wrong issue in the poplog example

Anna believes that she has to find out how to remove the directory tf at command 11 of the poplog example.

But in fact, she would not want to remove it if she knew that it had not actually been copied first. In this case, the help she got from the manual did her no good. Ironically enough, no efficient reference facility would do any better if it did not have a user model. What Anna actually needed was a help system to tell her what was really going on, instead of just giving a reply to her question.

Trying vs.

For example, Anna probably had some idea of the existence of the option -r in the poplog example. After the failure of command 9, because tf was a directory, she decided to take another guess before she issued the man command. However, she could not remember the exact name of the option she wanted, and in this case typed the completely irrelevant option -i. Perhaps this kind of user would be happier and safer if an aid system could guarantee interruptions at dangerous junctures; safety encourages experiment and experiment encourages learning.

Bad typists, especially, would probably be happier to be interrupted, even if a facility for requesting help in written English were provided, because the latter would take some time to type.

More human-like interaction

Frustration can result for some users when the operating system is not able to spot a typing error that would be most obvious to a human.

They somehow expect the computer to have some features of humans, and may get upset when commands fail because of some small typing error. For example, a human expert would easily understand what Anna meant by command 14 in the perquish example. The whole context of the command sequence, and the similarity of the wrong command to the correct one, give sufficient clues to a human.

However, intelligence is required for such an error to be spotted by the computer, especially when the typo can change the meaning of a command. This was the case in command 2 of the perquish example. A monitoring help system could make the interaction more human-like by recognizing these errors and interrupting discreetly like a human listener.

Modelling Requirements

In this section we address the issue of how accurately an aid system must model the user it serves.

Firstly, we consider how clearly it must reason about what actually happened, using a general understanding of the interactive nature of command-driven systems like UNIX. Secondly, it is shown that in order to reason about what should have happened the aid system must maintain a model of each individual user.

What happened?

For an automatic system to provide acceptable on-line aid it must show a good understanding of the user-UNIX interaction. Whatever their differences, all users interact with a shell in basically the same way; the user issues a command and UNIX may issue a reply.

This notion of interaction is important; it is not enough for an aid system to ignore the flow of information to a user and concentrate merely on the file store transformation. For example, we can strip all occurrences of ls from a command sequence leaving the effect on the machine unchanged, but this would mean that the aid system would not model the information that the user has seen. This might lead the aid system to interventions proposing rather suspect improvements. In the popcode example, the user learned that there was a sub-directory of popcode only after attempting to remove it.

If it works it definitely is better, but the problem arises if the user is imperfect in some way. All users distinguish these sequences, and therefore so must an aid system. Thus, realistic aid systems will have to employ a rich model of the interaction between a user and UNIX. To do this, a system must almost certainly make some generic assumptions about users; for example, that users are reasonably attentive, consistent, sometimes forgetful, fat-fingered, cautious, etc.

Modelling UNIX is the least of our problems. What was intended? Through an understanding of what actually happened we may hope to reason about what was intended, even though the two may be very different. If the two are significantly different, the user ought to be informed. One approach to this might be to wait for an error message to arise, and then attempt a diagnosis for the user. However, as we will see, this cannot work; some users make productive use of failure conditions, and some errors do not provoke error messages.

Providing effective aid must involve modelling individual characteristics of each user, for it is easily seen that not doing so quickly leads to confusion. What aspects of a user must we model? Consider Anna and Graham: they both attempt to remove a directory by employing the command rm without any flag. An intervention at command 9 of the poplog example (or, perhaps, command 10) might be acceptable: she plainly wants to get rid of tf but does not know how. Is this because he does not understand that rm removes only files, or because he forgot lib was a directory?

Knowing the user we would conclude the latter; in fact, later on in the ht example at command 10 the same user is making productive use of the same failure condition. However, if he had employed ls -F as command 5, we might be less sure. For example, the fact that Anna had mistyped perqish to perquish could have been suspected right away, only because the typing, lexical and syntactic structure of the command containing it was very similar to the previous one, and people tend to make this kind of typing error.

However, as we have seen, different aspects are very heavily interrelated, and an existing model provides additional strong constraint on the sensible interpretations.

Interventions

Given that there is a useful point that can be made to the user, the question remains as to how, when, and on what basis to intervene.

How an intervention is made ought to depend on whether it proposes an optimisation or a correction; attention to a correction should be mandatory whereas offers of optimisation may be ignored. In this way corrections may be given as interruptions, like error messages, comprising a piece of English text to describe the problem and a proposed correction. Optimisations should not interrupt, but might instead alter the prompt to, say, a question-mark. If the user immediately hits return then the optimisation is given; otherwise, the user is free to issue a normal command.
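A sketch of this policy as a control loop follows; read_line, analyse and run are stand-ins for the real interface machinery, and only the prompt conventions are taken from the description above.

```python
# Corrections interrupt like error messages; optimisations only change the
# prompt to '?', and are shown if the user hits return on the empty line.
# All function names here are placeholders, not an actual system's API.

def interaction_loop(read_line, analyse, run):
    pending = None                            # a deferred optimisation, if any
    while True:
        line = read_line("? " if pending else "% ")
        if pending and line == "":
            print(pending)                    # user asked to see the suggestion
            pending = None
            continue
        pending = None
        advice = analyse(line)                # -> (kind, text) or None
        if advice:
            kind, text = advice
            if kind == "correction":
                print(text)                   # mandatory interruption
                continue                      # withhold the suspect command
            if kind == "optimisation":
                pending = text                # defer; signal via the '?' prompt
        run(line)                             # pass the command to UNIX
```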

There is little point in proposing a correction or optimisation that is beyond the current understanding of the user; if too many new ideas are introduced, it will confuse the user and be rejected. Here again is a role for an individual user-model: to tailor advice to suit the user. When to intervene, again, depends on the nature of the proposal. Optimisations can be given as soon as they are discovered, since they rely only on what actually occurred.

For corrections, this may be too soon if there is not enough evidence for suspecting the command, since it seems likely that almost any command might be intended as something else. However, if there has been sufficient evidence that a problem may have occurred, then it would probably be better for the system to intervene sooner rather than later, to prevent further complications of this problem. For example, we might expect command 6 in the perquish example to be of this nature.

There is no such point in the perquish example; in fact, she does make the recovery herself, but such a point occurs in the poplog example. The basis for each intervention can, of course, be derived only from the events preceding the intervention, and it is not envisaged that we can hope to construct a dialogue between the aid system and the user; actions speak louder than words. However, the user model may also comprise a large proportion of the basis; Graham, copying a directory, is a very different matter from Anna, doing the same.

Some interventions produced by a prototype active help system for the sample scripts are presented in the following section.

Summary

An active help system could be of much use to many users, and not only novices. This is the conclusion drawn from the analysis of the sample scripts. It could save the considerable amount of time and energy wasted in recoveries from errors. It could also act as a personal tutor, allowing the user to escape needless inefficiencies. The individuality of the user is of vital importance to the help an aid system should provide.

This makes the error diagnosis more complex, since both general user models and individual ones should be taken into account for a clear interpretation of the interaction, leading to a good plan recognition. The latter is essential for a reasonably reliable error diagnosis. Interventions should be kept to a minimum: only when it is very obvious that the user needs help should this be provided, and by no means should the opposite happen. We do not describe RESCUER, a prototype active help system, in detail here; however, we demonstrate the interventions it produced when given the sample scripts presented in this paper as input.

A companion paper in this volume (Jones et al.) gives a fuller account. RESCUER deals with each command in three stages (the control flow is sketched below):
1. Recognition of a problematic situation.
2. Diagnosis of the cause of the problem, if there has been one.
3. Decision regarding the kind of advice, and generation of the response.
The recognition criteria include questions such as whether the command was acceptable to UNIX or not, whether it was typical of its class, and so on. The existence of such symptoms results in the further examination of the command typed. The process of generating an alternative interpretation contains the information needed for the construction of an explanation of what happened.
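Every function body in the following sketch is an invented placeholder, included only to show how recognition feeds diagnosis and response; none of it is RESCUER's actual logic.

```python
# Stage 1 recognises a symptom, stage 2 diagnoses its likely cause using the
# user model, stage 3 decides how to word the advice. Placeholder logic only.

KNOWN = {"ls", "rm", "cp", "mv", "mkdir", "cat"}

def recognise(line: str):
    cmd = line.split()[0]
    return None if cmd in KNOWN else f"unknown command '{cmd}'"

def diagnose(problem: str, user: dict) -> str:
    return "mistyping" if not user.get("good_typist") else "misconception"

def respond(cause: str, user: dict) -> str:
    style = "briefly" if user.get("expert") else "with an example"
    return f"Suspected {cause}; explain {style}."

def process(line: str, user: dict):
    problem = recognise(line)            # 1. recognition of a problematic situation
    if problem is None:
        return None                      # nothing suspicious: stay silent
    cause = diagnose(problem, user)      # 2. diagnosis of the cause
    return respond(cause, user)          # 3. decision and generation of response

print(process("cta file1", {"good_typist": False}))
# Suspected mistyping; explain with an example.
```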

Here we exploit the fact that plausible guesses can be turned into plausible human errors. The existence of instabilities implies a possible future transition of the current state of the file store to another state, which is expected to follow at some point. A file store is considered to be absolutely stable if it does not contain any of the following (a test along these lines is sketched after the list):
1. Empty files or directories: one would expect them to either have contents or be removed.
2. Directories with only one child: it is considered borderline how useful it is to keep a directory with only one file; one would expect it to either have more contents, or the content file to be moved one level up and the directory to be removed.
3. Duplicate files: these imply some transition from a file store state that contains identical duplicates to a file store state where the original copies will have been removed, or the replicated files will have been changed.
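The sketch below checks the three conditions above; the nested-dict file store representation and all names are assumptions made for the example.

```python
# Report the instabilities listed above: empty files or directories,
# single-child directories, and duplicate file contents.

def instabilities(tree: dict) -> list:
    found = []
    seen = {}                                   # contents -> first path seen
    def walk(node, path):
        if isinstance(node, dict):              # a directory
            if not node:
                found.append(f"empty directory: {path}")
            elif len(node) == 1:
                found.append(f"single-child directory: {path}")
            for name, child in node.items():
                walk(child, f"{path}/{name}")
        else:                                   # a file holding some contents
            if node == "":
                found.append(f"empty file: {path}")
            elif node in seen:
                found.append(f"duplicate of {seen[node]}: {path}")
            else:
                seen[node] = path
    walk(tree, "~")
    return found

# After a successful copy but before removal of the original, tf and its
# copy inside poplog show up as duplicates, i.e. a transitional state:
print(instabilities({"tf": {"a": "text"}, "poplog": {"tf": {"a": "text"}}}))
```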

The poplog example

In this case, RESCUER generates four alternatives. There are two reasons, completely independent from each other, that suggest that the user may have made an error:
1. There is a command other than the command typed that is very similar to the command typed, according to the knowledge representation of RESCUER.
2. This other command is different from the command typed in the effects that it has in the file store.
This example demonstrates the successful automatic generation of a plausible correction: RESCUER suggests a correction if there is more than one reason for believing that there may have been a mistake.

Acknowledgements

We thank Peter Norvig, Wolfgang Wahlster and Robert Wilensky for having made several helpful comments on an earlier version of this paper. We are also grateful to Stephen Hegner and Paul Mc Kevitt for their helpful comments on the current version.

References

Breuker, J. Coaching in Help Systems. In Self, J. (ed.), Artificial Intelligence and Human Learning. London, UK: Chapman and Hall.
Collins, A. Cognitive Science 1.
In Johnson, P. (ed.). Cambridge University Press.
Jones, J. In Hegner, S. (ed.).
Kemke, C.
Matthews, M.
Virvou, M.

Artificial Intelligence Review 43-88.

UC was undertaken because the task was thought to be both a fertile domain for Artificial Intelligence research and a useful application of AI work in planning, reasoning, natural language processing, and knowledge representation.

This is done by interacting with the user in natural language.


KODIAK is a relation-oriented system that is intended to have wide representational range and a clear semantics, while maintaining a cognitive appeal. UC was to function as an intelligent, natural-language interface that would allow naive users to learn about the UNIX operating system by interacting with the consultant in ordinary English. Whereas front-ends generally take the place of other interfaces, UC was intended to help the user learn how to use an existing one. We had two major motivations for choosing this task. These can be summarized by saying that we believed the task to be both interesting and doable.

It seemed to us that much natural-language work — indeed, much of AI research — has fallen into two largely non-intersecting categories. On the one hand, there are quite interesting and ambitious projects that have been more the fertile source of exciting speculations than of useful technology.

In contrast, there are projects whose scope is severely limited, either to some intrinsically bounded, real-world task or to a laboratory micro-world. But such projects have rarely produced much in the way of progress on fundamental issues that comprise the central goals of AI researchers. Our hope was that the consultation task would require us to address fundamental problems in natural-language processing, planning and problem solving, and knowledge representation, all of which are of interest to us.

In sum, virtually all the problems of language processing and reasoning arise in some fashion. While the task is interesting, it is nevertheless limited. Arbitrary knowledge of the world is generally not required, as it may be in other natural-language tasks, such as text processing. Even knowledge about the domain might be limited in ways that do not compromise the overall integrity of the system. This is probably less true of systems that are intended to be interfaces.

However, a consultant may be quite useful even if it cannot help all the time. Similarly, there are strategies that might be employed in a consultant task that further reduce the degree of coverage required by the system. For example, if asked a very specific question, it is not unreasonable that a consultant respond by telling the user where to look for the information. Thus, the degree of expertise of the consultation system may be circumscribed. The domain would limit the breadth, but not the depth, of AI research required.

UC — science or engineering?

While a lengthy exposition might be needed to define this precisely, let it suffice here to say that we are interested in modeling human beings at least to a first approximation. Thus, as far as we could, we have attempted to build a system that modeled how we believe a human consultant actually functions.

For example, since many word senses are unlikely to be used when talking to a consultant, a purely engineering approach might play down the problem of ambiguity. However, it is our goal to address such problems in a general fashion. At the same time, there were many pragmatic concessions that were made in implementing UC. Some of these were forced on us by the nature of university research. For example, a process might be divided into two components for the sake of implementation, although the particular division may not be motivated otherwise.

These components might even exercise two different approaches to similar subproblems, depending on the biases of their authors. Sometimes, for the sake of efficiency, we chose to implement only part of what we believed to be a larger process. We will make note of other such situations in the text below.

In general, when this was the case, the solution used took the form of checking for certain frequently occurring cases in order to preclude having to solve a general problem. However, we did feel that we should show that one could develop such a system along the lines that our research suggested. This would be accomplished by developing an extendible prototype.

Reasonable agents versus intelligent interfaces

Our goal in building UC is to simulate a human consultant. As a result, the system has a structure that is more complex than other so-called intelligent interfaces. Indeed, we feel that looking at such a system as an interface is misleading. Instead, we prefer the metaphor of a reasonable agent. Unlike an interface, which is a conduit through which information flows, an agent is a participant in a situation. In particular, an agent has explicit goals of its own, and a reasonable agent must be able to make obvious inferences and display judgment in making decisions. However, a reasonable agent is not always compelled to do so.

Human consultants will not obligingly give out information to which a user is not entitled or which they suspect will be put to ill use. In addition, a good consultant might do something more than simply answer a question. In all these situations, an action other than simply responding to a request is warranted. A reasonable agent is ideally suited to handle such a broad class of situations.

It does so by deciding what its goals should be in the given situation, and then planning for them. For example, when UC is asked how to crash the system, it forms two goals, one of helping the user to know what he or she wants, and one of protecting the integrity of the system. It then realizes that these two goals are in conflict, and eventually decides the conflict in favor of the latter goal. Of course, it is possible to achieve by other means various parts of the functionality here attributed to the model of a reasonable agent.

For example, one can simply build one component that tries to detect misconceptions, another that checks for requests having to do with crashing the system, yet another to capitalize on opportunities to educate the user, etc. However, the reasonable-agent framework provides a single, flexible control structure in which to accomplish all these tasks, and, in particular, deal with interactions between them. That is its engineering motivation.

Overview

The structure of this report is as follows. First, we present an outline of the structure of the current version of our consultation system. Finally, we conclude with some discussion of the deficiencies of our current design. This representation generally contains only what can be determined from the words and linguistic structures present in the utterance.
