This thesis focuses on the modelling of knowledge-intensive computer vision tasks. Knowledge-intensive tasks are tasks that require a high level of expert knowledge to be performed successfully. Such tasks are generally performed by a task expert. Task experts have a lot of experience in performing their task and can be a valuable source of information for the automation of the task.

We propose a framework for creating a white-box, ontology-based computer vision application. White-box methods have the property that the internal workings of the system are known and transparent: they can be understood in terms of the task domain. An application that is based on explicit expert knowledge has a number of inherent advantages, among which are corrigibility, adaptability, robustness, and reliability.

We propose a design method for developing white-box computer vision applications that consists of the following steps: (i) define the scope of the task and the purpose of the application, (ii) decompose the task into subtasks, (iii) define and refine application ontologies that contain the descriptive knowledge of the expert, (iv) identify computational components, (v) specify explicit procedural knowledge rules, and (vi) implement the algorithms required by the procedural knowledge.

The scope is one of the cornerstones of the application, since it sets the boundaries of the task. The problem owner and the domain experts are together responsible for setting the scope and defining the purpose. Scope and purpose are important for the task decomposition and for the specification of the application ontologies; they help the domain engineer to keep focus in creating dedicated ontologies for the application. The decomposition of the task into subtasks models the domain expert's "observe – interpret – assess" way of performing a visual inspection task. This decomposition leads to a generic framework of subtasks alternated with application ontologies.
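The idea of subtasks alternated with application ontologies can be sketched as a simple pipeline in which every intermediate result is expressed in domain terms and therefore remains inspectable. This is a minimal illustration only; the function names and stage bodies below are hypothetical placeholders, not the thesis's actual components.

```python
# Sketch of a white-box inspection pipeline: each stage consumes and
# produces domain-level data, so every intermediate result is visible.
def run_inspection(image, stages):
    """Apply the subtasks in order, recording each intermediate result."""
    trace = [("input", image)]
    result = image
    for name, stage in stages:
        result = stage(result)
        trace.append((name, result))  # white-box: every step is inspectable
    return result, trace

# Hypothetical stages mirroring the generic subtask sequence
stages = [
    ("record object", lambda x: {"image": x}),
    ("find structures", lambda d: {**d, "structures": ["stem", "leaf"]}),
    ("identify object parts", lambda d: {**d, "parts": d["structures"]}),
    ("determine parameters", lambda d: {**d, "stem_length_mm": 42.0}),
    ("determine quality",
     lambda d: "first class" if d["stem_length_mm"] > 30.0 else "reject"),
]

quality, trace = run_inspection("raw pixels", stages)
```

Because the trace pairs each subtask name with its domain-level output, an end user can see exactly where an assessment went wrong, which is what makes corrigibility feasible in practice.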
The list of consecutive subtasks – record object, find structures, identify object parts, determine parameters, determine quality – can be reused for any visual inspection task. Application ontologies are task-specific ontologies containing the descriptive knowledge relevant for the task.

We have described an interview-based knowledge acquisition method that is suited for modelling multi-domain, multi-expert task-specific ontologies. Using the knowledge of multiple experts leads to a rich application ontology; adding the outsider's perspective of domain experts from other involved domains leads to the expression of knowledge that may be too trivial for task experts to mention, or that may not be part of their usual perspective. Knowledge acquisition based on interviews and observations alone has some disadvantages: it takes a lot of modelling time for the domain expert and the knowledge engineer, it is difficult for the domain expert to give a structured and full overview of his knowledge, and a model is created from scratch, even though reusable sources may exist. We have therefore introduced a reuse-based ontology construction component that gives the domain expert a more prominent and active role in the knowledge acquisition process. This component prompts the domain expert with terms from existing knowledge sources to help him create a full overview of his knowledge. We show that this method is an efficient way to obtain a semi-formal description of the domain knowledge.

With the decomposition of the knowledge-intensive task into subtasks interspersed with descriptive knowledge models completed, we focus on the subtasks. Each of these subtasks can be represented by a sequence of components that perform a clearly defined part of a task. To specify these components, we explicitly identify for each service in the computational workflow (i) the input concepts, (ii) the output concepts, and (iii) a human-readable (high-level) description of the service.
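The three-part service specification, input concepts, output concepts, and a human-readable description, can be captured in a small data structure. A minimal sketch, assuming Python; the class name, concept names, and the chaining check are hypothetical illustrations, not the thesis's actual formalism.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class ServiceSpec:
    """Documentation record for one service in the computational workflow."""
    name: str
    input_concepts: Tuple[str, ...]   # ontology concepts the service consumes
    output_concepts: Tuple[str, ...]  # ontology concepts the service produces
    description: str                  # human-readable, high-level description

# Hypothetical specification for a service of the "find structures" subtask
find_structures = ServiceSpec(
    name="find structures",
    input_concepts=("RecordedImage",),
    output_concepts=("PlantStructure",),
    description="Segment the recorded image into candidate plant structures.",
)

def chain_ok(a: ServiceSpec, b: ServiceSpec) -> bool:
    """Check that service b can consume what service a produces."""
    return set(b.input_concepts) <= set(a.output_concepts)
```

Recording the input and output concepts explicitly also lets a tool verify that consecutive services in a workflow actually fit together, as the `chain_ok` helper suggests.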
This information is used as documentation for the procedural knowledge. Besides transparency of descriptive knowledge, transparency of processing knowledge is a desirable feature of a knowledge-intensive computer vision system. We show that blindly embedding software components in a transparent way may have an adverse effect: in some cases, transparency is not useful or desired. To support the software developer in making a balanced decision on whether transparency is called for, we have proposed a set of decision criteria – availability of expertise, application range of a component, triviality, explanation, and availability of third-party expertise. These decision criteria are paired with means of adding transparency to an application. We have elaborated several examples from the horticultural case study to show which transparency decisions are made, and for which reasons.

Using the framework for designing knowledge-intensive computer vision applications, we have implemented a prototype system to automatically assess the quality of tomato seedlings. We have shown that the proposed design method indeed results in a white-box system that has adaptability, corrigibility, reliability, and robustness as properties. We provide guidelines on how to implement tool support for the adaptability and corrigibility properties of the system, to better assist the end users of the application. Moreover, we show how organisational learning and building trust in the system are supported by the white-box setup of the computer vision application.
The Duhem-Quine Thesis Reconsidered - Part Two - …
As Fred Dretske is reputed to have said, ‘one man’s modus ponens is another man’s modus tollens’. There is nothing – neither in logic nor in experience – which forces us to accept any observation. We could even adopt a strong anti-realist position that nothing but our experiences exists and abandon any pretense of explaining them. However, questions of when and why we might assent to a particular scientific hypothesis or falsifying observation were the subject of . If the Duhem-Quine problem is reducible to the claim that any purported falsification may be mistaken, then I contend that it’s a trivial thesis. It is especially perverse to bring this objection against Popper, who not only stressed the corrigibility of observation, but was also a thoroughgoing fallibilist.
Those who will not concede the corrigibility of their beliefs must directly equate disagreement and error, and fit their explanation of error on the heads of all critics and dissenters.
Here are the types to follow:
Justify the beginning
On first inspection, it is plain that there are two primordial approaches to take.
Begin with metaphilosophy
Leave the beginning unjustified
Begin with a truth which needs no proof
Begin with the self-evident
Begin with faith
Accept a circular beginning
Begin with a self-justifying principle
Justify the beginning by the middle or end
Identify the beginning with the end; prove the beginning by proving the end
Regress to the logical beginning from wherever one starts
Render coherent what you find
Reject ideal of certainty, insist on corrigibility
Begin with the presuppositionless
Begin with something negative
Begin with error, not truth
Begin with something non-cognitive, not knowledge
Begin with the idea of the first cause
Begin with realities, not the ideas of realities
Reject the problem