Prof Dr Knut Hinkelmann and Dr Andreas Martin from the University of Applied Sciences and Arts Northwestern Switzerland are organising a symposium at Stanford University in March 2019 on the combination of machine learning with knowledge engineering.
How can the knowledge acquired by artificial intelligence (AI) and machine learning methods be represented in such a way that it can be explained and understood by humans? Based on this question, Prof Dr Knut Hinkelmann and Dr Andreas Martin of the Intelligent Information Systems Research Group at the University of Applied Sciences and Arts Northwestern Switzerland were appointed by the Association for the Advancement of Artificial Intelligence (AAAI) to hold a symposium at Stanford University.
The AAAI 2019 Spring Symposium on Combining Machine Learning with Knowledge Engineering (AAAI-MAKE 2019) will be held in March 2019 at Stanford University in Palo Alto (California, USA), the traditional home of the AAAI Spring Symposia. It will bring together researchers and practitioners from the various areas of machine learning and knowledge engineering to work together on a «smarter» AI that can explain its conclusions, is compliant and is based on explicit domain knowledge.
Machines can learn, but how can their knowledge be made explainable? (Photo: iStock)
AI and machine learning
In the process of digitalisation, AI is enjoying a veritable revival on the basis of machine learning. Significant advances in learning algorithms, access to computing power with suitable hardware and the availability of vast amounts of training data have recently resulted in impressive applications and significantly reduced learning effort. Data-driven machine learning, such as artificial neural networks (e.g. deep learning), is particularly suitable for complex situations in which knowledge is primarily tacit.
Tacit knowledge is not enough
To us humans, tacit knowledge means «being able to do something without being able to say how» – we can hardly explain such knowledge. However, AI approaches that rely on tacit knowledge alone are not sufficient for many scenarios and business cases, since explanations are usually expected. This is particularly the case where decisions can have severe consequences, for instance in medicine. Banks, insurance companies and the pharmaceutical industry also demand adherence to legislation and regulations – compliance being the keyword.
Explain conclusions with explicit knowledge
Explicit knowledge – knowledge that can be communicated, stored, processed and transferred – can hardly be acquired from data alone; instead, it must be represented, which is the field of knowledge engineering and knowledge representation. Knowledge-based systems that make knowledge explicit have been used for decades. Such systems are based on logic and can therefore explain their conclusions and make them understandable.
Data-driven machine learning is best suited for building AI systems based on tacit knowledge. Knowledge engineering, on the other hand, is suitable for representing expert knowledge that must be taken into consideration, for compliance reasons, for example. There is an increasing demand for combining machine learning with knowledge engineering in order to join the forces and capabilities of both AI methods. Recent results have shown that the explicit representation of domain knowledge can support data-driven machine learning approaches and improve the learning process.
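To make the idea of such a combination concrete, here is a minimal, hypothetical sketch (not taken from the symposium itself): a data-driven score, represented by a stand-in function with invented weights, is combined with explicit, human-readable compliance rules that can veto a decision and explain why. All rule names, weights and thresholds are illustrative assumptions.

```python
def learned_score(applicant):
    """Stand-in for a trained model: returns a score in [0, 1].

    The weights are invented for illustration, as if learned from data.
    A real system would use an actual trained model here.
    """
    weights = {"income": 0.6, "debt_ratio": -0.4}
    raw = (weights["income"] * applicant["income"] / 100_000
           + weights["debt_ratio"] * applicant["debt_ratio"])
    return max(0.0, min(1.0, raw))


# Explicit domain knowledge: each rule is a (name, predicate) pair.
# If a predicate fires, the application is rejected with a
# human-readable explanation - this part of the decision is explainable.
RULES = [
    ("minimum age (legal requirement)", lambda a: a["age"] < 18),
    ("debt ratio above regulatory limit", lambda a: a["debt_ratio"] > 0.8),
]


def decide(applicant, threshold=0.5):
    """Combine explicit rules with the learned score.

    Returns a (verdict, explanation) pair, so every decision
    carries a justification a human can inspect.
    """
    for name, violated in RULES:
        if violated(applicant):
            return ("reject", f"rule violated: {name}")
    score = learned_score(applicant)
    verdict = "accept" if score >= threshold else "reject"
    return (verdict, f"model score {score:.2f} vs threshold {threshold}")
```

The design point is that the rule layer is inspected first: a regulatory constraint can never be overridden by the learned score, and every rejection it produces comes with an explicit reason, which is precisely the kind of explainability the symposium addresses.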
Date: March 25-27, 2019
Location: Stanford University, Palo Alto, California, USA