Model-Driven Knowledge Engineering for Improved Software Modularity in Robotics and Automation
at European Robotics Forum 2015
Vienna, Austria, March 11-13, 2015


Dates & Deadlines

  • March 1 (extended from February 21): Submission deadline for extended abstracts
  • March 6 (extended from March 1): Notification of acceptance
  • March 13: Workshop date

Theme & Goals

Robotics is a growing discipline in which developers with different backgrounds and focuses work together. Today, robotic systems usually consist of a particular hardware platform with a specific software architecture. Knowledge about sensor interaction, constraints, and further information is coded into the architecture or into certain modules. ROS is one step towards connecting different robotic algorithms and software components through a uniform interface. Nevertheless, such knowledge is still encoded in individual solutions without harmonized interfaces and model descriptions.

Model-Driven Engineering (MDE) already has a huge impact on other fields. Currently, there are various approaches to the engineering of robotics applications, but widely applicable and thus accepted approaches have yet to emerge. This is in part due to the system lock-in that comes with the selection of a (modeling) framework. Improving the reuse and modularity of robotics applications requires modeling explicitly the implicit knowledge encapsulated in current robotics modules and models. Applying knowledge engineering (KE) to model-driven robotics development will ease reuse and enable more efficient robotics software engineering.

Topics of Interest

This workshop aims to bring together researchers from two different fields: on the one hand, frameworks, languages, and tools for MDE have been developed; on the other hand, robotics systems consist of an increasing amount of heterogeneous software that contains new, exploitable knowledge about its properties and composition. The demands on robotics software regarding reusability, reliability, expandability, and efficiency are very high, hence suitable modeling techniques are required to achieve high-quality software products. Furthermore, the programming costs for robots in industrial lines as well as for service robotics applications are continuously increasing. To reduce such costs via reuse, methods developed in MDE and KE should be applied exhaustively to robotics software engineering.

Robotics software engineering faces various challenges (e.g., software architecture, communication, motion planning). This workshop aims to provide a platform for the presentation of novel approaches that tackle these challenges by means of MDE and KE, and that show how these help to reduce cost and time in the development process. The presented methods may range from modeling languages and tools for very specific aspects, to knowledge-aware modeling languages, to complete frameworks in the robotics domain. The scope of this workshop includes, but is not limited to:

  • Integration of knowledge engineering with architecture and deployment modeling
  • Composition of modules and components with the help of knowledge engineering
  • Modeling languages for knowledge engineering
  • Toolchains for the knowledge-aware modeling of robotics applications
  • Applications of knowledge engineering to models at run-time and self-* properties
  • Knowledge-driven model transformations between languages and frameworks

Workshop Program

The workshop will be held on Friday, March 13, from 10:45 to 12:15 in the SCHUNK room.

  • 10:45 - 10:50 Opening & Introduction
  • 10:50 - 11:20 Keynote Speeches
    • Michael Beetz: openEASE --- A Knowledge Processing Service for Robots and Robotics Researchers
    • Christian Berger: Combining Model-based Simulations with Real-World Sensor Recordings - Challenges and Opportunities
  • 11:20 - 11:50 Authors' Presentations
    • Dominick Vanthienen and Herman Bruyninckx. The Composition Pattern for model-driven robot application development
    • Stefan Schiffer, Alexander Ferrein and Gerhard Lakemeyer. Abstracting Away Low-Level Details in Service Robotics with Fuzzy Fluents
    • Rasmus Hasle Andersen, Anders Billesø Beck, Lars Dalgaard and John Hallam. Architecture for Efficient Reuse in Industrial Settings
  • 11:50 - 12:15 Discussion and Closing

Michael Beetz: openEASE --- A Knowledge Processing Service for Robots and Robotics Researchers

Prof. Michael Beetz is a professor of Computer Science at the Faculty of Informatics of the University of Bremen and head of the Institute for Artificial Intelligence. From 2006 to 2011, he was vice coordinator of the German national cluster of excellence CoTeSys (Cognition for Technical Systems), where he was also co-coordinator of the research area “Knowledge and Learning”.

Michael Beetz received his diploma degree in Computer Science with distinction from the University of Kaiserslautern. He received his MSc, MPhil, and PhD degrees from Yale University in 1993, 1994, and 1996, and his Venia Legendi from the University of Bonn in 2000. Michael Beetz was a member of the steering committee of the European network of excellence in AI planning (PLANET) and coordinated the research area "robot planning". He is an associate editor of the AI Journal. His research interests include plan-based control of robotic agents, knowledge processing and representation for robots, integrated robot learning, and cognitive perception.

openEASE --- A Knowledge Processing Service for Robots and Robotics Researchers

Making future autonomous robots capable of accomplishing human-scale manipulation tasks requires us to equip them with knowledge and reasoning mechanisms. We propose openEASE, a remote knowledge representation and processing service that aims at facilitating these capabilities. openEASE provides its users with unprecedented access to the knowledge of leading-edge autonomous robotic agents. It also provides the representational infrastructure to make inhomogeneous experience data from robot and human manipulation episodes semantically accessible, as well as a suite of software tools that enable researchers and robots to interpret, analyze, visualize, and learn from the experience data. Using openEASE, users can retrieve the memorized experiences of manipulation episodes and ask queries regarding what the robot saw, reasoned, and did, as well as how the robot did it, why, and what effects it caused.

Christian Berger: Combining Model-based Simulations with Real-World Sensor Recordings - Challenges and Opportunities

Dr. Christian Berger is an assistant professor in the Department of Computer Science and Engineering at the University of Gothenburg, Sweden. He received his Ph.D. from RWTH Aachen University, Germany in 2010 for his work on challenges in the software engineering of self-driving vehicles, conducted together with academic and industrial partners such as the University of California, Berkeley and the Volkswagen Group. His research expertise is in simulative approaches, formal methods, and model-based software engineering. He coordinated the interdisciplinary project for the development of the autonomously driving vehicle “Caroline”, which participated in the 2007 DARPA Urban Challenge Final in the United States. Currently, he is coordinating a research initiative on self-driving vehicles at Chalmers and the University of Gothenburg in Sweden. He has published more than 60 peer-reviewed articles in workshops, conferences, journals, and books.

Combining Model-based Simulations with Real-World Sensor Recordings - Challenges and Opportunities

Virtual testing plays a key role for today’s increasingly intelligent robot cars. It complements testing on proving grounds and on public roads by enabling cost-effective, automated testing with many scenarios that differ only slightly, while preserving the repeatability needed to study effects on the software when external or internal variables of the system change. However, virtual testing also depends on the models embodied in the simulation environment, which themselves rely on assumptions about reality. Therefore, combining virtual testing with real-world testing is meaningful, as it also captures effects that have not been modeled in the virtual test environment. This talk will present an approach that combines models from virtual test environments with real-world recordings to show possible application domains.

Submission Guidelines

All submitted papers will be reviewed by the program committee on the basis of technical quality, relevance, significance, and clarity. All workshop papers should be submitted electronically in PDF format through the EasyChair workshop website. The workshop results will be published on the RWTH open-access publication server.

Organizing Committee

Program Committee