Projects

The following is a list of projects I have been involved in during my work as a research assistant at Bielefeld University of Applied Sciences.

Service Robots in Smart Homes (02/2016 – 04/2019)

The project deals with the cooperation between service robots and an intelligent environment to jointly accomplish tasks in a smart home.

My activity in the project focuses on teaching virtual borders to mobile robots. Virtual borders address the problem of restricting the workspace of a mobile robot according to the users’ needs. They are non-physical borders respected by the robot and are used to change the robot’s navigational behavior. Thus, a non-expert user can interactively and flexibly restrict the workspace of his/her mobile robot. This is especially interesting in human-centered environments to exclude certain areas from working, e.g. a bathroom as a privacy zone, or to designate certain areas for working, e.g. spot vacuum cleaning. Currently, I am investigating different interaction methods that allow non-experts to interact intuitively with the robot. In addition, I am investigating how a smart home environment can support this process by leveraging additional smart home components and learning capabilities in the interaction process.
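To illustrate the basic idea, the sketch below shows how a user-defined virtual border could be incorporated into a robot’s 2D occupancy grid map so that a standard grid-based planner treats it like a wall. This is only a simplified illustration of the concept, not the implementation used in the project; all function names and parameters are assumptions for the example.

    # Illustrative sketch only: rasterize a user-defined virtual border
    # (a closed polygon in map coordinates) into a 2D occupancy grid so
    # that a grid-based planner avoids crossing it.
    import numpy as np

    OCCUPIED = 100  # ROS occupancy grid convention: 100 = lethal obstacle

    def world_to_cell(x, y, origin, resolution):
        """Convert metric map coordinates to grid indices."""
        col = int((x - origin[0]) / resolution)
        row = int((y - origin[1]) / resolution)
        return row, col

    def mark_virtual_border(grid, polygon, origin=(0.0, 0.0), resolution=0.05):
        """Mark the edges of a closed polygon (list of (x, y) points in metres)
        as occupied cells in the occupancy grid (numpy array, rows x cols)."""
        cells = [world_to_cell(x, y, origin, resolution) for x, y in polygon]
        for (r0, c0), (r1, c1) in zip(cells, cells[1:] + cells[:1]):
            # simple line rasterization between consecutive polygon vertices
            steps = max(abs(r1 - r0), abs(c1 - c0), 1)
            for t in np.linspace(0.0, 1.0, steps + 1):
                r = int(round(r0 + t * (r1 - r0)))
                c = int(round(c0 + t * (c1 - c0)))
                grid[r, c] = OCCUPIED
        return grid

    # Example: exclude a rectangular area from the robot's workspace
    grid = np.zeros((200, 200), dtype=np.int8)  # empty 10 m x 10 m map at 5 cm resolution
    border = [(2.0, 2.0), (4.0, 2.0), (4.0, 3.5), (2.0, 3.5)]
    mark_virtual_border(grid, border)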

Relevant publications:

  • D. Sprute, P. Viertel, K. Tönnies, and M. König, “Learning Virtual Borders through Semantic Scene Understanding and Augmented Reality,” in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019, p. 4607–4614.
    [BibTeX] [Abstract]

    Virtual borders are an opportunity to allow users the interactive restriction of their mobile robots’ workspaces, e.g. to avoid navigation errors or to exclude certain areas from working. Currently, works in this field have focused on human-robot interaction (HRI) methods to restrict the workspace. However, recent trends towards smart environments and the tremendous progress in semantic scene understanding give new opportunities to enhance the HRI-based methods. Therefore, we propose a novel learning and support system (LSS) to support users during teaching of virtual borders. Our LSS learns from user interactions employing methods from visual scene understanding and supports users through recommendations for interactions. The bidirectional interaction between the user and system is realized using augmented reality. A validation of the approach shows that the LSS robustly recognizes a limited set of typical areas for virtual borders based on previous user interactions (F1-Score=91.5%) while preserving the high accuracy of standard HRI-based methods with a median of Mdn=84.6%. Moreover, this approach allows the reduction of the interaction time to a constant mean value of M=2 seconds making it independent of the border length. This avoids a linear interaction time of standard HRI-based methods.

    @inproceedings{sprute:2019d,
    author = {Dennis Sprute and Philipp Viertel and Klaus T{\"o}nnies and Matthias K{\"o}nig},
    booktitle={{2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}},
    title = {{Learning Virtual Borders through Semantic Scene Understanding and Augmented Reality}},
    year = {2019},
    month={11},
    pages={4607--4614},
    abstract={Virtual borders are an opportunity to allow users the interactive restriction of their mobile robots' workspaces, e.g. to avoid navigation errors or to exclude certain areas from working. Currently, works in this field have focused on human-robot interaction (HRI) methods to restrict the workspace. However, recent trends towards smart environments and the tremendous progress in semantic scene understanding give new opportunities to enhance the HRI-based methods. Therefore, we propose a novel learning and support system (LSS) to support users during teaching of virtual borders. Our LSS learns from user interactions employing methods from visual scene understanding and supports users through recommendations for interactions. The bidirectional interaction between the user and system is realized using augmented reality. A validation of the approach shows that the LSS robustly recognizes a limited set of typical areas for virtual borders based on previous user interactions (F1-Score=91.5%) while preserving the high accuracy of standard HRI-based methods with a median of Mdn=84.6%. Moreover, this approach allows the reduction of the interaction time to a constant mean value of M=2 seconds making it independent of the border length. This avoids a linear interaction time of standard HRI-based methods.}
    }

  • D. Sprute, K. Tönnies, and M. König, “Interactive Restriction of a Mobile Robot’s Workspace in a Smart Home Environment,” Journal of Ambient Intelligence and Smart Environments, vol. 11, iss. 6, p. 475–494, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Virtual borders are employed to allow humans the interactive and flexible restriction of their mobile robots’ workspaces in human-centered environments, e.g. to exclude privacy zones from the workspace or to indicate certain areas for working. They have been successfully specified in interaction processes using methods from human-robot interaction. However, these methods often lack an expressive feedback system, are restricted to robot’s on-board interaction capabilities and require a direct line of sight between human and robot. This negatively affects the user experience and interaction time. Therefore, we investigate the effect of a smart environment on the teaching of virtual borders with the objective to enhance the perceptual and interaction capabilities of a robot. For this purpose, we propose a novel interaction method based on a laser pointer, that leverages a smart home environment in the interaction process. This interaction method comprises an architecture for a smart home environment designed to support the interaction process, the cooperation of human, robot and smart environment in the interaction process, a cooperative perception including stationary and mobile cameras to perceive laser spots and an algorithm to extract virtual borders from multiple camera observations. The results of an experimental evaluation support our hypotheses that our novel interaction method features a significantly shorter interaction time and a better user experience compared to an approach without support of a smart environment. Moreover, the interaction method does not negatively affect other user requirements concerning completeness and accuracy.

    @Article{sprute:2019c,
    author={Sprute, Dennis and T{\"o}nnies, Klaus and K{\"o}nig, Matthias},
    title={{Interactive Restriction of a Mobile Robot’s Workspace in a Smart Home Environment}},
    journal={{Journal of Ambient Intelligence and Smart Environments}},
    year={2019},
    volume={11},
    number={6},
    pages={475--494},
    month={10},
    abstract={Virtual borders are employed to allow humans the interactive and flexible restriction of their mobile robots' workspaces in human-centered environments, e.g. to exclude privacy zones from the workspace or to indicate certain areas for working. They have been successfully specified in interaction processes using methods from human-robot interaction. However, these methods often lack an expressive feedback system, are restricted to robot's on-board interaction capabilities and require a direct line of sight between human and robot. This negatively affects the user experience and interaction time. Therefore, we investigate the effect of a smart environment on the teaching of virtual borders with the objective to enhance the perceptual and interaction capabilities of a robot. For this purpose, we propose a novel interaction method based on a laser pointer, that leverages a smart home environment in the interaction process. This interaction method comprises an architecture for a smart home environment designed to support the interaction process, the cooperation of human, robot and smart environment in the interaction process, a cooperative perception including stationary and mobile cameras to perceive laser spots and an algorithm to extract virtual borders from multiple camera observations. The results of an experimental evaluation support our hypotheses that our novel interaction method features a significantly shorter interaction time and a better user experience compared to an approach without support of a smart environment. Moreover, the interaction method does not negatively affect other user requirements concerning completeness and accuracy.},
    url={https://arxiv.org/abs/1902.06997}
    }

  • D. Sprute, K. Tönnies, and M. König, “A Study on Different User Interfaces for Teaching Virtual Borders to Mobile Robots,” International Journal of Social Robotics, vol. 11, iss. 3, p. 373–388, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Human-aware robot navigation is an essential aspect to increase the acceptance of mobile service robots in human-centered environments, e.g. home environments. Robots need to navigate in a human-acceptable way according to the users’ conventions, presence and needs. In order to address the users’ needs, we employ virtual borders, which are non-physical borders and respected by the robots while working, to effectively restrict the workspace of a mobile robot and change its navigational behavior. To this end, we consider different user interfaces, i.e. visual markers, a laser pointer, a graphical user interface and a RGB-D Google Tango tablet with augmented reality application, to allow non-expert users the flexible and interactive definition of virtual borders. These user interfaces were evaluated with respect to their correctness, flexibility, accuracy, teaching effort and user experience. Experimental results show that the RGB-D Google Tango tablet as user interface yields the best overall results compared to the other user interfaces. Apart from a low teaching effort and high flexibility and accuracy, it features the highest user ratings acquired from a comprehensive user study with 25 participants for intuitiveness, comfort, learnability and its feedback system.

    @Article{sprute:2019b,
    author={Sprute, Dennis and T{\"o}nnies, Klaus and K{\"o}nig, Matthias},
    title={{A Study on Different User Interfaces for Teaching Virtual Borders to Mobile Robots}},
    journal={{International Journal of Social Robotics}},
    year={2019},
    volume={11},
    number={3},
    pages={373--388},
    month={06},
    abstract={Human-aware robot navigation is an essential aspect to increase the acceptance of mobile service robots in human-centered environments, e.g. home environments. Robots need to navigate in a human-acceptable way according to the users' conventions, presence and needs. In order to address the users' needs, we employ virtual borders, which are non-physical borders and respected by the robots while working, to effectively restrict the workspace of a mobile robot and change its navigational behavior. To this end, we consider different user interfaces, i.e. visual markers, a laser pointer, a graphical user interface and a RGB-D Google Tango tablet with augmented reality application, to allow non-expert users the flexible and interactive definition of virtual borders. These user interfaces were evaluated with respect to their correctness, flexibility, accuracy, teaching effort and user experience. Experimental results show that the RGB-D Google Tango tablet as user interface yields the best overall results compared to the other user interfaces. Apart from a low teaching effort and high flexibility and accuracy, it features the highest user ratings acquired from a comprehensive user study with 25 participants for intuitiveness, comfort, learnability and its feedback system.},
    url={https://doi.org/10.1007/s12369-018-0506-3}
    }

  • R. Rasch, D. Sprute, A. Pörtner, S. Battermann, and M. König, “Tidy up my room: Multi-agent cooperation for service tasks in smart environments,” Journal of Ambient Intelligence and Smart Environments, vol. 11, iss. 3, p. 261–275, 2019.
    [BibTeX] [Abstract] [Download PDF]

    Low-cost robots are usually specialized systems that cannot solve complex tasks, e.g., doing the laundry or tidying up. These tasks are usually solved by more complex and expensive general-purpose systems. The most common problems are the lack of sensors and actuators or computing power to perform multiple functions in parallel. The integration of robots into intelligent environments can help to solve more complex tasks by utilizing the components of the intelligent environment. Such an approach is used in the system to detect pointing gestures from a user and locating objects that a low-priced robot then collects and carries away. This approach is performed by cameras and supported by lights of the intelligent environment. The experimental results show that the cooperation of robots and smart environment increases the success rate of complex tasks in situations where the robot or components of the intelligent environment would underperform.

    @Article{rasch:2019a,
    author={Robin Rasch and Dennis Sprute and Aljoscha Pörtner and Sven Battermann and Matthias König},
    title={{Tidy up my room: Multi-agent cooperation for service tasks in smart environments}},
    journal={{Journal of Ambient Intelligence and Smart Environments}},
    year={2019},
    month={05},
    pages={261--275},
    volume={11},
    number={3},
    abstract={Low-cost robots are usually specialized systems that cannot solve complex tasks, e.g., doing the laundry or tidying up. These tasks are usually solved by more complex and expensive general-purpose systems. The most common problems are the lack of sensors and actuators or computing power to perform multiple functions in parallel. The integration of robots into intelligent environments can help to solve more complex tasks by utilizing the components of the intelligent environment. Such an approach is used in the system to detect pointing gestures from a user and locating objects that a low-priced robot then collects and carries away. This approach is performed by cameras and supported by lights of the intelligent environment. The experimental results show that the cooperation of robots and smart environment increases the success rate of complex tasks in situations where the robot or components of the intelligent environment would underperform.},
    url={https://doi.org/10.3233/AIS-190524}
    }

  • D. Sprute, K. Tönnies, and M. König, “This Far, No Further: Introducing Virtual Borders to Mobile Robots Using a Laser Pointer,” in 2019 Third IEEE International Conference on Robotic Computing (IRC), 2019, p. 403–408.
    [BibTeX] [Abstract] [Download PDF]

    We address the problem of controlling the workspace of a 3-DoF mobile robot. In a human-robot shared space, robots should navigate in a human-acceptable way according to the users’ demands. For this purpose, we employ virtual borders, that are non-physical borders, to allow a user the restriction of the robot’s workspace. To this end, we propose an interaction method based on a laser pointer to intuitively define virtual borders. This interaction method uses a previously developed framework based on robot guidance to change the robot’s navigational behavior. Furthermore, we extend this framework to increase the flexibility by considering different types of virtual borders, i.e. polygons and curves separating an area. We evaluated our method with 15 non-expert users concerning correctness, accuracy and teaching time. The experimental results revealed a high accuracy and linear teaching time with respect to the border length while correctly incorporating the borders into the robot’s navigational map. Finally, our user study showed that non-expert users can employ our interaction method.

    @inproceedings{sprute:2019a,
    author = {Dennis Sprute and Klaus T{\"o}nnies and Matthias K{\"o}nig},
    booktitle={{2019 Third IEEE International Conference on Robotic Computing (IRC)}},
    title = {{This Far, No Further: Introducing Virtual Borders to Mobile Robots Using a Laser Pointer}},
    year = {2019},
    month={02},
    pages={403--408},
    url = {https://arxiv.org/abs/1708.06274},
    abstract={We address the problem of controlling the workspace of a 3-DoF mobile robot. In a human-robot shared space, robots should navigate in a human-acceptable way according to the users' demands. For this purpose, we employ virtual borders, that are non-physical borders, to allow a user the restriction of the robot's workspace. To this end, we propose an interaction method based on a laser pointer to intuitively define virtual borders. This interaction method uses a previously developed framework based on robot guidance to change the robot's navigational behavior. Furthermore, we extend this framework to increase the flexibility by considering different types of virtual borders, i.e. polygons and curves separating an area. We evaluated our method with 15 non-expert users concerning correctness, accuracy and teaching time. The experimental results revealed a high accuracy and linear teaching time with respect to the border length while correctly incorporating the borders into the robot's navigational map. Finally, our user study showed that non-expert users can employ our interaction method.}
    }

  • D. Sprute, K. Tönnies, and M. König, “Virtual Borders: Accurate Definition of a Mobile Robot’s Workspace Using Augmented Reality,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, p. 8574–8581.
    [BibTeX] [Abstract] [Download PDF]

    We address the problem of interactively controlling the workspace of a mobile robot to ensure a human-aware navigation. This is especially of relevance for non-expert users living in human-robot shared spaces, e.g. home environments, since they want to keep the control of their mobile robots, such as vacuum cleaning or companion robots. Therefore, we introduce virtual borders that are respected by a robot while performing its tasks. For this purpose, we employ a RGB-D Google Tango tablet as human-robot interface in combination with an augmented reality application to flexibly define virtual borders. We evaluated our system with 15 non-expert users concerning accuracy, teaching time and correctness and compared the results with other baseline methods based on visual markers and a laser pointer. The experimental results show that our method features an equally high accuracy while reducing the teaching time significantly compared to the baseline methods. This holds for different border lengths, shapes and variations in the teaching process. Finally, we demonstrated the correctness of the approach, i.e. the mobile robot changes its navigational behavior according to the user-defined virtual borders.

    @inproceedings{sprute:2018b,
    author = {Dennis Sprute and Klaus T{\"o}nnies and Matthias K{\"o}nig},
    booktitle={{2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}},
    title = {{Virtual Borders: Accurate Definition of a Mobile Robot's Workspace Using Augmented Reality}},
    year = {2018},
    month ={10},
    pages={8574--8581},
    url = {https://arxiv.org/abs/1709.00954},
    abstract={We address the problem of interactively controlling the workspace of a mobile robot to ensure a human-aware navigation. This is especially of relevance for non-expert users living in human-robot shared spaces, e.g. home environments, since they want to keep the control of their mobile robots, such as vacuum cleaning or companion robots. Therefore, we introduce virtual borders that are respected by a robot while performing its tasks. For this purpose, we employ a RGB-D Google Tango tablet as human-robot interface in combination with an augmented reality application to flexibly define virtual borders. We evaluated our system with 15 non-expert users concerning accuracy, teaching time and correctness and compared the results with other baseline methods based on visual markers and a laser pointer. The experimental results show that our method features an equally high accuracy while reducing the teaching time significantly compared to the baseline methods. This holds for different border lengths, shapes and variations in the teaching process. Finally, we demonstrated the correctness of the approach, i.e. the mobile robot changes its navigational behavior according to the user-defined virtual borders.}
    }

  • A. Pörtner, L. Schröder, R. Rasch, D. Sprute, M. Hoffmann, and M. König, “The Power of Color: A Study on the Effective Use of Colored Light in Human-Robot Interaction,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, p. 3395–3402.
    [BibTeX] [Abstract] [Download PDF]

    In times of more and more complex interaction techniques, we point out the powerfulness of colored light as a simple and cheap feedback mechanism. Since it is visible over a distance and does not interfere with other modalities, it is especially interesting for mobile robots. In an online survey, we asked 56 participants to choose the most appropriate colors for scenarios that were presented in the form of videos. In these scenarios a mobile robot accomplished tasks, in some with success, in others it failed because the task is not feasible, in others it stopped because it waited for help. We analyze in what way the color preferences differ between these three categories. The results show a connection between colors and meanings and that it depends on the participants’ technical affinity, experience with robots and gender how clear the color preference is for a certain category. Finally, we found out that the participants’ favorite color is not related to color preferences.

    @inproceedings{poertner:2018a,
    author = {Aljoscha P{\"o}rtner and Lilian Schr{\"o}der and Robin Rasch and Dennis Sprute and Martin Hoffmann and Matthias K{\"o}nig},
    booktitle={{2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}},
    title = {{The Power of Color: A Study on the Effective Use of Colored Light in Human-Robot Interaction}},
    year = {2018},
    month ={10},
    pages={3395--3402},
    url = {https://arxiv.org/abs/1802.07557},
    abstract={In times of more and more complex interaction techniques, we point out the powerfulness of colored light as a simple and cheap feedback mechanism. Since it is visible over a distance and does not interfere with other modalities, it is especially interesting for mobile robots. In an online survey, we asked 56 participants to choose the most appropriate colors for scenarios that were presented in the form of videos. In these scenarios a mobile robot accomplished tasks, in some with success, in others it failed because the task is not feasible, in others it stopped because it waited for help. We analyze in what way the color preferences differ between these three categories. The results show a connection between colors and meanings and that it depends on the participants' technical affinity, experience with robots and gender how clear the color preference is for a certain category. Finally, we found out that the participants' favorite color is not related to color preferences.}
    }

  • D. Sprute, R. Rasch, A. Pörtner, S. Battermann, and M. König, “Gesture-Based Object Localization for Robot Applications in Intelligent Environments,” in 2018 International Conference on Intelligent Environments (IE), 2018, p. 48–45.
    [BibTeX] [Abstract] [Download PDF]

    Drawing attention to objects and their localization in the environment are essential building blocks for domestic robot applications, e.g. fetch-and-delivery or navigation tasks. For this purpose, human pointing gestures turned out to be a natural and intuitive interaction method to transfer the spatial data of an object from human to robot. Current approaches only use the robot’s on-board sensors to perceive gesture-based instructions, which restricts them to the field of view of the robot’s camera. The integration of mobile robots into intelligent environments, such as smart homes, opens new possibilities to overcome this limitation by utilizing components of the surrounding environment as additional sensors. We take advantage of these new possibilities and propose a multi-stage object localization system based on human pointing gestures that considers the whole intelligent environment as interaction partner. Our experimental results show that our multi-stage approach successfully refines the position initially proposed by a human pointing gesture by employing a distributed camera network integrated into the environment for object localization.

    @inproceedings{sprute:2018a,
    author={Dennis Sprute and Robin Rasch and Aljoscha Pörtner and Sven Battermann and Matthias König},
    booktitle={{2018 International Conference on Intelligent Environments (IE)}},
    title={{Gesture-Based Object Localization for Robot Applications in Intelligent Environments}},
    year={2018},
    pages={48--45},
    month={06},
    url={https://ieeexplore.ieee.org/document/8595031},
    abstract={Drawing attention to objects and their localization in the environment are essential building blocks for domestic robot applications, e.g. fetch-and-delivery or navigation tasks. For this purpose, human pointing gestures turned out to be a natural and intuitive interaction method to transfer the spatial data of an object from human to robot. Current approaches only use the robot's on-board sensors to perceive gesture-based instructions, which restricts them to the field of view of the robot's camera. The integration of mobile robots into intelligent environments, such as smart homes, opens new possibilities to overcome this limitation by utilizing components of the surrounding environment as additional sensors. We take advantage of these new possibilities and propose a multi-stage object localization system based on human pointing gestures that considers the whole intelligent environment as interaction partner. Our experimental results show that our multi-stage approach successfully refines the position initially proposed by a human pointing gesture by employing a distributed camera network integrated into the environment for object localization.}
    }

  • D. Sprute, R. Rasch, K. Tönnies, and M. König, “A Framework for Interactive Teaching of Virtual Borders to Mobile Robots,” in 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2017, p. 1175–1181.
    [BibTeX] [Abstract] [Download PDF]

    The increasing number of robots in home environments leads to an emerging coexistence between humans and robots. Robots undertake common tasks and support the residents in their everyday life. People appreciate the presence of robots in their environment as long as they keep the control over them. One important aspect is the control of a robot’s workspace. Therefore, we introduce virtual borders to precisely and flexibly define the workspace of mobile robots. First, we propose a novel framework that allows a person to interactively restrict a mobile robot’s workspace. To show the validity of this framework, a concrete implementation based on visual markers is implemented. Afterwards, the mobile robot is capable of performing its tasks while respecting the new virtual borders. The approach is accurate, flexible and less time consuming than explicit robot programming. Hence, even non-experts are able to teach virtual borders to their robots which is especially interesting in domains like vacuuming or service robots in home environments.

    @inproceedings{sprute:2017b,
    author = {Dennis Sprute and Robin Rasch and Klaus T{\"o}nnies and Matthias K{\"o}nig},
    title = {{A Framework for Interactive Teaching of Virtual Borders to Mobile Robots}},
    booktitle={{2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)}},
    pages={1175--1181},
    year = {2017},
    month={09},
    abstract={The increasing number of robots in home environments leads to an emerging coexistence between humans and robots. Robots undertake common tasks and support the residents in their everyday life. People appreciate the presence of robots in their environment as long as they keep the control over them. One important aspect is the control of a robot’s workspace. Therefore, we introduce virtual borders to precisely and flexibly define the workspace of mobile robots. First, we propose a novel framework that allows a person to interactively restrict a mobile robot’s workspace. To show the validity of this framework, a concrete implementation based on visual markers is implemented. Afterwards, the mobile robot is capable of performing its tasks while respecting the new virtual borders. The approach is accurate, flexible and less time consuming than explicit robot programming. Hence, even non-experts are able to teach virtual borders to their robots which is especially interesting in domains like vacuuming or service robots in home environments.},
    url ={http://ieeexplore.ieee.org/document/8172453/}
    }

  • D. Sprute, A. Pörtner, R. Rasch, S. Battermann, and M. König, “Ambient Assisted Robot Object Search,” in Enhanced Quality of Life and Smart Living: 15th International Conference on Smart Homes and Health Telematics (ICOST), 2017, p. 112–123.
    [BibTeX] [Abstract] [Download PDF]

    In this paper, we integrate a mobile service robot into a smart home environment in order to improve the search of objects by a robot. We propose a hierarchical search system consisting of three layers: (1) local search, (2) global search and (3) exploration. This approach extends the sensory variety of the mobile service robot by employing additional smart home sensors for the object search. Therefore, the robot is no more limited to its on-board sensors. Furthermore, we provide a visual feedback system integrated into the smart home to effectively inform the user about the current state of the search process. We evaluated our system in a fetch-and-delivery task, and the experimental results revealed a more efficient and faster search compared to a search without support of a smart home. Such a system can assist elderly people, especially people with cognitive impairments, in their home environments and support them to live self-determined in old age.

    @inproceedings{sprute:2017a,
    author={Sprute, Dennis and P{\"o}rtner, Aljoscha and Rasch, Robin and Battermann, Sven and K{\"o}nig, Matthias},
    title={{Ambient Assisted Robot Object Search}},
    booktitle={{Enhanced Quality of Life and Smart Living: 15th International Conference on Smart Homes and Health Telematics (ICOST)}},
    year={2017},
    month={08},
    publisher={Springer International Publishing},
    pages={112--123},
    abstract={In this paper, we integrate a mobile service robot into a smart home environment in order to improve the search of objects by a robot. We propose a hierarchical search system consisting of three layers: (1) local search, (2) global search and (3) exploration. This approach extends the sensory variety of the mobile service robot by employing additional smart home sensors for the object search. Therefore, the robot is no more limited to its on-board sensors. Furthermore, we provide a visual feedback system integrated into the smart home to effectively inform the user about the current state of the search process. We evaluated our system in a fetch-and-delivery task, and the experimental results revealed a more efficient and faster search compared to a search without support of a smart home. Such a system can assist elderly people, especially people with cognitive impairments, in their home environments and support them to live self-determined in old age.},
    isbn={978-3-319-66188-9},
    url={https://doi.org/10.1007/978-3-319-66188-9_10}
    }

Further information can be found at: http://www.iot-minden.de/course/seerose/


Vine Moth Monitoring (08/2015 – 02/2016)

Monitoring of vine moths is performed by evaluating special cards that serve for oviposition. For this purpose, a smartphone application was developed that counts the number of eggs on the cards and records the geographic position of each card together with the time of evaluation. Egg counting is realized by image processing algorithms applied to the images from the smartphone’s camera. The monitoring results provide detailed information about the vine moth population and offer decision support for insecticide application.

My activity in the project comprised the complete development of the smartphone application, including the image processing algorithms.
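To give an impression of the kind of image processing involved, the following is a minimal sketch of an egg-counting step, assuming the eggs appear as small dark blobs on a light card background. It is an illustration only; the actual algorithms in the app may differ, and the area thresholds are assumptions.

    # Illustrative sketch (OpenCV 4 API): count small dark blobs on a
    # photographed oviposition card. Area limits are assumptions.
    import cv2

    def count_eggs(image_path, min_area=20, max_area=500):
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)
        # Otsu thresholding separates dark egg spots from the light card
        _, mask = cv2.threshold(blurred, 0, 255,
                                cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # keep only blobs of plausible egg size to reject noise and stains
        eggs = [c for c in contours if min_area <= cv2.contourArea(c) <= max_area]
        return len(eggs)

    print(count_eggs("card_photo.jpg"))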

Relevant publications:

  • D. Sprute, A. Greif, J. Gross, C. Hoffmann, M. Rid, and M. König, “Schädlingsmonitoring des Traubenwicklers durch Auswertung einer Motten-Eiablage-Karte mittels Smartphone-Anwendung,” in Referate der 36. GIL-Jahrestagung – Intelligente Systeme – Stand der Technik und neue Möglichkeiten, Osnabrück, Deutschland, 2016, p. 201–204.
    [BibTeX] [Abstract] [Download PDF]

    Zum Schädlingsmonitoring des Traubenwicklers dienen Motten-Eiablage-Karten, auf denen die Weibchen ihre Eier ablegen. Diese Karten wurden bisher manuell durch Begutachtung ausgewertet. Dieser Beitrag beschreibt ein neues automatisiertes Auswertungsverfahren der Karten mit Hilfe einer Smartphone-App. Sie benutzt die integrierte Kamera zur bildbasierten Zählung der Eier und den GPS-Empfänger zur Bestimmung des aktuellen Aufnahmeortes. Die gewonnenen Daten (Zeit, Standort und Eieranzahl) können dazu genutzt werden, den Schädling effektiv zu überwachen, und sie bieten Entscheidungshilfen für den gezielten Einsatz von Insektiziden.

    @inproceedings{sprute:2016a,
    author = {Dennis Sprute and Anna Greif and Jürgen Gross and Christoph Hoffmann and Margit Rid and Matthias König},
    title = {{Schädlingsmonitoring des Traubenwicklers durch Auswertung einer Motten-Eiablage-Karte mittels Smartphone-Anwendung}},
    booktitle = {{Referate der 36. GIL-Jahrestagung – Intelligente Systeme – Stand der Technik und neue Möglichkeiten, Osnabrück, Deutschland}},
    pages = {201--204},
    year = {2016},
    month = {02},
    isbn = {978-3-88579-647-3},
    url = {http://subs.emis.de/LNI/Proceedings/Proceedings253/article5.html},
    abstract = {Zum Schädlingsmonitoring des Traubenwicklers dienen Motten-Eiablage-Karten, auf denen die Weibchen ihre Eier ablegen. Diese Karten wurden bisher manuell durch Begutachtung ausgewertet. Dieser Beitrag beschreibt ein neues automatisiertes Auswertungsverfahren der Karten mit Hilfe einer Smartphone-App. Sie benutzt die integrierte Kamera zur bildbasierten Zählung der Eier und den GPS-Empfänger zur Bestimmung des aktuellen Aufnahmeortes. Die gewonnenen Daten (Zeit, Standort und Eieranzahl) können dazu genutzt werden, den Schädling effektiv zu überwachen, und sie bieten Entscheidungshilfen für den gezielten Einsatz von Insektiziden.}
    }

Further information can be found at: http://www.iot-minden.de/course/vine-moth-monitoring/


Fall and Activity Recognition in Smart Homes (05/2014 – 08/2015)

The project focuses on the development of a fall detection and activity recognition system and the integration of this system into a smart home environment. It is based on a small customized wearable device attached to the user’s waist. Activity recognition is performed locally on the wearable, despite its restricted computational capabilities, in real time. A special focus is set on porting appropriate machine learning algorithms onto the chip. The smart home integration of the system allows urgent support for fallen people as well as a fast response of the intelligent environment to the user’s current activity.

My activity in the project encompassed the hardware development and the implementation of machine learning algorithms on the chip to recognize basic human activities.
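To illustrate the principle, the sketch below computes a few time-domain features over a window of tri-axial accelerometer samples and classifies them with a small hand-written decision tree. It is a simplified Python illustration of the idea; the wearable’s firmware runs on a microcontroller, and the features and thresholds shown here are assumptions, not the trained C4.5 tree from the publications.

    # Illustrative sketch: time-domain features over an accelerometer
    # window plus a toy decision tree. Thresholds are assumptions.
    import math

    def features(window):
        """window: list of (ax, ay, az) samples in g."""
        mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in window]
        mean = sum(mags) / len(mags)
        var = sum((m - mean) ** 2 for m in mags) / len(mags)
        return {
            "mean": mean,                      # average magnitude
            "std": math.sqrt(var),             # movement intensity
            "min": min(mags),                  # free-fall dips towards 0 g
            "max": max(mags),                  # impact peaks
            "range": max(mags) - min(mags),
        }

    def classify(f):
        """Toy stand-in for a trained decision tree."""
        if f["max"] > 2.5 and f["min"] < 0.4:  # free fall followed by impact
            return "fall"
        if f["std"] < 0.05:
            return "resting"
        if f["std"] < 0.3:
            return "walking"
        return "running"

    window = [(0.0, 0.0, 1.0)] * 50            # 50 samples of lying still
    print(classify(features(window)))          # -> "resting"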

Relevant publications:

  • D. Sprute and M. König, “On-Chip Activity Recognition in a Smart Home,” in 12th International Conference on Intelligent Environments (IE), 2016, p. 95–102.
    [BibTeX] [Abstract] [Download PDF]

    This paper proposes a novel activity recognition system that is integrated into a smart home environment. It is characterized by low costs, high energy efficiency and low intrusiveness to increase the acceptance of the users. Activity recognition is performed locally on a single small-sized wearable device incorporating a microprocessor and a tri-axial accelerometer. After investigating on different feature sets and classification algorithms, the final implementation only considers five time domain features and a C4.5 decision tree classifier resulting in an immediate response. This wearable device is successfully integrated into an intelligent environment by Bluetooth Low Energy wireless communication protocol and openHAB as platform-independent integration software. The integration of the system into the home environment allows reactions of the home depending on the activity which enriches the life quality of the residents. Additionally, the system covers fall detection that enables the home to provide a fallen person with urgent support.

    @inproceedings{sprute:2016b,
    author = {Dennis Sprute and Matthias König},
    title = {{On-Chip Activity Recognition in a Smart Home}},
    booktitle = {{12th International Conference on Intelligent Environments (IE)}},
    pages={95--102},
    year = {2016},
    month = {09},
    url = {http://ieeexplore.ieee.org/document/7723476/},
    abstract = {This paper proposes a novel activity recognition system that is integrated into a smart home environment. It is characterized by low costs, high energy efficiency and low intrusiveness to increase the acceptance of the users. Activity recognition is performed locally on a single small-sized wearable device incorporating a microprocessor and a tri-axial accelerometer. After investigating on different feature sets and classification algorithms, the final implementation only considers five time domain features and a C4.5 decision tree classifier resulting in an immediate response. This wearable device is successfully integrated into an intelligent environment by Bluetooth Low Energy wireless communication protocol and openHAB as platform-independent integration software. The integration of the system into the home environment allows reactions of the home depending on the activity which enriches the life quality of the residents. Additionally, the system covers fall detection that enables the home to provide a fallen person with urgent support.}
    }

  • A. Pörtner, D. Sprute, A. Weinitschke, and M. König, “Integration of a fall detection system into the intelligent building,” in 45. Jahrestagung der Gesellschaft für Informatik, Cottbus, Deutschland, 2015, p. 191–202.
    [BibTeX] [Abstract] [Download PDF]

    Health monitoring and the integration of such systems into homely environments can support healthy aging. In this paper, we focus on the integration of a fall detection system into the intelligent building. We present a low intrusive way to detect falls and locate people inside a building based on the low power wireless technology Bluetooth Smart and a concept how well designed human computer interaction (HCI) concepts inside a building can help to save lives or at least prevent people of heavy injuries.

    @inproceedings{poertner:2015,
    author = {Aljoscha Pörtner and Dennis Sprute and Alexander Weinitschke and Matthias König},
    title = {Integration of a fall detection system into the intelligent building},
    booktitle = {{45. Jahrestagung der Gesellschaft für Informatik, Cottbus, Deutschland}},
    pages = {191--202},
    year = {2015},
    month = {09},
    isbn = {978-3-88579-640-4},
    url = {http://subs.emis.de/LNI/Proceedings/Proceedings246/article43.html},
    abstract = {Health monitoring and the integration of such systems into homely environments can support healthy aging. In this paper, we focus on the integration of a fall detection system into the intelligent building. We present a low intrusive way to detect falls and locate people inside a building based on the low power wireless technology Bluetooth Smart and a concept how well designed human computer interaction (HCI) concepts inside a building can help to save lives or at least prevent people of heavy injuries.}
    }

  • M. König, H.-J. Lakomek, A. Pörtner, and D. Sprute, “Smart Fall: Entwicklung eines Systems zur Sturz- und Aktivitätenerkennung im Smart Home,” Orthopädie Technik, p. 30–35, 2017.
    [BibTeX] [Abstract] [Download PDF]

    Das Projekt „Smart Fall“ beschäftigt sich mit einem kostengünstigen System zur Erkennung von Aktivitäten und Stürzen älterer Menschen und der Einbindung des Systems in einen Smart-Home-Kontext. Das entwickelte System umfasst zwei wesentliche Komponenten: Ein sogenanntes Wearable dient als Sensorik zur Erkennung von Stürzen und Aktivitäten einer Person, während eine Empfangskomponente zur Kopplung an das Smart Home dient. Beide Komponenten kommunizieren funkbasiert miteinander. Die Erkennung von Stürzen und eine damit verbundene Alarmierung im Notfall betrifft insbesondere ältere Menschen, die sich möglicherweise nach einem Sturz nicht mehr selbst helfen können. Das Thema hat ebenfalls eine starke Relevanz für Menschen in häuslicher Pflege und in Pflegeeinrichtungen.

    @article{koenig:2017a,
    author = {Matthias K{\"o}nig and H.-J. Lakomek and Aljoscha P{\"o}rtner and Dennis Sprute},
    title = {{Smart Fall: Entwicklung eines Systems zur Sturz- und Aktivitätenerkennung im Smart Home}},
    journal={{Orthopädie Technik}},
    pages={30--35},
    year = {2017},
    month={09},
    abstract={Das Projekt „Smart Fall“ beschäftigt sich mit einem kostengünstigen System zur Erkennung von Aktivitäten und Stürzen älterer Menschen und der Einbindung des Systems in einen Smart-Home-Kontext. Das entwickelte System umfasst zwei wesentliche Komponenten: Ein sogenanntes Wearable dient als Sensorik zur Erkennung von Stürzen und Aktivitäten einer Person, während eine Empfangskomponente zur Kopplung an das Smart Home dient. Beide Komponenten kommunizieren funkbasiert miteinander. Die Erkennung von Stürzen und eine damit verbundene Alarmierung im Notfall betrifft insbesondere ältere Menschen, die sich möglicherweise nach einem Sturz nicht mehr selbst helfen können. Das Thema hat ebenfalls eine starke Relevanz für Menschen in häuslicher Pflege und in Pflegeeinrichtungen.},
    url={http://www.iot-minden.de/wp-content/uploads/2017/09/OT0917_K%C3%B6nig.pdf}
    }

  • D. Sprute, A. Pörtner, A. Weinitschke, and M. König, “Smart Fall: Accelerometer-Based Fall Detection in a Smart Home Environment,” in Inclusive Smart Cities and e-Health: 13th International Conference on Smart Homes and Health Telematics (ICOST), 2015, p. 194–205.
    [BibTeX] [Abstract] [Download PDF]

    The detection of falls in an elderly society is an active field of research because of the enormous costs caused by falls. In this paper, Smart Fall is presented. It is a new accelerometer-based fall detection system integrated into an intelligent building. The developed system consists of two main components. Fall detection is realized inside a small customized wearable device that is characterized by low costs and low-energy consumption. Additionally, a receiver component is implemented which serves as mediator between the wearable device and a Smart Home environment. The wireless connection between the wearable and the receiver is performed by Bluetooth Low Energy (BLE) protocol. OpenHAB is used as platform-independent integration platform that connects home appliances vendor- and protocol-neutral. The integration of the fall detection system into an intelligent home environment offers quick reactions to falls and urgent support for fallen people.

    @inproceedings{sprute:2015,
    author ={Sprute, Dennis and Pörtner, Aljoscha and Weinitschke, Alexander and König, Matthias},
    title ={{Smart Fall: Accelerometer-Based Fall Detection in a Smart Home Environment}},
    booktitle ={{Inclusive Smart Cities and e-Health: 13th International Conference on Smart Homes and Health Telematics (ICOST)}},
    publisher ={Springer International Publishing},
    year ={2015},
    month ={06},
    isbn ={978-3-319-19311-3},
    pages ={194--205},
    url = {http://link.springer.com/chapter/10.1007%2F978-3-319-19312-0_16},
    abstract = {The detection of falls in an elderly society is an active field of research because of the enormous costs caused by falls. In this paper, Smart Fall is presented. It is a new accelerometer-based fall detection system integrated into an intelligent building. The developed system consists of two main components. Fall detection is realized inside a small customized wearable device that is characterized by low costs and low-energy consumption. Additionally, a receiver component is implemented which serves as mediator between the wearable device and a Smart Home environment. The wireless connection between the wearable and the receiver is performed by Bluetooth Low Energy (BLE) protocol. OpenHAB is used as platform-independent integration platform that connects home appliances vendor- and protocol-neutral. The integration of the fall detection system into an intelligent home environment offers quick reactions to falls and urgent support for fallen people.}
    }

  • D. Sprute, Activity Recognition in a Smart Home Using Machine Learning Methods, Master’s thesis, Bielefeld University of Applied Sciences, 2015.
    [BibTeX] [Abstract]

    This thesis proposes a novel activity recognition system that is integrated into the smart home environment. It is characterized by low costs, high energy efficiency and low intrusiveness to increase the acceptance of the users. Activity recognition is performed online on a small-sized wearable device incorporating a microprocessor and a tri-axial accelerometer. After investigating on different feature sets and classification algorithms, the final implementation only considers five time domain features and a C4.5 decision tree classifier achieving a recall and precision of 96.4%. This wearable device is successfully integrated into the intelligent environment by Bluetooth Low Energy wireless communication protocol and openHAB as platform-independent integration software. The integration of the system into the home environment allows reactions of the home depending on the activity which enriches the life quality of the residents. Additionally, the system covers fall detection that enables the home to provide a fallen person with urgent support.

    @misc{sprute:2015a,
    author ={Sprute, Dennis},
    title ={{Activity Recognition in a Smart Home Using Machine Learning Methods}},
    note ={Master's thesis, Bielefeld University of Applied Sciences},
    year ={2015},
    month ={08},
    abstract ={This thesis proposes a novel activity recognition system that is integrated into the smart home environment. It is characterized by low costs, high energy efficiency and low intrusiveness to increase the acceptance of the users. Activity recognition is performed online on a small-sized wearable device incorporating a microprocessor and a tri-axial accelerometer. After investigating on different feature sets and classification algorithms, the final implementation only considers five time domain features and a C4.5 decision tree classifier achieving a recall and precision of 96.4%. This wearable device is successfully integrated into the intelligent environment by Bluetooth Low Energy wireless communication protocol and openHAB as platform-independent integration software. The integration of the system into the home environment allows reactions of the home depending on the activity which enriches the life quality of the residents. Additionally, the system covers fall detection that enables the home to provide a fallen person with urgent support.}
    }

Further information can be found at: http://www.iot-minden.de/course/smart-fall/