The project was supported by the Swiss National Science Foundation and realised in the Pervasive and Artificial Intelligence Research Group, Department of Informatics, University of Fribourg, Switzerland.
This project presents a framework that offers tools for the design and implementation of ubiquitous computing systems supporting user motions, activities and situations. With the rapid development of context-aware mobile computing and sensor-based interaction, many new challenges arise, three of which are specifically addressed in this thesis. The first is the need for holistic tools to develop ubiquitous computing infrastructures. The second concerns smart applications allowing users to benefit from the distributed computing power in their environment, and the third is the integration of enriched human-computer interaction using motions, activities and situations provided by the increasing sensing capability of the user environment and mobile devices.

We propose the uMove framework, a comprehensive solution supporting the design and development of Ubicomp systems that represent different kinds of physical or virtual environments based on a systemic approach. uMove provides both theoretical foundations and implementation tools and is divided into three specific facets. The first facet is the conceptual model describing a ubiquitous computing system made of entities and observers within their physical or logical environment. The second facet is a system architecture which offers designers and developers the tools to define, at a theoretical level, a logical system, including the types of contexts taken into consideration. The third facet is a set of development tools that allow programmers to implement their systems, sensors, applications and services. The uMove framework is evaluated and validated in an iterative manner through four projects.
The main contribution of this project is the creation of a comprehensive development framework for Ubicomp systems and context-aware applications. The specific contributions are:
System modelling
The approach chosen for our model of the environment follows systemic concepts, and the semantic model is based on Von Bertalanffy's General System Theory (GST) (BERTALANFFY 74). Von Bertalanffy was a biologist who developed a theory generalising the definitions of systems used in specific scientific disciplines such as physics, (bio-)chemistry, mathematics, biology, economics or the social sciences. A modelled environment thus becomes a system in the systemic sense.
A system models the physical or virtual world where objects (entities), possibly living things capable of motion, interact naturally with their environment and are observed by agents (observers).
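To make the systemic vocabulary concrete, the following Java sketch shows one possible shape for entities, observers and the system itself. All type and method names are illustrative assumptions made for this summary, not the uMove API.

    // Illustrative sketch of the systemic model; names are hypothetical.
    import java.util.List;

    /** Anything observable in the modelled environment (user, place, object). */
    interface Entity {
        String getId();
        Location getLocation();        // current position in the environment
        Motion getMotion();            // current kinetic state
        List<Entity> getChildren();    // entities are organised hierarchically
    }

    /** An agent watching entities and reacting to their context changes. */
    interface Observer {
        void entityChanged(Entity entity);
    }

    /** The modelled environment: entities observed by observers. */
    interface UbicompSystem {
        Entity getRoot();                      // e.g. the world or a building
        void addObserver(Observer observer);
    }

    // Simple value types assumed for the sketch.
    record Location(double x, double y, double z) {}
    record Motion(double speed, double headingDegrees) {}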
System architecture
Based on the semantic model, we propose an architecture that defines different layers of abstraction: a system made of interacting entities, the sensors gathering the different entity contexts, the system observation, and the context-aware applications which handle the events received from the sensors.
We also present a methodology to evaluate the design and component architecture of a Ubicomp system and application, ensuring that the various algorithms, strategies, inferences (of activities or contexts) and sensors operate together smoothly, satisfy user requirements, take into account technical and infrastructure limitations, and form a coherent and comprehensive system.
Implementation tools
Once modelled and validated, a system can be implemented with a set of Java-based programming tools. We developed APIs that offer the necessary classes and methods to build the middleware on which the system will run. These APIs make it possible to connect sensors and context-aware applications which interact with the entities, and they offer functionality for the monitoring and integration of mobile devices running on the Android platform. We also propose a graphical user interface which can instantiate and monitor a system and dynamically load services for mobile devices.
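As a rough illustration of how such an API might be used, the sketch below instantiates a system, attaches a location sensor to an entity and registers a simple context-aware application. Every class and method name here (UMoveSystem, addEntity, attachSensor, and so on) is an assumption made for the example and does not correspond to the published uMove interfaces.

    // Hypothetical usage sketch; every name here is an assumption.
    public class BuildingSystemDemo {
        public static void main(String[] args) {
            // Instantiate the system, i.e. the modelled environment.
            UMoveSystem building = new UMoveSystem("department building");

            // Declare the entities to be observed, placed under the root.
            Entity floor1 = building.addEntity("floor-1", building.getRoot());
            Entity alice  = building.addEntity("alice", floor1);

            // Connect a location sensor (here an RFID reader) to an entity.
            building.attachSensor(alice, new RfidLocationSensor("reader-42"));

            // Register a context-aware application reacting to entity events.
            building.addObserver(entity ->
                    System.out.println("Context change for " + entity.getId()));

            building.start();  // begin monitoring
        }
    }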
Validation scenario and applications
We propose a set of validation projects that use the uMove framework, implement the concepts of systems and test the capability of the proposed concepts to adequately address the research goals. Through these projects, we also experiment with the concept of the Kinetic User Interface (KUI) in scenarios that involve a mode of interaction where location and motion tracking, including user activity, are used as first-order input modalities to a Ubicomp system. The goal of a KUI is to allow users to interact with Ubicomp systems in a more implicit way, using their kinetic properties to trigger events at the application level.
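A minimal Java sketch of the KUI idea, with entirely hypothetical names: a motion-derived event is handled by the application the way a click would be in a graphical interface.

    // Sketch only: a kinetic event treated as a first-order input modality.
    record KineticEvent(String entityId, String kind) {}   // e.g. kind = "entered-room-B12"

    @FunctionalInterface
    interface KineticListener {
        void onKineticEvent(KineticEvent event);
    }

    class KuiDemo {
        public static void main(String[] args) {
            // The application reacts to motion much as a GUI reacts to clicks.
            KineticListener checkIn = e ->
                    System.out.println(e.entityId() + " triggered " + e.kind());

            // A tracking layer would normally emit this when the user moves.
            checkIn.onKineticEvent(new KineticEvent("alice", "entered-room-B12"));
        }
    }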
The first project, called Robin, focuses on the observation of a rescue team assisted by a semi-autonomous robot. The robot is sent ahead of the team and gathers contextual information in a building (in case of fire, for instance) to send back to the server for situation analysis and activity recommendation, or possibly alarms. The second project provides a smart environment for a nursing home; it focuses on the activity tracking of elderly persons who are still independent but monitored by medical staff in case of problems. Finally, we describe an activity recognition module which can be plugged into a KUI system in order to track and analyse predefined categories of activities.
uMove middleware
The uMove middleware allows a uMove system to be defined and implemented, on top of which different specific applications can be developed (e.g. user tracking, activity-based smart alerts). The framework contains two specific parts: the conceptual framework and the Java API. uMove allows programmers to easily create all entities, the relations between them and the connected sensors, and to load the activity and situation recognition modules (algorithms or classes). However, uMove does not provide the activity (task) and situation recognition modules or algorithms themselves. Instead, it allows them to be developed separately and connected to the entities (actors and observers) active in the system.
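Because the recognition modules are external to uMove, a programmer supplies them and plugs them into the relevant entities. The sketch below shows one way such a module could look; the ActivityManager interface, the EntityContext record and the attachment call are assumptions for illustration, not the actual uMove classes.

    // Sketch of an externally developed activity-recognition module.
    interface ActivityManager {
        String inferActivity(EntityContext context);   // e.g. "walking", "resting"
    }

    /** Minimal context such a module might receive; fields are illustrative. */
    record EntityContext(double speedMetersPerSecond, String location) {}

    /** A deliberately simple module that classifies activity from speed alone. */
    class SpeedBasedActivityManager implements ActivityManager {
        @Override
        public String inferActivity(EntityContext context) {
            if (context.speedMetersPerSecond() > 1.5) return "running";
            if (context.speedMetersPerSecond() > 0.2) return "walking";
            return "resting";
        }
    }

    // Attaching the module to an entity (hypothetical call):
    //   entity.setActivityManager(new SpeedBasedActivityManager());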
Sensor Layer.
The sensor layer contains the sengets, which are the logical abstractions of the sensors connected to the system. For instance, an application tracking the movement of users within a building may need to connect location sensors, independent of their type. The location senget connects any type of sensor (e.g. RFID, Bluetooth or Wi-Fi) and provides the entity location to the higher level.
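The sketch below illustrates the senget idea with hypothetical names: one logical location senget hides whether positions come from RFID, Bluetooth or Wi-Fi and pushes readings up to the entity layer.

    // Illustrative senget sketch; names and types are assumptions.
    import java.util.function.Consumer;

    /** One implementation per sensing technology (RFID, Bluetooth, Wi-Fi, ...). */
    interface LocationSource {
        Location currentLocation(String entityId);
    }

    /** Logical sensor: normalises any location source for the higher level. */
    class LocationSenget {
        private final LocationSource source;
        private final Consumer<Location> entityUpdater;   // pushes updates to the entity layer

        LocationSenget(LocationSource source, Consumer<Location> entityUpdater) {
            this.source = source;
            this.entityUpdater = entityUpdater;
        }

        /** Polls the underlying sensor and forwards the reading upwards. */
        void refresh(String entityId) {
            entityUpdater.accept(source.currentLocation(entityId));
        }
    }

    record Location(double x, double y, double z) {}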
Actor Layer.
The actor layer contains the logical representations of the physical entities (i.e. users, places, objects) being observed. Each entity (actor, place or zone) is defined by its identity, its role, its location within the system and its current motion and activity. Entities are organised in an n-ary tree and all have a parent node except the root of the system (e.g. the world or the building). Entities get their contexts updated from the connected sengets. An entity can be attached to an activity manager object which determines its current activity.
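A compact sketch of what such an entity node could look like, again with hypothetical names: entities form an n-ary tree rooted at the system root and receive context updates from the connected sengets.

    // Illustrative entity-node sketch; names are assumptions.
    import java.util.ArrayList;
    import java.util.List;

    class EntityNode {
        private final String id;
        private final EntityNode parent;               // null only for the system root
        private final List<EntityNode> children = new ArrayList<>();
        private Location location;                     // updated by a location senget
        private String currentActivity = "unknown";    // set by an attached activity manager

        EntityNode(String id, EntityNode parent) {
            this.id = id;
            this.parent = parent;
            if (parent != null) {
                parent.children.add(this);             // keep the n-ary tree consistent
            }
        }

        /** Called by a connected senget when a new location reading arrives. */
        void updateLocation(Location newLocation) {
            this.location = newLocation;
        }

        /** Typically called by the attached activity manager. */
        void updateActivity(String activity) {
            this.currentActivity = activity;
        }

        String getId() { return id; }
        List<EntityNode> getChildren() { return children; }
    }

    record Location(double x, double y, double z) {}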
Observation Layer.
The observation layer analyses the current situation of the actors based on their activities and contexts. Observers listen for entity changes and forward them to the situation manager in order to have the new situation analysed and, if needed, to inform the application (e.g. with a "warning" or a "critical situation" message).
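The following sketch illustrates this flow with hypothetical names: an observer forwards an entity change to a situation manager, which may produce a message for the application.

    // Illustrative observation-layer sketch; all names are assumptions.
    import java.util.function.Consumer;

    /** Re-evaluates the situation of a changed entity; returns a message or null. */
    interface SituationManager {
        String analyse(String entityId, String activity);
    }

    class EntityObserver {
        private final SituationManager situationManager;
        private final Consumer<String> application;    // application callback, e.g. an alert UI

        EntityObserver(SituationManager situationManager, Consumer<String> application) {
            this.situationManager = situationManager;
            this.application = application;
        }

        /** Invoked by the entity layer whenever an entity's context changes. */
        void entityChanged(String entityId, String activity) {
            String message = situationManager.analyse(entityId, activity);
            if (message != null) {
                application.accept(message);           // e.g. "warning" or "critical situation"
            }
        }
    }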