Research & Facilities

Research Laboratories

Organized research laboratories and groups within the area of spatial informatics include:

SCIS is also home to the National Center for Geographic Information and Analysis (NCGIA).

Illustrative Research Areas

Graduate students and faculty are engaged in far-ranging yet complementary research programs. Many of the projects involve multi-investigator and cross-disciplinary efforts. For specific project descriptions, consult the NCGIA site or some of the example research abstracts. The following are illustrative of general areas within which specific topics are being pursued.

  • Spatio-Temporal Models: A growing amount of data and information has important spatial and temporal dimensions. We need methods to manage, query, and access both dimensions.
    Research areas:

    • Databases for moving objects
    • Location based services
    • Event based models
    • Generic and domain-specific spatial ontologies
  • Geosensor Networks: Networks of small-form sensors with computing platforms and wireless communication, streaming live data, are deployed in massive numbers in geographic space. Geosensor networks are the next generation of environmental platforms, delivering a microscopic view of geographic space. We need advanced methods for building robust, intelligent systems that integrate sensing in space and time with an understanding of events in space and time.
    Research areas:

    • In-network data aggregation and spatial query execution
    • Qualitative and quantitative in-network algorithms to detect and incrementally track events
    • Mobile geosensor networks
    • Field-based foundations and algorithms for sensor data streams
  • User Interfaces and Interactions: As devices get smaller, we need to use them in different environments. New interaction paradigms are needed for these differing conditions.
    Research areas:

    • Sketch interaction
    • Egocentric pointing devices
    • Space-time visualization environments
  • Mixed Reality (Virtual and Augmented Reality): Despite being a relatively new technology, mixed reality is one of the fastest-growing markets across the globe. It is already disrupting a broad cross-section of industries, including education, medicine, and retail, as well as entertainment and social life. However, there are numerous areas for improvement.
    Research areas:

    • Multisensory technologies (including touch, smell, and taste)
    • Immersion, Presence, and Interaction in VR
    • Human sensory perception and multimodal interaction
    • Human-Computer Interaction (HCI)
  • Information Extraction: Surveillance and monitoring systems generate huge volumes of information. From such data streams we want to identify objects and/or behaviors.
    Research areas:

    • Automated feature extraction from satellite and aerial imagery
    • Detection and tracking of moving objects from imagery
    • Event detection and activity monitoring from time series and space-time series sensing
  • Information Integration: Growing heterogeneous collections of information (maps, images, text, video, time series) need to be integrated and searched for patterns.
    Research areas:

    • Semantic similarity models
    • Event data models
    • Metadata models
    • Uncertainty models
    • Logical approaches to semantic integration
  • Information Policy: Access, Security, Privacy, Intellectual Property Rights: As technologies and their interaction with society become more complex, neither technological nor legal solutions are sufficient on their own; each must be informed by the other.
    Research areas:

    • Ethics-driven information systems design
    • Interoperability of legal rights in the use of data
    • Models for internalizing external societal costs in systems design and development
    • Societal-needs-driven metadata, data provenance, and recommender systems
    • Protecting privacy in pervasive surveillance environments
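As a concrete illustration of the moving-objects work listed above: a core operation in a database of moving objects is reconstructing an object's position between sampled fixes. The sketch below is a minimal illustration with hypothetical names, not an actual SCIS system; real moving-objects databases add indexing and uncertainty handling.

```python
from bisect import bisect_left

# A trajectory is a time-sorted list of (t, x, y) fixes.
def position_at(trajectory, t):
    """Linearly interpolate an object's position at time t.

    Assumes t lies within the trajectory's time span.
    """
    times = [fix[0] for fix in trajectory]
    i = bisect_left(times, t)
    if times[i] == t:
        return trajectory[i][1], trajectory[i][2]
    (t0, x0, y0), (t1, x1, y1) = trajectory[i - 1], trajectory[i]
    f = (t - t0) / (t1 - t0)
    return x0 + f * (x1 - x0), y0 + f * (y1 - y0)

track = [(0, 0.0, 0.0), (10, 10.0, 0.0), (20, 10.0, 10.0)]
print(position_at(track, 5))   # (5.0, 0.0)
print(position_at(track, 15))  # (10.0, 5.0)
```

A query such as "which objects were inside region R at time t" then reduces to evaluating this interpolation for each candidate trajectory.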
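Similarly, the simplest instance of event detection from time-series sensing is flagging readings that deviate sharply from recent history. The sliding-window z-score test below is an illustrative sketch only, not one of the detectors developed in these research areas; the window size and threshold are invented for the example.

```python
from statistics import mean, stdev

def detect_events(series, window=5, threshold=3.0):
    """Flag indices whose value deviates from the trailing window
    by more than `threshold` standard deviations."""
    events = []
    for i in range(window, len(series)):
        w = series[i - window:i]
        mu, sigma = mean(w), stdev(w)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            events.append(i)
    return events

readings = [10.0, 10.2, 9.9, 10.1, 10.0, 10.1, 25.0, 10.0]
print(detect_events(readings))  # [6] — the 25.0 spike
```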

Faculty Research Specialties

The key knowledge-advancement interests of the faculty include the following:

Dr. Kate Beard: geographic information systems, digital libraries, uncertainty in spatial data, information visualization, spatial and temporal analysis

Dr. Max Egenhofer: geographic database systems, spatial reasoning, formalizations of spatial relations, user interface design, spatial query languages

Dr. Nicholas Giudice: perception, cognitive neuroscience, human factors engineering, neurocognitive engineering, multimodal interaction and spatial cognition

Dr. Torsten Hahmann: spatial informatics, knowledge representation, artificial intelligence, logic, ontologies of space and time, modular and hierarchical ontologies

Dr. Silvia Nittel: spatial database systems, geosensor networks, data streaming, decentralized spatial computing

Dr. Harlan Onsrud: information system legal and ethical issues, combined technological and legal approaches in addressing access, security, privacy, and intellectual property issues, STEM + Computing education research

Dr. Nimesha Ranasinghe: multisensory interactive media, augmented reality, and human-computer interaction

See detailed information about the research interests of each of these faculty members at the links above or at Faculty and Staff.

Sample Research Grants

The following are short abstracts for current and past externally funded research projects:

From Real-Time Sensor Data Streams to Continuous Data Fields Models: Formal Foundations and Computational Challenges
Program: Computer and Information Science and Engineering (CISE)
Sponsor: National Science Foundation, Information Integration and Informatics
Co-PIs: Silvia Nittel and Max Egenhofer
Today, massive sensor data streams are created by the automatic collection of sensor data at high frequency and in near real time. This project aims to advance the analytical potential of live-streamed data, historical data streams, and model simulations by creating an overarching representation in the form of the field data model, with a set of operators that establish the field algebra. A field is best explained by example: in a magnetic field, the magnetic force can be determined for each point, and the field is therefore considered continuous. Similarly, environmental phenomena such as air pollution or flooding are considered continuous in space and time, although they are sampled with sensors at limited, discrete time-space locations. This project develops the field algebra, an intuitive yet mathematically defined formalism to represent real-world phenomena as fields and to express analytical needs as canonical operations over fields. The field model represents phenomena as continuous entities again, and the implementation hides the fact that their spatio-temporal continuity is calculated on the fly from real-time measurement streams. Extending sensor data streams to fields is transformative, as a domain scientist is rarely interested in the readings of individual sensors. Allowing scientists to work with high-level abstractions will significantly enhance analytical tasks such as finding insights about changes, trends, or unexpected events happening in the real world. The project will integrate fields and data streams mathematically so that mappings between the two are well defined. The field data model is complemented by the development of an innovative computational framework for synthesizing and analyzing fields based on very large numbers of high-throughput, real-time sensor data streams, and for creating continuous representations on the fly.
This framework provides novel algorithms to ensure that the field operators can absorb the throughput of very large numbers of sensor data streams, yet still compute complex analytical results in near real time. This project will benefit society by enabling us to react immediately to situations such as extreme weather events, environmental disasters, or chemical accidents, and to organize response efforts based on accurate and timely information; this will help protect the public interest. The research in this project develops a formal foundation for sensor data streams by abstracting them as geographic fields, and a scalable computational framework that computes field operators on massive numbers of sensor data streams in near real time. In this research, the field algebra, with a recursive definition of fields and a set of field operators, is formalized. The field algebra and data streams are formally integrated at the level of their mathematical foundations. The formal field algebra is implemented as a data type hierarchy and integrated with stream data models. At the same time, a computational framework is developed that extends data stream engines with computational components to estimate spatio-temporal fields based on recursive or transposed field definitions, and to evaluate complex predicates over fields, which lays the foundation for co-analyzing live and historic fields. The results of this project will be distributed via scientific publications, open-source software, and online training tutorials and classes. The project web site will provide access to the results of this project.
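To make the field idea concrete: a field can be modeled as a function from a location to a value, estimated on the fly from discrete sensor samples, with field-algebra operators applied pointwise. The sketch below uses inverse-distance weighting purely as a stand-in for the project's own estimation methods, and all names are illustrative, not the project's API.

```python
def field_from_samples(samples, power=2):
    """Turn discrete sensor samples {(x, y): value} into a callable
    'field' defined at every point.  Inverse-distance weighting is
    only a stand-in for the project's estimation methods."""
    def field(x, y):
        num = den = 0.0
        for (sx, sy), v in samples.items():
            d2 = (x - sx) ** 2 + (y - sy) ** 2
            if d2 == 0.0:
                return v  # query point coincides with a sensor
            w = 1.0 / d2 ** (power / 2)
            num += w * v
            den += w
        return num / den
    return field

def combine(op, f, g):
    """A canonical field-algebra operator: pointwise combination."""
    return lambda x, y: op(f(x, y), g(x, y))

noon = field_from_samples({(0, 0): 30.0, (10, 0): 50.0})
print(noon(5, 0))  # equidistant from both sensors: ~40.0

morning = field_from_samples({(0, 0): 28.0, (10, 0): 46.0})
change = combine(lambda a, b: a - b, noon, morning)
print(change(5, 0))  # ~3.0
```

The point of the abstraction is that `change` behaves like a continuous phenomenon even though both inputs exist only as discrete stream samples.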

Empowering Multi-Conceptual Spatial Reasoning with a Repository of Qualitative and Quantitative Spatial Ontologies
Program: Computer and Information Science and Engineering (CISE) Research Initiation Initiative (CRII), Robust Intelligence
Sponsor: NSF, Information Integration and Informatics
PI: Torsten Hahmann
People have manifold ways to express and process spatial information. As needed, they employ conceptualizations of space – so-called cognitive maps – that differ in granularity, scope, and precision. Based on the context, people use different implicit assumptions to interpret a spatial relation such as “A is contained in B”. The assumptions made or implied by different conceptualizations often even contradict one another (e.g., in “the lake contains a bay” the bay is a subregion of the lake, while in “the lake contains an island” the island is surrounded by the lake). This is not an issue for people: we are able to quickly and reliably decide which conceptualization is most suitable in a specific situation, often choosing the simplest applicable conceptualization. For instance, people can quickly make navigation decisions (“Do I need to travel north or south on the Interstate?”) or answer simple spatial queries (“Is the ocean east or west of here?”). Similar capabilities for flexibly utilizing multiple conceptualizations would make computational tools for recording and processing spatial information much more powerful and user-friendly. Towards this goal, the proposed research investigates a formalism and basic procedures for automatically choosing a spatial representation best suited to solve a specific task, such as finding data with certain qualitative characteristics, answering a spatial query, or testing a spatial hypothesis. The research will contribute to a theoretical foundation for collecting, accessing, and manipulating spatial information in more natural ways without impeding its efficient processing in information systems. It will lower the barrier to entry for interacting with and analyzing spatial information and promote technologies that cut the time and cost typically spent on transforming diverse spatial data sets into a coherent model.
In this project, the different spatial conceptualizations will be encoded as machine-interpretable spatial ontologies and placed in a structured ontology repository that leverages relationships from mathematical logic. The formal foundation for comparing the expressivity of ontologies using the mathematical notion of definability will be developed and used to organize the repository by differences in ontological assumptions and expressivity. This structure will be exploited by procedures for automatically selecting an ontology that best fits a specific spatial task. In addition, formal encodings of the knowledge necessary to convert spatial information from one ontology to another will be investigated, accompanied by procedures that utilize this information for automatically identifying and converting pieces of geometric background knowledge relevant to a specific spatial task. This research advances our understanding of how spatial information expressed using high-level, natural spatial terms is related to the kind of geometric information that forms the basis of spatial information systems. It contributes to a better understanding of how logical relationships can be utilized to formally compare the expressiveness of two spatial ontologies, and how a spatial ontology repository can be supplemented by information that allows automated conversion of knowledge based on different ontologies.

Integrating Computing into Science Teaching and Learning in Grades 6-8:
A Diverse Partnership to Develop an Evidence-Guided Model to Serve Rural Communities
Program: STEM + Computing
Sponsor: National Science Foundation
Co-PIs: Susan R. McKay, Mitchell Bruce, Sara Lindsay, Harlan Onsrud (Senior Investigators Torsten Hahmann, Connie Holden)
This exploratory integration project is an interdisciplinary partnership among teachers, administrators, education researchers, and UMaine STEM and STEM Education faculty, including those from the School of Computing and Information Science. This group will develop, implement, investigate, and refine a model to integrate computing into middle school science and study the essential elements of teacher professional learning and curricular support needed for successful integration. Future teachers from UMaine’s Master of Science in Teaching (MST) Program will work with the partnership on the development of a comprehensive module for each of grades 6-8, aligned with Computer Science Teachers Association Standards and tied to the use of real, relevant data for students at the middle level in rural communities. Teachers, and then their students, will experience first-hand how computing skills and computational thinking enable them to tackle new types of important, complex problems and develop more realistic models of significant phenomena.
Rigorous research and evaluation will contribute to broader knowledge of how to support teachers as computing is integrated into STEM teaching and learning. Findings will be presented and published for the research and practitioner communities and will also be disseminated to community members in partnering districts and other stakeholders. The classroom implementation will be studied to understand the contributions that the integration of computing with STEM makes to students’ learning of science and computing and their attitudes toward STEM careers, with particular attention to gender and engaging students from economically challenged communities.

Multimodally Encoded Spatial Images in Sighted and Blind
Program: National Eye Institute
Sponsor: National Institutes of Health (NIH)
Co-PIs: N.A. Giudice, J.M. Loomis (PI), and R.L. Klatzky
Humans can perceive surrounding space through vision, hearing, touch and sensed consequences of movement.  As they perceive and interact with their environment, people establish, from perceptual processing, an ongoing representation of the physical layout of objects and locations.  This representation remains at a perceptual level as long as it is supported by sensory stimulation.  Perhaps surprisingly, when such stimulation ceases, as when the eyes close or a sound source is turned off, the perceptual representation also ceases; yet, people remain able to direct actions appropriately in space.  This proposal investigates the representation of spatial layout that remains available in the absence of direct perceptual support, and hence serves to guide action.  We call it the “spatial image”.  We propose that spatial images are (1) fully three-dimensional and externalized, (2) capable of being formed in working memory, both from perception by way of multiple sensory modalities and from constructive spatial processes (imagination or retrieval from long-term memory), and (3) amodal in nature (not depending on the input source).
Our proposed research will further knowledge about spatial images produced by visual, haptic, and auditory input. Our research consists of theoretically-based experiments involving sighted and blind subjects. All of the experiments rely on logic to make inferences about internal processes and representations from observed behavior, such as verbal report, joystick manipulation, and more complex spatial actions, like reaching, pointing, and walking. Our experiments are grouped into 3 topics. The first topic deals with the development of spatial images through touch, both direct and when mediated by holding a tool, and whether spatial images can be mentally re-scaled at will. The second topic is the testing of the amodality hypothesis: that regardless of the sensory source, spatial images formed from different modalities function in the same way. The third topic is concerned with whether spatial images are equally precise in all directions around the head, in contrast to visual images which are thought to be of high precision only when located in front of the head.

Information Integration and Human Interaction for Indoor and Outdoor Spaces
Program: Information Integration and Informatics (Small)
Sponsor: National Science Foundation
PI: Worboys, Co-PI: Giudice
The goal of this research project is to provide a framework model that integrates existing models of indoor and outdoor space, and to use this model to develop an interactive platform for navigation in mixed indoor and outdoor spaces. The user should experience the transition between inside and outside as seamless, in terms of the navigational support provided. The approach consists of integrating indoors and outdoors on several levels: conceptual models (ontologies), formal system designs, data models, and human interaction. At the conceptual level, the project draws on existing ontologies as well as examining the “affordances” that the space provides. For example, an outside pedestrian walkway affords the same function as an inside corridor. Formal models of place and connection are also used to precisely specify the design of the navigational support system. Behavioral experiments with human participants assess the validity of our framework for supporting human spatial learning and navigation in integrated indoor and outdoor environments. These experiments also enable the identification and extraction of the salient features of indoor and outdoor spaces for incorporation into the framework. Findings from the human studies will help validate the efficacy of our formal framework for supporting such learning and navigation. Results will be distributed via the project website. They will be incorporated into graduate-level courses on human interaction with mobile devices, shared with public school teachers participating in the University of Maine’s NSF-funded RET (Research Experiences for Teachers) program, and provide linkages with the two companies and one research center associated with the project.

SenseME
Sponsor: Maine National Guard
PI: Worboys, Co-PIs Nittel, Beard, Abedi
The SenseME project is a collaboration among the University of Maine, Global Relief Technologies (GRT), and the Maine National Guard (MENG). Its goals are to provide *Critical Infrastructure Sensor Integration* and *Logistical Asset Tracking* during emergencies by using sensor technology to:
1. Work towards a common operating picture (COP) to develop real-time information on Maine’s critical infrastructure for civil and military emergency management officials and responders.
2. Manage Maine’s key logistical assets throughout their complete lifecycles to support MENG’s core mission to procure and move supplies and commodities throughout the state during emergencies.
It consists of two programs, Critical Infrastructure Sensor Integration and Logistical Asset Tracking.

Cyber Enhancement of Spatial Cognition for the Visually Impaired
Program: Cyber-Enabled Discovery and Innovation (CDI-Type II)
Sponsor: National Science Foundation (NSF)
Co-PIs: N.A. Giudice, K. Daniilidis (PI), R. Manduchi, and S. Roumeliotis
Indoor navigation poses significant challenges for blind and visually impaired persons, as without vision there is often no mechanism for accessing room numbers, building maps, and other navigation-critical environmental cues. Effective wayfinding requires successful execution of several related behaviors, including orientation, spatial updating, and object and place recognition, all of which necessitate accurate assessment of the surrounding environment. Most research on wayfinding aids has focused on outdoor environments, where spatial behavior is readily supported by speech-enabled GPS-based navigation systems that provide access to information describing streets, addresses, and points of interest. By contrast, there is a dearth of technology for conveying such information to support environmental access and wayfinding behavior indoors. The limited technology that is available requires significant modifications to the building infrastructure and has limited functionality, barriers which have discouraged adoption. The lack of compensatory indoor navigation technology has led to dramatic problems in the independence, quality of life, and safety of blind individuals, one of the fastest-growing demographics of our aging population. Guide dogs and white canes are widely used for mobility and environmental sensing. However, while these tools are extremely effective for identifying obstacles in the path of travel, they do not provide information useful for staying oriented in the environment or building up a robust mental representation of the space (cognitive map). What is needed to solve the indoor navigation challenge is a device that conveys real-time information about the environment and supports wayfinding behavior and cognitive map development.
This proposal adopts a multi-faceted approach to solving the indoor navigation problem for people with limited vision. It leverages expertise from machine vision, robotics, and blind spatial cognition, with behavioral studies on interface design to guide the discovery of information requirements and optimal delivery methods for an indoor navigation system. The proposed cyber assistant provides position, orientation, local geometry, and object identification via the use of appropriate sensors and sensor-fusion algorithms. The combination of perceptually guided navigation algorithms, implemented on miniature, commercially available hardware, in a device motivated by theories and end-user needs of the target population, will lead to the development of a navigation assistant with the broadest impact on the widest range of potential users.

Sensor Science, Engineering and Informatics
Program: Integrated Graduate Education and Research
Sponsor: National Science Foundation (NSF)
Co-PIs: Beard-Tisdale, Lad, Smith, Vetelino, Worboys
The University of Maine is a proven leader in integrating cutting-edge sensor science, engineering, and informatics research into high school, undergraduate, and graduate education. Faculty, students, and alumni are developing novel sensor applications for homeland security, healthcare, the environment, energy, agriculture, food safety, transportation, manufacturing, mapping, and other areas. As an indicator of their success, faculty and alumni continue to spin off numerous sensor-related companies. The Internet and information technologies, coupled with miniaturization techniques and new approaches to informatics, are propelling sensor technology to a threshold of major growth. As sensing becomes ubiquitous, there is a critical need to manage, integrate and analyze diverse and even conflicting sensor data streams. A challenging gulf persists between the raw and massive amounts of data produced by even a single sensor and the ability of collaborating sensors to generate integrated information that is useful for decision-making. This gulf occurs in part when the researchers developing the sensors and sensor systems are unaware of the needs of those who must analyze and respond to the data. Similarly, those involved on the informatics side are often unfamiliar with the potential and challenges of new sensor materials, devices, and platforms. New approaches to graduate education are needed to bridge this gulf.
The primary goal of this IGERT program is to train Ph.D. scientists and engineers in the multidisciplinary area of sensor systems, ranging from the design and networking of sensors to the interpretation of complex sensor data. IGERT fellows will develop a systems view that embraces sensor science, engineering, and informatics and emphasizes a high degree of professionalism, marked by leadership skills, the ability to contribute effectively within an interdisciplinary team, and appreciation for the complex social and ethical ramifications of ubiquitous sensing. IGERT fellows will complete an innovative sequence of courses, theses, and other activities to develop knowledge and skills in (i) sensor materials and devices, (ii) sensor signal conditioning and networks, and (iii) the integration and transformation of raw sensor data streams into knowledge.
Twenty faculty members from 5 departments (Spatial Information Science and Engineering, Electrical and Computer Engineering, Chemical and Biological Engineering, Chemistry, and Physics) will train 20 IGERT fellows over 5 years. While collaborations exist among researchers studying new materials and sensing modalities, collaboration with researchers in signal conditioning, networking, and informatics is far less common, even within the same domain (e.g., homeland security). This IGERT program will connect the detection and meaningful interpretation of a “sensed” event. Innovative aspects include: (i) highly interdisciplinary summer symposia with internationally recognized scholars, policy-makers, and industry leaders, (ii) a new interdisciplinary certificate program that includes a new team-building design course and entrepreneurship courses, (iii) interdisciplinary research experiences that pair fellows with at least two advisors representing diverse aspects of sensor science and engineering (e.g., materials characterization and data fusion), and (iv) extensive interaction with high school students and teachers. An external advisory board will provide program oversight.
The intellectual merit of this IGERT program rests in its novel emphasis on sensing at scales from nano to global and its innovative approach to cross-training graduate students on integrated sensing systems from event detection to spatio-temporal information analysis including the social issues relating to sensor systems.
Broader impacts include integration of IGERT activities with UMaine’s NSF-funded GK-12: Sensors! and RET Site: Sensors! (affecting at least 40 high schools, many of which are located in poor and isolated regions); entrepreneurship training that better prepares Ph.D. students to meet the sensor-related needs of industry; incorporation of the ethical and public policy dimensions of sensing; and increased research collaborations among some of the University’s major research units, including the Laboratory for Surface Science and Technology and the National Center for Geographic Information and Analysis.

Data Management for Ad-Hoc Geosensor Networks
Program: Faculty Early Career Development (CAREER)
Sponsor: National Science Foundation
PI: Silvia Nittel
This project explores data management methods for geosensor networks, i.e., large collections of very small, battery-driven sensor nodes deployed in the geographic environment that measure the temporal and spatial variations of physical quantities such as temperature or ozone levels. An important task of such geosensor networks is to collect, analyze, and estimate information about continuous phenomena under observation, such as a toxic cloud close to a chemical plant, in real time and in an energy-efficient way. The main thrust of this project is the integration of spatial data analysis techniques with in-network data query execution in sensor networks. The project investigates novel algorithms such as incremental, in-network kriging, which redefines a traditional, highly computation-intensive spatial data estimation method for distributed, collaborative, and incremental processing between tiny, energy- and bandwidth-constrained sensor nodes. This work includes the modeling of location and sensing characteristics of sensor devices with regard to observed phenomena, the support of temporal-spatial estimation queries, and a focus on in-network data aggregation algorithms for complex spatial estimation queries. Combining high-level data query interfaces with advanced spatial analysis methods will allow domain scientists to use sensor networks effectively in environmental observation.
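A flavor of what "incremental, in-network" estimation means: if an estimate decomposes into associatively mergeable partials, relay nodes can combine their children's contributions instead of forwarding raw readings. The sketch below uses inverse-distance weighting as a simple stand-in for kriging (whose weights require the full spatial covariance and are the hard part this project addresses); all names are hypothetical.

```python
def partial(sample_pos, value, query_pos, power=2):
    """One node's contribution to an inverse-distance estimate at
    query_pos.  (IDW stands in here for kriging.)"""
    d2 = ((sample_pos[0] - query_pos[0]) ** 2 +
          (sample_pos[1] - query_pos[1]) ** 2)
    w = 1.0 / d2 ** (power / 2)
    return (w * value, w)  # (weighted sum, weight sum)

def merge(a, b):
    # Associative merge lets any relay node combine partials in-network.
    return (a[0] + b[0], a[1] + b[1])

# Three nodes estimate temperature at (5, 5) without shipping raw readings.
p = partial((0, 0), 18.0, (5, 5))
p = merge(p, partial((10, 0), 22.0, (5, 5)))
p = merge(p, partial((10, 10), 20.0, (5, 5)))
print(p[0] / p[1])  # ~20.0
```

Because `merge` is associative, the network topology does not matter: any node can fold its children's two-number partials into its own before transmitting upstream, which is what saves energy and bandwidth.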

Monitoring Dynamic Spatial Fields Using Responsive Geosensor Networks
Program: Information and Intelligent Systems
Sponsor: NSF
Co-PIs: Worboys, Nittel
Advances in hardware and systems software provide the capability for large numbers of small, low-cost MEMS devices with limited on-board processing and wireless communication capabilities to be placed in the field. Environmental monitoring is one of the major application areas for geosensor networks, i.e., sensor networks embedded in geographic regions. The goal is the observation, monitoring, and analysis of environmental phenomena such as wildfires, flooding, and the detection and tracking of toxic spills. Issues such as energy and communication constraints have up to now prevented the full potential of such networks from being realized. This proposal is concerned with the responsiveness of such geosensor networks to changes in dynamic spatial fields. Imagine a geosensor network detecting levels of carbon dioxide pollution at various places in a region of the Earth. Because of energy and communication constraints, only a small proportion of the sensor nodes can be active at any one time. As CO2 levels change (perhaps the region of dangerously high levels is splitting into two connected parts), the sensor network needs to respond to the change, for example by reconfiguring itself through activating and deactivating sensor nodes, to capture the detail of the CO2 field where it is most needed (i.e., where the split is taking place). The proposal directly addresses the collaborative-systems topic of problem solving in highly distributed, sensor-based information networks by providing a framework for optimizing and updating information flow between sensor nodes.
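One way to picture such responsiveness: keep active only the nodes whose local neighborhood straddles the danger threshold, since those nodes sit on the moving boundary of the high-CO2 region. This is a deliberately simplified sketch, not the proposal's actual reconfiguration strategy; the threshold value and network topology are invented for illustration.

```python
THRESHOLD = 400.0  # ppm level treated as dangerous (illustrative)

def boundary_nodes(readings, neighbors):
    """Return the nodes whose neighborhood straddles the threshold,
    i.e. nodes near the boundary of the high-CO2 region."""
    active = set()
    for node, value in readings.items():
        side = value >= THRESHOLD
        if any((readings[n] >= THRESHOLD) != side for n in neighbors[node]):
            active.add(node)
    return active

# A chain of four nodes; the boundary falls between b and c.
readings = {"a": 380.0, "b": 390.0, "c": 410.0, "d": 420.0}
neighbors = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(sorted(boundary_nodes(readings, neighbors)))  # ['b', 'c']
```

Nodes far inside or far outside the high-level region could then sleep, concentrating the network's limited energy where the field is changing.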