Research & Facilities
- Research Laboratories
- Illustrative Research Areas
- Faculty Research Specialties
- Sample Research Grants
- Facilities and Resources
- Graduate Students – Profiles, Thesis Topics, Project Videos
Organized research laboratories and groups within the area of spatial informatics include:
- Geosensor Networks Laboratory
- Virtual Environmental and Multimodal Interaction Laboratory (VEMI Lab)
- Multisensory Interactive Media Lab (MIM Lab)
SCIS is also home to the National Center for Geographic Information and Analysis (NCGIA).
Graduate students and faculty are engaged in far-ranging yet complementary research programs. Many of the projects involve multi-investigator and cross-disciplinary efforts. For specific project descriptions, consult the NCGIA site or some of the example research abstracts. The following are illustrative of general areas within which specific topics are being pursued.
- Spatio-Temporal Models: A growing amount of data and information has important spatial and temporal dimensions. We need methods to manage, query, and access both dimensions.
- Databases for moving objects
- Location based services
- Event based models
- Generic and domain-specific spatial ontologies
- Geosensor Networks: Networks of small-form sensors with computing platforms and wireless communication, streaming live data, are deployed in massive numbers in geographic space. Geosensor networks are the next-generation environmental platforms that deliver a microscopic view of geographic space. We need advanced methods for building robust, intelligent systems that integrate sensing in space and time with understanding events in space and time.
- In-network data aggregation and spatial query execution
- Qualitative and quantitative in-network algorithms to detect and incrementally track events
- Mobile geosensor networks
- Field-based foundations and algorithms for sensor data streams
- User Interfaces and Interactions: As devices get smaller, we need to use them in different environments. New interaction paradigms are needed under these differing conditions.
- Sketch interaction
- Egocentric pointing devices
- Space-time visualization environments
- Mixed Reality (Virtual and Augmented Reality): Despite being a relatively new technology, Mixed Reality is one of the fastest growing markets across the globe. It already disrupts a broad cross-section of industries including education, medicine, retail, entertainment, and social life. However, there are numerous areas to improve.
- Multisensory technologies (including touch, smell, and taste)
- Immersion, Presence, and Interaction in VR
- Human sensory perception and multimodal interaction
- Human-Computer Interaction (HCI)
- Information Extraction: Surveillance and monitoring systems generate huge volumes of information. From such data streams we want to identify objects and/or behaviors.
- Automated feature extraction from satellite and aerial imagery
- Detection and tracking of moving objects from imagery
- Event detection and activity monitoring from time series and space-time series sensing
- Information Integration: Growing heterogeneous collections of information (maps, images, text, video, time series) need to be integrated and searched for patterns.
- Semantic similarity models
- Event data models
- Metadata models
- Uncertainty models
- Logical approaches to semantic integration
- Information Policy: Access, Security, Privacy, Intellectual Property Rights: As technologies and their interaction with society become more complex, neither technological nor legal solutions uninformed by the other are sufficient.
- Ethics driven information systems design
- Interoperability of legal rights in the use of data
- Models for internalizing external societal costs in systems design and development
- Societal needs driven metadata, data provenance and recommender systems
- Protecting privacy in pervasive surveillance environments
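As a toy illustration of one of the areas above, in-network data aggregation for geosensor networks, the sketch below (with a hypothetical topology and hypothetical readings) shows how each node can merge its own reading with partial (sum, count) aggregates from its children, so that only one small pair, rather than every raw reading, travels up each link of the routing tree toward the base station:

```python
def local_aggregate(reading, child_partials):
    """Combine a node's own reading with its children's partial aggregates.

    Each partial is a (sum, count) pair; merging them is associative, so
    the aggregation can proceed level by level up the routing tree.
    """
    total, count = reading, 1
    for child_sum, child_count in child_partials:
        total += child_sum
        count += child_count
    return (total, count)

# Hypothetical routing tree: two leaf nodes -> one relay node -> the sink.
leaf_a = local_aggregate(21.0, [])               # (21.0, 1)
leaf_b = local_aggregate(23.0, [])               # (23.0, 1)
relay = local_aggregate(22.0, [leaf_a, leaf_b])  # (66.0, 3)
sink = local_aggregate(24.0, [relay])            # (90.0, 4)

mean_temperature = sink[0] / sink[1]             # network-wide average: 22.5
```

This is only a sketch of the general idea; real in-network query execution must additionally cope with node failures, duplicate delivery, and energy budgets, which are exactly the research questions pursued here.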
The key knowledge-advancement interests of the faculty include the following:
Dr. Kate Beard: geographic information systems, digital libraries, uncertainty in spatial data, information visualization, spatial and temporal analysis
Dr. Max Egenhofer: geographic database systems, spatial reasoning, formalizations of spatial relations; user interface design, spatial query languages
Dr. Nicholas Giudice: perception, cognitive neuroscience, human factors engineering, neurocognitive engineering, multimodal interaction and spatial cognition
Dr. Torsten Hahmann: spatial informatics, knowledge representation, artificial intelligence, logic, ontologies of space and time, modular and hierarchical ontologies
Dr. Silvia Nittel: spatial database systems, geosensor networks, data streaming, decentralized spatial computing
Dr. Harlan Onsrud: information system legal and ethical issues, combined technological and legal approaches in addressing access, security, privacy, and intellectual property issues
Dr. Nimesha Ranasinghe: multisensory interactive media, augmented reality, and human-computer interaction
See detailed information about the research interests of each of these faculty members at Faculty and Staff.
Following are short abstracts for current and past externally funded research projects:
Multimodally Encoded Spatial Images in Sighted and Blind
Information Integration and Human Interaction for Indoor and Outdoor Spaces
Cyber Enhancement of Spatial Cognition for the Visually Impaired
Sensor Science, Engineering and Informatics
Data Management for Ad-Hoc Geosensor Networks
Monitoring Dynamic Spatial Fields Using Responsive Geosensor Networks
Commons of Geographic Data
Program: National Eye Institute
Sponsor: National Institutes of Health (NIH)
Co-PIs: N.A. Giudice, J.M. Loomis (PI), and R.L. Klatzky
Humans can perceive surrounding space through vision, hearing, touch and sensed consequences of movement. As they perceive and interact with their environment, people establish, from perceptual processing, an ongoing representation of the physical layout of objects and locations. This representation remains at a perceptual level as long as it is supported by sensory stimulation. Perhaps surprisingly, when such stimulation ceases, as when the eyes close or a sound source is turned off, the perceptual representation also ceases; yet, people remain able to direct actions appropriately in space. This proposal investigates the representation of spatial layout that remains available in the absence of direct perceptual support, and hence serves to guide action. We call it the “spatial image”. We propose that spatial images are (1) fully three-dimensional and externalized, (2) capable of being formed in working memory, both from perception by way of multiple sensory modalities and from constructive spatial processes (imagination or retrieval from long-term memory), and (3) amodal in nature (not depending on the input source).
Our proposed research will further knowledge about spatial images produced by visual, haptic, and auditory input. Our research consists of theoretically-based experiments involving sighted and blind subjects. All of the experiments rely on logic to make inferences about internal processes and representations from observed behavior, such as verbal report, joystick manipulation, and more complex spatial actions, like reaching, pointing, and walking. Our experiments are grouped into 3 topics. The first topic deals with the development of spatial images through touch, both direct and when mediated by holding a tool, and whether spatial images can be mentally re-scaled at will. The second topic is the testing of the amodality hypothesis: that regardless of the sensory source, spatial images formed from different modalities function in the same way. The third topic is concerned with whether spatial images are equally precise in all directions around the head, in contrast to visual images which are thought to be of high precision only when located in front of the head.
Program: Information Integration and Informatics (Small)
Sponsor: National Science Foundation
PI: Worboys, Co-PI: Giudice
The goal of this research project is to provide a framework model that integrates existing models of indoor and outdoor space, and to use this model to develop an interactive platform for navigation in mixed indoor and outdoor spaces. The user should feel the transition between inside and outside to be seamless, in terms of the navigational support provided. The approach consists of integration of indoors and outdoors on several levels: conceptual models (ontologies), formal system designs, data models, and human interaction. At the conceptual level, the project draws on existing ontologies as well as examining the “affordances” that the space provides. For example, an outside pedestrian walkway affords the same function as an inside corridor. Formal models of place and connection are also used to precisely specify the design of the navigational support system. Behavioral experiments with human participants assess the validity of our framework for supporting human spatial learning and navigation in integrated indoor and outdoor environments. These experiments also enable the identification and extraction of the salient features of indoor and outdoor spaces for incorporation into the framework. Findings from the human studies will help validate the efficacy of our formal framework for supporting human spatial learning and navigation in such integrated environments. Results will be distributed using the project website www.spatial.maine.edu/IOspace. They will be incorporated into graduate level courses on human interaction with mobile devices, shared with public school teachers participating in the University of Maine’s NSF-funded RET (Research Experiences for Teachers), and provide linkages with the two companies and one research center associated with the project.
Sponsor: Maine National Guard
PI: Worboys, Co-PIs: Nittel, Beard, Abedi
The SenseME project is a collaboration between the University of Maine, Global Relief Technologies (GRT), and the Maine National Guard (MENG) whose goals are to provide *Critical Infrastructure Sensor Integration* and *Logistical Asset Tracking* during emergencies by using sensor technology to:
1. Work towards a common operating picture (COP) to develop real-time information on Maine’s critical infrastructure for civil and military emergency management officials and responders.
2. Manage Maine’s key logistical assets throughout their complete lifecycles to support MENG’s core mission to procure and move supplies and commodities throughout the state during emergencies.
It consists of two programs, Critical Infrastructure Sensor Integration and Logistical Asset Tracking.
Program: Cyber-Enabled Discovery and Innovation (CDI-Type II)
Sponsor: National Science Foundation (NSF)
Co-PIs: N.A. Giudice, K. Daniilidis (PI), R. Manduchi, and S. Roumeliotis
Indoor navigation poses significant challenges for blind and visually-impaired persons, as without vision, there is often no mechanism for accessing room numbers, building maps, and other navigation-critical environmental cues. Effective wayfinding requires successful execution of several related behaviors including orientation, spatial updating, and object and place recognition, all of which necessitate accurate assessment of the surrounding environment. Most research on wayfinding aids has focused on outdoor environments, as spatial behavior is readily supported by speech-enabled GPS-based navigation systems which provide access to information describing streets, addresses, and points of interest. By contrast, there is a dearth of technology for conveying such information to support environmental access and wayfinding behavior indoors. The limited technology that is available requires significant modifications to the building infrastructure and has limited functionality, barriers which have discouraged adoption. The lack of compensatory indoor navigation technology has led to dramatic reductions in independence and quality of life, and increased safety risks, for blind individuals, one of the fastest growing demographics of our aging population. Guide dogs and white canes are widely used for mobility and environmental sensing. However, while these tools are extremely effective for identifying obstacles in the path of travel, they do not provide information useful for staying oriented in the environment or building up a robust mental representation of the space (cognitive map). What is needed to solve the indoor navigation challenge is a device that conveys real-time information about the environment and supports wayfinding behavior and cognitive map development.
This proposal adopts a multi-faceted approach for solving the indoor navigation problem for people with limited vision. It leverages expertise from machine vision, robotics and blind spatial cognition, with behavioral studies on interface design to guide the discovery of information requirements and optimal delivery methods for an indoor navigation system. The proposed cyber assistant provides position, orientation, local geometry, and object identification via the use of appropriate sensors and sensor fusion algorithms. The combination of designing perceptually guided navigation algorithms, implemented on miniature-size commercially-available hardware, in a device which is motivated by theories and end-user needs of the target population, will lead to the development of a navigation assistant with the broadest impact to the widest range of potential users.
Program: Integrated Graduate Education and Research
Sponsor: National Science Foundation (NSF)
Co-PIs: Beard-Tisdale, Lad, Smith, Vetelino, Worboys
The University of Maine is a proven leader in integrating cutting-edge sensor science, engineering, and informatics research into high school, undergraduate, and graduate education. Faculty, students, and alumni are developing novel sensor applications for homeland security, healthcare, the environment, energy, agriculture, food safety, transportation, manufacturing, mapping, and other areas. As an indicator of their success, faculty and alumni continue to spin off numerous sensor-related companies. The Internet and information technologies, coupled with miniaturization techniques and new approaches to informatics, are propelling sensor technology to a threshold of major growth. As sensing becomes ubiquitous, there is a critical need to manage, integrate and analyze diverse and even conflicting sensor data streams. A challenging gulf persists between the raw and massive amounts of data produced by even a single sensor and the ability of collaborating sensors to generate integrated information that is useful for decision-making. This gulf occurs in part when the researchers developing the sensors and sensor systems are unaware of the needs of those who must analyze and respond to the data. Similarly, those involved on the informatics side are often unfamiliar with the potential and challenges of new sensor materials, devices, and platforms. New approaches to graduate education are needed to bridge this gulf.
The primary goal of this IGERT program is to train Ph.D. scientists and engineers in the multidisciplinary area of sensor systems, ranging from the design and networking of sensors to the interpretation of complex sensor data. IGERT fellows will develop a systems view that embraces sensor science, engineering, and informatics and emphasizes a high degree of professionalism, marked by leadership skills, the ability to contribute effectively within an interdisciplinary team, and appreciation for the complex social and ethical ramifications of ubiquitous sensing. IGERT fellows will complete an innovative sequence of courses, theses, and other activities to develop knowledge and skills in (i) sensor materials and devices, (ii) sensor signal conditioning and networks, and (iii) the integration and transformation of raw sensor data streams into knowledge.
Twenty faculty members from 5 departments (Spatial Information Science and Engineering, Electrical and Computer Engineering, Chemical and Biological Engineering, Chemistry, and Physics) will train 20 IGERT fellows over 5 years. While collaborations exist among researchers studying new materials and sensing modalities, collaboration with researchers in signal conditioning, networking, and informatics is far less common even within the same domain, e.g., homeland security. This IGERT program will connect the detection and meaningful interpretation of a "sensed" event. Innovative aspects include: (i) highly interdisciplinary summer symposia with internationally-recognized scholars, policy-makers, and industry leaders, (ii) a new, interdisciplinary certificate program that includes a new team-building design course and entrepreneurship courses, (iii) interdisciplinary research experiences that pair fellows with at least two advisors representing diverse aspects of sensor science and engineering (e.g., materials characterization and data fusion), and (iv) extensive interaction with high school students and teachers. An external advisory board will provide program oversight.
The intellectual merit of this IGERT program rests in its novel emphasis on sensing at scales from nano to global and its innovative approach to cross-training graduate students on integrated sensing systems from event detection to spatio-temporal information analysis including the social issues relating to sensor systems.
Broader impacts include integration of IGERT activities with UMaine’s NSF-funded GK-12: Sensors! and RET Site: Sensors! (affecting at least 40 high schools, many of which are located in poor and isolated regions); entrepreneurship training that better prepares Ph.D. students to meet the sensor-related needs of industry; incorporation of ethical and public policy dimension of sensing; increased research collaborations among some of the University’s major research units including the Laboratory for Surface Science and Technology and the National Center for Geographic Information and Analysis.
Program: Faculty Early Career Development (CAREER)
Sponsor: National Science Foundation
PI: Silvia Nittel
This project explores data management methods for geosensor networks, i.e. large collections of very small, battery-driven sensor nodes deployed in the geographic environment that measure the temporal and spatial variations of physical quantities such as temperature or ozone levels. An important task of such geosensor networks is to collect, analyze and estimate information about continuous phenomena under observation such as a toxic cloud close to a chemical plant in real-time and in an energy-efficient way. The main thrust of this project is the integration of spatial data analysis techniques with in-network data query execution in sensor networks. The project investigates novel algorithms such as incremental, in-network kriging that redefines a traditional, highly computationally-intensive spatial data estimation method for a distributed, collaborative and incremental processing between tiny, energy and bandwidth constrained sensor nodes. This work includes the modeling of location and sensing characteristics of sensor devices with regard to observed phenomena, the support of temporal-spatial estimation queries, and a focus on in-network data aggregation algorithms for complex spatial estimation queries. Combining high-level data query interfaces with advanced spatial analysis methods will allow domain scientists to use sensor networks effectively in environmental observation.
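As a rough illustration of the kind of distributed spatial estimation this project studies, the sketch below uses simple inverse-distance weighting as a stand-in for the project's far more sophisticated incremental, in-network kriging (kriging additionally models spatial covariance between samples); the sensor locations and readings are hypothetical:

```python
def idw_estimate(samples, qx, qy, power=2.0):
    """Inverse-distance-weighted estimate of a field value at (qx, qy)
    from (x, y, value) sensor samples. A simplified stand-in for kriging,
    which would also exploit the spatial covariance structure of the field.
    """
    numerator = denominator = 0.0
    for x, y, value in samples:
        dist_sq = (x - qx) ** 2 + (y - qy) ** 2
        if dist_sq == 0.0:
            return value  # query point coincides with a sensor node
        weight = 1.0 / dist_sq ** (power / 2.0)
        numerator += weight * value
        denominator += weight
    return numerator / denominator

# Hypothetical ozone readings from four sensor nodes at unit-square corners.
samples = [(0, 0, 10.0), (1, 0, 12.0), (0, 1, 14.0), (1, 1, 16.0)]
estimate = idw_estimate(samples, 0.5, 0.5)  # equidistant point: plain mean, 13.0
```

In the actual research, such an estimate would be computed incrementally and collaboratively among energy- and bandwidth-constrained nodes rather than at a central site.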
Co-PIs: Worboys, Nittel
Advances in hardware and systems software provide the capability for large numbers of small, low-cost MEMS devices with sensing, computing, and wireless communication capabilities. Environmental monitoring is one of the major application areas for geosensor networks, sensor networks embedded in geographic regions. The goal is the observation, monitoring, and analysis of environmental phenomena such as wildfires, flooding, and detection and tracking of toxic spills. Issues such as energy and communication constraints have up to now prevented the full potential of such networks from being realized. This proposal is concerned with the responsiveness of such geosensor networks to changes in dynamic spatial fields. Imagine a geosensor network detecting levels of carbon dioxide pollution at various places in a region of the Earth. Because of energy and communication constraints, only a small proportion of the sensor nodes can be active at any one time. As CO2 levels change (perhaps the region of dangerously high levels is splitting into two connected parts), the sensor network needs to be responsive to the change, for example, reconfiguring itself by activating/deactivating sensor nodes to capture the detail of the CO2 field where it is most needed (i.e., where the split is taking place). The proposal directly addresses the collaborative-system topic of problem solving in highly distributed, sensor-based information networks by providing a framework for optimizing and updating information flow between sensor nodes.
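The reconfiguration idea can be sketched in miniature. In the hypothetical scoring rule below (the project's actual reconfiguration strategy is the research question, not this), each node is scored by how much its reading deviates from its neighbors' average, a crude proxy for "the field is changing here", and only the highest-scoring nodes within the energy budget stay active:

```python
def choose_active(nodes, readings, neighbors, budget):
    """Pick which sensor nodes to keep awake, given an energy budget.

    Scores each node by the absolute difference between its own reading
    and the average reading of its neighbors, then activates the top
    `budget` nodes, concentrating sensing where the field varies most.
    """
    def score(node):
        nbrs = neighbors[node]
        if not nbrs:
            return 0.0
        neighborhood_avg = sum(readings[n] for n in nbrs) / len(nbrs)
        return abs(readings[node] - neighborhood_avg)

    return sorted(nodes, key=score, reverse=True)[:budget]

# Hypothetical CO2 readings (ppm) on a four-node chain a - b - c - d.
nodes = ["a", "b", "c", "d"]
readings = {"a": 400, "b": 405, "c": 700, "d": 410}
neighbors = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}

active = choose_active(nodes, readings, neighbors, budget=2)
# Nodes "c" and "d" straddle the sharp jump in the field, so they stay awake.
```

A real responsive geosensor network would make this decision in a decentralized way, with each node seeing only local messages, rather than with the global view assumed here.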
Program: Partnership for a Nation of Learners: Community Collaboration Grants for Museums, Libraries and Public Broadcasters
Sponsor: Institute of Museum and Library Services (IMLS)
Co-PIs: Lutz, Nittel, Onsrud
Data is the lifeblood of science. Researchers need the widest possible access to data from all sources to explore, experiment, test, and evaluate theories; and, ultimately, to increase understanding of our world. Data access, especially with the advent of the digital age, is crucial to the future development of science and society.
Researchers across a spectrum of disciplines—spanning infectious disease tracking, emergency services, cartography, digital communication, and many other fields—depend on locationally-referenced data from many sources to advance human knowledge. Those sources currently include national, state, and some local governments, as well as large commercial companies. Fortunately, there are many initiatives underway to make this government and commercial information accessible to present and future generations. However, none of these initiatives addresses access to a significant body of “invisible” non-textual geographic data that exists on the local hard drives of individual researchers, schools, non-profit groups, private associations, small companies, and other non-governmental organizations.
At the University of Maine, Fogler Library, in partnership with faculty from the Department of Spatial Information Science and Engineering, is proposing the creation of Commons of Geographic Data (CGD) System software. This project will research and design an open source software system that will provide libraries and their users, as well as creators of locally generated non-textual geographic data, with (1) reliable knowledge of intellectual property ownership and use conditions for the data; (2) easy generation of standards-based metadata; (3) reliable data provenance in a digital environment across many possible re-uses; and (4) peer-generated indicators of quality and suitability for use of the data. The CGD System, using open-source and open-access technology, will enable libraries to remove technical and legal barriers facing individual researchers, local government agencies, nonprofit organizations, field scientists, and individual citizens who wish to contribute, access, and use locally generated non-textual geographic information.
Through libraries, this project will help free up currently unavailable “invisible” information generated by non-federal local sources, and make that data available to the widest possible range of potential users. The CGD System will create an infrastructure that can be used by anyone who wishes to contribute geographic data but does not wish to become a specialist in geographic metadata generation or intellectual property management. The CGD System will build upon existing efforts in disparate domains and will create a new, integrated, easy-to-use software-based process that will enable libraries to collect, organize, and make accessible non-textual geographic data generated by non-specialists in a simple, intuitive, automated “one-stop” manner. No system exists at present that simultaneously addresses all of these critical library concerns.
The Commons of Geographic Data System creates a tool that will enable libraries to efficiently collect, organize, and make readily available currently “invisible” geographic non-textual data, both today and in the Semantic Web environment of the future.