Synchronized Multi-sensor Integrated Learning Environment (SMILE)

Advances in commercial off-the-shelf (COTS) autonomous vehicles, especially unmanned aerial systems (UAS), and in convolutional neural networks (CNNs) supported by energy-self-aware computing enable a major change in how reconnaissance, search-and-rescue, emergency first response, and similar challenges can be addressed. Prior work has demonstrated real-time target identification with a single COTS UAS, using either an air-to-ground high-definition video link coupled to a ground-based CNN recognizer or an on-board, GPU-accelerated CNN recognizer.

The vulnerabilities of broadband air-to-ground communication (inadequate network coverage and explicit jamming attacks) and the limits of single-craft deployment motivate further exploration of robust systems in which a single craft can use a pre-trained CNN for autonomous search, incrementally augment the model in flight, enroll other UAS into a cluster by sharing the (base or incrementally improved) CNN model, enhance sensing capabilities through multi-UAS time-coordinated sensing, and robustly relay the target position (a low-bandwidth operation) to a remote station. With such a deployment, even when long-range communication is denied or hampered, the physical proximity of cluster members can be exploited for high-bandwidth, low-RF-footprint communication, both for CNN transfer and for cooperative sensing.
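To illustrate why relaying a target position is a low-bandwidth operation, the sketch below packs a target fix into a few bytes. The field layout, scaling factors, and function names are illustrative assumptions, not a SMILE protocol specification:

```python
import struct

# Hypothetical compact target report (not a SMILE-defined format):
# lat/lon as signed 1e-7-degree integers (~1 cm resolution), altitude
# in decimeters, confidence as one byte, and a 32-bit epoch timestamp.
REPORT_FMT = "<iiHBI"  # little-endian, 4 + 4 + 2 + 1 + 4 = 15 bytes

def pack_target_report(lat_deg, lon_deg, alt_m, confidence, t_s):
    """Serialize a target fix into a 15-byte message."""
    return struct.pack(REPORT_FMT,
                       int(round(lat_deg * 1e7)),
                       int(round(lon_deg * 1e7)),
                       int(round(alt_m * 10)),
                       int(round(confidence * 255)),
                       t_s)

def unpack_target_report(buf):
    """Recover (lat, lon, alt, confidence, timestamp) from a report."""
    lat, lon, alt, conf, t = struct.unpack(REPORT_FMT, buf)
    return lat / 1e7, lon / 1e7, alt / 10, conf / 255, t

msg = pack_target_report(40.4433, -79.9436, 280.0, 0.92, 1700000000)
print(len(msg))  # 15 bytes per target fix
```

Even with per-message framing overhead, a report of this size fits comfortably in the kind of low-rate, long-range link that survives when broadband video links do not.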

This project aims to create, characterize, and develop a comprehensive framework in which clusters of COTS UAS, equipped with high-performance, low-power processors for real-time sensory data processing, can be readily programmed, deployed, and trained in flight for semi-autonomous, robust, time-coordinated operation. We call this vision the Synchronized Multi-sensor Integrated Learning Environment (SMILE). Pursuing this vision necessitates exploration of multiple inter-related research topics, including (a) mechanisms for achieving dynamic enrollment of UAS into a cluster through secure and efficient transfer learning, (b) techniques for extending a CNN in flight using incrementally gained knowledge about the target of interest, including the possibility of coordinated sensing by multiple UAS (e.g., gathering data simultaneously from differing points of observation), (c) establishing timing synchronization within clusters for the purpose of coordinated sensing, and (d) understanding the tradeoffs of local vs. remote computing, both within the cluster and with ground stations.
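For research topic (c), one standard building block for intra-cluster timing synchronization is the two-way time-transfer exchange used by NTP: four timestamps from a request/reply round trip yield the clock offset and link delay. The sketch below shows only that classic offset formula under a symmetric-link assumption; it is not the project's actual synchronization mechanism, and the numbers are illustrative:

```python
def estimate_offset(t1, t2, t3, t4):
    """NTP-style two-way time transfer between two craft.

    t1: request sent (craft A's clock)
    t2: request received (craft B's clock)
    t3: reply sent (craft B's clock)
    t4: reply received (craft A's clock)
    Returns (offset of B's clock relative to A's, round-trip delay),
    assuming equal one-way delays in each direction.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Illustrative exchange: B's clock runs 5 ms ahead, one-way delay 2 ms.
t1 = 100.000
t2 = 100.007   # t1 + 2 ms flight + 5 ms offset
t3 = 100.008   # B holds the message for 1 ms
t4 = 100.005   # t3 - 5 ms offset + 2 ms flight
offset, delay = estimate_offset(t1, t2, t3, t4)
print(offset, delay)  # roughly 5 ms offset, 4 ms round-trip delay
```

The short, high-bandwidth intra-cluster links described above make such exchanges cheap, so cluster members could repeat them frequently to keep sensor timestamps aligned for coordinated sensing.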