ARS AI Innovation Fund - FY2021 Awards

The AI CoE funded four AI Innovation Fund proposals in FY2021. The program was highly competitive: nearly 60 proposals were submitted, far more than we could support. Information about the funded projects is provided below.

Funded proposals

Explainable Deep Learning-Based Image Analysis With Blackbird RGB Imaging Robot For Laboratory High Throughput Phenotyping

  • PI and Co-PIs: Lance Cadle-Davidson, Yu Jiang
  • Amount of award: $100,000.00
  • Abstract: We developed the Blackbird computer vision platform, which we will commercialize in 2021. Blackbird images 200 samples per hour at 1-micron resolution for deep learning-based analysis of foliar disease severity, with better accuracy than manual microscopy. To broaden its impact to other foliar traits (e.g., trichomes, stomata), tissues (e.g., roots, flowers, seeds, small fruits), and organisms (e.g., insects, nematodes, fish embryos), we are adapting different lenses, sensors, and grids for imaging. Here we propose to develop a documented, containerized platform for model development, deployment, and image analysis on SCINet. To test its implementation in ARS labs, this project will provide two Blackbird robots to labs selected by NP301 in consultation with Breeding Insight and will guide those labs through imaging and data analysis. In the process, we will develop training materials to facilitate widespread deployment of Blackbird in ARS and university labs as a uniform phenotyping platform.
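
To illustrate the kind of scoring step Blackbird automates, the sketch below runs a pre-trained segmentation model over a leaf-disc image, labels each pixel as healthy or diseased, and reports severity as the diseased fraction. This is an illustrative sketch only; the model file, image file, and two-class labeling are assumptions, not the project's actual code.

    # Minimal sketch of a deep learning severity-scoring step, assuming a
    # trained PyTorch segmentation model that labels each pixel as
    # healthy (0) or diseased (1). File names are hypothetical placeholders.
    import torch
    from torchvision import transforms
    from PIL import Image

    def disease_severity(model_path: str, image_path: str) -> float:
        """Return the fraction of image pixels classified as diseased."""
        model = torch.jit.load(model_path)      # pre-trained, containerized model
        model.eval()

        to_tensor = transforms.ToTensor()
        image = to_tensor(Image.open(image_path).convert("RGB")).unsqueeze(0)

        with torch.no_grad():
            logits = model(image)               # shape: (1, 2, H, W)
            labels = logits.argmax(dim=1)       # per-pixel class, 0 or 1

        return labels.float().mean().item()     # diseased fraction in [0, 1]

    if __name__ == "__main__":
        severity = disease_severity("blackbird_model.pt", "leaf_disc_001.png")
        print(f"Estimated disease severity: {severity:.1%}")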

Automated Detection Of Prairie Dog Colonies From Airborne Imagery Using Deep Learning

  • PI and Co-PIs: David Augustine, Justin Derner, Lauren Porensky
  • Amount of award: $83,790.00
  • Abstract: Black-tailed prairie dogs are a keystone species for rangeland ecosystem health in semi-arid environments, but their colonies and populations are susceptible to dramatic boom-and-bust cycles. Prairie dogs alter vegetation quantity through their grazing and burrowing activities, which creates competition with grazing livestock. As a result, public lands managers use control measures to prevent the expansion of colonies onto adjacent private properties, as well as to maintain adequate forage for grazing livestock. These management decisions require detailed information on how colony boundaries change over time, but such information is cost-prohibitive to collect on the ground due to the dynamic nature of colonies and the vast spatial extent across which they are found. This hinders the effectiveness of interventions and makes it difficult to communicate with ranchers, conservation groups, and other stakeholders about why and how decisions are being made. We propose to develop an algorithm based on deep learning, specifically a deep convolutional neural network (DCNN), to delineate prairie dog colony boundaries from high spatial resolution airborne imagery. Since DCNNs can detect high-level features in imagery based on multi-scale patterns and morphology, we hypothesize that an algorithm can be developed to detect the distinct spatial patterns created by prairie dog colonies through burrowing and defoliation. In addition to optimizing the DCNN architecture itself, we will also identify which input layers are needed and evaluate how performance changes with coarsening spatial resolution, to identify appropriate sensors and/or flight altitudes that maximize the efficiency of future monitoring.
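
For readers unfamiliar with the approach, the sketch below shows the general shape of an encoder-decoder DCNN (a small U-Net-style network) that labels each pixel of an aerial-image tile as colony or background. The input bands, layer sizes, and class count are illustrative assumptions, not the architecture the project will ultimately optimize.

    # Minimal sketch of an encoder-decoder DCNN for per-pixel colony mapping.
    # Layer sizes and band count (e.g., RGB + near-infrared) are illustrative.
    import torch
    import torch.nn as nn

    def conv_block(c_in, c_out):
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    class TinyUNet(nn.Module):
        def __init__(self, in_channels=4, n_classes=2):
            super().__init__()
            self.enc1 = conv_block(in_channels, 32)
            self.enc2 = conv_block(32, 64)
            self.pool = nn.MaxPool2d(2)
            self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
            self.dec1 = conv_block(64, 32)          # 32 skip + 32 upsampled
            self.head = nn.Conv2d(32, n_classes, kernel_size=1)

        def forward(self, x):
            e1 = self.enc1(x)                       # full-resolution features
            e2 = self.enc2(self.pool(e1))           # half-resolution features
            d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
            return self.head(d1)                    # per-pixel class logits

    if __name__ == "__main__":
        net = TinyUNet()
        tile = torch.randn(1, 4, 256, 256)          # one 256x256 image tile
        print(net(tile).shape)                      # torch.Size([1, 2, 256, 256])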

Deep Learning-Based 3D Fruit-Tree Perception through Efficient Multi-Sensor Fusion for Robotic Harvesting of Apples

  • PI and Co-PIs: Renfu Lu, Zhaojian Li
  • Amount of award: $99,240.25
  • Abstract: Harvest automation is critically needed to address labor shortages and rising labor costs and to improve worker safety and the profitability and sustainability of the U.S. apple industry. This project is intended to develop a unified, efficient perception system that provides 3D fruit and tree branch information to support real-time robotic planning and control. Building on our preliminary work on a single-sensor perception system, we will evaluate multi-sensor perception systems to substantially improve apple detection and localization performance by extending the 3D detection and localization capability to tree trunks and branches. To that end, we propose two sensing paradigms: 1) multiple RGB-D cameras at different viewing angles; and 2) camera-LiDAR fusion. We will investigate novel pre-fusion and post-fusion strategies to systematically combine these sensors for enhanced fruit and tree branch detection and localization. We will develop algorithms for efficient dynamic moving-window perception to support real-time robotic operation on a resource-limited platform. Extensive evaluations of the new multi-sensor systems will be performed in both artificial and real orchards under different lighting and canopy conditions. This research is expected to provide comprehensive orchard databases with multi-modal sensors and efficient deep learning-based fruit-tree perception algorithms for robotic harvesting of apples.
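
As a rough illustration of the camera-LiDAR post-fusion the abstract describes, the sketch below projects LiDAR points into the camera image so that a 2D fruit detection can be assigned a 3D depth. The intrinsics, extrinsics, and bounding box here are toy placeholders; a real system would use calibrated values and a learned detector.

    # Minimal sketch of camera-LiDAR post-fusion: project LiDAR returns into
    # the image and take the median depth inside a detected fruit's bounding
    # box. Calibration matrices below are toy values, not real calibrations.
    import numpy as np

    def project_lidar_to_image(points_xyz, K, T_cam_lidar):
        """Project Nx3 LiDAR points to pixels; keep points in front of the camera."""
        homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])  # Nx4
        cam = (T_cam_lidar @ homo.T).T[:, :3]       # LiDAR frame -> camera frame
        in_front = cam[:, 2] > 0.1                  # drop points behind the lens
        cam = cam[in_front]
        pix = (K @ cam.T).T
        pix = pix[:, :2] / pix[:, 2:3]              # perspective divide
        return pix, cam[:, 2]                       # (u, v) pixels, depth in meters

    def fruit_depth(bbox, pix, depth):
        """Median depth of LiDAR returns inside a 2D detection's bounding box."""
        u0, v0, u1, v1 = bbox
        inside = (pix[:, 0] >= u0) & (pix[:, 0] <= u1) & \
                 (pix[:, 1] >= v0) & (pix[:, 1] <= v1)
        return float(np.median(depth[inside])) if inside.any() else None

    if __name__ == "__main__":
        K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])  # toy intrinsics
        T = np.eye(4)                                                # toy extrinsics
        pts = np.random.uniform([-1, -1, 1], [1, 1, 3], size=(500, 3))
        pix, depth = project_lidar_to_image(pts, K, T)
        print(fruit_depth((300, 220, 340, 260), pix, depth))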

Adapting Deep Learning For Three-Dimensional Mapping Of Soil Carbon

  • PI and Co-PIs: Kristen Veum, Curtis Ransom, Ken Sudduth, Newell Kitchen
  • Amount of award: $86,345.00
  • Abstract: Agricultural lands can be a sink for carbon and play an important role in helping the United States become carbon neutral. Current methods of measuring carbon sequestration, through repeated soil sampling over time, are costly and laborious. A promising alternative is visible and near-infrared (VNIR) diffuse reflectance spectroscopy. However, VNIR data are complex, requiring several data processing steps that often yield inconsistent results, especially with in situ VNIR measurements. A convolutional neural network (CNN) could bypass these steps and incorporate measurements from multiple sensors to predict three-dimensional carbon stocks. A CNN modeling framework will be developed to predict soil carbon by incorporating information from profile VNIR, apparent electrical conductivity (ECa), penetration resistance measurements, and spatial covariates (e.g., mobile-sensor ECa and topography). Improvements over traditional modeling methods will be reported. Additional objectives include evaluating the optimal spatial density of sensor measurements needed for the CNN model to estimate soil carbon. These models will be used to develop three-dimensional (down to 1 m) estimates of soil carbon, quantifying carbon sequestration over five years in fields with contrasting management histories.
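
As a rough illustration of the modeling framework the abstract describes, the sketch below defines a 1D CNN that reads a raw VNIR reflectance spectrum and fuses it with auxiliary sensor covariates before a regression head. Band counts, covariate choices, and layer sizes are illustrative assumptions, not the project's design.

    # Minimal sketch of a 1D CNN for soil carbon regression: convolutional
    # layers extract features from raw VNIR spectra (in place of manual
    # preprocessing), then concatenate sensor covariates before regression.
    # Band count (2151) and covariates are illustrative assumptions.
    import torch
    import torch.nn as nn

    class SoilCarbonCNN(nn.Module):
        def __init__(self, n_bands=2151, n_covariates=4):
            super().__init__()
            self.spectral = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7),
                nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=7),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),          # -> one 32-dim spectral summary
            )
            self.head = nn.Sequential(            # fuse spectra with covariates
                nn.Linear(32 + n_covariates, 64),
                nn.ReLU(),
                nn.Linear(64, 1),                 # predicted soil carbon at one depth
            )

        def forward(self, spectrum, covariates):
            features = self.spectral(spectrum).squeeze(-1)   # (batch, 32)
            return self.head(torch.cat([features, covariates], dim=1))

    if __name__ == "__main__":
        model = SoilCarbonCNN()
        spectra = torch.randn(8, 1, 2151)   # batch of 8 VNIR spectra
        covs = torch.randn(8, 4)            # e.g., ECa, resistance, slope, elevation
        print(model(spectra, covs).shape)   # torch.Size([8, 1])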