Visual Data Analytics and Robotics for Automated Construction Applications

Date/Time
Date(s) - 03/16/2020
2:00 PM - 3:00 PM

Location
Join Zoom Meeting https://ufl.zoom.us/j/7857875658 Meeting ID: 785 787 5658

In the past decade, the construction industry has struggled to improve its productivity, while the manufacturing industry has seen dramatic gains through automation. Moreover, in recent years the US construction industry has suffered from a labor shortage. Automation and robotics can potentially revolutionize the construction industry and address these problems. To advance theory and practice, the overall goal of this research is to increase the degree of automation in construction applications. The three research objectives are the development of (1) an integrated mobile robotic system consisting of autonomous robotic agents that collect and analyze data with minimal human input, (2) a computer vision technique that automatically registers as-built video sequences (i.e., series of image frames) with as-planned Building Information Models (BIM) in real time, facilitating as-built versus as-planned comparison, and (3) a robotic worker for various construction applications.

The first study presents an autonomous data collection system that integrates an unmanned ground vehicle (UGV) with an unmanned aerial vehicle (UAV). This system has the potential to be used in several construction applications, such as site surveying, progress and safety monitoring, and structural health monitoring. The UGV autonomously navigates through space using its onboard sensors, while the UAV acts as an external eye for the UGV, observing the scene from vantage points inaccessible to the UGV. The key aspects of this system are simultaneous localization of both the UAV and the UGV, mapping of the surrounding environment, and efficient path planning using multiple sensors. The large volume of visual data collected in the first study is used in the second study, which enables as-built versus as-planned comparison through real-time registration of video sequences with BIM.
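The UGV path-planning component of the first study can be illustrated with a minimal grid-based A* search. This is purely an illustration of the general idea, not the speaker's system: a real planner would operate on a metric map built by SLAM and account for the robot's footprint and kinematics.

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 2D occupancy grid (0 = free, 1 = obstacle).

    Illustrative sketch of UGV path planning; `start` and `goal` are
    (row, col) tuples. Returns a list of cells or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance: admissible for 4-connected moves
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]  # (f = g + h, g, node, path)
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                if nxt not in seen:
                    heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no collision-free path exists

# Hypothetical map: a wall forces a detour around column 2.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

The returned path visits only free cells and has minimal length; the same search generalizes directly to occupancy grids produced by a mapping pipeline.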
The proposed method recovers the camera poses of the image frames in the 3D model coordinate system by performing augmented monocular Simultaneous Localization and Mapping (SLAM) together with perspective detection and matching between the image frames and their corresponding BIM views. Given this visual representation of the as-built model aligned with BIM, the presented method can potentially fully automate the progress-inference approaches of past studies.

The third objective proposes a mobile robotic system that integrates perception of the scene with grasp and control planning. To achieve this, a UGV equipped with two stereo cameras and a robotic arm reaches objects of interest using a global-to-local control planning strategy. Then, using a scene segmentation pipeline, objects are detected, picked up, and placed in a predetermined location. This dynamic object manipulation system has the potential to be used in various construction applications, such as material handling and site cleaning.
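One way to see why registering a monocular SLAM trajectory to a BIM is non-trivial: monocular SLAM recovers poses only up to an unknown scale, in an arbitrary frame, so registration amounts to estimating a similarity transform (scale, rotation, translation). The closed-form Umeyama-style sketch below recovers that transform from a few corresponding points; it is a simplified illustration under assumed correspondences, not the method presented in the talk.

```python
import numpy as np

def align_similarity(src, dst):
    """Estimate s, R, t such that dst_i ≈ s * R @ src_i + t (least squares).

    Closed-form (Umeyama-style) similarity alignment. Illustrative:
    `src` could be points in an arbitrarily scaled SLAM frame and `dst`
    the same points in model (BIM) coordinates.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    H = src_c.T @ dst_c                 # 3x3 cross-covariance (times n)
    U, S, Vt = np.linalg.svd(H)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
    R = Vt.T @ D @ U.T                  # best-fit rotation
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()  # best-fit scale
    t = mu_d - s * R @ mu_s             # best-fit translation
    return s, R, t

# Hypothetical check: a known similarity transform is recovered exactly.
src = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
R_true = np.array([[0., -1, 0], [1, 0, 0], [0, 0, 1]])  # 90 deg about z
dst = 2.0 * src @ R_true.T + np.array([1., 2, 3])
s, R, t = align_similarity(src, dst)
```

Once s, R, and t are known, every camera pose from the SLAM trajectory can be mapped into the model frame, which is the prerequisite for comparing each image frame against its corresponding BIM view.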