Philippe Wyder

Visual Design Intuition

predicting dynamic properties of beams from raw cross-section images


Can machines mimic visual intuition?


An artistic representation of a humanoid robot contemplating the design of a bridge, generated using DALL·E 2

In this work, we aim to mimic the human ability to acquire the intuition to estimate the performance of a design from visual inspection and experience alone. We study the ability of convolutional neural networks to predict static and dynamic properties of cantilever beams directly from their raw cross-section images. Using pixels as the only input, the resulting models learn to predict beam properties such as volume, maximum deflection, and eigenfrequencies with 4.54% and 1.43% mean absolute percentage error, respectively, compared with the finite-element analysis (FEA) approach. Training these models does not require prior knowledge of theory or relevant geometric properties; it relies solely on simulated or empirical data, thereby making predictions based on ‘experience’ rather than theoretical knowledge. Since this approach is over 1000 times faster than FEA, it can be used to create surrogate models that speed up preliminary optimization studies in which numerous consecutive evaluations of similar geometries are required. We suggest that this modeling approach could aid in addressing challenging optimization problems involving complex structures and physical phenomena for which theoretical models are unavailable.
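The paper's exact architecture is not reproduced here, but the minimal PyTorch sketch below illustrates the idea: a small convolutional network that maps a single-channel cross-section image to a vector of property predictions, trained against FEA-generated labels. The layer sizes, image resolution, and number of output properties are illustrative assumptions, not the published configuration.

import torch
import torch.nn as nn

class BeamPropertyCNN(nn.Module):
    # Regresses beam properties (e.g. volume, maximum deflection,
    # eigenfrequencies) from a raw cross-section image. All layer
    # sizes here are illustrative, not the published architecture.
    def __init__(self, n_properties=3, image_size=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 16 -> 8
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * (image_size // 8) ** 2, 256), nn.ReLU(),
            nn.Linear(256, n_properties),  # one output per property
        )

    def forward(self, x):
        return self.regressor(self.features(x))

# Train against FEA-generated labels with a plain regression loss.
model = BeamPropertyCNN()
images = torch.rand(8, 1, 64, 64)  # batch of cross-section images
targets = torch.rand(8, 3)         # FEA-computed property labels
loss = nn.functional.mse_loss(model(images), targets)
loss.backward()

Once trained, a forward pass through such a network replaces a full FEA run, which is what makes the surrogate-model speedup possible.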

Wyder PM, Lipson H. Visual design intuition: predicting dynamic properties of beams from raw cross-section images. Journal of The Royal Society Interface. 2021;18(184):20210571. doi:10.1098/rsif.2021.0571

The Project Venom Logo

This project was conducted under the name Project Venom, using the logo above.

Autonomous Drone Hunter

onboard detection and tracking for GPS-denied environments

Project Venom was a research project at Columbia’s Creative Machines Lab from fall 2016 to summer 2018. Its mission is summarized below. I joined the team in the first year as an undergraduate research assistant and was promoted to lead the project from fall 2017 until its conclusion. The final project team consisted of eight undergraduate and master's-level research assistants from computer science, mechanical engineering, and electrical engineering. By May 2018, we had developed a flying hunter-drone prototype that tracked target drones using Tiny-YOLO on a Jetson TX2, computed the relative target location from the resulting image coordinates, and sent the desired next pose via MAVROS to a PX4-based flight controller. Furthermore, we completed a proof of concept of an end-to-end AI-Pilot, trained and tested in our custom simulator, that sends stick inputs to the flight controller to hunt and take down another drone. A detailed report is available in the publication below.
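To illustrate the last step of that pipeline, here is a minimal Python/rospy sketch that back-projects a detection's pixel center and stereo depth into a camera-frame offset and publishes a pose setpoint on the standard MAVROS topic /mavros/setpoint_position/local. The camera intrinsics, the crude camera-to-world mapping, and all function names are assumptions for illustration, not the project's actual code.

import rospy
from geometry_msgs.msg import PoseStamped

# Assumed pinhole intrinsics of the onboard camera (not a real calibration)
FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0

def target_offset(u, v, depth):
    # Back-project the detection's pixel center (u, v) and stereo depth
    # into a camera-frame offset: x right, y down, z forward (meters).
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return x, y, depth

def publish_pursuit_pose(pub, u, v, depth, own_pose):
    # Command a pose toward the detected target. The camera-to-world
    # mapping below assumes the vehicle faces along +x; a real system
    # would apply proper TF transforms instead.
    dx, dy, dz = target_offset(u, v, depth)
    sp = PoseStamped()
    sp.header.stamp = rospy.Time.now()
    sp.header.frame_id = 'map'
    sp.pose.position.x = own_pose.pose.position.x + dz  # forward
    sp.pose.position.y = own_pose.pose.position.y - dx  # left
    sp.pose.position.z = own_pose.pose.position.z - dy  # up
    sp.pose.orientation.w = 1.0
    pub.publish(sp)

if __name__ == '__main__':
    rospy.init_node('hunter_setpoints')
    pub = rospy.Publisher('/mavros/setpoint_position/local',
                          PoseStamped, queue_size=1)

Publishing position setpoints like this leaves the low-level stabilization to the PX4 flight controller, which is why the onboard module only has to produce a desired next pose.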

As small unmanned aerial vehicles (UAVs), also known as drones, move from the hobbyist market into the mainstream, reports of misuse, reckless flying, and drone use for malicious purposes are becoming more frequent. Additionally, the fast-paced progress in AI raises fears of a dangerous combination of AI and UAVs. To effectively regulate drones and protect people against UAV threats, a system is needed that can stop or even apprehend a rogue drone. Such a system would have to be robust enough to capture a drone whether it is piloted by a human or powered by an AI-Pilot. Project Venom was developing a UAV platform that detects, hunts, and takes down other small UAVs in GPS-denied environments, relying solely on onboard computation. The drone platform comprises a PX4-based flight controller, an RGB-D stereo camera, and an onboard computing module that uses pre-trained machine learning models to classify other drones within its sensor range, and to navigate to and take down a target drone once authority has been granted by a human operator. In parallel, we were developing an AI-Pilot that could autonomously hunt another drone; we believe that an AI-piloted drone, once it surpasses the skill level of a human pilot, can only be taken down by another AI-Pilot.

Publication: Wyder PM, Chen YS, Lasrado AJ, Pelles RJ, Kwiatkowski R, et al. Autonomous drone hunter operating by deep learning and all-onboard computations in GPS-denied environments. PLOS ONE. 2019;14(11):e0225092. doi:10.1371/journal.pone.0225092

Precision Phenotyping Using Low-Cost Drones & Deep Learning

My part of the presentation starts at 6:44.

During the summer of 2017, I used my UAV experience to design, build, and test a low-cost data collection platform for deep-learning-based phenotyping and disease detection. The project specifically targeted the detection of lesions caused by Northern Leaf Blight (NLB), a fungal disease of corn plants. Harvest losses caused by NLB were estimated at $1.9 billion in 2015 alone. In our work, we identified the specific qualities needed to build a successful data collection platform: to guarantee high disease-detection accuracy from our deep learning model, our autonomous drone platform had to collect blur-free image data with high pixel density. We presented our findings at the Columbia Technology Ventures symposium (see video above).
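One standard way to screen captured frames for blur is the variance of the Laplacian, shown in the short OpenCV sketch below. The threshold value is an assumption to be tuned per camera and flight altitude, and this heuristic is an illustration rather than the specific check our platform used.

import cv2

BLUR_THRESHOLD = 100.0  # assumed cutoff; tune per camera and altitude

def is_sharp(image_path, threshold=BLUR_THRESHOLD):
    # Variance of the Laplacian: low values indicate a blurry frame.
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= threshold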