We introduce Tree-D Fusion, the first collection of 600,000 environmentally aware, simulation-ready 3D tree models generated with diffusion priors. Each reconstructed 3D tree model corresponds to an image from Google's Auto Arborist Dataset, which comprises street-view images of trees across North America together with genus labels. Our method distills the scores of two tree-adapted diffusion models, using text prompts that specify the tree genus to guide shape reconstruction. The result is a 3D tree envelope filled with point markers, from which the tree's branching structure is estimated by a space colonization algorithm conditioned on the specified genus.
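Concretely, distilling the scores of two priors can be viewed as minimizing a weighted sum of two score distillation sampling (SDS) losses, one per diffusion model. The sketch below uses the standard SDS gradient from DreamFusion; the weights λ, symbol names, and exact conditioning are illustrative assumptions rather than the verbatim loss:

\mathcal{L}(\theta) = \lambda_{\mathrm{SD}}\,\mathcal{L}_{\mathrm{SDS}}\big(\phi_{\mathrm{SD+LoRA}},\, \tau(\theta);\, y_{\mathrm{genus}}\big) + \lambda_{\mathrm{3D}}\,\mathcal{L}_{\mathrm{SDS}}\big(\phi_{\mathrm{Zero123}},\, \tau(\theta);\, x_{\mathrm{ref}}\big),

\nabla_\theta \mathcal{L}_{\mathrm{SDS}} = \mathbb{E}_{t,\epsilon}\!\left[\, w(t)\,\big(\hat{\epsilon}_\phi(z_t;\, y,\, t) - \epsilon\big)\,\tfrac{\partial z_t}{\partial \theta} \,\right],

where z_t is a noised rendering of τ(θ), y is the conditioning signal (the genus text prompt for Stable Diffusion; the reference image and relative camera pose for Zero123), ε is the injected Gaussian noise, and w(t) is a timestep-dependent weight.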
The input to Tree-D Fusion is an RGB image of a tree together with its genus. To perform shape reconstruction, we minimize a loss function with respect to the NeRF parameters θ. The loss is constructed from two diffusion models, Stable Diffusion with LoRA and Zero123, trained on real tree images and on synthetic 3D tree models, respectively. The output is an optimized NeRF τ(θ∗), a detailed 3D tree envelope. We then populate the volume of τ(θ∗) with point markers derived from the envelope and reconstruct the branching structure with a genus-conditioned space colonization algorithm.
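To make the final step concrete, here is a minimal space colonization sketch in Python. The genus-conditioned parameter table (GENUS_PARAMS), its values, and the function interface are hypothetical stand-ins for the paper's genus conditioning; in practice the markers would be sampled inside the optimized envelope τ(θ∗):

import numpy as np

# Hypothetical genus-conditioned growth parameters (values are illustrative,
# not taken from the paper): influence radius, kill distance, step size.
GENUS_PARAMS = {
    "acer":    dict(influence=0.9, kill=0.20, step=0.12),
    "quercus": dict(influence=1.2, kill=0.30, step=0.18),
}

def space_colonization(markers, root, genus, iters=300):
    """Grow a branching skeleton toward attraction markers.

    markers: (M, 3) points sampled inside the tree envelope.
    root:    (3,) position of the trunk base.
    Returns node positions and each node's parent index (-1 for the root).
    """
    p = GENUS_PARAMS[genus]
    markers = np.asarray(markers, dtype=float)
    nodes, parents = [np.asarray(root, dtype=float)], [-1]
    for _ in range(iters):
        if len(markers) == 0:
            break
        node_arr = np.stack(nodes)
        # Distance from every marker to every node: shape (M, N).
        dist = np.linalg.norm(markers[:, None, :] - node_arr[None, :, :], axis=-1)
        nearest, dmin = dist.argmin(axis=1), dist.min(axis=1)
        # Each marker within the influence radius attracts its nearest node.
        pull = {}
        for m, n in enumerate(nearest):
            if dmin[m] <= p["influence"]:
                v = markers[m] - node_arr[n]
                pull.setdefault(n, []).append(v / (np.linalg.norm(v) + 1e-8))
        if not pull:
            break  # no markers in range; growth has stalled
        # Grow one new node per attracted node, along the mean pull direction.
        for n, dirs in pull.items():
            d = np.mean(dirs, axis=0)
            d /= np.linalg.norm(d) + 1e-8
            nodes.append(node_arr[n] + p["step"] * d)
            parents.append(n)
        # Remove markers that a node has reached (within the kill distance).
        node_arr = np.stack(nodes)
        dist = np.linalg.norm(markers[:, None, :] - node_arr[None, :, :], axis=-1)
        markers = markers[dist.min(axis=1) > p["kill"]]
    return np.stack(nodes), parents

# Example: random markers as a stand-in for points sampled inside τ(θ∗).
rng = np.random.default_rng(0)
markers = rng.random((2000, 3)) * np.array([1.0, 2.0, 1.0])
nodes, parents = space_colonization(markers, root=[0.5, 0.0, 0.5], genus="acer")

In this sketch, conditioning on genus only changes scalar growth parameters; the paper's conditioning may be richer, but the loop itself is the standard space colonization algorithm.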