A team from the Hebrew University of Jerusalem has developed an affordable, non-invasive way to estimate total leaf area in dwarf tomato plants using 3D reconstruction from standard video footage. By combining structure-from-motion techniques with machine learning, the method accurately estimates leaf area, a key indicator of plant growth, without costly sensors or destructive sampling. This breakthrough offers a scalable solution for crop monitoring in both greenhouses and open fields, making precision agriculture more accessible than ever.
Led by PhD candidate Dmitrii Usenko from the Institute of Environmental Sciences, the research team demonstrated that a simple, low-cost imaging technique can accurately estimate the total leaf area (TLA) of dwarf tomato plants. Under the guidance of Dr. David Helman and in collaboration with Dr. Chen Giladi from Sami Shamoon College of Engineering, the team showed how standard 2D videos taken from multiple angles can be converted into detailed 3D data for improved agricultural management, according to a press release.
Central to the approach is structure-from-motion (SfM), a computer vision technique that reconstructs 3D geometry from the relative motion between camera and scene across video frames. Instead of relying on expensive LiDAR or multispectral cameras, the researchers used ordinary video footage of tomato plants to recreate the precise shape and size of their foliage.
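The press release does not detail the team's exact reconstruction pipeline, but the core SfM idea can be sketched in a few lines. The example below is a minimal two-view reconstruction using OpenCV, assuming two frames from a video circling a plant and a hypothetical pre-calibrated intrinsics matrix K; real pipelines chain many views and refine the cloud with bundle adjustment.

```python
# Minimal two-view structure-from-motion sketch with OpenCV.
# Inputs are illustrative: two grayscale frames from a video orbiting
# a plant, plus a camera intrinsics matrix K (assumed known here).
import cv2
import numpy as np

def two_view_reconstruction(img1, img2, K):
    # 1. Detect and describe keypoints in both frames.
    orb = cv2.ORB_create(nfeatures=4000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # 2. Match descriptors between the two views.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 3. Estimate camera motion: essential matrix, then relative pose (R, t).
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # 4. Triangulate matched points into a sparse 3D point cloud.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at origin
    P2 = K @ np.hstack([R, t])                         # second camera, recovered pose
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T  # N x 3 points, reconstructed up to scale
```

Merging many such pairwise reconstructions across the video yields the full canopy point cloud from which leaf-area features can then be computed.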
“Accurate measurement of total leaf area is crucial for understanding plant growth, photosynthesis, and water use,” explains Dr. Helman. “But traditional approaches often require destructive sampling or costly, inaccessible equipment. Our model brings accessibility and accuracy together in a way that could benefit both smallholder farmers and large-scale agricultural operations.”
Using over 300 video clips of dwarf tomato plants grown in controlled greenhouse conditions, the researchers trained machine learning models to estimate leaf area from features extracted from the 3D point clouds. Their top-performing model achieved an impressive coefficient of determination (R²) of 0.96, surpassing traditional 2D methods and maintaining accuracy even when faced with challenges like overlapping leaves or plant movement.
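The release does not specify which point-cloud features or learning algorithm the team used, so the sketch below is only illustrative: a few simple geometric descriptors (convex-hull volume and area, plant height, bounding-box footprint, point count) feed a random-forest regressor from scikit-learn, scored with the same R² metric reported in the study.

```python
# Illustrative regression stage: geometric features from a 3D point
# cloud -> predicted total leaf area. Features and model are assumptions,
# not the paper's published pipeline.
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

def cloud_features(points):
    """Simple shape descriptors of an N x 3 plant point cloud."""
    hull = ConvexHull(points)
    extent = points.max(axis=0) - points.min(axis=0)
    return [
        hull.volume,            # enclosed canopy volume
        hull.area,              # hull surface area
        extent[2],              # plant height (z range)
        extent[0] * extent[1],  # bounding-box footprint
        len(points),            # crude point-density proxy
    ]

def fit_and_score(clouds, tla):
    """clouds: list of per-plant point clouds; tla: measured leaf areas."""
    X = np.array([cloud_features(p) for p in clouds])
    X_tr, X_te, y_tr, y_te = train_test_split(X, tla, test_size=0.2, random_state=0)
    model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    return model, r2_score(y_te, model.predict(X_te))  # R² on held-out plants
```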
The potential applications extend far beyond tomatoes. Because this approach is crop-agnostic and relies solely on standard RGB imaging, it offers a scalable solution for crop monitoring worldwide. Additionally, the model’s open-source release invites the research community to further refine and customize the tool for diverse agricultural needs.
“By reducing the cost barrier to accurate plant monitoring, we hope to democratize access to precision agriculture,” says Usenko. “This is a small but meaningful step toward smarter, more sustainable farming.”