Autodesk Recap



I think I have gone on about Recap before, when making busts of my dog’s head and working out how much concrete I needed for a post foundation, but every time I use it I am reminded how cool it is. So I am going to go on about it again.

What you do is take loads of photos of something from different angles and upload them into Autodesk’s Recap. Recap flicks your photos to the cloud, stews on them a little, and sends you back a 3D mesh file for the object. The process’s technical name is photogrammetry. Autodesk used to have a phone app called 123D which did the same thing, but they have since discontinued it.

I have been using Recap quite a bit to ‘survey’ the tricky landscape around my house.

I have a paper copy of an old proper survey and I am combining this with the various Recap meshes and my own tape-measure measurements.

  • The historic survey is quite sparse but gives me some key spot levels.
  • The 3D meshes from Recap are a bit of a data overload and have no reference scale: Recap does not know or care whether you are sending it photos of a golf ball, the moon, or a golf ball the size of the moon! (How I pin the scale down is sketched just after this list.)
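To pin down the scale I take one tape-measured distance I trust (the length of a wall, say) and rescale the whole mesh by the ratio of real distance to scanned distance. Here is a minimal sketch of that idea, assuming the Open3D library and two manually picked vertex indices (the file names, indices and measurement are made up for illustration):

```python
import numpy as np
import open3d as o3d

# Load the mesh exported from Recap (file name is just an example).
mesh = o3d.io.read_triangle_mesh("recap_scan.obj")
verts = np.asarray(mesh.vertices)

# Two vertices picked by hand at the ends of a wall measured
# with a tape: 6.2 m in the real world (hypothetical numbers).
idx_a, idx_b = 10234, 88741
real_distance = 6.2
scan_distance = np.linalg.norm(verts[idx_a] - verts[idx_b])

# Uniformly scale everything so the scanned wall comes out at 6.2 m.
mesh.scale(real_distance / scan_distance, center=mesh.get_center())
o3d.io.write_triangle_mesh("recap_scan_scaled.obj", mesh)
```

One known distance is enough to set the scale; the spot levels from the old survey then act as a sanity check rather than the thing being fitted.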

My Autodesk account restricts me to 100 photos. With that I got the scan below.

Regarding the number of photos, I imagine it’s probably a case of diminishing returns – double the photos does not mean double the usefulness.

In the model above there are some missing bits and bobs, notably the sinkhole that seems to have opened up on the road.

Anyway. I think it is great.

What I do with this now is kind of a cull-and-clean. I do not need 7 million polygons describing the ivy bloom, for example. I have played around with various ways of automating this process; none have been that successful.
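For what it is worth, the automated attempts amount to something like the sketch below: clean the mesh up and let a quadric decimator chew it down to a target triangle count. (Open3D here is my illustration of the approach, not a record of exactly what I ran, and the file names are placeholders.)

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("recap_scan_scaled.obj")

# Basic clean-up before decimating.
mesh.remove_duplicated_vertices()
mesh.remove_degenerate_triangles()
mesh.remove_unreferenced_vertices()

# Collapse edges until we are down to a target triangle count.
# 70,000 is roughly 1% of the original 7 million.
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=70_000)

print(f"{len(mesh.triangles)} -> {len(simplified.triangles)} triangles")
o3d.io.write_triangle_mesh("recap_scan_decimated.obj", simplified)
```

The trouble is that the decimator treats every triangle as equally important, so it happily smudges the wall corners I actually care about while lovingly preserving the ivy.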

I am finding that a few hours of by-hand effort is the best way to get the scanned mesh into something useful. I am using Sketchup for this:

  • tracing the geometry of the important bits,
  • filling in gaps,
  • making the geometry way simpler, and
  • classifying what things are (walls, road etc) with components and layers.
In RED: my human-powered reduction of what is actually relevant.
The MESH: a reduced version of what comes out of Recap, already cut by 90% (from 7 million triangles to roughly 700,000).
I reckon my by-hand version will not even reach four digits.
But the Recap mesh really is BEAUTIFUL, right? What a tool!

The Weirdness

There is a weirdness to all this postprocessing. Why am I reducing this down? Why don’t I use the scan data and build on that rather than reduce it to <1% of its former self?

I suppose that is the next step really: adding some computer intelligence to the process so it classifies things like vegetation, does a bit of guesswork when things are missing, and perhaps reduces mesh complexity where it is not needed (a toy sketch of the sort of guesswork I mean is at the end of this post). Of course, Google and others are already well into this with AI in various mapping services – articles below – so it’s not too far around the digital corner:

https://www.blog.google/products/maps/google-maps-101-how-we-map-world/

https://www.blog.google/products/maps/google-maps-101-how-imagery-powers-our-map/

https://www.nvidia.com/en-us/self-driving-cars/hd-mapping/
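To make the "bit of guesswork" above slightly less hand-wavy, here is a toy sketch of the crudest possible rule-based classifier (purely my own illustration, nothing to do with how Recap or Google do it): look at each triangle's normal and call near-horizontal faces "ground-ish" and near-vertical faces "wall-ish". Anything smarter than this quickly turns into a proper machine-learning problem.

```python
import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("recap_scan_scaled.obj")
mesh.compute_triangle_normals()
normals = np.asarray(mesh.triangle_normals)

# How vertical is each triangle's normal? (z-up assumed)
up = np.array([0.0, 0.0, 1.0])
cos_tilt = np.abs(normals @ up)

# Normal pointing up/down -> flat face -> ground/road.
# Normal nearly horizontal -> vertical face -> wall.
labels = np.where(cos_tilt > 0.9, "ground-ish",
         np.where(cos_tilt < 0.3, "wall-ish", "other"))

for name in ("ground-ish", "wall-ish", "other"):
    print(name, int((labels == name).sum()), "triangles")
```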