Conference proceedings paper
Automatically determining scale within unstructured point clouds
Publication Details
Author list: Kadamen J, Sithole G
Publisher: Copernicus GmbH
Place: Prague, Czech Republic
Publication year: 2016
Start page: 617
End page: 624
Total number of pages: 8
ISSN: 2194-9050
Abstract
Three-dimensional models obtained from imagery have an arbitrary scale and therefore have to be scaled. Automatically scaling these models requires the detection of objects in the models, which can be computationally intensive. Real-time object detection may therefore pose problems for applications such as indoor navigation. This investigation proposes that relational cues, specifically height ratios, within indoor environments may offer an easier means of obtaining scale for models created from imagery. The investigation aimed to show two things: (a) that the size of objects, especially their height off the ground, is consistent within an environment, and (b) that, based on this consistency, objects can be identified and their general size used to scale a model. To test the idea, the hypothesis is first evaluated on a terrestrial lidar scan of an indoor environment. As a proof of concept, the same test is then applied to a model created from imagery. The most notable finding was that objects can be detected more readily by studying the ratios between the dimensions of objects whose dimensions are defined by human physiology; for example, the dimensions of desks and chairs are related to the height of an average person. In the test, the difference between the generalised and actual dimensions of objects was assessed. A maximum difference of 3.96% (2.93 cm) was observed for automated scaling. By analysing the ratios between the heights (distances from the floor) of the tops of objects in a room, object identification was also achieved.
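To make the height-ratio idea concrete, the following is a minimal Python sketch of how such a scheme could work. It is not the authors' implementation: the reference heights, the matching tolerance, and the function names are illustrative assumptions. The sketch takes the heights of detected horizontal surfaces above the floor (in arbitrary model units), exploits the fact that ratios between heights are scale-free, labels the surfaces against typical object heights, and derives a model-to-metres scale factor.

```python
# Illustrative sketch of object identification and scaling from height ratios.
# Reference heights and tolerance below are assumed typical values,
# not figures taken from the paper.

REFERENCE_HEIGHTS_M = {   # typical heights above the floor (assumed)
    "chair_seat": 0.45,
    "desk_top": 0.74,
    "door_top": 2.03,
}

def identify_and_scale(surface_heights, tolerance=0.05):
    """Match model-space surface heights to reference objects and
    return (scale factor, per-surface labels, total error), or None.

    surface_heights: heights of horizontal surfaces above the detected
    floor, in arbitrary model units (e.g. from point-cloud segmentation).
    """
    names = list(REFERENCE_HEIGHTS_M)
    heights = sorted(surface_heights)
    base = heights[0]
    best = None
    # Try assigning the lowest surface to each reference object in turn;
    # because ratios are scale-free, each assignment implies one candidate
    # scale factor that can be checked against all other surfaces.
    for base_name in names:
        scale = REFERENCE_HEIGHTS_M[base_name] / base
        labels, error = [], 0.0
        for h in heights:
            metres = h * scale
            name = min(names, key=lambda n: abs(REFERENCE_HEIGHTS_M[n] - metres))
            error += abs(REFERENCE_HEIGHTS_M[name] - metres)
            labels.append(name)
        if error / len(heights) <= tolerance and (best is None or error < best[2]):
            best = (scale, labels, error)
    return best

# Example: surfaces detected at 1.50, 2.47 and 6.77 model units above the floor.
# The best-fitting assignment labels them chair seat, desk top and door top,
# with an implied scale of about 0.30 metres per model unit.
print(identify_and_scale([1.50, 2.47, 6.77]))
```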