
Capturing Natural-Colour 3D Models of Insects for Species Discovery and Diagnostics

  • Chuong V. Nguyen ,

    chuong.nguyen@csiro.au

    Affiliation Computational Informatics, Commonwealth Scientific and Industrial Research Organization (CSIRO), Canberra, ACT, Australia

  • David R. Lovell,

    Affiliation Computational Informatics, Commonwealth Scientific and Industrial Research Organization (CSIRO), Canberra, ACT, Australia

  • Matt Adcock,

    Affiliation Computational Informatics, Commonwealth Scientific and Industrial Research Organization (CSIRO), Canberra, ACT, Australia

  • John La Salle

    Affiliations Ecosystem Sciences, Commonwealth Scientific and Industrial Research Organization (CSIRO), Canberra, ACT, Australia, Atlas of Living Australia, Canberra, ACT, Australia

Abstract

Collections of biological specimens are fundamental to scientific understanding and characterization of natural diversity—past, present and future. This paper presents a system for liberating useful information from physical collections by bringing specimens into the digital domain so they can be more readily shared, analyzed, annotated and compared. It focuses on insects and is strongly motivated by the desire to accelerate and augment current practices in insect taxonomy which predominantly use text, 2D diagrams and images to describe and characterize species. While these traditional kinds of descriptions are informative and useful, they cannot cover insect specimens “from all angles” and precious specimens are still exchanged between researchers and collections for this reason. Furthermore, insects can be complex in structure and pose many challenges to computer vision systems. We present a new prototype for a practical, cost-effective system of off-the-shelf components to acquire natural-colour 3D models of insects from around 3 mm to 30 mm in length. (“Natural-colour” is used to contrast with “false-colour”, i.e., colour generated from, or applied to, gray-scale data post-acquisition.) Colour images are captured from different angles and focal depths using a digital single lens reflex (DSLR) camera rig and two-axis turntable. These 2D images are processed into 3D reconstructions using software based on a visual hull algorithm. The resulting models are compact (around 10 megabytes), afford excellent optical resolution, and can be readily embedded into documents and web pages, as well as viewed on mobile devices. The system is portable, safe, relatively affordable, and complements the sort of volumetric data that can be acquired by computed tomography. This system provides a new way to augment the description and documentation of insect species holotypes, reducing the need to handle or ship specimens. It opens up new opportunities to collect data for research, education, art, entertainment, biodiversity assessment and biosecurity control.

Introduction

Technology has a critical role to play in accelerating the understanding of biological diversity and, for decades, scientists have strived to create accurate 3D duplicates of plant and animal specimens [1]. This paper describes a novel method of using technology to liberate information about physical specimens by bringing them into the digital domain as natural-colour 3D models—consistent with ideas and directions articulated by several other authors [2]–[10]. In particular, the proof-of-concept system we present fits well with the suggestion of Wheeler et al. [11] to “engineer and deploy a network of automated instruments capable of rapidly creating 3D images of type specimens” as part of a larger strategy for dealing with the massive backlog of insect types that are not yet digitized in any form. High-resolution 3D scans, as well as being useful as versatile replicas, also have the potential to act as a common frame of reference for other data relating to the original insect, such as annotations, auxiliary image collections, and measurements. These additional aspects are vital to the ways taxonomists convey the various morphological characters that distinguish a new species from those previously discovered.

Our work is focused on the digitization of insect species, building on research and development at the Australian National Insect Collection (ANIC), which currently holds over 12 million specimens and is growing by around 100,000 specimens every year. Our mission is to enable high-quality 3D models of insects to be acquired quickly and cheaply, for ANIC to use as a component of its digitization strategy. Like many natural history collections around the globe, ANIC maintains many thousands of holotypes, each the single specimen of a species that is used to define the characteristic features of that species. Each holotype exists as a physical object, carefully protected from damage through handling. Digital colour 3D models of sufficient detail will enable collection managers to liberate these precious specimens for the research work they are intended to fulfill.

Micro Computed Tomography (Micro CT) is currently a key method [12], [13], able to create micron-accurate volumetric models of millimeter-scale objects and their internal structure. However, like recent 3D reconstructions from scanning electron microscope (SEM) micrographs [14], [15], Micro CT is unable to capture important information about the surface of the object: its natural colour. Exposure and reconstruction times can be long (tens of hours) and, as an X-ray imaging method, Micro CT generally demands special safety equipment. Current systems cost in the hundred-thousand dollar range and, while more compact desktop models are available, these are still not especially portable.

The inability of X-ray based methods for insect digitization to capture colour led us to consider image-based 3D reconstruction techniques, as reviewed in [16], [17]. These methods have been successfully applied to the reconstruction of 3D cityscapes and other (generally fairly simple) objects [18]–[20]. Some small biological specimens have been digitized [21]–[23], but the methods used do not specifically cater for the complex structures and challenging surface optical properties of insects. Human-in-the-loop approaches have been proposed for insect modeling [24], as have methods (limited to simple insect geometries) for inferring 3D insect shape from a single 2D image [25]. Experiments [26], [27] with laser scanning systems like [28] have suggested that this approach has difficulties with the fine structures and small scale of many insects, as well as with reflective, transparent or iridescent surfaces.

One way to avoid these difficulties is to steer clear of 3D reconstruction altogether and simply present 2D images obtained from different viewing angles [29]. While this method of 3D visualization is popular for museum collections, it does not provide the quantitative information (e.g., 3D morphology) needed to analyze and compare insect specimens. Furthermore, large amounts of data are involved: many high-resolution images are needed to give a convincing illusion of looking at an actual 3D object. This makes smooth, realistic interaction difficult and precludes straightforward email exchange or embedding of the object data.

In summary, there is a lack of existing systems that could capture the 3D structure and surface optical properties of small, intricate insect specimens at sufficient resolution for ANIC and other collections to digitize, share, analyze and compare their holdings. The rest of the paper describes our prototype system and its operation, and how it has achieved these design objectives.

Materials and Methods

Here we provide overviews of the digitization process and equipment. A video [30], depicted in Figure 1, shows the main components of the system and the digitization process in action.

Figure 1. Web-based 3D visualisation of a granary weevil, from a video giving an overview of the 3D scanning process.

Go to the link at [30] to view the video.

https://doi.org/10.1371/journal.pone.0094346.g001

Process overview

In high-level terms, our system and work-flow involve three main steps (Figure 2):

Figure 2. The three main steps to create a natural-colour 3D model of a specimen.

The steps are mounting the insect onto a pin, acquisition of 2D images of the specimen at different poses, then reconstruction of a single 3D model from those multiple images.

https://doi.org/10.1371/journal.pone.0094346.g002

Mounting: the physical specimen is pinned onto a pre-printed mat used later by the reconstruction software to estimate camera pose (viewing angle and position).

Acquisition: 2D images of the specimen are automatically acquired from different orientations (and, for small insects, focal depths). This step marks the transition from the physical to the digital domain.

Reconstruction: a 3D model is inferred from the multiple 2D images. For small insects, this involves multi-focus image stacking before the general steps of extracting camera pose, shape and colour.

The system has two modes of acquisition, depending on the specimen size. Insects larger than 10 mm are captured in normal-mode in which the depth of focus of the normal DSLR camera lens is enough to keep the whole specimen in focus at any viewing angle. Insects smaller than 10 mm are captured in macro-mode using a high-magnification lens. Because of the shallow depth of focus of this lens, multiple images are captured at different distances from the specimen and processed into a single in-focus image.
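As a minimal illustration of this mode selection, the rule could be encoded as follows. The 10 mm threshold comes from the text; the macro-rail settings are those reported later for the granary weevil in the Results, used here only as illustrative defaults.

```python
# Sketch of the acquisition-mode rule described above. The 10 mm threshold
# is from the text; the macro-rail settings (31 positions, 0.25 mm steps)
# are the values reported later for the 3 mm granary weevil, used here
# purely as illustrative defaults.

def choose_acquisition_mode(specimen_length_mm: float) -> dict:
    """Return acquisition parameters for a specimen of the given length."""
    if specimen_length_mm >= 10.0:
        # Normal-mode: the lens depth of focus keeps the whole specimen
        # in focus at any viewing angle, so one image per pose suffices.
        return {"mode": "normal", "rail_positions": 1}
    # Macro-mode: shallow depth of focus, so sweep the macro-rail and
    # stack the partially focused images into one in-focus image later.
    return {"mode": "macro", "rail_positions": 31, "rail_step_mm": 0.25}

print(choose_acquisition_mode(30.0))  # Christmas beetle -> normal-mode
print(choose_acquisition_mode(3.0))   # granary weevil   -> macro-mode
```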

Equipment overview

Figures 3 and 4 show normal- and macro-mode setups. The main hardware components of the system are:

Figure 3. Connections (A) and hardware (B) for normal-mode image acquisition.

The green sphere marks the center of rotation and mounting location of specimens. The turntable is the master device that triggers the camera after rotating to predetermined pan and tilt angles. Images can be stored in camera memory or transferred directly to the computer as they are acquired.

https://doi.org/10.1371/journal.pone.0094346.g003

Figure 4. Connections (A) and hardware (B) for macro-mode image acquisition.

The macro lens, macro ring flash and macro-rail are needed for capturing high-magnification and depth-extended images of small insects. At each rotation step, the turntable triggers the control box of macro-rail. The macro-rail then moves to a set of predetermined positions. At each position, the control box triggers the camera to capture an image.

https://doi.org/10.1371/journal.pone.0094346.g004

  • A two-axis turntable to present views of the specimen from different angles of rotation
  • A macro-rail to vary the distance between the camera and specimen in macro-mode
  • A camera with macro lens and flash
  • Two laser pointers for specimen alignment
  • A computer for 2D image processing and 3D reconstruction

Note that in macro-mode our system uses a macro-rail to capture multi-focus images at exactly predefined depths, rather than refocusing the camera lens. A camera flash is needed to eliminate motion blur caused by the vibration of the camera shutter when capturing at high magnification.

To minimize cost and development time we sought to use off-the-shelf components wherever practicable. These are described in detail in Supplementary Information S1.

Process in detail

Step 1: Mounting.

Collections usually store and display larger insects by pinning them so that the insect's long axis is horizontal and the pin vertical; smaller insects are usually either pinned or glued to cards. This paper, however, focuses on pinned insects and the issues arising from this mounting method. Pinning insects horizontally allows many insects to be stored in wide, flat display drawers, but creates a few problems for our system:

  • The pin becomes part of the 3D model and must be edited or segmented out in post-reconstruction
  • Editing often cannot fully remove evidence of the pin
  • Images of the underside of the specimen can be difficult or impossible to capture, leading to an incomplete 3D model.

Re-pinning the insect so its long axis is vertical helps with image acquisition but risks damaging the specimen, including parts, such as genitalia, that are important for the identification of some species. For some specimens, these affected parts can be isolated through dissection and scanned separately.

After the specimen is pinned, the pin is glued to a small magnet (Figure 5C) that will hold the pin in position on the turntable. Next, a specially patterned mat (Figure 5B), required by the reconstruction software (3DSOM [31]), is attached to provide information about camera pose relative to the specimen. Generally, a suitable pattern size is about one to two times the length of the insect to be scanned, so scanning smaller insects requires smaller patterns to be printed. Modern laser printers with 1200 dpi resolution can currently produce patterned mats as small as 5 mm in diameter; printing smaller patterns that are sharp enough to be recognised by the reconstruction software remains a technical challenge.
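The sizing rule and printing limit above can be checked with simple arithmetic; the sketch below is our own illustration, not part of the published workflow.

```python
# Illustrative arithmetic for mat sizing (our illustration, not part of
# the published workflow): the mat should be one to two times the insect
# length, and a 1200 dpi printer can render sharp mats down to about 5 mm.

MIN_PRINTABLE_MAT_MM = 5.0
DOT_PITCH_MM = 25.4 / 1200  # ~0.021 mm per printed dot at 1200 dpi

def mat_diameter_mm(insect_length_mm: float, scale: float = 2.0) -> float:
    """Suggested mat diameter using the one-to-two-times rule of thumb."""
    return scale * insect_length_mm

for length in (30.0, 3.0, 2.0):
    d = mat_diameter_mm(length)
    ok = d >= MIN_PRINTABLE_MAT_MM
    print(f"{length:4.1f} mm insect -> {d:5.1f} mm mat "
          f"({d / DOT_PITCH_MM:.0f} dots across), printable: {ok}")
```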

Figure 5. Preparing an insect specimen for scanning.

A) Steps to prepare insect specimens for image capture. B) A special mat target must be attached to a scanned specimen for the 3DSOM software to estimate the camera viewing position and angle. C) For a large insect such as this 30 mm long Christmas beetle, the pin is glued to a 10 mm rare-earth disk magnet which is in turn attached to a 50 mm mat target. D) For a small insect such as this 3 mm long granary weevil, the micro pin is glued to a 5 mm mat target. E) A size comparison of the two specimens.

https://doi.org/10.1371/journal.pone.0094346.g005

Finally, the whole assembly is placed on the two-axis turntable and positioned, with the assistance of horizontal and vertical laser pointers, so that the specimen is centred on the intersection of the tilt and rotation axes. The lasers are aligned to the rotation axes of the turntable, and the specimen is adjusted manually until each beam hits the centre of the insect's body.

Step 2: Acquisition.

In essence, the acquisition process is about automatically obtaining 2D images of the specimen in different poses. As far as the relationship between the camera and specimen goes, this system has three degrees of freedom: pan, tilt and (in macro-mode) distance along the specimen-camera axis. With the specimen mounted at the intersection of the pan and tilt axes of the turntable, this amounts to rotating the turntable through a range of pan and tilt angles, capturing an image at each step (Figure 6A). In macro-mode there is an additional “inner loop” of translating the camera to acquire partially focused images at different distances from the specimen for later processing into a single image with all parts of the specimen fully in focus (Figure 6B).

Figure 6. Automated image acquisition process.

A) Normal-mode. B) Macro-mode.

https://doi.org/10.1371/journal.pone.0094346.g006

There are many ways to automate the acquisition process. The desire to use off-the-shelf components led us to use the GigaPan Panorama Robot EPIC 100 [32] to mount the specimen. Because the GigaPan is designed for mounting and controlling a camera, it also acts as the acquisition controller: the turntable triggers the macro-rail, and the macro-rail moves and triggers the camera, which fires its flash and captures an image. Supplementary Information S1 contains more detail about this set-up.

In normal-mode, stepping through combinations of rotation and tilt, the set-up captures 144 individual images. In macro-mode, with up to 31 additional images required at each step, the system can capture up to 4,464 separate images per specimen. Capturing more images is also possible.
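The pose schedule can be sketched as two nested loops, with a third inner loop over macro-rail positions in macro-mode. In the actual rig the GigaPan turntable is the master device that triggers the macro-rail and camera; the computer-driven loop below, and its 12 × 12 pan/tilt split, are assumptions for illustration (the text specifies only the 144 total views).

```python
# Illustrative pose schedule for the acquisition step. The device-control
# functions are hypothetical placeholders; the 12 x 12 pan/tilt split is an
# assumed breakdown of the 144 views reported in the text.

import itertools

PAN_STEPS, TILT_STEPS = 12, 12   # assumed split giving the 144 views
RAIL_POSITIONS = 31              # up to 31 focus positions in macro-mode

def set_turntable(pan: int, tilt: int) -> None: ...   # hypothetical control
def move_rail_to(step: int) -> None: ...              # hypothetical control
def trigger_camera() -> None: ...                     # flash fires with shutter

def acquire(macro_mode: bool) -> int:
    """Step through every pose and return the number of images captured."""
    count = 0
    for pan, tilt in itertools.product(range(PAN_STEPS), range(TILT_STEPS)):
        set_turntable(pan, tilt)
        for rail in range(RAIL_POSITIONS if macro_mode else 1):
            if macro_mode:
                move_rail_to(rail)  # inner loop over specimen-camera distance
            trigger_camera()
            count += 1
    return count

assert acquire(macro_mode=False) == 144
assert acquire(macro_mode=True) == 144 * 31  # 4,464 images
```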

Step 3: Reconstruction.

The third and final step of the digitization process is where the 2D digital information acquired from a physical specimen is manipulated to produce a 3D digital model (Figure 7).

Figure 7. Image processing pipeline for normal-mode and macro-mode images.

Macro-mode images require an extra step in which each set of multi-focus images captured from the same viewing angle (but at different specimen-camera distances) is stacked into a single in-focus image.

https://doi.org/10.1371/journal.pone.0094346.g007

In macro-mode, the stack of partially focused images acquired at different specimen-camera distances must be combined into a single in-focus image for each viewing angle. We used Helicon Focus [33] for this because of its ability to exploit multiple CPU cores; single-core open-source alternatives are available [34], [35].
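To make the stacking operation concrete, a minimal per-pixel sharpness-selection scheme is sketched below. This is one common approach, not the algorithm used by Helicon Focus, and it assumes the images in a stack are already aligned with one another; the file names are hypothetical.

```python
# Minimal multi-focus stacking sketch using a Laplacian sharpness measure
# (a common approach; not Helicon Focus's algorithm). Assumes the images
# in a stack are already registered with one another.

import cv2
import numpy as np

def stack_focus(image_paths):
    images = [cv2.imread(p) for p in image_paths]  # BGR uint8 images
    sharpness = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float64)
        lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))       # edge response
        sharpness.append(cv2.GaussianBlur(lap, (0, 0), 3))  # local smoothing
    best = np.argmax(np.stack(sharpness), axis=0)  # sharpest image per pixel
    result = np.zeros_like(images[0])
    for i, img in enumerate(images):
        result[best == i] = img[best == i]
    return result

# Example: combine the 31 partially focused images from one viewing angle
# (hypothetical file names) into a single in-focus image.
# cv2.imwrite("view042.jpg",
#             stack_focus([f"view042_{k:02d}.jpg" for k in range(31)]))
```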

Given a set of in-focus 2D images of an object from different viewing angles, there are two main 3D reconstruction techniques that could be applied:

Visual hull (also known as volume carving) algorithms [36], [37] project the silhouette of the object into a virtual volume at each viewing angle, carving away the volume outside the silhouette to leave a 3D visual hull that approximates the shape of the actual object. This approach does not recover concave surfaces, although photo-consistency can be used to correct this to an extent [38]. The improvement from photo-consistency is limited for some insects because of strong specular reflections from the outer surface and fine body structures such as legs, antennae, spikes and hairs.

Multi-view stereo algorithms generally rely on photo-consistency measures to identify the location of common features seen in different views [39], [40] and can also incorporate silhouette information [41].

Both strategies are computationally intensive and the computational demands increase with reconstruction resolution. Image clustering [18], [42] and improved feature descriptors [20] have been previously proposed to enable reconstructions to better exploit the very high image resolution produced by professional photography cameras.

Our initial investigations indicated that the visual-hull-based method could more accurately reconstruct some of the thin structures found in insects (e.g., legs, antennae, wings) and insect surfaces with strong specular reflections. 3DSOM [31] was used to provide off-the-shelf visual-hull-based reconstruction as it produced the best quality output of the different approaches [42]–[44].
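For intuition, a toy voxel-carving version of the visual hull idea is sketched below; 3DSOM's implementation is of course far more refined, and the camera matrices here are assumed to be given (e.g., estimated from the calibration mat).

```python
# Toy visual-hull (volume carving) sketch. A voxel survives only if its
# projection lands inside the specimen silhouette in every calibrated view.
# This illustrates the idea only; 3DSOM's implementation is more refined.

import numpy as np

def carve(voxels, cameras, silhouettes):
    """voxels: (N, 3) candidate points; cameras: list of 3x4 projection
    matrices (assumed given, e.g., from the calibration mat); silhouettes:
    boolean (H, W) masks, one per camera, True inside the specimen outline."""
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])  # (N, 4)
    for P, sil in zip(cameras, silhouettes):
        uvw = homog @ P.T                       # project into this view
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        h, w = sil.shape
        in_image = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        in_silhouette = np.zeros(len(voxels), dtype=bool)
        in_silhouette[in_image] = sil[v[in_image], u[in_image]]
        keep &= in_silhouette                   # carve away everything else
    return voxels[keep]  # remaining voxels approximate the visual hull
```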

Figure 7 sets out the detail of the reconstruction process, including the extraction of the camera pose in each input image. 3DSOM initially estimates this information from the target pattern captured in the image and further refines these estimates during 3D reconstruction. Specimen silhouettes are extracted from the input images. Once the 3D geometry of the specimen's surface is reconstructed, texture colour is extracted from the images and added to the model. The resulting 3D model can then be exported to different formats—including HTML (with WebGL, Flash or Java), X3D, 3DS (AutoDesk), and STL (STereoLithography)—for subsequent viewing, analysis or embedding into documents. X3D is a convenient format as it is supported by popular 3D visualisation software, and an X3D file can be included as an embedded object, or as inline XML in an HTML5 file, for 3D web visualisation. InstantReality's [45] tool “aopt” can perform this conversion from X3D to 3D-enabled HTML automatically.
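As a sketch of the web-embedding step, the fragment below writes a minimal HTML5 page that displays an X3D model inline via X3DOM. The model file name is a placeholder and the x3dom.org script locations are our assumption of typical usage; in practice, aopt generates such pages automatically.

```python
# Sketch of embedding an X3D model in an HTML5 page via X3DOM. The model
# file name and the x3dom.org script locations are placeholders/assumed;
# in practice InstantReality's "aopt" tool generates such pages.

PAGE = """<!DOCTYPE html>
<html>
<head>
  <script src="https://www.x3dom.org/download/x3dom.js"></script>
  <link rel="stylesheet" href="https://www.x3dom.org/download/x3dom.css"/>
</head>
<body>
  <x3d width="800px" height="600px">
    <scene>
      <inline url="granary_weevil.x3d"></inline>  <!-- placeholder model -->
    </scene>
  </x3d>
</body>
</html>
"""

with open("insect_model.html", "w") as f:
    f.write(PAGE)
```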

Results and Discussion

Figure 8 shows high-resolution natural-colour 3D models of insects ranging from 3 mm to 30 mm in length. These 3D insect models are also available for interactive viewing at [46] and can be downloaded at [47]–[54]. The smallest of these—the 3 mm granary weevil—proved challenging to resolve because of out-of-focus problems when its images were captured at high magnification; its 3D model was obtained from images captured in macro-mode, while the 3D models of the larger insects were obtained from images captured in normal-mode. The 3D visualisation of the models is based on the open-source X3DOM framework [55], which uses WebGL for plug-in-less display within a web browser (such as Firefox or Chrome). The file size of a model, including 3D mesh and texture, depends on the desired visualisation quality and on the complexity of the geometry and colour of the actual specimen. For the 3D models shown at [46], file sizes range from 5 to 24 megabytes, with vertex counts from 80,000 to 130,000 and texture resolutions from 4 to 16 megapixels.

Figure 8. Various 3D insect models.

Go to the link at [46] to interact with the 3D models or to the links at [47]–[54] to download them. Top: 3D models of the insects with natural-colour texture, scaled to similar sizes. They are A) a granary weevil (Sitophilus granarius), B) a sand wasp (Bembix sp.), C) a longhorn beetle (Aridaeus thoracicus), D) a Christmas beetle (Anoplognathus viriditarsis) and E) an amycterine ground weevil (Gagatophorus draco). Bottom: F) A photograph of the actual specimens from which the 3D models were captured.

https://doi.org/10.1371/journal.pone.0094346.g008

Figure 9 illustrates the effectiveness of macro-mode image acquisition compared with normal-mode acquisition when applied to very small insects such as the granary weevil. A Canon EF-65 mm macro lens was employed in both cases. In normal-mode, a stencil with a 2 mm hole had to be attached immediately in front of the camera (Figure 9A) to reduce the effective aperture and increase the depth of focus. In both cases a flash was used to mitigate the effects of wobble caused by the camera shutter movement: with a flash, the effective exposure time of an image is the very short duration of the flash itself, which minimizes any motion blur. Flash energy was set to a fraction of full power appropriate to each mode. The results shown in Figure 9 clearly illustrate the improvements of macro-mode. The macro-mode model was reconstructed with multi-focus stacking of 31 images from each view, each captured with an F/8 lens aperture at increments of 0.25 mm along the specimen-camera axis.

Figure 9. Comparison of natural-colour 3D reconstructions using (A) a small aperture and (B) an F/8 aperture with multi-focus image stacking.

A) An extra mask with a 2 mm hole placed in front of the lens to extend the depth of focus, compared with B) an F/8 lens aperture. C) The resulting image captured at the same angle with the small aperture. D) The result of multi-focus stacking of 31 partial-focus images captured at distances 0.25 mm apart. E)–H) Screenshots of the resulting 3D models without and with texture colour.

https://doi.org/10.1371/journal.pone.0094346.g009

Figure 10 provides a qualitative comparison of a natural-colour 3D model obtained using our system and a Micro CT model of a different specimen of the same species. While the 5.7 µm resolution Micro CT clearly captures more detail of the surface geometry than our optical approach (including the missing antenna socket in inset A), there are features that it cannot resolve at these resolutions because they relate to variation in the colour of the specimen (e.g., the compound eye in inset B). One option could be to develop ways to combine the strengths of both approaches: fertile ground for further research.

Figure 10. Comparison of a natural-colour 3D model, a Micro CT reconstruction and 2D image at a similar angle.

The surface geometry of the natural-colour 3D model (A) is less detailed than that of the Micro CT model (C) and misses concavities such as the antenna socket shown in the enlarged inset of C. However, the natural-colour 3D model captures useful surface information such as the compound eye in the enlarged inset of B. A false-colour Micro CT model (D) and a 2D image (E) are shown for comparison.

https://doi.org/10.1371/journal.pone.0094346.g010

By convention, insect specimens are often mounted horizontally. However, this mounting orientation may not be ideal for 3D reconstruction. To investigate the effect of mounting orientation on reconstruction quality, we acquired images of a specimen mounted horizontally, then vertically (Figure 11). For the structure of that particular specimen, vertical mounting gave markedly better reconstruction of both geometry and colour, avoiding occlusions and capturing texture in more detail. Increasing the number and variety of poses by acquiring images at different tilt angles improved the reconstructions of both vertically and horizontally mounted insects (Figure 12); even then, vertical mounting afforded more detail in geometry and colour. We therefore note that the best mounting orientation is specimen dependent: visual hull reconstruction of geometry improves as more surface normals are captured in silhouette, while colour and texture improve as more surface normals are captured parallel to the camera viewing axis.

Figure 11. The impact of mounting orientation on reconstruction quality.

Traditional horizontal mounting (A–C) produces inferior results to vertical mounting (D–F) for this specimen.

https://doi.org/10.1371/journal.pone.0094346.g011

Figure 12. Impacts of mounting orientation and tilt on reconstruction quality.

While additional images at several tilt angles improve reconstruction quality for both horizontal and vertical mounting (compare with Figure 11), vertical mounting leads to a sharper model with more vivid colours and textures.

https://doi.org/10.1371/journal.pone.0094346.g012

Further surface geometry issues arise as the structures of specimens become more complex. Wings, for example, can be especially challenging as shown in Figure 13(A–C) where self-occlusion causes poor reconstruction of the wings. Fortunately, additional informative views can be obtained to alleviate this problem (Figure 13D–F). Ideally, some of these additional views will be captured tangentially to the wing surface to ensure the reconstructed wings have the correct thickness.

Figure 13. Additional camera poses can improve wing reconstruction.

A) A typical set of camera poses cannot resolve the occlusion created by the wings of this insect, leading to inaccurate reconstruction between its wings (B–C). D) Additional images taken from camera poses looking along the insect body and wing surfaces dramatically improve reconstruction accuracy (E–F).

https://doi.org/10.1371/journal.pone.0094346.g013

We explored ways to achieve an informative mounting orientation even when the specimen cannot be re-pinned (e.g., when the specimen is too precious to handle, or the pin too firmly embedded to remove without certain damage). As noted above, vertical orientation gives better quality than horizontal orientation, but re-pinning a specimen vertically risks damage, while keeping the horizontal orientation produces a lower-quality 3D model. To avoid this trade-off, the normally pinned insect can be attached to a second pin (in this case using yellow Blu-Tack) so that the specimen is rotated to bring its long axis vertical (Figure 14A). The pins and the Blu-Tack must then be removed digitally to produce a clean final 3D model of the specimen. There are two ways to do this. The first is to edit the Blu-Tack and mounting pins out of the set of 2D images (Figure 14B) during background removal, prior to reconstruction. However, this method does not work well for views in which the pins and Blu-Tack occlude parts of the insect, and the resulting reconstruction shows contaminated texture colour (Figure 14C). The second is to keep the pins and Blu-Tack with the specimen during 3D reconstruction (Figure 14D and E) and then remove them from the 3D model using a mesh editor. Overall, this second strategy produces the better result (Figure 14F).

Figure 14. Two methods to deal with an insect whose pin cannot be removed.

A) The raw image shows the pinned specimen attached to a second vertical pin so that the long axis of the insect is vertical. B) An image of the specimen after the other parts of the image have been masked out. C) Ventral view of the 3D reconstruction from masked images, showing a splotch of contaminated texture colour. D) An image with the specimen and pins retained. E) 3D reconstruction of the insect and pins. F) Ventral view of E with the pins edited out of the 3D model.

https://doi.org/10.1371/journal.pone.0094346.g014

In this paper, we have shown that a high-resolution, natural-colour 3D digitization system for insects and other small specimens can be implemented using readily available components, with a hardware and software cost under AUD 8,000. As well as being cost effective, the system produces digital 3D models that are efficient in terms of the ratio of information to data. The file size of the 3D granary weevil model shown in Figure 9H is around 10 megabytes. It was reconstructed from 18 megapixel 2D JPEG images (2–4 megabytes each) taken at 144 different angles and 31 different distances, creating 10–17 gigabytes of 2D image data in all for a single specimen. By stacking each set of 31 multi-focus images into a single in-focus image, the image data is reduced approximately 20-fold; by transforming this 2D data into a 3D model, the system achieves a further compression of around 30:1. This level of compression enables useful information about the specimen to be exchanged via email, presented in web pages and embedded in 3D PDF documents.
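These data-reduction figures can be checked with simple arithmetic. The per-image sizes, image counts and model size below are from the text; the size of the stacked composites is an assumption (the paper's ~20-fold figure, rather than a full 31-fold, suggests the composites come out somewhat larger than the input JPEGs), so the resulting ratios are only ballpark.

```python
# Ballpark check of the data-reduction figures above. Per-image sizes of
# 2-4 MB, 144 views, 31 focus steps and the ~10 MB model are from the
# text; composite sizes after stacking are assumed similar to the inputs.

views, rail_steps, model_mb = 144, 31, 10
for mb in (2, 4):
    raw_gb = views * rail_steps * mb / 1024  # all captured 2D images
    stacked_mb = views * mb                  # if composites ~ input size
    print(f"{mb} MB/image: raw {raw_gb:.1f} GB, "
          f"stacked ~{stacked_mb} MB, model {model_mb} MB "
          f"(~{stacked_mb // model_mb}:1 further compression)")

# -> roughly 8.7-17.4 GB of raw 2D data, and a final model some 30-60
#    times smaller than the stacked image set, in line with the ~30:1
#    figure quoted above.
```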

This work raises a number of research challenges and opportunities for further improvement, including:

  • Eliminating the need for the printed mat: 3DSOM requires this mat to estimate the camera pose of individual images. We have reached the lower size limit of what we can straightforwardly print and attach to specimens. Furthermore, the range of poses is limited to those in which the mat is viewable. There are reconstruction methods that do not need this kind of pattern to estimate camera pose (e.g., [56], [57]), relying instead on feature matching and bundle adjustment. However, the accuracy of these estimates depends strongly on the geometry of the specimen and other objects captured in the images.
  • Detailed features, such as hairs and surface roughness, demand higher 2D image and 3D model resolution and a concomitant increase in the memory and computation needed to store and visualize the model. Our strategy is to leverage the high resolution 2D image corresponding to a particular pose of interest, reminiscent of the approach used in [29].
  • Concave surfaces: current photo-consistency based methods to resolve concavities can be challenged by the specular reflective properties of many insects.
  • Transparent wings and membranes pose challenges for acquisition, reconstruction, and for representation and rendering of the resulting 3D model.
  • View- and lighting-dependent appearance such as iridescence or sub-surface light scattering is also difficult to capture, represent and render.
  • 3D annotation standards, strategies and software are not yet as developed as 2D approaches. The ability to augment 3D models with additional information is important for taxonomy and other scientific ends, as well as engaging a broader range of end users.

Despite these future challenges, we believe that the proof-of-concept prototype presented in this paper demonstrates that natural-colour 3D model digitization is feasible and affordable enough for insect collections to implement and apply right now.

An initial investigation of the usefulness of 3D insect models, described in Supplementary Information S1, showed that their quality was good enough to provide sufficient information for species identification, and that they allowed easier examination than viewing the actual specimen under a microscope.

There are also specific usage scenarios for wider communities, such as quarantine officers and educators. A quarantine officer can use 3D models of invasive insects while on duty to improve the speed and accuracy of the identification process; the challenges of, and possible solutions for, using 3D models in quarantine control have been discussed in [58]. For educators, 3D models of insects can serve as rich educational materials, allowing students to interact with insects without needing access to fragile specimens.

Supporting Information

Acknowledgments

We thank Ms Eleanor Gates-Stuart (CSIRO Computational Informatics), Dr Rolf Oberprieler, Ms Debbie Jennings, Dr Beth Mantle, Ms Nicole Fisher, Ms Cate Lemann, Ms Anne Hastings, Mr Eric Hines and Mr Tom Weir (CSIRO Ecosystem Sciences) for their kind assistance during the course of the project. Special thanks go to Dr Sherry Mayo (CSIRO Materials Science & Engineering) for providing the Micro CT model of the granary weevil for comparison. We also thank Dr Vince Smith (Natural History Museum, London), Prof Pavel Stoev (National Museum of Natural History, Sofia) and another anonymous reviewer for their valuable feedback on our manuscript. We gratefully acknowledge CSIRO's Office of the Chief Executive Postdoctoral Scheme, Transformational Biology initiative, and Business and Service Analytics Program for funding this work.

Author Contributions

Conceived and designed the experiments: CVN. Performed the experiments: CVN. Analyzed the data: CVN. Contributed reagents/materials/analysis tools: CVN DRL MA JLS. Wrote the paper: CVN DRL MA JLS. Created, guided and informed the project: DRL JLS MA.

References

  1. McNally RS, Buschini N (1993) The Harvard Glass Flowers: Materials and Techniques. Journal of the American Institute for Conservation 32: 231–240.
  2. Godfray HCJ (2002) Challenges for taxonomy. Nature 417: 17–19.
  3. La Salle J, Wheeler Q, Jackway P, Winterton S, Hobern D, et al. (2009) Accelerating taxonomic discovery through automated character extraction. Zootaxa 2217: 43–55.
  4. Beaman R, Cellinese N (2012) Mass digitization of scientific collections: New opportunities to transform the use of biological specimens and underwrite biodiversity science. ZooKeys 209: 7–17.
  5. Blagoderov V, Kitching I, Livermore L, Simonsen T, Smith V (2012) No specimen left behind: industrial scale digitization of natural history collections. ZooKeys 209: 133–146.
  6. Mantle BL, La Salle J, Fisher N (2012) Whole-drawer imaging for digital management and curation of a large entomological collection. ZooKeys 163: 147–163.
  7. Smith V, Blagoderov V (2012) Bringing collections out of the dark. ZooKeys 209: 1–6.
  8. Ang Y, Puniamoorthy J, Pont AC, Bartak M, Blanckenhorn WU, et al. (2013) A plea for digital reference collections and other science-based digitization initiatives in taxonomy: Sepsidnet as exemplar. Systematic Entomology 38: 637–644.
  9. Balke M, Schmidt S, Hausmann A, Toussaint E, Bergsten J, et al. (2013) Biodiversity into your hands - A call for a virtual global natural history ‘metacollection’. Frontiers in Zoology 10: 55.
  10. Johnson L, Mantle BL, Gardner JL, Backwell PRY (2013) Morphometric measurements of dragonfly wings: the accuracy of pinned, scanned and detached measurement methods. ZooKeys 276: 77–84.
  11. Wheeler Q, Bourgoin T, Coddington J, Gostony T, Hamilton A, et al. (2012) Nomenclatural benchmarking: the roles of digital typification and telemicroscopy. ZooKeys 202: 193–202.
  12. Metscher BD (2009) MicroCT for comparative morphology: simple staining methods allow high-contrast 3D imaging of diverse non-mineralized animal tissues. BMC Physiology 9: 11.
  13. Faulwetter S, Vasileiadou A, Kouratoras M, Dailianis T, Arvanitidis C (2013) Micro-computed tomography: Introducing new dimensions to taxonomy. ZooKeys 263: 1–45.
  14. Akkari N, Koon-Bong Cheung D, Enghoff H, Stoev P (2013) Revolving SEM images visualising 3D taxonomic characters: application to six species of the millipede genus Ommatoiulus Latzel, 1884, with description of seven new species and an interactive key to the Tunisian members of the genus (Diplopoda, Julida, Julidae). ZooKeys 328: 5–45.
  15. Koon-Bong Cheung D, Brunke A, Akkari N, Souza CM, Pape T (2013) Rotational Scanning Electron Micrographs (rSEM): A novel and accessible tool to visualize and communicate complex morphology. ZooKeys 328: 47–57.
  16. Hartley R, Zisserman A (2004) Multiple View Geometry in Computer Vision. Cambridge University Press, 2nd edition, 672 pp. doi:10.1017/CBO9780511811685. Available: http://www.robots.ox.ac.uk/~vgg/hzbook/.
  17. Szeliski R (2010) Computer Vision: Algorithms and Applications. Springer, 979 pp. doi:10.1007/978-1-84882-935-0. Available: http://research.microsoft.com/en-us/um/people/szeliski/book/drafts/szelski_20080330am_draft.pdf.
  18. Furukawa Y, Ponce J (2010) Accurate, dense, and robust multiview stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence 32: 1362–1376.
  19. Hernández C, Vogiatzis G (2010) Shape from photographs: a multi-view stereo pipeline. In: Cipolla R, Battiato S, Farinella GM, editors, Computer Vision: Detection, Recognition and Reconstruction, Springer Berlin Heidelberg. pp. 281–312. doi:10.1007/978-3-642-12848-6_11.
  20. Tola E, Strecha C, Fua P (2011) Efficient large-scale multi-view stereo for ultra high-resolution image sets. Machine Vision and Applications 23: 903–920.
  21. Atsushi K, Sueyasu H, Funayama Y, Maekawa T (2011) System for reconstruction of three-dimensional micro objects from multiple photographic images. Computer-Aided Design 43: 1045–1055.
  22. Gallo A, Muzzupappa M, Bruno F (2013) 3D reconstruction of small sized objects from a sequence of multi-focused images. Journal of Cultural Heritage, in press.
  23. Chalmers A, Mudge M, Paulo Santos L, Ritz M, Langguth F, et al. (2012) High resolution acquisition of detailed surfaces with lens-shifted structured light. Computers & Graphics 36: 16–27.
  24. Murakawa J, Yoon I, Hong T, Lank E (2006) Parts, image, and sketch based 3D modeling method. In: EUROGRAPHICS Workshop on Sketch-Based Interfaces and Modeling (2006). pp. 67–74. doi:10.2312/SBM/SBM06/067-074. Available: http://dl.acm.org/citation.cfm?id=2386286.
  25. Zhang X, Gao Y, Caelli T (2010) Primitive-based 3D structure inference from a single 2D image for insect modeling: Towards an electronic field guide for insect identification. In: Control Automation Robotics & Vision (ICARCV).
  26. Polo ME, Felicísimo AM (2012) Analysis of uncertainty and repeatability of a low-cost 3D laser scanner. Sensors 12: 9046–9054.
  27. Kuzminsky SC, Gardiner MS (2012) Three-dimensional laser scanning: potential uses for museum conservation and scientific research. Journal of Archaeological Science 39: 2744–2751.
  28. NextEngine (2013) NextEngine 3D Scanner. Available: http://www.nextengine.com.
  29. Ortery (2013) Photosimile 5000 - Automated 3D Product Photography Studio. Available: http://ortery.com/Photography-Equipment/3D-Photography-Equipment/Photosimile-5000-Automated-3D-Product-Photography.
  30. Nguyen C, Adcock M, Lovell D, Mantle B, La Salle J (2014) Insect collections going 3D. doi:10.4225/08/531FE0104C11A. Available: http://dx.doi.org/10.4225/08/531FE0104C11A. Viewable at http://www.youtube.com/watch?v=THvfu6shJjg&list=UUak3NZxjNnWip327vYL8RLA or http://publish.viostream.com/play/xbz189zsuhn8.
  31. Creative Dimension Software Ltd (2013) 3DSOM. Available: http://www.3dsom.com.
  32. GigaPan Systems (2013) The GigaPan Store - Products, software and accessories to create gigapixel panoramas. Available: http://gigapan.com/cms/shop/store.
  33. HeliconSoft (2013) Helicon Focus. Available: http://www.heliconsoft.com/heliconsoft-products/helicon-focus/.
  34. Hadley A (2010) CombineZP. Available: http://www.hadleyweb.pwp.blueyonder.co.uk/CZP/News.htm.
  35. D'Angelo P (2013) Hugin & Enfuse. Available: http://hugin.sourceforge.net/tutorials/index.shtml.
  36. Laurentini A (1994) The visual hull concept for silhouette-based image understanding. IEEE Transactions on Pattern Analysis and Machine Intelligence 16: 150–162.
  37. Franco JS, Boyer E (2003) Exact polyhedral visual hulls. In: British Machine Vision Conference (BMVC'03). volume 1, pp. 329–338. doi:10.5244/C.17.32. Available: http://hal.inria.fr/inria-00349075.
  38. Kutulakos K, Seitz S (2000) A theory of shape by space carving. International Journal of Computer Vision 38: 199–218.
  39. Goesele M, Curless B, Seitz S (2006) Multi-View Stereo Revisited. In: 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 2 (CVPR'06). IEEE, volume 2, pp. 2402–2409. doi:10.1109/CVPR.2006.199. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=1641048.
  40. Seitz SM, Curless B, Diebel J, Scharstein D, Szeliski R (2006) A comparison and evaluation of multi-view stereo reconstruction algorithms. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). volume 1, pp. 519–528. doi:10.1109/CVPR.2006.19. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1640800.
  41. Sinha S, Pollefeys M (2005) Multi-view reconstruction using photo-consistency and exact silhouette constraints: a maximum-flow formulation. In: Tenth IEEE International Conference on Computer Vision (ICCV'05). IEEE, volume 1, pp. 349–356. doi:10.1109/ICCV.2005.159. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=1541277.
  42. Furukawa Y (2010) Patch-based Multi-view Stereo Software. Available: http://www.di.ens.fr/pmvs/.
  43. Snavely N (2010) Bundler: Structure from Motion (SfM) for Unordered Image Collections. Available: http://www.cs.cornell.edu/~snavely/bundler/.
  44. Forbes K (2009) Visual Hulls from Uncalibrated Snapshots. Available: http://www.dip.ee.uct.ac.za/~kforbes/DoubleMirror/DoubleMirror.html.
  45. InstantReality (2014) InstantReality. Available: http://www.instantreality.org/.
  46. Nguyen CV, Lovell DR, Adcock M, La Salle J (2013) Demo of natural-color 3D insect models. Available: http://www2.ala.org.au/chuong/.
  47. Nguyen C, Adcock M, Lovell D, Oberprieler R, Mantle B, et al. (2014) 3D reconstruction from multi-view images of a black palm weevil. doi:10.4225/08/531E56D86DDD5. Available: http://dx.doi.org/10.4225/08/531E56D86DDD5.
  48. Nguyen C, Adcock M, Lovell D, Jennings D, Oberprieler R, et al. (2014) 3D reconstruction from multi-view images of a granary weevil. doi:10.4225/08/531E570A8D614. Available: http://dx.doi.org/10.4225/08/531E570A8D614.
  49. Nguyen C, Adcock M, Lovell D, Weir T, Mantle B, et al. (2014) 3D reconstruction from multi-view images of a jewel beetle. doi:10.4225/08/531E57227A11B. Available: http://dx.doi.org/10.4225/08/531E57227A11B.
  50. Nguyen C, Adcock M, Lovell D, Lemann C, Oberprieler R, et al. (2014) 3D reconstruction from multi-view images of an amycterine ground weevil. doi:10.4225/08/531E56A9C6A60. Available: http://dx.doi.org/10.4225/08/531E56A9C6A60.
  51. Nguyen C, Adcock M, Lovell D, Mantle B, La Salle J (2014) 3D reconstruction from multi-view images of a black field cricket. doi:10.4225/08/531E56C1CD687. Available: http://dx.doi.org/10.4225/08/531E56C1CD687.
  52. Nguyen C, Adcock M, Lovell D, Mantle B, La Salle J (2014) 3D reconstruction from multi-view images of a sand wasp. doi:10.4225/08/531FCA8D36F44. Available: http://dx.doi.org/10.4225/08/531FCA8D36F44.
  53. Nguyen C, Adcock M, Lovell D, Lemann C, Escalona Garcia H, et al. (2014) 3D reconstruction from multi-view images of a longhorn beetle. doi:10.4225/08/531E573D7F06C. Available: http://dx.doi.org/10.4225/08/531E573D7F06C.
  54. Nguyen C, Adcock M, Lovell D, Gates-Stuart E, Weir T, et al. (2014) 3D reconstruction from multi-view images of a Christmas beetle. doi:10.4225/08/531E56EFBD298. Available: http://dx.doi.org/10.4225/08/531E56EFBD298.
  55. X3DOM (2014) X3DOM - Instant 3D the HTML way! Available: http://www.x3dom.org/?page_id=2.
  56. Agisoft (2013) Agisoft PhotoScan. Available: http://www.agisoft.ru/.
  57. Autodesk (2013) 123D Catch. Available: http://www.123dapp.com/catch.
  58. Nguyen CV, Lovell D, Oberprieler R, Jennings D, Adcock M, et al. (2013) Virtual 3D models of insects for accelerated quarantine control. In: 2013 IEEE International Conference on Computer Vision Workshops (ICCVW 2013). pp. 161–167. doi:10.1109/ICCVW.2013.27. Available: http://www.cv-foundation.org/openaccess/content_iccv_workshops_2013/W06/papers/Nguyen_Virtual_3D_Models_2013_ICCV_paper.pdf.