Multiresolution and Displacement: Modifier Attribution to Enhance Realistic 3D Photogrammetry for Models of the Face

Purpose: The aim of this study was to present a workflow for obtaining realistic 3D models of human faces using enhanced tools and features of free software. Methods: Faces of six (6) subjects of varying ages were digitized using monoscopic photogrammetry according to the PlusID (+ID) methodology, combining smartphone captures and image processing through the OrtogOnBlender add-on, programmed for Blender®, an open-source software for PC. Alignment, resizing, unification of texture maps, and application of the Multiresolution and Displacement modifier tools were performed on the 3D models to achieve more realistic features of the face. Results: Resultant 3D models with medium-quality anatomic features were obtained at first and were enhanced to produce high-quality, realistic features and textures of the human face for all subjects. Facial anatomy could be reproduced in *.STL, *.OBJ and other file formats with no major irregularities. Conclusion: The combined use of the Multiresolution and Displacement features allowed us to increase mesh density and geometric detail, using the gray scale of the UV-mapped surface texture to displace the mesh surface of the digital model for a more realistic representation of physical features of the human face.


Introduction
Digital technology and 3D modeling have been used in industry since the mid-twentieth century to streamline design and manufacturing processes, provided users had access to adequate software, training, and equipment. Continuous improvement of available software and hardware allows for increasingly better and more efficient results from 3D technologies [1]. Anatomical study and digital reproduction of the human face have facilitated the production of more realistic 3D models in many industries: gaming, film, forensic science for identification and reconstruction, medicine for diagnosis, surgical treatment planning and device fabrication, facial prosthetics, as well as numerous other fields and applications [2].
Digital surface scanning of the human face has been proposed since the 1940s. CT scans, magnetic resonance imaging (MRI) and laser scanning devices have been instrumental for this purpose. More recently, stereophotogrammetry, monoscopic photogrammetry and structured-light devices have provided a means to achieve this with the inclusion of information about skin color and skin texture, without radiation and without any risk of eye damage. Numerous systems, such as 3DMD, are currently available on the international market [3][4][5].
While most systems have demonstrated accuracy, reproducibility, proportional accuracy, and have been sufficient for 3D printing, the smallest and finest details of the skin are generally not reproducible, providing limited realistic representation of facial features in the resultant 3D models [6]. The purpose of the present study was to present a workflow for processing 3D digital models of the human face, using tools and features of free software in order to achieve more realistic details and fine characteristics of skin and its texture.

Data Acquisition
Volunteer subjects agreed to take part in the study and were recruited for data capture of the face. Using an Android® smartphone (Asus Zenfone 2®, Asus Inc., Taiwan-China), a frontal face protocol of 26 total captures (13 captures at 2 heights) for monoscopic photogrammetry [7] was used to acquire 3D data of the faces of six (6) subjects of varying age and gender (mean age 46, range 27-68; 4 females, 3 males). The PlusID (+ID) workflow was followed. Subjects were instructed to maintain a static head position, close their eyes, and remove hats, glasses, earrings, and other reflective accessories that might interfere with accurate data capture. Participants were also instructed to keep the mouth and lips closed during the photo capture sequence. Subjects were photographed seated against a uniform, contrasting background color. Ambient lighting was indirect natural sunlight of no less than 1000 lux, measured with a light meter smartphone app (Trajkovski Labs®). Positioning of the subject relative to the operator was chosen to minimize visible shadows over the face of the subject.
For scaling purposes, a single linear measurement of an anatomical reference was used (inter-alar distance of the nose).
If unfocused images were found, the sequence was repeated.
The completed capture sequence of twenty-six (26) images was uploaded to the OrtogOnBlender add-on, programmed by Cicero Moraes in Python for Blender®, an open-source software for PC, using the OpenMVG+OpenMVS tool for 3D photogrammetry model creation, on a computer with an Intel i7 processor and 12 GB of RAM running Linux Ubuntu 16.04.

Preparation of digital 3D models and modifier application
Alignment: 3D models were aligned in the "front" view on the x-y-z axes. The position of the ears and eyes was evaluated in "side" views and aligned with the orthogonal function activated for this purpose. The Camper plane was aligned with the "ground" of the editing layer.

Area of Interest Selection:
Areas beyond the head that were of no technical use or interest regarding the face were erased using the Knife Project tool from the fixed "right" point of view.
Resizing: Scaling was performed using the previously registered measurement of the static anatomical reference taken clinically from the subject (inter-alar nasal distance).
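The resizing step above amounts to a single uniform scale factor mapping the model's inter-alar span onto the clinically measured distance. A minimal sketch of that arithmetic follows; the function names and the example values (34 mm measured, 1.7 units in the raw model) are illustrative assumptions, not taken from the OrtogOnBlender add-on:

```python
def scale_factor(measured_mm: float, model_distance: float) -> float:
    """Uniform factor that maps the model's inter-alar distance
    onto the clinically measured one (in millimeters)."""
    if model_distance <= 0:
        raise ValueError("model distance must be positive")
    return measured_mm / model_distance

def rescale(vertices, factor):
    """Apply the uniform scale to every (x, y, z) vertex tuple."""
    return [(x * factor, y * factor, z * factor) for x, y, z in vertices]

# Hypothetical example: inter-alar distance measured clinically as
# 34 mm, spanning 1.7 arbitrary units in the raw photogrammetry model.
f = scale_factor(34.0, 1.7)
scaled = rescale([(0.0, 0.0, 0.0), (1.7, 0.0, 0.0)], f)
```

Because the factor is uniform on all three axes, proportions of the face are preserved; only absolute size changes.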
Unification of texture maps: This step is usually performed automatically by the OrtogOnBlender photogrammetry process; when the photogrammetry produced two or more "UV maps", they were joined by the "Bake" process in the Render tab, leaving the model with a single unified texture map.

Assignment of the "Multiresolution" & "Displacement" modifiers
Mesh density and geometric detail were increased by applying "Multiresolution" processing up to 3 times (Figures 1a & 1b). Following this, "Displacement", based on the gray scale of the texture map ("UV map"), was used with a "strength" intensity level of 2 in order to optimize the level of anatomic detail over the mesh of the 3D model (Figures 1c & 1d).

Figure 1: a. Simple square geometry of 1 cm². b. Square geometries after multiresolution applied 3 times in 1 cm². c. Sample of a texture map over the 1 cm² square mesh. d. Displacement modifier applied over the square mesh of 1 cm².
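The effect of the two modifiers can be sketched numerically: each Multiresolution level splits every quad into four, and the Displace step offsets each vertex along its normal in proportion to the grayscale value sampled from the texture (Blender's Displace modifier follows vertex + normal × strength × (gray − midlevel), with midlevel defaulting to 0.5). The sketch below is a simplified illustration of that arithmetic, not the add-on's actual code:

```python
def quads_after_multires(quads: int, levels: int) -> int:
    """Each Multiresolution subdivision splits every quad into 4."""
    return quads * 4 ** levels

def displace(vertex, normal, gray, strength=2.0, midlevel=0.5):
    """Offset a vertex along its unit normal by the texture's
    grayscale value (gray in [0, 1]), as Blender's Displace
    modifier does with its default midlevel of 0.5."""
    offset = strength * (gray - midlevel)
    return tuple(v + n * offset for v, n in zip(vertex, normal))

# A 1 cm^2 square (one quad) after 3 Multiresolution levels:
quads_after_multires(1, 3)  # -> 64 quads

# With strength 2, a mid-gray pixel (0.5) leaves the vertex in
# place, while a white pixel (1.0) pushes it 1.0 unit along +z.
displace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), gray=1.0)
```

This is why the unified UV texture matters for the workflow: the same grayscale image drives both the visible skin texture and the geometric relief added to the densified mesh.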

Evaluation
Resultant 3D models were compared with their non-modified versions, both subjectively and through objective analysis of the mesh, file size and geometry.

Results
Faces of all subjects were successfully 3D digitized by monoscopic photogrammetry through a protocolized smartphone capture. Resultant 3D models with medium-quality realistic anatomic features on the mesh were enhanced to produce high-quality realistic features and textures of the face (Figure 2).

Figure 2: Subject 1. b. Texture maps unified and applied on the OBJ file. c. OBJ file with texture map visualized. d. With modifiers applied; OBJ file without texture map.

Discussion
Since its creation in 1995, Blender® has been used in many industries, and more recently in medicine [8]. Increasingly, biomedical professionals are utilizing add-ons programmed for Blender® for digital modeling and analysis, recognizing that it is essential that software tools for analyzing biological images have user-friendly interfaces and a reasonable learning curve [9].
OrtogOnBlender is an add-on that combines most of the 3D technologies useful for virtual surgery and prosthetic planning, from data acquisition and 3D modeling up to preparing the model for 3D printing, as in surgical guide creation or in the PlusID methodology to create a prototype for optimizing a final prosthesis [7]. This study's intent was not to compare the accuracy of physical anatomical reproductions from virtual 3D models [5,6,11]; previous investigators have compared stereophotogrammetry methods with smartphone-based monoscopic photogrammetry and found no statistically significant difference between the two methods [14]. The authors of the present study suggest a reproducible and predictable protocol for image capture when utilizing monoscopic photogrammetry [4,8]. In this study we defined a reproducible workflow for further processing of 3D models of the human face, using standardized protocols [7] and native computer graphics modifiers such as Multiresolution and Displacement within OrtogOnBlender, in order to achieve more realistic details and reproducible characteristics of skin.
Further studies will be necessary to continue evaluating this application in various industries, including clinical applications in healthcare.

Conclusion
The combined use of the Multiresolution and Displacement features allowed us to increase the geometric density of the mesh and to use the gray scale of the UV texture map to displace the surface of the digital model, more accurately representing realistic features of the human face. Starting from monoscopic photogrammetry following the PlusID (+ID) methodology, this workflow allowed us to obtain digital models with realistic facial features that can be used for digital analysis or 3D printing purposes.