Research Ideas and Outcomes: Project Report
Streamlining the Process of 3D Printing a Brain From a Structural MRI
Daniel Peterson
Open Access

Abstract

Currently, obtaining a 3D model from a structural MRI requires specialized knowledge and skills. This is not due to any fundamental difficulty or complexity of the process, but to the fact that the necessary tools were developed by and for neuroimaging researchers. This project describes a publicly available utility, implemented as a Docker image, that takes a structural MRI as input and produces files ready for 3D printing as output, along with a rendered image of the surface.

Keywords

Structural MRI, Rapid prototyping, 3D printing, Docker

Introduction

Rapid prototyping, or 3D printing, is an increasingly mature and common technology. It presents opportunities for neuroscientists in a variety of arenas, including public engagement, education (Madan 2016), and physical data visualization. While advanced methods are available for automated segmentation and quantification of brain images (Fischl 2012), obtaining the files required for 3D printing from a structural scan requires familiarity with specialized research software, uncommon file formats, and advanced command-line experience. This project aims to make it feasible for people with no neuroimaging experience, but some experience with the Linux command-line interface, to obtain a 3D print of their brain from an MRI image. The entire process takes about an hour of user interaction and about a day of processing time. Along with the .stl file output, which is the standard file format for rapid prototyping, animated .gif files showing a rendering of the model are also produced. The project is publicly available on GitHub: https://github.com/danjonpeterson/brain_printer.
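
For illustration, running the container might look like the following. The image name, mount points, and argument order here are assumptions based on the description above, not the project's documented interface; the README in the repository gives the exact usage.

    # Hypothetical invocation: mount a directory containing the MRI scan and a
    # directory for the results, then run the pipeline on the input image.
    docker run --rm \
        -v /path/to/input:/input \
        -v /path/to/output:/output \
        danjonpeterson/brain_printer /input/T1.nii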

Description

In order to reduce the burden on the user associated with installing and configuring various research software packages, a Docker container was constructed that includes all of the necessary software, in the correct configuration. Detailed instructions on how to use this Docker container are available in the README file on GitHub. When the main script in the Docker container is run, the following steps are executed: First, FreeSurfer is run on the input brain image (in .nii or .nii.gz format). Then, the right and left pial surfaces are converted to .stl format using mris_convert. Next, the .stl mesh files are converted to POV-Ray format (.pov). The .pov files are then fed to the POV-Ray rendering software to produce a series of raytraced images showing the brain model at a range of viewing angles (Fig. 1). These images are then assembled into an animated .gif image using convert from the ImageMagick software package. Finally, the .stl and .gif files are copied to the output directory. All of these operations occur without user interaction in a 'headless' Linux environment without display drivers.
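
Sketched as a shell script, the pipeline looks roughly like the following. Subject names, paths, frame counts, and several flags are illustrative assumptions, and the STL-to-POV conversion is shown with hypothetical helper calls; the authoritative version is the script in the repository.

    #!/bin/bash
    # Sketch of the pipeline described above; paths and flags are assumptions.

    # 1. Full FreeSurfer reconstruction of the input volume (the day-long step)
    recon-all -i /input/T1.nii -s subject -all

    # 2. Convert the left and right pial surfaces to .stl meshes
    mris_convert "$SUBJECTS_DIR/subject/surf/lh.pial" lh.pial.stl
    mris_convert "$SUBJECTS_DIR/subject/surf/rh.pial" rh.pial.stl

    # 3. Convert the .stl meshes to POV-Ray format and wrap them in a scene
    #    with a camera and lighting (stl2pov and build_scene are hypothetical
    #    stand-ins for whatever converter the script actually uses)
    stl2pov lh.pial.stl > lh.pial.pov
    stl2pov rh.pial.stl > rh.pial.pov
    build_scene lh.pial.pov rh.pial.pov > brain.pov

    # 4. Render raytraced frames at a range of viewing angles; -D suppresses
    #    the display preview, which is required in a headless environment
    for angle in $(seq 0 10 350); do
        povray Declare=ViewAngle=$angle +Ibrain.pov \
               +Oframe_$(printf '%03d' $angle).png +W640 +H480 -D
    done

    # 5. Assemble the frames into an animated .gif
    convert -delay 10 -loop 0 frame_*.png brain.gif

    # 6. Copy the deliverables to the output directory
    cp lh.pial.stl rh.pial.stl brain.gif /output/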

Figure 1.

A frame from the quality control animation generated by the Docker image.

Conclusions

Using a containerized solution, it is feasible for users with no neuroimaging experience to employ advanced research software for specific use-cases. This project has already been used to obtain 3D printed brains by users who learned about it through social media, guided only by the documentation on GitHub. Planned improvements include support for DICOM input, and an additional decimation/smoothing step in the mesh processing, in order to denoise the final surface and reduce the size of the final output. Possible future directions include making this process available as a public web service, so that users with no technical background whatsoever may obtain a 3D print of their brain. If this can be achieved, it will be simple for study coordinators to routinely provide 3D printed models to study volunteers as a form of non-coercive compensation that is unique and memorable.
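
As a sketch, the planned decimation/smoothing step could be implemented with FreeSurfer's own mesh utilities before the .stl export, as below; the iteration count and decimation level are assumptions, and the project may settle on a different toolchain entirely.

    # Hypothetical pre-export cleanup for one hemisphere: smooth the pial
    # surface, decimate it to roughly half as many triangles, then export.
    mris_smooth -n 5 lh.pial lh.pial.smooth
    mris_decimate -d 0.5 lh.pial.smooth lh.pial.smooth.dec
    mris_convert lh.pial.smooth.dec lh.pial.stl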

Acknowledgements

This project was performed at Neurohackweek 2016. Thanks to Valentina Staneva, Matteo Visconti, Chris Madan, and everyone else at #NHW16.

References

Fischl B (2012) FreeSurfer. NeuroImage 62(2): 774-781.

Madan CR (2016) Creating 3D visualizations of MRI data: A brief guide. F1000Research 4: 466.