Guide to 3D printing a brain

This page last updated 2020/11/25


In this guide I detail a relatively quick pipeline for 3D printing a brain. There are already a handful of guides that cover this topic (e.g. here, here, and here), but this pipeline differs in two ways: it is notably faster, and it includes subcortical structures.

Notable speedups are achieved by simply skipping as many of FreeSurfer's recon-all steps as possible. While some of the more computationally-demanding steps are helpful, they make relatively minor improvements to otherwise print-worthy meshes. These slow steps are particularly unnecessary considering most prints will be at a 1:8 or 1:10 scale, where minor defects are often 'smoothed' away in slicing and by the limited resolution of the printing process.

Improved subcortical fidelity is achieved by shoehorning the entire brain mesh into FreeSurfer's single-hemisphere cortical mesh functions. This process is far from perfect!


Step 1: Converting DICOMs to niis

The first task is to convert the T1 (and any similar-resolution T2/FLAIR) DICOMs to .nii. For this we will call dcm2niix.

#dcm2niix [options] <input_folder>
#including the option '-x y' enables cropping, and '-z y' enables compression

You will need to edit this to point to dcm2niix's location on your system, and to supply the path of your T1 DICOM folder. Here is my example:

/opt/dcm2niix/build/bin/dcm2niix -x y -z y /home/fordb/temp/t1_DICOMs

Repeat this procedure for any other images you wish to convert. dcm2niix will place the .niis into the folder containing the DICOMs. I usually recommend moving and renaming the produced .niis in some structured manner. With cropping and compression enabled, you will want to bring the .nii.gz file(s) beginning with 'co' (the cropped volumes) through to the subsequent steps.
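As a sketch of that housekeeping (the paths and filenames here are made up, and the 'touch' line merely stands in for real dcm2niix output):

```shell
# Hypothetical tidy-up: one folder per print job, one clearly-named T1 per folder.
DICOM_DIR=/tmp/t1_DICOMs   # folder that held the DICOMs (dcm2niix wrote the .niis here)
OUT_DIR=/tmp/printA        # destination for this print job
mkdir -p "$DICOM_DIR" "$OUT_DIR"
touch "$DICOM_DIR/coT1_mprage.nii.gz"   # stand-in for the cropped dcm2niix output
# keep the cropped ('co') volume and give it a structured name
mv "$DICOM_DIR"/co*.nii.gz "$OUT_DIR/printA_T1.nii.gz"
```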

Step 2: Converting niis to meshes

Once you have your .nii.gz files moved and renamed as you please, we will begin the lengthy procedure of converting them into surface meshes. First we will need to run freesurfer's setup. Be sure to change '/opt/freesurfer' to the location where you installed freesurfer. The default install location is '/usr/local/freesurfer' on Linux and '/Applications/freesurfer' on macOS.

export FREESURFER_HOME='/opt/freesurfer'
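With a standard install, FreeSurfer's environment is then initialized by sourcing its setup script, which defines SUBJECTS_DIR and puts the FreeSurfer binaries on your PATH (the export is repeated here so the snippet stands alone, and the guard just makes it a no-op on machines without FreeSurfer):

```shell
export FREESURFER_HOME='/opt/freesurfer'
# SetUpFreeSurfer.sh defines SUBJECTS_DIR and adds FreeSurfer's binaries to PATH
if [ -f "$FREESURFER_HOME/SetUpFreeSurfer.sh" ]; then
    . "$FREESURFER_HOME/SetUpFreeSurfer.sh"
fi
```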

Then, we will specify some other variables. The SUBJECT_NAME variable will control the folder that freesurfer uses to store intermediate files created during this procedure. T1_INPUT_VOL should be the path to the T1-weighted .nii of the brain you want to print. MESH_OUTPUT will be the name and location of the final mesh that we will bring into postprocessing.

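For example (these values are hypothetical; note also that the commands below use $DS as shorthand, which I take to be the subject's folder under $SUBJECTS_DIR):

```shell
# Hypothetical example values -- substitute your own.
SUBJECT_NAME='printA'
T1_INPUT_VOL='/home/fordb/printA/printA_T1.nii.gz'
MESH_OUTPUT='/home/fordb/printA/printA_brainmesh.stl'
# $DS, used throughout the commands below, is assumed to be the subject's
# freesurfer directory, i.e. $SUBJECTS_DIR/$SUBJECT_NAME
SUBJECTS_DIR="${SUBJECTS_DIR:-/opt/freesurfer/subjects}"
DS="$SUBJECTS_DIR/$SUBJECT_NAME"
```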

Now we can begin generating the brain mesh! Run the following code as-is. This step can take a while, so take the opportunity to rest while your computer does the hard work.

recon-all -autorecon1 -normalization2 -segmentation -noaseg -s $SUBJECT_NAME -i $T1_INPUT_VOL   # run only the early recon-all stages: conform, normalize, skull-strip, segment white matter
mri_volcluster --in $DS/mri/wm.mgz --thmin 50 --ocn $DS/mri/wm.cluster.thresh.mgz   # threshold the white matter volume and number its connected clusters by size
mri_tessellate $DS/mri/wm.cluster.thresh.mgz 1 $DS/surf/rh.orig.nofix   # tessellate cluster 1 (the largest) into an initial surface
mris_extract_main_component $DS/surf/rh.orig.nofix $DS/surf/rh.orig   # keep only the largest connected component of the surface
mris_make_surfaces -noaseg -noaparc -mgz -filled wm.cluster.thresh -T1 brain $SUBJECT_NAME rh   # grow the surface out to the pial boundary (the whole brain lives in the 'rh' slot)
mris_convert $DS/surf/rh.pial $MESH_OUTPUT   # export the pial surface as a mesh file

If you also have a FLAIR in addition to the T1, specify the FLAIR filename:
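For example (the path is hypothetical; the variable name matches the $FLAIR_INPUT_VOL used in the commands below):

```shell
# Hypothetical path -- substitute your own FLAIR volume
FLAIR_INPUT_VOL='/home/fordb/printA/printA_flair.nii.gz'
```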


Then run another pass of the surface adjustment. If you have a T2-weighted image instead of a FLAIR, replace all instances of 'FLAIR' in the code immediately above and below with 'T2'.

mri_convert $FLAIR_INPUT_VOL $DS/mri/orig/FLAIRraw.mgz   # bring the FLAIR into freesurfer's .mgz format
ln -s $DS/surf/rh.white $DS/surf/lh.white   # fake lh surfaces so tools expecting both hemispheres don't complain
ln -s $DS/surf/rh.thickness $DS/surf/lh.thickness
bbregister --s $SUBJECT_NAME --mov $DS/mri/orig/FLAIRraw.mgz --lta $DS/mri/transforms/ --init-coreg --T2   # register the FLAIR to the T1
cp $DS/mri/transforms/ $DS/mri/transforms/FLAIRraw.lta
mri_convert -odt float -at $DS/mri/transforms/FLAIRraw.lta -rl $DS/mri/orig.mgz $DS/mri/orig/FLAIRraw.mgz $DS/mri/FLAIR.prenorm.mgz   # resample the FLAIR into the T1's space
mri_normalize -sigma 0.5 -nonmax_suppress 0 -min_dist 1 -surface $DS/surf/rh.white identity.nofile -surface $DS/surf/lh.white identity.nofile $DS/mri/FLAIR.prenorm.mgz $DS/mri/FLAIR.norm.mgz   # intensity-normalize the FLAIR
mri_mask $DS/mri/FLAIR.norm.mgz $DS/mri/brainmask.mgz $DS/mri/FLAIR.mgz   # mask the FLAIR to the brain
cp -v $DS/surf/rh.pial $DS/surf/rh.woFLAIR.pial   # keep a copy of the T1-only pial surface
mris_make_surfaces -orig_white white -orig_pial woFLAIR.pial -filled wm.cluster.thresh -noaseg -noaparc -nowhite -mgz -T1 brain -FLAIR $DS/mri/FLAIR -nsigma_above 3 -nsigma_below 3 $SUBJECT_NAME rh   # re-place the pial surface using the FLAIR contrast
mris_convert $DS/surf/rh.pial $MESH_OUTPUT   # export the adjusted mesh

Step 3: Postprocessing

We have a couple of options for postprocessing, using either Blender or MeshLab. The processes are not equivalent between the two, but both attempt a heavier smooth on portions of the brain mesh that are exceptionally jagged and a lighter smooth across the entire mesh. Regardless of whether you use Blender or MeshLab, you may need to perform more or fewer smoothing iterations depending on how rough the mesh is. I recommend against decimating, as it seems to produce non-manifold meshes that can cause more problems during slicing.

Option a: Postprocessing in Blender

  1. Import the brain mesh and enter edit mode
  2. Select > Deselect All
  3. Select > Sharp Edges > 90° or ~1.57 radians
  4. Select > Select More/Less > More
  5. Tools > Mesh Tools > Smooth Vertex > Smoothing Factor 1, 10 Iterations, select all axes
  6. Select > Select All
  7. Tools > Mesh Tools > Smooth Vertex > Smoothing Factor 1, 1 Iteration, select all axes

Depending on the mesh, you may need to perform more smoothing iterations to get rid of jagged geometry. I'd suggest either relaxing the sharp-edge threshold, or performing more iterations of the first (i.e. jagged-vertex) smooth. The smoothing options in Blender are limited compared to MeshLab, and I would recommend smoothing as little as possible, keeping in mind that scaling down and slicing will make defects far less noticeable. If you want to automate this process, you can download a python script here, and run it in Blender with the following command:

blender --background --python "" -- "full_path_to_inputfile.stl"

Note the standalone '--' before the input file path: arguments after it are passed through to the Python script rather than interpreted by Blender. The script will write the postprocessed mesh alongside the input file, with the filename ending in '.post.stl'.

Option b: Postprocessing in MeshLab

  1. Filters > Quality... > Per Face Quality according to Triangle... > Compute by inradius/circumradius
  2. Filters > Selection > Select by Face Quality > Min 0, Max 0.3
  3. Filters > Selection > Dilate Selection
  4. Filters > Selection > Dilate Selection
  5. Filters > Smoothing... > Laplacian Smooth > 10 Steps, No 1D Boundary Smoothing, No Cotangent Weighting, Affect only selected faces
  6. Filters > Selection > Select None
  7. Filters > Smoothing... > Taubin Smooth > Lambda 0.5, Mu -0.517, 50-200 Steps

Again, the number of smoothing iterations at each step really depends on the degree of mesh defects, but less is more. If steps 2-4 select too much or too little geometry, consider adjusting the maximum value in step 2. I prefer the Taubin smooth on the entire mesh because I feel it does a better job of preserving the shape of the gyrification compared to other smoothing options. 150 iterations of a Laplacian smooth, for instance, will produce slightly wider sulci, whereas the same number of iterations of a Taubin smooth can retain the shape of the sulci. When exporting the mesh from MeshLab, be sure to deselect the Color and Materialise Color Encoding options.


I've processed dozens of brains using this procedure, and while it is relatively robust, occasionally some bad meshes slip through. The issue that pops up most frequently is some of the dura mater being classified as grey matter (as it is of similar intensity on T1-weighted images), manifesting as an ugly protruding 'bump' on the otherwise nice dorsal cortical surface. This seems more common in children's brains, as there is typically no atrophy yet, so the brain is generally closer to the dura. Having a T2 or FLAIR can often fix this, as they have better contrast between grey matter and dura. Occasionally these defects can be manually cleaned in Blender, or simply broken off by hand after printing. Particularly stubborn cases may need some additional processing to differentiate the intensity of the meninges and grey matter in your T1 input image. Sometimes you can get away with using SPM / ANTs to get a better segmentation of the brain (BET + FAST almost never perform better than freesurfer for me), and use that segmentation to alter the intensity of the meninges so that freesurfer can tell them apart. If you have a better idea, let me know!

A Two-Script Solution

If you just want to put an .nii in, and get a maybe-print-ready .stl out, you can download the main surface-generating bash script here, and the blender postprocessing script here. Then, edit the FREESURFER_HOME='/opt/freesurfer' line to point to your freesurfer directory, and run the scripts:

<SUBJECT_NAME> <T1_INPUT_VOL> [FLAIR/T2_INPUT_VOL]
blender --background --python <PATH_TO_POSTPROCESS.PY> -- <PATH_TO_INPUT.STL>

Please note that if you include the optional flair or T2, the filename needs to include either 'FLAIR' or 'T2'. The script looks for the presence of either in the filename and adjusts accordingly. For example:
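That check presumably amounts to something like the following (a hypothetical sketch, not the script's actual code):

```shell
# Hypothetical sketch of the FLAIR/T2 filename check described above
SECOND_VOL='/home/fordb/printA_flair.nii'
case "$SECOND_VOL" in
    *FLAIR*|*flair*) MODE='FLAIR' ;;
    *T2*|*t2*)       MODE='T2' ;;
    *) echo "second input must contain 'FLAIR' or 'T2' in its filename" ;;
esac
```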

bash brainPrintA /home/fordb/printA_T1.nii /home/fordb/printA_flair.nii
blender --background --python "/home/fordb/" -- "/home/fordb/printA_T1.nii.plusflair.brainmesh.stl"

This would automatically run the initial T1 mesh creation and the flair-based second-step mesh adjustment. The postprocessed file would be located in /home/fordb/, with '.post.stl' at the end of its filename.

Change log

2020/11/25: the postprocessing script now opens blender with an empty scene.