A deep learning method for simultaneous denoising and missing wedge reconstruction in cryogenic electron tomography
In this section, we describe DeepDeWedge and evaluate its effectiveness on real and simulated data.
The DeepDeWedge algorithm
DeepDeWedge takes a single tilt series t as input and produces a denoised, missing-wedge-filled tomogram. The method can also be applied to a dataset containing multiple (typically up to 10) tilt series from different samples of the same type, for example, sections of different cells of the same cell type. DeepDeWedge consists of the following three steps, illustrated in Fig. 1:

1.
Step: Data preparation. First, split the tilt series t into two sub-tilt-series t^{0} and t^{1} using either the even/odd split or the frame-based split. The even/odd split partitions the tilt series into even and odd projections based on their order of acquisition. The frame-based split can be applied if the tilt series is collected using dose fractionation and entails averaging only the even and only the odd frames recorded at each tilt angle. We recommend the frame-based split whenever possible.
Next, reconstruct both sub-tilt-series independently with FBP and apply CTF correction. This yields a pair of coarse reconstructions \((\mathrm{FBP}(\mathbf{t}^0),\,\mathrm{FBP}(\mathbf{t}^1))\) of the sample’s 3D density. Finally, use a 3D sliding-window procedure to extract N overlapping cubic subtomogram pairs \(\{(\mathbf{v}^0_i,\mathbf{v}^1_i)\}_{i=1}^N\) from the two FBP reconstructions. The size and number N of these subtomogram cubes are hyperparameters. Experiments on synthetic data presented in the supplementary information suggest that larger subtomograms tend to yield better results up to a point.
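The sliding-window extraction described above can be sketched as follows; the function name, the stride parameter, and the list-based return are illustrative choices, not part of the paper's implementation:

```python
import numpy as np

def extract_subtomogram_pairs(fbp0, fbp1, cube_size, stride):
    """Extract corresponding overlapping cubic subtomogram pairs from the
    two FBP reconstructions with a 3D sliding window."""
    assert fbp0.shape == fbp1.shape
    pairs = []
    D, H, W = fbp0.shape
    for z in range(0, D - cube_size + 1, stride):
        for y in range(0, H - cube_size + 1, stride):
            for x in range(0, W - cube_size + 1, stride):
                # Both cubes cover the same spatial region, so the pair
                # shows the same content under independent noise.
                v0 = fbp0[z:z + cube_size, y:y + cube_size, x:x + cube_size]
                v1 = fbp1[z:z + cube_size, y:y + cube_size, x:x + cube_size]
                pairs.append((v0, v1))
    return pairs
```

A stride smaller than `cube_size` yields overlapping cubes, matching the paper's description of overlapping subtomogram pairs.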

2.
Step: Model fitting. Fit a randomly initialized network f_{θ} (we use a U-Net^{14}) with weights θ by repeating the following steps until convergence:

(a)
Generate model inputs and targets: For each of the subtomogram pairs \(\{(\mathbf{v}^0_i,\mathbf{v}^1_i)\}_{i=1}^N\) generated in Step 1, sample a rotation \(\mathbf{R}_{\varphi_i}\) parameterized by Euler angles φ_{i} from the uniform distribution on the group of 3D rotations, and construct a model input \(\tilde{\mathbf{v}}^0_{i,\varphi_i}\) and target \(\mathbf{v}^1_{i,\varphi_i}\) by applying the rotation \(\mathbf{R}_{\varphi_i}\) to both subtomograms and adding an artificial missing wedge to the rotated subtomogram \(\mathbf{R}_{\varphi_i}\mathbf{v}^0_i\), as shown in the centre panel of Fig. 1. The missing wedge is added by taking the Fourier transform of the rotated subtomogram and multiplying it with a binary 3D mask M that zeros out all Fourier components that lie inside the missing wedge. Repeating this procedure for all subtomogram pairs \(\{(\mathbf{v}^0_i,\mathbf{v}^1_i)\}_{i=1}^N\) yields a set of N triplets \(\{(\tilde{\mathbf{v}}^0_{i,\varphi_i},\mathbf{v}^1_{i,\varphi_i},\varphi_i)\}_{i=1}^N\) consisting of model input, target subtomogram, and angle.

(b)
Update the model: Update the model weights θ by performing gradient steps to minimize the per-sample loss
$$\ell\left({\boldsymbol{\theta}},i\right)={\left\Vert \left(\mathbf{M}\mathbf{M}_{\varphi_i}+2\,\mathbf{M}^C\mathbf{M}_{\varphi_i}\right)\mathbf{F}\left(f_\theta\left(\tilde{\mathbf{v}}^0_{i,\varphi_i}\right)-\mathbf{v}^1_{i,\varphi_i}\right)\right\Vert}_2^2.$$
(2)
Here, \(\mathbf{M}_{\varphi_i}\) is the rotated version of the wedge mask M, and \(\mathbf{M}^C := \mathbf{I}-\mathbf{M}\) is the complement of the mask. For the gradient updates, we use the Adam optimizer^{15} and perform a single pass through the N model input and target subtomograms generated before.

(c)
Update the missing wedges of the model inputs: For each i = 1, …, N, update the missing wedge in the ith subtomogram \(\mathbf{v}^0_i\) produced in Step 1 by passing it through the current model f_{θ} and inserting the predicted content of the missing wedge, as follows:
$$\mathbf{v}^0_i\leftarrow \mathbf{F}^{-1}\left(\mathbf{M}\mathbf{F}\mathbf{v}^0_i+\mathbf{M}^C\mathbf{F}f_\theta\left(\mathbf{v}^0_i\right)\right).$$
(3)
In the next input–target generation step, the model inputs \(\{\tilde{\mathbf{v}}^0_{i,\varphi_i}\}_{i=1}^N\) are constructed from the updated subtomograms. We do not update the missing wedges of the subtomograms \(\{\mathbf{v}^1_i\}_{i=1}^N\) used to generate the model targets, since their missing wedges are masked out in the per-sample loss, cf. Eq. (2), and therefore play no role in model fitting.
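The Fourier-domain wedge operations used in steps (a) and (c) can be sketched as follows. This is a minimal illustration assuming a simplified wedge geometry (a ±60° tilt range, wedge taken in the x–z Fourier plane); the helper names are hypothetical and real implementations account for the exact tilt geometry:

```python
import numpy as np

def wedge_mask(shape, half_angle_deg=60.0):
    """Binary mask M that zeros Fourier components inside a missing wedge.

    Simplified geometry: components whose direction in the x-z Fourier
    plane is steeper than the tilt range lie inside the missing wedge.
    """
    kz, ky, kx = np.meshgrid(*[np.fft.fftfreq(s) for s in shape],
                             indexing="ij")
    angle = np.degrees(np.arctan2(np.abs(kz), np.abs(kx)))
    return (angle <= half_angle_deg).astype(np.float64)

def update_missing_wedge(v0, model, mask):
    """Eq. (3): keep the measured Fourier components (mask) and insert the
    model's prediction inside the missing wedge (complement of mask)."""
    F_v0 = np.fft.fftn(v0)
    F_pred = np.fft.fftn(model(v0))
    return np.real(np.fft.ifftn(mask * F_v0 + (1.0 - mask) * F_pred))
```

With an identity "model", the update leaves the subtomogram unchanged, since the two masked regions together cover all of Fourier space.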


3.
Step: Tomogram refinement. Pass the original, non-updated subtomograms \(\{\mathbf{v}^0_i\}_{i=1}^N\) through the fitted model \(f_{\boldsymbol{\theta}^*}\) from Step 2, and reassemble the model outputs \(\{f_{\boldsymbol{\theta}^*}(\mathbf{v}^0_i)\}_{i=1}^N\) into a full-sized tomogram. Repeat the same for the subtomograms \(\{\mathbf{v}^1_i\}_{i=1}^N\). Finally, average both reconstructions to obtain the final denoised and missing-wedge-corrected tomogram.
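The reassembly of overlapping model outputs into a full-sized tomogram can be sketched as a weighted average; `positions` is an assumed bookkeeping list of cube corners recorded during extraction, not notation from the paper:

```python
import numpy as np

def reassemble(outputs, positions, full_shape, cube_size):
    """Average overlapping refined cubes back into a full tomogram."""
    acc = np.zeros(full_shape, dtype=np.float64)
    weight = np.zeros(full_shape, dtype=np.float64)
    for out, (z, y, x) in zip(outputs, positions):
        acc[z:z + cube_size, y:y + cube_size, x:x + cube_size] += out
        weight[z:z + cube_size, y:y + cube_size, x:x + cube_size] += 1.0
    # Voxels covered by several cubes are averaged; uncovered voxels stay 0.
    return acc / np.maximum(weight, 1.0)
```

The final DeepDeWedge output would then be the average of the two tomograms reassembled from the \(\mathbf{v}^0\) and \(\mathbf{v}^1\) cubes.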
Motivation for the three steps of the algorithm
In Step 1, we split the tilt series into two disjoint parts to obtain measurements with independent noise. As the noise on the individual projections or frames is assumed to be independent, the reconstructions \(\mathrm{FBP}(\mathbf{t}^0)\) and \(\mathrm{FBP}(\mathbf{t}^1)\) are noisy observations of the same underlying sample with independent noise terms. These are used in Step 2 for the self-supervised, Noise2Noise-inspired loss.
Tilt series splitting is also used in popular implementations of Noise2Noise-like denoising methods for cryo-ET^{11,12,13}. The frame-based splitting procedure was proposed by Buchholz et al.^{11}, who found that it can improve Noise2Noise-like denoising over the even/odd split.
Step 2 of DeepDeWedge fits a neural network to perform denoising and missing wedge reconstruction, for which we have designed the specific loss function ℓ (Eq. (2)). We provide a brief justification for the loss function here; a detailed theoretical motivation is presented in the following section.
As the masks \(\mathbf{M}\mathbf{M}_{\varphi_i}\) and \(\mathbf{M}^C\mathbf{M}_{\varphi_i}\) are orthogonal, the loss value ℓ(θ, i) can be expressed as the sum of the two terms \({\Vert \mathbf{M}\mathbf{M}_{\varphi_i}\mathbf{F}(f_\theta(\tilde{\mathbf{v}}^0_{i,\varphi_i})-\mathbf{v}^1_{i,\varphi_i})\Vert}_2^2\) and \({\Vert 2\,\mathbf{M}^C\mathbf{M}_{\varphi_i}\mathbf{F}(f_\theta(\tilde{\mathbf{v}}^0_{i,\varphi_i})-\mathbf{v}^1_{i,\varphi_i})\Vert}_2^2\). The first summand is the squared L2 distance between the network output and the target subtomogram \(\mathbf{v}^1_{i,\varphi_i}\) on all Fourier components that are not masked out by the two missing wedge masks M and \(\mathbf{M}_{\varphi_i}\). As we assume the noise in the target to be independent of the noise in the input \(\tilde{\mathbf{v}}^0_{i,\varphi_i}\), minimizing this part incentivizes the network to learn to denoise these Fourier components. This is inspired by the Noise2Noise principle.
The second summand, i.e. \({\Vert 2\,\mathbf{M}^C\mathbf{M}_{\varphi_i}\mathbf{F}(f_\theta(\tilde{\mathbf{v}}^0_{i,\varphi_i})-\mathbf{v}^1_{i,\varphi_i})\Vert}_2^2\), incentivizes the network f_{θ} to restore the data that we artificially removed with the mask M, and can be considered a Noisier2Noise-like loss^{16} (see Supplementary Information 1 for background). For this part, it is important that we rotate both volumes, which moves their original missing wedges to a new, random location.
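This decomposition can be checked numerically. The sketch below implements the per-sample loss of Eq. (2) for binary masks and verifies that, because the two mask products have disjoint supports, the loss equals the sum of the Noise2Noise-like and Noisier2Noise-like terms:

```python
import numpy as np

def per_sample_loss(output, target, M, M_phi):
    """Eq. (2): squared L2 norm of the masked Fourier-domain difference.
    M and M_phi are binary masks (M_phi plays the role of the rotated
    wedge mask); the inside-wedge term is weighted by a factor of 2."""
    diff = np.fft.fftn(output - target)
    combined = M * M_phi + 2.0 * (1.0 - M) * M_phi
    return np.sum(np.abs(combined * diff) ** 2)

# Demonstration with random binary masks standing in for the wedge masks.
rng = np.random.default_rng(0)
out, tgt = rng.normal(size=(4, 4, 4)), rng.normal(size=(4, 4, 4))
M = (rng.random((4, 4, 4)) > 0.5).astype(float)
M_phi = (rng.random((4, 4, 4)) > 0.3).astype(float)
diff = np.fft.fftn(out - tgt)
n2n = np.sum(np.abs(M * M_phi * diff) ** 2)            # first summand
nr2n = np.sum(np.abs(2.0 * (1.0 - M) * M_phi * diff) ** 2)  # second summand
assert np.isclose(per_sample_loss(out, tgt, M, M_phi), n2n + nr2n)
```

The cross term vanishes because M(1 − M) = 0 for binary masks, which is exactly the orthogonality argument in the text.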
In the last part of Step 2, we correct the missing wedges of the subtomograms \(\{\mathbf{v}^0_i\}_{i=1}^N\) using the current model f_{θ}. Therefore, as the model fitting proceeds, the model inputs \(\{\tilde{\mathbf{v}}^0_{i,\varphi_i}\}_{i=1}^N\) increasingly resemble subtomograms with only one missing wedge, namely the one we artificially remove from the partially corrected subtomograms \(\{\mathbf{v}^0_i\}_{i=1}^N\) with the mask \(\mathbf{M}\). This heuristic is intended to help the model perform well on the original subtomograms \(\{\mathbf{v}^0_i\}_{i=1}^N\) and \(\{\mathbf{v}^1_i\}_{i=1}^N\), which have only one missing wedge and which we use as model inputs in Step 3. An analogous approach for Noisier2Noise-based image denoising was proposed by Zhang et al.^{17}.
In Step 3, we use the fitted model from Step 2 to produce the final denoised and missing-wedge-filled tomogram. To use all the information contained in the tilt series t for the final reconstruction, we separately refine the subtomograms from the FBP reconstructions \(\mathrm{FBP}(\mathbf{t}^0)\) and \(\mathrm{FBP}(\mathbf{t}^1)\) of the sub-tilt-series t^{0} and t^{1}, and average the results. For this, we apply a special normalization to the model inputs, which is described in Supplementary Information 2.
Theoretical motivation for the loss function
Here, we present a theoretical result that motivates the choice of our per-sample loss ℓ defined in Eq. (2). The discussion in this section does not consider the heuristic of updating the missing wedges of the model inputs, which is part of DeepDeWedge’s model fitting step. Moreover, we consider an idealized setup that deviates from practice in order to motivate our loss.
We assume access to data consisting of many noisy, missing-wedge-affected 3D observations of a fixed ground truth structure \(\mathbf{v}^*\in\mathbb{R}^{N\times N\times N}\). Specifically, the data is generated as pairs of measurements (in the form of volumes) of the unknown ground-truth volume of interest, \(\mathbf{v}^*\), as
$$\mathbf{v}^0=\mathbf{F}^{-1}\mathbf{M}\mathbf{F}\left(\mathbf{v}^*+\mathbf{n}^0\right),\qquad \mathbf{v}^1=\mathbf{F}^{-1}\mathbf{M}\mathbf{F}\left(\mathbf{v}^*+\mathbf{n}^1\right),$$
(4)
where \(\mathbf{n}^0,\mathbf{n}^1\in\mathbb{R}^{N\times N\times N}\) are random noise terms and \(\mathbf{M}\in\{0,1\}^{N\times N\times N}\) is the missing wedge mask. From the first measurement, we generate a noisier observation \(\tilde{\mathbf{v}}^0=\mathbf{F}^{-1}\tilde{\mathbf{M}}\mathbf{F}\mathbf{v}^0\) by applying a second missing wedge mask \(\tilde{\mathbf{M}}\). The noisier observation has two missing wedges: the one introduced by the first mask and the one introduced by the second. We assume that the two masks follow a joint and symmetric distribution, e.g. that for each mask, a random wedge is chosen uniformly at random. Figure 2 illustrates four data points in 2D.
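A 2D toy version of this data-generation process (cf. Fig. 2) might look as follows; the wedge geometry and the uniform sampling of the wedge orientation are simplifying assumptions, and all names are illustrative:

```python
import numpy as np

def random_wedge_mask_2d(n, half_angle_deg, rng):
    """2D analogue of a missing wedge: zero out Fourier components whose
    direction lies within half_angle_deg of a randomly drawn axis."""
    theta0 = rng.uniform(0.0, 180.0)
    ky, kx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    angle = np.degrees(np.arctan2(ky, kx)) % 180.0
    d = np.abs((angle - theta0 + 90.0) % 180.0 - 90.0)  # angular distance
    mask = (d > half_angle_deg).astype(float)
    mask.flat[0] = 1.0  # always keep the DC component
    return mask

def make_observation_pair(v_star, sigma, half_angle_deg, rng):
    """Eq. (4): two noisy measurements of the same v* sharing the wedge
    mask M; the noisier input additionally gets a second wedge M-tilde."""
    M = random_wedge_mask_2d(v_star.shape[0], half_angle_deg, rng)
    M_tilde = random_wedge_mask_2d(v_star.shape[0], half_angle_deg, rng)
    masked = lambda x: np.real(np.fft.ifft2(M * np.fft.fft2(x)))
    v0 = masked(v_star + sigma * rng.normal(size=v_star.shape))
    v1 = masked(v_star + sigma * rng.normal(size=v_star.shape))
    v0_tilde = np.real(np.fft.ifft2(M_tilde * np.fft.fft2(v0)))
    return v0_tilde, v1
```

Here `v0_tilde` plays the role of the noisier model input with two missing wedges, and `v1` the role of the target with one.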
We then train a neural network f_{θ} to minimize the loss
$$\mathrm{L}({\boldsymbol{\theta}})=\mathbb{E}_{\mathbf{M},\tilde{\mathbf{M}},\mathbf{n}^0,\mathbf{n}^1}\left[{\left\Vert \left(\tilde{\mathbf{M}}\mathbf{M}+2\,\tilde{\mathbf{M}}^C\mathbf{M}\right)\mathbf{F}\left(f_\theta\left(\tilde{\mathbf{v}}^0\right)-\mathbf{v}^1\right)\right\Vert}_2^2\right],$$
(5)
where the expectation is over the random masks and the noise terms. Note that this resembles training on infinitely many data points with a loss very similar to the original loss (2); the main difference is that in the original loss, the volume is rotated randomly but the mask M is fixed, while in the setup considered in this section, the volume is fixed but the masks \(\mathbf{M}\) and \(\tilde{\mathbf{M}}\) are random.
After training, we can use the network to estimate the ground-truth volume by applying it to another noisy observation \(\tilde{\mathbf{v}}^0\). The following proposition, whose proof can be found in Supplementary Information 4, establishes that this training is equivalent to training the network with a supervised loss to reconstruct the ground truth \(\mathbf{v}^*\) from the input \(\tilde{\mathbf{v}}^0\), provided the two masks are non-overlapping.
Proposition 1
Assume that the noise n^{1} is zero-mean and independent of the noise n^{0} and of the masks \((\mathbf{M},\tilde{\mathbf{M}})\), and assume that the noise n^{0} is also independent of the masks. Moreover, assume that the joint probability distribution P of the missing wedge masks \(\mathbf{M}\) and \(\tilde{\mathbf{M}}\) is symmetric, i.e. \(\mathrm{P}(\mathbf{M},\tilde{\mathbf{M}})=\mathrm{P}(\tilde{\mathbf{M}},\mathbf{M})\), and that the missing wedges do not overlap. Then the loss L is proportional to the supervised loss
$$\mathrm{R}({\boldsymbol{\theta}})=\mathbb{E}_{\mathbf{M},\tilde{\mathbf{M}},\mathbf{n}^0}\left[{\left\Vert f_\theta\left(\tilde{\mathbf{v}}^0\right)-\mathbf{v}^*\right\Vert}_2^2\right],$$
(6)
i.e. \(\mathrm{L}({\boldsymbol{\theta}})=\mathrm{R}({\boldsymbol{\theta}})+c\), where c is a numerical constant independent of the network parameters θ.
In practice, we do not apply our approach to the problem of reconstructing a single fixed structure \(\mathbf{v}^*\) from multiple pairs of noisy observations with random missing wedges. Instead, we consider the problem of reconstructing several unique biological samples from a small dataset of tilt series. To this end, we fit a model with an empirical estimate of a risk similar to the one considered in Proposition 1. We fit the model on subtomogram pairs extracted from the FBP reconstructions of the even and odd sub-tilt-series, which exhibit independent noise. Moreover, as already mentioned above, in the setup of our algorithm, the two missing wedge masks \(\mathbf{M}\) and \(\tilde{\mathbf{M}}\) themselves are not random. However, as we randomly rotate the model input subtomograms during model fitting, the missing wedges appear at a random location with respect to an arbitrary fixed orientation of the subtomogram.
Related work
DeepDeWedge builds on Noise2Noise-based denoising methods and is related to the denoising and missing-wedge-filling method IsoNet^{4}. We first discuss the relation between DeepDeWedge and Noise2Noise-based methods, which do not reconstruct the missing wedge.
Noise2Noise-based denoising algorithms for cryo-ET, as implemented in CryoCARE^{11} or Warp^{13}, take one or more tilt series as input and return denoised tomograms. A randomly initialized network f_{θ} is fitted for denoising on subtomograms of the FBP reconstructions FBP(t^{0}) and FBP(t^{1}) of sub-tilt-series t^{0} and t^{1} obtained from a full tilt series t. The model is fitted by minimizing a loss function (typically the mean-squared error) between the output of the model f_{θ} applied to one noisy subtomogram and the corresponding other noisy subtomogram. The fitted model is then used to denoise the two reconstructions FBP(t^{0}) and FBP(t^{1}), which are averaged to obtain the final denoised tomogram. Contrary to those denoising methods, DeepDeWedge fits a network not only to denoise but also to fill in the missing wedge.
Our method is most closely related to IsoNet, which can also perform denoising and missing wedge reconstruction. IsoNet takes a small set of, say, one to ten tomograms and produces denoised, missing-wedge-corrected versions of those tomograms. Similar to DeepDeWedge, a randomly initialized network is fitted for denoising and missing wedge reconstruction on subtomograms of these tomograms; however, the fitting process is different. Inspired by Noisier2Noise, the model is fitted to map subtomograms that are further corrupted with an additional missing wedge and additional noise onto their non-corrupted versions. After each iteration, the intermediate model is used to predict the content of the original missing wedges of all subtomograms. The predicted missing wedge content is inserted into all subtomograms, which serve as input to the next iteration of the algorithm.
Unlike IsoNet, our denoising approach is Noise2Noise-like, as in CryoCARE. This leads to better denoising performance, as we will see later, and requires fewer assumptions and no hyperparameter tuning. Specifically, Noisier2Noise-like denoising requires knowledge of the noise model and noise strength (see Supplementary Information 1). As this knowledge is typically unavailable, Liu et al.^{4} propose approximate noise models from which the user has to choose.
After model fitting, the user must manually decide which iteration and noise level gave the best reconstruction. Thus, IsoNet’s Noisier2Noise-inspired denoising approach involves several hyperparameters for which good values exist but are unknown a priori, so IsoNet requires tuning to achieve good results. Our denoising approach introduces no additional hyperparameters and requires no knowledge of the noise model or strength.
The main commonality between DeepDeWedge and IsoNet is the Noisier2Noiselike mechanism for missing wedge reconstruction, which consists of artificially removing another wedge from the subtomograms and fitting the model to reconstruct the wedge.
Moreover, like IsoNet, DeepDeWedge fills in the missing wedges of the model inputs. In IsoNet, one also has to fill in the missing wedges of the model targets. This is necessary because, contrary to our loss ℓ from Eq. (2), IsoNet’s loss function does not ignore the targets’ missing wedges via masking in the Fourier domain.
Another line of work related to DeepDeWedge considers domain-specific tomographic reconstruction methods that incorporate prior knowledge of biological samples into the reconstruction process to compensate for missing wedge artefacts, for example, ICON^{18} and MBIR^{19}. For an overview of such reconstruction methods, we refer to the introductory sections of the works by Ding et al.^{20} and Bohning et al.^{21}. Liu et al.^{4} found that IsoNet outperforms both ICON and MBIR.
DeepDeWedge is also conceptually related to untrained neural networks, which reconstruct an image or volume by fitting a neural network to given measurements^{22,23}. However, untrained networks rely on the bias of convolutional neural networks towards natural images^{24,25}, whereas in our setup, we fit a network on measurement data in order to reconstruct from those same measurements.
Deep learning approaches for missing data reconstruction and denoising have also recently been proposed for cryo-EM problems other than tomographic reconstruction. Zhang et al.^{26} proposed a method to restore the state of individual particles inside tomograms, and Liu et al.^{27} proposed a variant of IsoNet to resolve the preferred orientation problem in single-particle cryo-EM.
Experiments on purified Saccharomyces cerevisiae 80S ribosomes
In this and the following experiments, we compare DeepDeWedge to (a reimplementation of) CryoCARE, to IsoNet, and to a two-step approach of fitting IsoNet to tomograms denoised with CryoCARE. The two-step approach, which we call CryoCARE + IsoNet, is considered a state-of-the-art pipeline for denoising and missing wedge reconstruction.
The first dataset we consider is the commonly used EMPIAR-10045 dataset, which contains seven tilt series collected from samples of purified S. cerevisiae 80S ribosomes.
Figure 3 shows a tomogram refined with IsoNet, CryoCARE + IsoNet and DeepDeWedge using the even/odd tilt series split. Note that while IsoNet’s built-in Noisier2Noise-like denoiser removes some of the noise contained in the FBP reconstruction, its performance is considerably worse than that of the Noise2Noise-based CryoCARE. This can be seen by comparing to the result of applying IsoNet with a disabled denoiser to a tomogram denoised with CryoCARE. DeepDeWedge produces a denoised and missing-wedge-corrected tomogram similar to the CryoCARE + IsoNet combination. The main difference between these reconstructions is that the DeepDeWedge-refined tomogram has a smoother background and contains fewer high-frequency components.
Regarding missing wedge correction, we find the performance of DeepDeWedge and IsoNet to be similar. In slices parallel to the x–z plane, where the effects of the missing wedge on the FBP reconstruction are most prominent, both IsoNet and DeepDeWedge reduce artefacts and correct artificial elongations of the ribosomes. The central x–z slices through the reconstructions’ Fourier transforms confirm that all methods but FBP fill in most of the missing wedge.
Experiments on flagella of Chlamydomonas reinhardtii
Next, we evaluate DeepDeWedge on another real-world dataset of tomograms of the flagella of C. reinhardtii, which is the tutorial dataset for CryoCARE. Since we observed above that IsoNet performs better when applied to CryoCARE-denoised tomograms, we compare only to CryoCARE + IsoNet. In addition, we investigate the impact of splitting the tilt series into even and odd projections versus using the frame-based split for DeepDeWedge and CryoCARE + IsoNet.
The reconstructions obtained with all methods are shown in Fig. 4. We find that when using the even/odd split, CryoCARE + IsoNet produces a crisper reconstruction than DeepDeWedge (see the zoomed-in region). This may be because the model inputs and targets of IsoNet stem from the denoised FBP reconstruction of the full tilt series, in which the information is more densely sampled than in the even and odd FBP reconstructions used for fitting DeepDeWedge.
When using the frame-based split, in which DeepDeWedge also operates on the more densely sampled FBP reconstruction, DeepDeWedge removes more noise than CryoCARE + IsoNet and produces higher contrast, which is most noticeable in the x–z slice. Especially in background areas, the DeepDeWedge reconstruction has fewer high-frequency components and is smoother. The CryoCARE + IsoNet reconstruction is therefore slightly more faithful to the FBP reconstruction but is also noisier.
The central x–z slices through the reconstructions’ Fourier transforms indicate that both CryoCARE + IsoNet and DeepDeWedge fill in most of the missing wedge. Both methods fix the missing-wedge-caused distortions of the microtubules exhibited by the FBP reconstruction, as seen in the x–z slices. DeepDeWedge reconstructs more of the flagella’s outer parts.
Experiments on the ciliary transit zone of C. reinhardtii
Finally, we apply DeepDeWedge to an in situ dataset. We chose EMPIAR-11078^{28}, which contains tilt series of the ciliary transit zone of C. reinhardtii. The crowded cellular environment and the low contrast and SNR of the tilt series make EMPIAR-11078 significantly more challenging for denoising and missing wedge reconstruction than the two datasets from our previous experiments.
Slices through reconstructions of two tomograms obtained with FBP, CryoCARE + IsoNet and DeepDeWedge are shown in Fig. 5. In the x–z and z–y planes, the DeepDeWedge reconstructions are crisper and less noisy than those produced with CryoCARE + IsoNet. Especially in the x–z plane, where the effects of the missing wedge are strongest, DeepDeWedge produces higher contrast than CryoCARE + IsoNet and removes more of the artefacts. Again, the CryoCARE + IsoNet reconstructions contain more high-frequency components and are closer to the FBP reconstruction, whereas the DeepDeWedge reconstructions are smoother and more thoroughly denoised, especially in empty or background regions.
Remarkably, as can be seen in the zoomed-in regions in the second row of Fig. 5, both CryoCARE + IsoNet and DeepDeWedge reconstruct parts of the sample that are barely present in the FBP reconstruction because they are oriented perpendicular to the electron beam direction, which means that a large portion of their Fourier components is masked out by the missing wedge.
Note that both CryoCARE + IsoNet and DeepDeWedge appear less effective at reconstructing the missing wedge compared to the two experiments presented above. This is indicated by the central x–zslices through the Fourier transform of the reconstructions and is likely due to the challenging, crowded nature and low SNR of the data.
Experiments on synthetic data
Here, we compare DeepDeWedge to IsoNet, CryoCARE, and the CryoCARE + IsoNet combination on synthetic data to quantify our qualitative findings on real data.
We used a dataset by Gubins et al.^{29} containing ten noiseless synthetic ground truth volumes with a voxel size of 10 Å and a size of 179 × 512 × 512 voxels. All volumes contain typical objects found in cryo-ET samples, such as proteins (up to 1,500 uniformly rotated samples from a set of 13 structures), membranes, and gold fiducial markers.
For our comparison, we fitted models on the first three tomograms of the SHREC 2021 dataset. We used the Python library tomosipo^{30} to compute clean projections of size 512 × 512 in the angular range ±60° with a 2° increment. From these clean tilt series, we generated datasets with different noise levels by adding pixelwise independent Gaussian noise to the projections. We simulated three datasets with tilt series SNRs of 1/2, 1/4, and 1/6, respectively.
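As a sketch of this noise simulation, one can choose the noise standard deviation from a target SNR; note that the SNR convention used below (signal variance over noise variance) is our assumption, as the text does not spell out which definition was used:

```python
import numpy as np

def add_noise_at_snr(projections, snr, rng):
    """Add pixelwise i.i.d. Gaussian noise to a stack of projections so
    that signal variance / noise variance equals the target snr."""
    sigma = np.sqrt(projections.var() / snr)
    return projections + rng.normal(scale=sigma, size=projections.shape)
```

For example, `snr=0.25` produces noise with four times the variance of the clean projections, corresponding to the SNR 1/4 dataset.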
To measure the overall quality of a tomogram \(\hat{\mathbf{v}}\) obtained with any of the methods, we calculated the normalized correlation coefficient \(\mathrm{CC}(\hat{\mathbf{v}},\mathbf{v}^*)\) between the reconstruction \(\hat{\mathbf{v}}\) and the corresponding ground truth \(\mathbf{v}^*\), which is defined as
$$\mathrm{CC}(\hat{\mathbf{v}},\mathbf{v}^*)=\frac{\left\langle \hat{\mathbf{v}}-\mathrm{mean}(\hat{\mathbf{v}}),\,\mathbf{v}^*-\mathrm{mean}(\mathbf{v}^*)\right\rangle}{{\left\Vert \hat{\mathbf{v}}-\mathrm{mean}(\hat{\mathbf{v}})\right\Vert}_2\,{\left\Vert \mathbf{v}^*-\mathrm{mean}(\mathbf{v}^*)\right\Vert}_2}.$$
(7)
By definition, it holds that \(-1\le\mathrm{CC}(\hat{\mathbf{v}},\mathbf{v}^*)\le 1\), and the higher the correlation between reconstruction and ground truth, the better. The correlation coefficient measures overall reconstruction quality, i.e. both the denoising and the missing wedge reconstruction capabilities of a method. To isolate the denoising performance of a method from its ability to reconstruct the missing wedge, we also report the correlation coefficient between the refined reconstructions and the ground truth after applying a 60° missing wedge filter to both of them. We refer to this metric as “CC outside the missing wedge”; it allows comparing the denoising performance of CryoCARE, which does not perform missing wedge reconstruction.
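Eq. (7) translates directly into code as the cosine similarity of the mean-subtracted volumes:

```python
import numpy as np

def correlation_coefficient(v_hat, v_star):
    """Normalized correlation coefficient of Eq. (7)."""
    a = v_hat - v_hat.mean()
    b = v_star - v_star.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

The metric is invariant to affine rescaling of either volume, which makes it suitable for comparing reconstructions whose intensity scales differ.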
As a central application of cryo-ET is the analysis of biomolecules, we also report the resolution of all proteins in the refined tomograms. For this, we extracted all proteins from the ground truth and refined tomograms and calculated the average 0.143 Fourier shell correlation cutoff (0.143-FSC) between the refined proteins and the ground truth ones. The 0.143-FSC is commonly used in cryo-EM applications. Its unit is Ångströms, and it expresses up to which spatial frequency the information in the reconstruction is reliable. In contrast to the correlation coefficient, a lower 0.143-FSC value is better. To measure how well each method filled in the missing wedges of the structures, we also report the average 0.143-FSC calculated only on the true and predicted missing wedge data. We refer to this value as the (average) “0.143-FSC inside the missing wedge”.
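A simplified sketch of the 0.143-FSC computation on cubic volumes follows; real cryo-EM tools handle shell weighting, masking, and interpolation of the crossing point more carefully, so this is an illustration of the principle rather than the evaluation code used here:

```python
import numpy as np

def fsc_resolution(v_hat, v_star, voxel_size, cutoff=0.143):
    """Correlate the two volumes' Fourier transforms in spherical frequency
    shells and report the resolution (in Angstroms) at which the shell
    correlation first drops below the cutoff."""
    n = v_hat.shape[0]
    F1, F2 = np.fft.fftn(v_hat), np.fft.fftn(v_star)
    freqs = np.fft.fftfreq(n)
    kz, ky, kx = np.meshgrid(freqs, freqs, freqs, indexing="ij")
    radius = np.sqrt(kx**2 + ky**2 + kz**2)
    for lo in np.arange(1.0 / n, 0.5, 1.0 / n):
        shell = (radius >= lo) & (radius < lo + 1.0 / n)
        num = np.abs(np.sum(F1[shell] * np.conj(F2[shell])))
        den = np.sqrt(np.sum(np.abs(F1[shell]) ** 2)
                      * np.sum(np.abs(F2[shell]) ** 2))
        corr = num / den if den > 0 else 0.0
        if corr < cutoff:
            return voxel_size / lo  # resolution at this shell, in Angstroms
    return 2.0 * voxel_size  # correlation never fell below the cutoff
```

By this convention, identical volumes report the Nyquist resolution (twice the voxel size), and discarding high frequencies in one volume pushes the reported resolution to coarser values.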
Figure 6 shows the metrics for decreasing SNR of the tilt series. All metrics suggest that DeepDeWedge yields higherquality reconstructions than IsoNet, CryoCARE, and CryoCARE + IsoNet.
CryoCARE achieves a lower correlation coefficient than IsoNet in the high-SNR regime, while the order is reversed for low SNR. A likely explanation is that for lower noise levels, the correlation coefficient is more sensitive to the missing wedge artefacts in the reconstructions. CryoCARE does not perform missing wedge reconstruction, so it has a lower correlation coefficient than IsoNet for higher SNR, whereas for lower SNR, the correlation coefficient is dominated by the noise. Regarding denoising, we and others^{9} have observed that CryoCARE performs better than IsoNet, which is confirmed here by the correlation coefficient outside the missing wedge. As expected, CryoCARE + IsoNet combines the strengths of both methods. Compared to this combination, the overall quality of DeepDeWedge reconstructions is on par with or better, depending on the noise level.
The FSC metrics in the second row of Fig. 6 indicate that the average resolution of the proteins in the refined tomograms is approximately the same for IsoNet and DeepDeWedge and that they perform similarly for missing wedge reconstruction.