User's Question: Usually after I collect data, I reprocess it and only use the real-time autoXDS at the beamline for quick feedback. This time, I have a data set that, for whatever reason, I can't get to process well in my hands, but the autoprocessing seemed to do fine. Rather than beating my head against it, I decided it would be better to just take the MTZ from the automatic runs. But there are many MTZs generated: aimless.mtz, pointless.mtz, ccp4_truncated.mtz, truncated.mtz, truncated_1.mtz, and so on. Which should I use?! And what's the difference (other than the obvious use of different software)?

Holton's Answer: The 8.3.1 automatic processing I call "proctrigger", but all it does is launch my "xds_runme.com". If that succeeded where other xds runs (such as xds_rollup.com) failed, it is usually because it used early images only. Since the job launches as soon as data collection starts, the first round will often complete on a subset of images. It automatically runs again if new data appear, but if the 2nd run fails, the output files from the 1st one are not erased. That may be what happened here. You can check the header of XDS_ASCII.HKL to see what image range it used (see the snippet at the end of this answer).

The important file is XDS_ASCII.HKL; all the others are derived from it. Some programs these days even accept it natively. What I did in the auto processing is implement exactly what Kay Diederichs told me to do with xds: run xds, xscale, and xdsconv, followed by f2mtz. The result of that is "truncated.mtz". Since the job runs more than once, old copies are backed up as truncated_1.mtz, truncated_2.mtz, etc. (A sketch of this route is given below.)

Now, despite this compact, fast, and linear route, some people really, really, really wanted to see the output of aimless. So, as a side branch, I drop the same XDS_ASCII.HKL file into pointless/aimless/truncate using CCP4. The result of that is ccp4_truncated.mtz. The aimless job does not do any scaling, only merging. It is faster that way, and it also avoids double-scaling, which can create problems in certain cases. (A sketch of this branch is also given below.)

As for which one is better? That is an excellent question. At ESRF they run seven different data-processing pipelines on every data set; five of those are just different front ends to XDS. My colleague Max Nanao was given the task of evaluating the downstream success rates of all these pipelines, because they really wanted to eliminate at least one or two of them. What he found is that there is no clear answer. In many cases one particular pipeline may be the only way to succeed in phasing, whereas for another data set a different pipeline might be the only "good one". And in plenty of other cases they all work, with no clear distinction between them.

All in all, what I think is going on is that different processing methods let you push the noise around in different ways: sacrificing resolution for accurate anomalous differences, for example. Most of the time that doesn't make any difference, because the noise is either way too high for what you are trying to do, or so low that re-partitioning it doesn't change the result. It is only in cases where you are at a transition point that lots of little changes can make or break the experiment. The one I usually go with is truncated.mtz.
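
Checking the image range: XDS_ASCII.HKL carries its provenance in header lines beginning with "!", and the DATA_RANGE record (a standard XDS header keyword) gives the first and last image numbers that went into the file. A minimal check from the shell:

    # Header lines in XDS_ASCII.HKL start with "!".
    # !DATA_RANGE= shows the first and last image numbers used,
    # e.g. "!DATA_RANGE=       1     180" for a run that stopped early.
    grep '^!DATA_RANGE' XDS_ASCII.HKL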
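
Sketch of the xds/xscale/xdsconv/f2mtz route: this is not the actual xds_runme.com, just an illustration following standard XDS conventions (xdsconv writes an F2MTZ.INP for f2mtz, and its output recommends a final cad step; the exact XDSCONV.INP keywords the script uses are an assumption):

    # integrate (reads XDS.INP) and scale (reads XSCALE.INP)
    xds
    xscale

    # convert to CCP4 amplitudes; FRIEDEL'S_LAW=FALSE keeps anomalous pairs
    cat > XDSCONV.INP << EOF
    INPUT_FILE= XDS_ASCII.HKL
    OUTPUT_FILE= temp.hkl CCP4_F
    FRIEDEL'S_LAW= FALSE
    EOF
    xdsconv

    # xdsconv generates F2MTZ.INP; f2mtz builds the MTZ, cad tidies the labels
    f2mtz HKLOUT temp.mtz < F2MTZ.INP
    cad HKLIN1 temp.mtz HKLOUT truncated.mtz << EOF
    LABIN FILE 1 ALL
    END
    EOF

Note that the intensity-to-amplitude conversion happens inside xdsconv here, which is presumably why the file ends up named "truncated.mtz" even though the truncate program itself is never run on this branch.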
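
Sketch of the CCP4 side branch: again an illustration, not the actual script. The "onlymerge" keyword is what tells aimless to merge without scaling; ctruncate (the current CCP4 replacement for the old truncate program) does the intensity-to-amplitude conversion, and the column labels given to -colin are an assumption about what the merged file contains:

    # import XDS_ASCII.HKL directly and settle the point group
    pointless XDSIN XDS_ASCII.HKL HKLOUT pointless.mtz

    # merge only -- no scaling, so we don't double-scale on top of xscale
    aimless HKLIN pointless.mtz HKLOUT aimless.mtz << EOF
    onlymerge
    EOF

    # French & Wilson conversion of merged intensities to amplitudes
    ctruncate -hklin aimless.mtz -hklout ccp4_truncated.mtz \
              -colin '/*/*/[IMEAN,SIGIMEAN]'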