Combined X-Ray and Neutron Refinements

Because of the distinct elemental sensitivities of the x-ray and neutron probes in powder diffraction, many systems benefit enormously from a complementary study exploiting both. The combined analysis of these two datasets can often reveal subtle structural details and insights far beyond what is possible with a single measurement.

News: Approved ORNL Spallation Neutron Source (SNS) users of the POWGEN neutron powder diffraction (NPD) beamline may now obtain streamlined access to high-resolution synchrotron x-ray powder diffraction (SXPD) data on the same samples at beamline 11-BM of the Advanced Photon Source.


Experiment Hints

  • Sample Powder: It is normally important to make both measurements on the *same* sample powder. It is not always safe to assume that different samples (made on a different day, in a different furnace, etc.) will give the same diffraction pattern.
  • Temperature: It is difficult to make both the x-ray and neutron measurements at exactly the same temperature. For example, one could be at 25 °C and the other at 298 K; close, but not exactly the same. Is this important? The answer may depend on the sample; a rough numerical check is sketched below.
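
To get a feel for whether a small temperature offset matters, one can estimate the lattice parameter shift it would produce and compare that with the instrument resolution. The sketch below is purely illustrative: the expansion coefficient, lattice parameter, and temperature offset are assumed, typical values, not properties of any particular sample.

  # Rough estimate: does a small temperature offset between the x-ray and
  # neutron measurements shift the lattice parameter measurably?
  # ASSUMPTIONS: alpha is a typical linear thermal expansion coefficient
  # (~1e-5 per K for many oxides); a0 and dT are made-up example numbers.
  alpha = 1.0e-5   # linear thermal expansion coefficient (1/K), assumed
  a0 = 4.0         # example lattice parameter (Angstrom)
  dT = 2.0         # temperature offset between the two measurements (K)

  da = alpha * a0 * dT
  print(f"lattice parameter shift: {da:.1e} Angstrom ({da/a0:.1e} fractional)")
  # ~8e-5 Angstrom here, a fractional shift of ~2e-5: small compared with even
  # a high-resolution synchrotron instrument's delta-d/d (~1e-4), so an offset
  # of a couple of kelvin is usually safe. The shift scales linearly with both
  # alpha and dT, so larger offsets or softer materials deserve more caution.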


Weighting Datasets

In principle, one should not need to weight or scale any of the powder diffraction patterns. If the exact same sample is measured at precisely the same temperature on calibrated instruments, then a single structural model should fit both datasets in your refinement. To state this a different way: based on statistical arguments alone, the weighting factors should be 1. When this is done, the resulting model is fit against all of the data, with each point weighted by its experimental uncertainty, as written out below.
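
In the usual Rietveld convention, this is the familiar least-squares function (written here as a generic sketch, not the exact expression from any one program): the refinement minimizes

  M = \sum_{d}^{\mathrm{datasets}} \sum_{i} w_{d,i} \left( y_{d,i}^{\mathrm{obs}} - y_{d,i}^{\mathrm{calc}} \right)^2 , \qquad w_{d,i} = \frac{1}{\sigma_{d,i}^{2}}

where any additional per-dataset weighting factor would simply multiply all of the w_{d,i} for that pattern; the statistical argument above says that factor should be 1 for properly calibrated data.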

In practice, your combined refinement may not always be so simple. The data from distinct probes will likely be sensitive to different aspects of the structure, and may have different systematic errors. Changing the weighting of these respective datasets can bias the final refinement (which might be a good or bad thing!).

For example, a high-resolution synchrotron measurement may give more precise and accurate lattice parameter values than a low-resolution neutron experiment. On the other hand, x-rays will be nearly insensitive to refined site occupancies for a split Mn-Fe site, whereas NPD can be strongly sensitive, provided the elements' neutron scattering lengths are sufficiently different (a concrete illustration follows).
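
To make the Mn/Fe example concrete, the sketch below compares the relative scattering contrast of the two probes. The x-ray scattering powers are approximated by the atomic numbers (the q → 0 limit of the form factors), and the neutron coherent scattering lengths are the tabulated values from Sears (Neutron News, 1992); the code itself is illustrative and not taken from any refinement package.

  # Relative Mn/Fe contrast for x-rays vs. neutrons.
  # X-ray scattering power ~ Z (atomic number, the q -> 0 form-factor limit);
  # neutron scattering power ~ b_coh (coherent scattering length, in fm),
  # tabulated values from Sears, Neutron News 3 (1992) 26.
  Z = {"Mn": 25, "Fe": 26}
  b_coh = {"Mn": -3.73, "Fe": 9.45}   # fm

  xray_contrast = abs(Z["Fe"] - Z["Mn"]) / ((Z["Fe"] + Z["Mn"]) / 2)
  neutron_contrast = (abs(b_coh["Fe"] - b_coh["Mn"])
                      / ((abs(b_coh["Fe"]) + abs(b_coh["Mn"])) / 2))

  print(f"x-ray Mn/Fe contrast:   {xray_contrast:.0%}")    # ~4%
  print(f"neutron Mn/Fe contrast: {neutron_contrast:.0%}")  # ~200%
  # The opposite signs of the Mn and Fe scattering lengths make the neutron
  # contrast especially strong here; for x-rays the two elements are nearly
  # indistinguishable.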

So, should you weight the data? This is not a simple question. Try this test: what happens when you weight one of the datasets very heavily (or even eliminate the other dataset entirely)? What changes in the refinement? Does the quality of the fit (for example, as measured by Rwp, defined below) change?
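
For reference, the weighted-profile residual Rwp has the conventional definition (individual programs may differ in details such as how the background is treated):

  R_{wp} = \sqrt{ \frac{ \sum_i w_i \left( y_i^{\mathrm{obs}} - y_i^{\mathrm{calc}} \right)^2 }{ \sum_i w_i \left( y_i^{\mathrm{obs}} \right)^2 } }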

Ideally, one will see only small changes in the fitted parameters, and the quality of the fit (for example, as measured by Rwp) will not change very much. If this is the case, then a useful starting point might be to weight the respective datasets such that Sum([weight*(obs-calc)]**2)/Ndata is approximately the same for all datasets, as sketched below.
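
A minimal numpy sketch of that balancing step, assuming each dataset supplies observed and calculated profiles together with weights w = 1/sigma (an assumption; with that convention the quantity above is essentially a reduced chi-squared). The function and array names here are made up for illustration:

  import numpy as np

  def balance_statistic(obs, calc, w, n_free=0):
      """Per-dataset statistic Sum([w*(obs-calc)]**2)/Ndata from the text.

      With w = 1/sigma (check your program's weighting convention), this
      is essentially the reduced chi-squared of that pattern.
      """
      resid = w * (obs - calc)
      return np.sum(resid**2) / (len(obs) - n_free)

  # Hypothetical usage (arrays made up for illustration):
  # s_xray = balance_statistic(y_obs_x, y_calc_x, 1.0 / sigma_x)
  # s_npd  = balance_statistic(y_obs_n, y_calc_n, 1.0 / sigma_n)
  # Scaling the NPD weights by sqrt(s_xray / s_npd) multiplies s_npd by
  # s_xray/s_npd, making the two statistics approximately equal.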

On the other hand, if one finds that dropping either set of data results in a much better fit, then that implies the two datasets are best fit by conflicting models. At this point you need to stop and think about what might physically explain these model differences. See Experiment Hints above.


Acknowledgements

Thanks to Ashfia Huq, Robert Von Dreele, and Brian Toby for their insight on this topic.