Combined X-Ray and Neutron Refinements
Because x-ray and neutron probes have differing atomic sensitivities, many diffraction studies benefit enormously from a complementary experiment exploiting both. This could be a combined x-ray/neutron powder fit or could couple x-ray single-crystal diffraction with neutron powder diffraction. The combined analysis of these multiple datasets can often reveal subtle structural details and understanding far beyond that possible with a single measurement. In many cases, neutrons offer better sensitivity to lighter atoms, and having two sets of data with differing scattering factors and sources of systematic error will almost always yield a more accurate structural model.


'''News:''' Approved ORNL Spallation Neutron Source (SNS) users of the [http://neutrons.ornl.gov/powgen/ POWGEN] neutron powder diffraction (NPD) beamline may now obtain streamlined access to high-resolution synchrotron diffraction (SXPD) data on the same samples at beamline [http://11bm.xray.aps.anl.gov/home.html 11-BM] of the Advanced Photon Source. [https://wiki-ext.aps.anl.gov/ug11bm/index.php/11-BM_News#New_Partnership_for_Complementary_NPD_.26_SXPD_Access_at_DOE_User_Facilities_November_2012  (read more) ]


== Experimental Hints ==
*'''Sample Powder:''' It is normally important to make both measurements on the *same* sample powder. Due to compositional variation, defects, and other issues, it is not always safe to assume that different samples (made on a different day, in a different furnace, etc.) are actually the same material giving identical diffraction patterns.
*'''Temperature:''' It is difficult to make both the x-ray and neutron measurements at exactly the same temperature. For example, one could be at 25 °C and the other at 300 K; close, but not exactly the same. Even nominally identical measurement temperatures may not be identical due to calibration differences. Is this important? The answer may depend on the sample.
== Refinement Hints ==
In principle, one should not need to weight or scale any of the powder diffraction patterns. If the exact same sample is measured at precisely the same temperature on calibrated instruments, then a single structural model for your refinement should fit both datasets within ideal statistical errors (chi-squared=1). To state this in a different way; based on statistical arguments alone, weighting factors for each dataset should be 1. When there is no systematic error and the resulting model is fit against all data weighted by its experimental uncertainty, the smallest uncertainties are obtained for the coordinates and other fitted parametersAny other weighting is equivalent to discarding data.
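To make the statistical argument concrete, here is a minimal numpy sketch of the pooled reduced chi-squared for a combined fit, where each point is weighted only by its own counting uncertainty. The arrays are synthetic stand-ins; in a real refinement the obs/calc/sigma profiles come from your Rietveld program.

<pre>
import numpy as np

rng = np.random.default_rng(0)

def chisq_terms(obs, calc, sigma):
    # per-point contributions (obs - calc)**2 / sigma**2
    return ((obs - calc) / sigma) ** 2

# Synthetic stand-ins for an x-ray and a neutron pattern (hypothetical numbers):
calc_x = 1000 + 500 * np.exp(-0.5 * ((np.arange(5000) - 2500) / 40.0) ** 2)
sig_x = np.sqrt(calc_x)                  # counting statistics
obs_x = calc_x + rng.normal(0, sig_x)

calc_n = 50 + 20 * np.exp(-0.5 * ((np.arange(1000) - 500) / 25.0) ** 2)
sig_n = np.sqrt(calc_n)
obs_n = calc_n + rng.normal(0, sig_n)

# Combined reduced chi-squared: pool ALL points, no extra per-dataset weight.
n_params = 20                            # hypothetical number of refined parameters
chi2 = (chisq_terms(obs_x, calc_x, sig_x).sum()
        + chisq_terms(obs_n, calc_n, sig_n).sum()) / (obs_x.size + obs_n.size - n_params)
print(f"combined reduced chi-squared = {chi2:.3f}  (ideally ~1)")
</pre>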


In practice, your combined refinement may not always be so straightforward. The data from distinct probes will likely be sensitive to different aspects of the structure, and may have different systematic errors. If the datasets are in conflict, changing their respective weighting will result in different fitted parameters and larger uncertainties (which might be a good or bad thing!).


For example, a high-resolution synchrotron measurement may give more precise and accurate lattice parameter values than a lower-resolution neutron experiment. On the other hand, x-rays will be very insensitive to refined site occupancies for elements with similar Z sharing a site (for example Mn & Fe), whereas NPD can be strongly sensitive, so long as the neutron scattering cross sections of those elements (which depend on the isotopes present) are sufficiently different. X-rays can be more sensitive to positions of high-Z elements at low concentrations, while neutrons are better at finding positions of D (and, at low concentration, H) atoms.
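To put rough numbers on the Mn/Fe example (scattering values quoted approximately from standard tabulations; treat this as an illustration, not a reference):

<pre>
# X-ray scattering scales roughly with electron count Z, so Mn and Fe look
# nearly identical; the coherent neutron scattering lengths even differ in sign.
Z_Mn, Z_Fe = 25, 26            # electrons
b_Mn, b_Fe = -3.73, 9.45       # coherent neutron scattering lengths, fm (approx.)

xray_contrast = abs(Z_Fe - Z_Mn) / ((Z_Fe + Z_Mn) / 2)
neutron_contrast = abs(b_Fe - b_Mn) / ((abs(b_Fe) + abs(b_Mn)) / 2)

print(f"x-ray contrast   ~ {xray_contrast:.0%}")    # ~4%
print(f"neutron contrast ~ {neutron_contrast:.0%}") # ~200%
</pre>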


But one should always be vigilant to potential problems when combining multiple datasets. For example, the x-ray data (because of the sample preparation or holder) might have subtle preferred orientation while the neutron data do not. If one does not (or cannot) properly include the orientation effect in the fit of the x-ray data, it will result in incorrectly fitted parameters. Since there are usually far more total counts in an x-ray pattern than in a neutron pattern, the x-ray data will have higher leverage – in other words, without weighting, the x-ray data will overpower the neutron data (and bias your combined refinement toward the incorrect x-ray parameters!)
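A toy demonstration of that leverage (all numbers hypothetical): give the x-ray pattern more points and counts, plus a small unmodeled intensity bias, and it dominates the pooled chi-squared:

<pre>
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: 3% unmodeled intensity bias (e.g. preferred orientation)
# in a high-count x-ray pattern vs an unbiased, lower-count neutron pattern.
calc_x = np.full(5000, 10000.0)          # 5000 points, ~10^4 counts each
obs_x  = 1.03 * calc_x                   # biased observations
sig_x  = np.sqrt(calc_x)

calc_n = np.full(1000, 100.0)            # 1000 points, ~10^2 counts each
sig_n  = np.sqrt(calc_n)
obs_n  = calc_n + rng.normal(0, sig_n)   # unbiased, just counting noise

chi2_x = np.sum(((obs_x - calc_x) / sig_x) ** 2)
chi2_n = np.sum(((obs_n - calc_n) / sig_n) ** 2)
print(f"x-ray share of pooled chi-squared: {chi2_x / (chi2_x + chi2_n):.1%}")
# ~98%: the minimizer will distort the model to absorb the x-ray bias.
</pre>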


'''Temperature (again):''' You've just learned that one dataset was measured at 25 °C, and the other at 300 K. Not a disaster. There are ways within Rietveld refinement programs to compensate for the thermal expansion effects on the lattice parameters, '''BUT''' not for the corresponding changes in thermal motion. Fortunately, x-ray data are pretty insensitive to small changes in thermal motion, so it almost doesn't matter (unless the ΔT between the measurements is very large!). There is an old example tutorial combining lab x-ray data with old D1A ILL neutron powder data: the PbSO4 combined refinement exercise for GSAS-II. (added by RVD)
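Back-of-envelope, with purely hypothetical numbers, on the lattice-parameter side of a 25 °C vs 300 K mismatch (which many Rietveld programs can absorb with per-histogram lattice or strain terms):

<pre>
# Hypothetical: a ~4 Angstrom lattice parameter and a typical oxide-like
# linear thermal expansion coefficient, alpha ~ 1e-5 per K.
a_298 = 4.00000              # Angstrom (hypothetical)
alpha = 1.0e-5               # 1/K (hypothetical, typical order of magnitude)
dT = 300.0 - 298.15          # 25 C = 298.15 K

shift = a_298 * alpha * dT
print(f"expected shift ~ {shift:.1e} Angstrom ({alpha * dT:.1e} fractional)")
# ~7e-5 A: tiny, but within reach of a high-resolution instrument like 11-BM,
# so letting each histogram keep its own lattice (or strain) terms is safer.
</pre>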


== Weighting Datasets ''(to weight or not to weight)''  ==
So - should you weight the data? This is not a simple question. Try this test: what happens when you weight one of the datasets very highly (or even eliminate the other data set)? What changes in the refinement? Does the quality of the fit (for example as measured by Rwp) change?
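For reference when running this test, Rwp pools the same weighted residuals over whichever datasets are included; a minimal sketch (weights w = 1/sigma**2):

<pre>
import numpy as np

def rwp(datasets):
    """Weighted-profile R factor over one or more (obs, calc, sigma) triples."""
    num = sum(np.sum((o - c) ** 2 / s**2) for o, c, s in datasets)
    den = sum(np.sum(o**2 / s**2) for o, c, s in datasets)
    return np.sqrt(num / den)

# rwp([(obs_x, calc_x, sig_x)])                           # x-ray alone
# rwp([(obs_x, calc_x, sig_x), (obs_n, calc_n, sig_n)])   # combined
</pre>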


While one may see substantial changes in the fitted parameters (which is why the combined fit is likely a better model than one obtained from a single dataset), ideally the overall quality of the fit (for example, as measured by Rwp) should not change very much. If this is the case, then even significant differences in weighting (say, a factor of two) should make very little difference in the final parameters. Some researchers choose to weight the respective datasets such that ''Sum([weight*(obs-calc)]**2)/Ndata'' is approximately the same for all data sets. This increases uncertainties slightly, but decreases the possible influence of systematic error.
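One reading of that recipe, implemented literally as a sketch (all names hypothetical): compute ''Sum([weight*(obs-calc)]**2)/Ndata'' for each dataset from a trial fit, then scale each dataset's weights so the quantity comes out the same everywhere:

<pre>
import numpy as np

def equalizing_multipliers(datasets):
    """Per-dataset weight multipliers that make
    sum([weight*(obs-calc)]**2)/Ndata equal across all datasets,
    normalized so the first dataset keeps multiplier 1.
    datasets: iterable of (obs, calc, weight) triples."""
    q = [np.sum((w * (o - c)) ** 2) / o.size for o, c, w in datasets]
    return [np.sqrt(q[0] / qi) for qi in q]
</pre>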
 
 


On the other hand, if one finds that dropping (or significantly down-weighting) either set of data results in a much lower Rwp, then this implies the two datasets are best fit by conflicting models. At this point you need to stop and think about what might physically explain these model differences. No weighting scheme and no model should be accepted as optimal without understanding the source of the conflict. See Experimental Hints above.


== Acknowledgements ==  
Many thanks to Ashfia Huq, Robert Von Dreele, Brian Toby, and Matthew Suchomel for their insight on this topic.
