Wolfgang Laemmle's Posts (8)

You only get what you pay for!  Are low-budget instruments worth what you pay for them?

“Nano” is one of the most attractive and modern terms in research, industry and marketing.
Thanks to the excellent definition work of the ISO committee TC 229, since August 2008 there has been a precise determination of what “nano” really means (ISO/TS 27687). Development in this field, together with the related knowledge, is growing daily at an accelerating speed.

Only the size measurement technologies and evaluation methods don’t keep pace with these changing trends. Today most sizing instruments are based on the more than 30-year-old PCS (photon correlation spectroscopy) technique, applying micrometer liquid layers for the detection of highly concentrated samples in order to minimize the influence of multiple scattering. Even with today’s incomparably increased computing power they still stick to the simplest 2nd-cumulant evaluation method or, in some cases, to highly damped evaluation modes of CONTIN style or NNLS (non-negative least squares). All these instruments only present result graphs or data without checking their relevance, i.e. without proving that the issued result represents the measured data correctly. Simplifying values such as an “error of fit” cannot replace the direct comparison of the measured correlation function with the one representing the issued result. Also, the popular chase for the Olympic motto “faster, wider, higher” has led to an increasing number of easy-to-handle but inaccurate nano size measuring instruments.

In most cases the increased speed is gained by a pre-selection of the measured data, skipping signals that disturb the smooth progression of the correlation curves. All these old-fashioned evaluation procedures were due to the fact that badly conditioned sets of equations could not be evaluated properly with the computing capacity of those days. Nowadays this is possible if one takes a little effort to adjust the evaluation range correctly, which the software itself already assists in a close to perfect way. Not to mention the incorrectness of using an intensity matrix instead of a volume matrix for evaluation.

In general terms, applying PCCS technology together with an undamped NNLS evaluation based on a volume matrix can today lead to the highest sensitivity and the most correct results.
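As an illustration of the undamped NNLS evaluation mentioned above, the following minimal Python sketch recovers a bimodal decay-rate spectrum from a noise-free, simulated correlation function. All ranges, rates and amplitudes are illustrative assumptions, not instrument settings:

```python
# Sketch only: recovering a bimodal decay-rate spectrum from a simulated
# field correlation function with non-negative least squares (NNLS).
import numpy as np
from scipy.optimize import nnls

tau = np.logspace(-6, -1, 200)            # lag times in s (assumed range)
gammas = np.logspace(1, 5, 60)            # trial decay rates in 1/s

# Kernel: each column is a single-exponential decay exp(-gamma * tau)
K = np.exp(-np.outer(tau, gammas))

# Simulated correlation function for a bimodal sample (two decay rates)
g1 = 0.6 * np.exp(-1e2 * tau) + 0.4 * np.exp(-1e4 * tau)

amplitudes, residual = nnls(K, g1)        # undamped NNLS solution

# The recovered spectrum concentrates near the two true decay rates
print(gammas[amplitudes > 0.05])
```

On real, noisy data this set of equations is badly conditioned, which is exactly why the evaluation range has to be adjusted with care, as described above.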

Nevertheless, the market is flooded with less sensitive and less accurate instruments at lowest cost. Many of those claim additional user value by also measuring the zeta-potential in the same instrument without extra cost. But is this kind of zeta-potential measurement worthwhile?

According to Smoluchowski’s formula applied for zeta-potential determination (for figures please see the attached PDF file), the speed and thus the zeta-potential depend on the particle size. Commonly the particle size used for the zeta-potential calculation is the so-called z-average diameter or cumulative mean diameter. Already from this point of view the achieved results are highly doubtful, as many samples are not monomodal and thus using a mean diameter is incorrect.

The Smoluchowski theory is very powerful because it works for dispersed particles of any shape at any concentration. Unfortunately, it has limitations on its validity, as in real systems other properties also take effect, e.g. the hydrodynamic hull, the ionic atmosphere and the degree of dissociation of the electrolyte. It does not, for instance, include the Debye length 1/κ, the ratio between core size and ionic-layer diameter, or the contribution of surface conductivity, expressed as the condition of a small Dukhin number. Approximations try to overcome this:

The Hückel approximation for the Henry function takes care of this for small particles in low-dielectric-constant media (e.g. non-aqueous media); in this case f(κa) becomes 1.0. The Smoluchowski approximation for the Henry function takes care of particles larger than about 200 nm in electrolytes containing more than 10⁻³ mol of salt; in this case f(κa) is set to 1.5 (for figures please see the attached PDF file).
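The two limits of the Henry function can be sketched in a few lines of Python. The material constants and the mobility value below are illustrative assumptions only:

```python
# Sketch only: converting electrophoretic mobility to zeta-potential
# with the two Henry-function limits described above.
def zeta_from_mobility(mobility, viscosity, permittivity, f_ka):
    """Henry equation: zeta = 3*eta*mu / (2*eps*f(ka)).
    f_ka = 1.0 -> Hueckel limit (small particles, low-dielectric media)
    f_ka = 1.5 -> Smoluchowski limit (a > ~200 nm, > 1e-3 mol salt)"""
    return 3.0 * viscosity * mobility / (2.0 * permittivity * f_ka)

eps0 = 8.854e-12                       # vacuum permittivity, F/m
eta = 0.89e-3                          # water at 25 C, Pa*s (assumed)
eps = 78.5 * eps0                      # water at 25 C (assumed)
mu = 2.0e-8                            # mobility in m^2/(V*s), illustrative

zeta_smol = zeta_from_mobility(mu, eta, eps, 1.5)
zeta_hueckel = zeta_from_mobility(mu, eta, eps, 1.0)
print(zeta_smol, zeta_hueckel)   # in volts; Hueckel gives 1.5x Smoluchowski
```

The factor 1.5 between the two limits already shows how strongly the reported zeta-potential depends on the chosen approximation.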

This means that using a simple PCS instrument (mostly leading to a monomodal 2nd-cumulant result only, or calculating a mean diameter) and applying the rough estimate described above will lead to a zeta-potential value of lowest precision. If this is combined with a simple dilution of the sample, as necessary for PCS detection, the potential may even be changed before the measurement due to non-iso-ionic dilution.

This is why the use of a separate acoustic Zeta-meter instead of PCS based ones is highly recommended.

Again, with respect to size determination, only PCCS instruments with the most modern evaluation algorithms can supply results of highest resolution and prove their relevance directly.

For more details you may refer to http://www.sympatec.com/EN/PCCS/PCCS.html


Full text as PDF: Size- & Zeta-pot.-measurement you only get what you pay for!

Read more…

According to the valid definitions in ISO/TS 27687: August 2008 and the standard particle definitions in accordance with ISO TC 24/SC 4, TC 146 and TC 209, there is a clear definition of nano particles, respectively nano objects. Only for these nano objects, where one, two or three external dimensions are in the nanometer range below 100 nm, should the term nano particles be used.
All coarser particles are submicron particles and should not be called nano particles anymore. The same is valid for so-called nano structured material, which is aggregated from nano objects.
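The terminology rule above can be summarized in a small, hypothetical Python helper. The sub-category names (nano-plate for one nano dimension, nano-fiber for two, nano-particle for three) follow ISO/TS 27687:

```python
# Sketch of the ISO/TS 27687 terminology rule: count how many external
# dimensions of an object fall in the nano range (about 1 nm to 100 nm).
def classify(dims_nm):
    """dims_nm: the three external dimensions of an object, in nm."""
    nano_dims = sum(1 for d in dims_nm if 1.0 <= d <= 100.0)
    if nano_dims == 0:
        return "not a nano object (e.g. submicron particle)"
    return {1: "nano-plate", 2: "nano-fiber", 3: "nano-particle"}[nano_dims]

print(classify([50, 50, 50]))     # all three dimensions below 100 nm
print(classify([300, 300, 300]))  # coarser: submicron, not nano
```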

The theme of this blog is especially the proper dispersion of nano objects in preparation for particle size measurement using PCS/PCCS. All other aspects, such as general handling safety, are not part of this blog.

To find the most suitable way of sample preparation it is necessary to understand the special character of nano particles first. Today it is common knowledge that the physical properties of a specific material change when its particle size gets below 100 nm. What is the reason for this change of character? A detailed explanation would lead us quite far into atomic structures and the probability regions for electrons to stay in, as Niels Bohr, for example, described them for single atoms. But this would be too complex, and it is only a model anyway. Let us try to explain it from a more global view, although even this needs to employ at least some rules of physics:
The second law of thermodynamics is an expression of the finding that over time, differences in temperature, pressure, and chemical potential tend to equilibrate in an isolated physical system.
This means that the state of lowest energy differences is always headed for. Nano particles are characterized by extremely high surface areas and, because of that, by extremely high surface loadings. This is due to the fact that the probability regions for electrons to stay in are quite limited in tiny particles.
In much bigger aggregates these regions are far more flexible, because the regions of many small components are coordinated into a combined one.
According to the mentioned law, agglomeration will appear instantly, enabled mainly by Van der Waals attraction, as long as the electron loading is not compensated by surrounding ions.
This describes the main principle of stably dispersing nano particles. The key point is to offer enough ions to compensate the surface loading of the nano particles completely. In such a case the particles will be surrounded by an ionic layer that securely repels the coated particles from each other.
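The scaling behind the “extremely high surface areas” mentioned above can be made concrete with a one-line relation for spheres (a sketch with illustrative sizes):

```python
# For a sphere of diameter d, surface/volume = (pi*d^2) / (pi*d^3/6) = 6/d.
# Shrinking the diameter by 1000x raises the specific surface by 1000x.
def surface_to_volume(d_m):
    return 6.0 / d_m          # m^2 of surface per m^3 of material

ratio_bulk = surface_to_volume(100e-6)   # a 100 um particle
ratio_nano = surface_to_volume(100e-9)   # a 100 nm particle
print(ratio_nano / ratio_bulk)           # 1000x more surface per volume
```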

This coating by an ionic layer can only be achieved while single particles are available. Therefore it is necessary to generate single particles for a perfect dispersion. But how?
The safest and most successful way is to do it by growing particles to the desired size. This is the bottom-up way. You will gain easy-to-measure samples that can even be diluted in a safe way.
Most complicated is the generation of nano particles by top-down technologies, e.g. grinding. This way it is also possible to gain ion-coated nano particles, as long as wet grinding in an ionic liquid is applied, but the difficulty is to separate these from the coarse remainder. As long as this coarse remainder is present, agglomeration is possible, as well as outshining of the low-energy signals of the nano particles in PCS/PCCS measurement. The reason for this outshining is that when particles grow by a factor of 10, the scattered light intensity rises by 10⁶ in the Rayleigh regime and still by 10² in the Mie regime. In such cases the only way to gain a measurable sample with respect to the nano particles is to filter off the coarse remainder.
Many so-called nano mills are able to generate some nano particles, but all of them end up with a larger amount of µm-sized remainder, too.
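The outshining effect described above can be put into numbers. This sketch only assumes the d⁶ Rayleigh scaling stated in the text:

```python
# In the Rayleigh regime the scattered intensity scales with d^6, so a
# single coarse remainder grain can outshine a huge number of nano particles.
def rayleigh_intensity(d, ref_d=100e-9):
    return (d / ref_d) ** 6          # intensity relative to a 100 nm particle

single_coarse = rayleigh_intensity(1e-6)   # one 1 um remainder grain
print(single_coarse)                       # scatters like ~1e6 particles of 100 nm
```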
Not to mention the difficulties of dispersing dry nano particle powder into a liquid. The wetting of such material depends on the surface tension of the liquid. For µm particles the use of sonication is most popular and successful. In the case of nano particles this success is far smaller, because the particles are much smaller than the wavelength of the ultrasound. This means that ordinary ultrasonic baths cannot be used. The use of high-energy ultrasonic fingers generates different additional effects that can lead to a certain state of dispersion: in the high-energy sonication area, cavitation as well as punctual high temperature generates evaporation of the dispersing liquid and similar effects that can overcome the surface tension.

In general, until now only real nano particles/objects were mentioned. Common terminology often also calls nano structured material nano particles. This is not only wrong by definition, but a completely different case. In nano structured material the nano particles are aggregated, not agglomerated. This means that they have to be crushed to become single particles; dispersing them into single particles is impossible in principle. For sure, lightly bound aggregates can sometimes also be broken down by hard local sonication, but this is different from dispersion and will never reach a complete dispersion.

Wrapping up:

  1. Only nano objects can be measured by PCS/PCCS to gain primary nano particle diameter.
  2. Stable dispersion is a question of the proper ionic layer.
  3. Bottom up designed nano particles are easiest to disperse and measure.
  4. Top-down designed nano particles always suffer from coarse remainders and in most cases need to be filtered.
  5. Dispersing by means of ultrasonication should be tried only if a high-energy sonication finger is available. The dispersion is enabled by side effects only.
  6. Dispersing dry nano powders is the most complicated task, dependent on the surface tension of the liquid. Sonication support behaves the same as described under 5.
  7. Dispersion of nano structured material is possible only to a certain extent and in special cases. In general it is size reduction, but not real dispersion.



Read more…

The use of Zeta-potential in Nano analytics

In connection with the production and stabilization of nano-objects there is very often talk about observing/setting the right zeta-potential. A lot of current analytical instruments also promote zeta-potential determination together with, e.g., size determination.

But let us have a closer look at the use of the zeta-potential today and its value.


In research of specific structure analysis Zeta-potential measurement is irreplaceable for:

  • Finding reasons for surface loading, resp. the structure of “double layers”.
  • Detecting the adsorption equilibrium (and adsorption kinetic) of emulsifiers, dispersants, etc.
  • Tracing dependency of interface characteristics on material properties of the disperse phase.


In contrast to this, for industrial purposes the zeta-potential is merely used for:

  •  Research on flocculation mechanisms.
  •  Predicting stability of emulsions and suspensions.
  •  Behavior forecasting of multiple component dispersions resp. of adsorption characteristic of fine particles passing through porous media.
  • Determination of optimum amounts of stabilizing additives resp. flocculants.
  • Tracing the change of interface characteristics during milling additives/ dispersing / emulsifying aiming for a standardized addition of additives.


These two different tasks also need different accuracy in Zeta-potential measurement.
Research needs as accurate a measurement of the zeta-potential as possible. Most industrial tasks are just interested in determining stability/instability.

This determines the specific needs of instruments for research or stability control.


The zeta-potential is, as we have seen in my last blog, “Sample preparation for particle size determination”, highly affected by the concentration of surrounding ions that form the electrical double layers. This means that a correct zeta-potential measurement always needs to be done at the original concentration, or at least with iso-ionic dilution. As the concentrations of industrial nano products are mostly very high, only instruments based on an acoustic principle can be used. All optical instruments need dilution to let the light beam pass and to avoid inhibiting amounts of multiple scattering.

This is the extremely limiting background for measuring the zeta-potential in very highly diluted systems by means of extended light scattering instruments. Only if the dilution is done in an accurately iso-ionic manner would such a measurement be acceptably correct; but dilution means a reduction of statistics, which is critical, as I pointed out in my blog “Particle size measuring in the nanometer range”.


In general it can be stated that for reasonable zeta-potential measurement in research only acoustic measuring principles should be applied, because only those can work at the original concentration with high accuracy. Light scattering instruments cannot determine the zeta-potential accurately enough, as high dilution has to be applied.
The mere determination of stability does not need a zeta-potential measurement, and in no case an inaccurate one. The observation of changes of the amplitude of the correlation function within a sequence of repeated PCS/PCCS measurements can indicate stability/instability extremely sensitively, because the slightest growth of particle size results in an amplified increase of the scattered light intensity. The reason for this is that with a growth of particle size by a factor of 10 the scattered light rises by 10⁶ in the Rayleigh regime and still by at least 10² in the Mie regime. This means that the value of the instrument combination of PCS/PCCS size determination and zeta-potential is mainly promotional. Accurate PCS/PCCS measurement can do as well, and if the zeta-potential is really important, acoustic principle instruments should be preferred.
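The amplified sensitivity of repeated measurements to particle growth can be sketched with the d⁶ Rayleigh scaling stated above (the size-drift values are hypothetical):

```python
# Track relative scattered intensity over a sequence of measurements:
# in the Rayleigh regime a small diameter growth is raised to the 6th power.
diam_nm = [100, 102, 105, 110]                  # hypothetical size drift
intensity = [(d / diam_nm[0]) ** 6 for d in diam_nm]
# a 10% size growth already raises the scattered intensity by ~77%
print(intensity)
```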


Read more…

In my recent blog on this theme I stated:

These early instruments needed a very robust and simple calculation mode, the ‘2nd Cumulant’ method, for evaluation, because the calculation power of the then available computers was still very limited. Still today this simplified calculation method is positioned in the related ISO 13321 framework. It generates as results only a mean particle diameter and a value for the width of an assumed Gaussian distribution.

Regarding this rather abbreviated description of the results given by the 2nd-cumulant method, I was asked to correct it to a precise one.

As the 2nd cumulant is a typical series expansion, it does not result in an arbitrarily shaped curve but in a series of moments. As the name indicates, only the first and second moments of this series are evaluated. The first moment gives the value for the mean diameter and the second moment gives the width. These two values are the only result of the 2nd-cumulant method. Today nearly all instruments also produce graphical reports, and only for this purpose do they show an assumed Gaussian distribution based on the two values given by the first two moments of the series expansion.

This means that, scientifically and mathematically, the correct result of a 2nd-cumulant evaluation is only the mean diameter and the width. The presented Gaussian distribution graph is, correctly speaking, not the result but only one possibility to ease understanding for visually skilled creatures like us.
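As a minimal sketch of such a 2nd-cumulant evaluation, the following Python example fits the first two moments to a simulated, monodisperse correlation function and converts the decay rate back to a diameter via the Stokes-Einstein relation. The optical and material constants are illustrative assumptions:

```python
# Sketch only: 2nd-cumulant fit ln g1(tau) = ln B - Gamma*tau + (mu2/2)*tau^2,
# reporting just the two values described above: a mean diameter and a width.
import numpy as np

kB, T, eta = 1.380649e-23, 298.15, 0.89e-3        # water at 25 C (assumed)
n, lam, theta = 1.33, 633e-9, np.pi / 2           # assumed optics, 90 degrees
q = 4 * np.pi * n / lam * np.sin(theta / 2)       # scattering vector

tau = np.linspace(1e-5, 1e-3, 100)                # lag times in s
d_true = 100e-9
D = kB * T / (3 * np.pi * eta * d_true)           # Stokes-Einstein diffusion
g1 = np.exp(-D * q * q * tau)                     # ideal monodisperse decay

# Quadratic fit of ln g1 in tau: coefficients are [mu2/2, -Gamma, ln B]
c2, c1, _ = np.polyfit(tau, np.log(g1), 2)
gamma = -c1                                       # first cumulant
d_mean = kB * T * q * q / (3 * np.pi * eta * gamma)
pdi = 2 * c2 / gamma ** 2                         # width (polydispersity index)
print(d_mean, pdi)
```

For this monodisperse input the fit returns the true diameter and a width near zero; for a multimodal sample these two numbers are all the method can deliver, which is exactly the limitation discussed above.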

Read more…
PARTICLE-SIZE-MEASURING IN THE NANOMETER-RANGE

NANO is a term that in many cases, even in technical literature, is used incorrectly. The Technical Committee 229 of ISO deserves thanks for an exact definition with respect to particle size measurement in August 2008 (ISO/TS 27687). According to this definition, nano-objects should only be called nano-particles if their three coordinate dimensions are all within the nano range of about 1 nm to 100 nm. Regrettably, even today many scientific publications still use the nano term in very imprecise manners. One reason for this might be the difficulty of measuring nano-particles sufficiently precisely. Imprecise knowledge of dimensions often leads to inaccurate classification.

In general it can be stated that nano-particles are smaller than the wavelength of visible light. Because of this, particle size determination by technologies based on image interpretation, e.g. light microscopes, cameras etc., cannot be applied in principle. Merely light scattering effects can be applied, but those enable indirect information only.

HISTORY:

The most popular and best researched method using light scattering is Photon Correlation Spectroscopy (PCS), including its variations that observe scattered light under different angles. PCS conferences started as informal discussions between members of different institutions in the late 1960ies, known as the colloquiums of the ‘Correlator Club’. The first PCS instruments were available from the late 1970ies, manufactured by different companies. These early instruments needed a very robust and simple calculation mode, the ‘2nd Cumulant’ method, for evaluation, because the calculation power of the then available computers was still very limited. Still today this simplified calculation method is positioned in the related ISO 13321 framework. It generates as results only a mean particle diameter and a value for the width of an assumed Gaussian distribution.
Because of this, such early instruments were unable to describe multimodal distributions correctly and had only a very limited applicability. Increasing calculation power enabled the use of more sophisticated calculation methods for evaluation; e.g. the patent-registered CONTIN method got into focus. Later on this was improved further by still less damped methods like the ‘Non-Negative Least Squares’ (NNLS) method, in which an extraordinarily unstable set of equations demands extremely high calculation power. This fact is taken into account today by the revision of the established, outdated ISO 13321.

PRINCIPLE:

Dynamic light scattering, the basic principle of the PCS method, makes use of the so-called Brownian motion. This is based on the effect, described in 1905/1906 by Einstein and Smoluchowski, that particles in suspension are moved around by elastic pulsing of the surrounding liquid molecules, and the dimension of the resulting movement is related to the mass and volume of the particular particles. This movement creates a fluctuation of the scattered light intensity in the visual field of a stationary observer, and the frequency of this fluctuation is directly related to the size of the moving particles. But this very direct relation is valid only under the following restrictive conditions:

1. There have to be freely moving spherical particles only. Deviation from perfect sphericity is not decisive, as the particles are surrounded by an ionic shell that will smooth out extreme forms anyway. Free movement, on the other hand, is a critical parameter, because the ionic shells may interact when, due to higher particle concentration, the distance between particles decreases too far.

2. Unrestricted visibility of the moving particles, respectively of the emitted scattered light, has to be granted.
In the case of high concentration and a detection area deep inside the suspension, the observation of the fluctuation of the scattered light intensity will be interfered with by other particles located between the detection area and the observer. Multiple scattering will occur, which is capable of distorting the result by far: particles with a real diameter of 100 nm can be distortedly measured as small as some 20 nm if a certain amount of multiple scattering is present. For this reason it used to be necessary to measure at very small concentrations only, to avoid this effect. Doing so resulted in a bad signal-to-noise ratio and a potential modification of the sample due to dilution-related changes in the ionic surrounding; in many cases this influence locked even this back door. Better solutions to overcome this problem were attempted by observing the scattered light intensity fluctuation under different angles to gain clearer information, and later also by applying a nearly backwards directed signal (168°) combined with a minimal depth of invasion, the so-called ‘back scattering’. The multiple-angle measuring method has meanwhile been abandoned, because it could only be performed consecutively and always failed in the case of fast changes; furthermore, the evaluation of multiple results from different angles is in principle already quite complex and in addition strongly depends on the scattering capability of the sample. In contrast, the back-scattering method has become very popular and is realized in nearly all current instruments today. This technology, however, neglects the fact that while observing thin marginalized layers close to the cuvette wall, the area of elastic pulsing is left. The essential basic principle of the Stokes-Einstein relation, and thereby also of the PCS measuring method, namely the interaction of particles and liquid molecules by elastic pulsing, is increasingly overlaid by non-elastic wall contacts the closer the observation area is located to the wall.
There is no commonly known correction available for this kind of error.

3. A basic rule in mechanical engineering tells us that for the measurement of a particle size distribution with a standard deviation of 1%, a number of at least 10⁵ particles per size class is needed. It has to be doubted that such an amount of data, and thereby reliable results, can be achieved during extremely short but customer-friendly measuring times, under consideration of selected time intervals with very smooth correlation functions only. The theoretically necessary duration of a correct measurement, related to a correspondingly high number of events, is 10⁶ times the decay time of the coarsest measured particle. Even more critical is daring to elaborate a distribution result for an entire sample by tracing the movement of only a few single particles, as propagated for a recently developed new instrument. Such extremely shortened measurements are as meaningless as, e.g., results from electron microscopy that are also based on a limited number of particles only. The attractiveness of such measurements is based on the detailed presentation of single objects; with respect to the characterization of an entire sample such a method is not really suitable, but in spite of this it is in many cases over-interpreted.

INSTRUMENTS:

The only instrumental setup that is able to eliminate multiple scattering entirely via cross-correlation, and thus is able to measure fairly independently of the given concentration, is Photon Cross-Correlation Spectroscopy (PCCS). By this principle the influence of the above mentioned disturbances and errors is entirely eliminated. Two different light beams of identical wavelength and intensity are focused onto an identical measuring volume, and the fluctuation of the scattered light is detected, each under 90°. Only the part identical in both measurements is separated by cross-correlation and taken into account.
Hence, independent of concentration, only unadulterated primary signals contribute to the result. This technology is further supported by exact and automated cuvette positioning for different types of cuvettes, which also takes advantage of a minimized depth of invasion, but without extending it into the area of non-elastic pulsing close to the wall. In addition, the optimization of the raw signal is gained by use of a laser intensity dimmer that adjusts the yield of primarily scattered light to an optimum. Thus it is warranted that, widely independent of the sample properties and of the concentration, the best possible signal yield is gained. Special cases of far too high concentration are easy to detect and don’t generate any results, because in such cases nothing will be left for evaluation after cross-correlation. After an appropriately accurate evaluation the results will be either reliable or none, but never wrong ones.

RELEVANCE:

Every modern particle size measuring instrument provides some kind of smooth diagrams and the related data, but only very seldom a hint at the relevance of the presented result. In many cases the simplest automated handling, as well as the possibility of additional information on e.g. the zeta-potential, are the key points of advertisement, even knowing that the necessary dilution of the original suspension in most cases initiates a change of the zeta-potential. A hint at the relevance or correctness of the produced result is only sometimes given by a number for the accuracy of the ‘fit’. Much more important than the interpretation of the result is the careful inspection of the correlation function diagram, which provides information on whether the result generated by the selected evaluation mode matches the raw-signal correlation diagram in a proper way. In case meaningful differences are obvious, the generated result has to be taken for a misinterpretation and directly eliminated, and a more sophisticated evaluation mode should be applied.
For this, complex evaluation methods are available, e.g. the NNLS method, which despite its complexity can be used easily by everybody. Even though this contradicts the requirement of simplest handling, it is the only way to generate reliable, accurate results of highest resolution.

CONCLUSION:

Promises made in high-gloss brochures very often qualify themselves in application with real samples as misleading, because they disregard the above mentioned aspects in PRINCIPLE, INSTRUMENTS and RELEVANCE. Increasingly the good news is spreading that nano measurement can be done today in a much more scientific and exact way, even without highly qualified employees, and in that connection the names NANOPHOX and SYMPATEC are mentioned more and more often. Details can be found at www.sympatec.com
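The cross-correlation idea described under INSTRUMENTS can be illustrated with a strongly simplified, zero-lag sketch. The signals are synthetic assumptions; a real PCCS instrument correlates over lag times:

```python
# Two detectors share the single-scattering fluctuation, while multiple
# scattering appears as independent noise in each channel. Cross-correlation
# keeps only the shared part; auto-correlation keeps the noise power too.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
common = rng.normal(size=n)            # shared single-scatter fluctuation
a = common + rng.normal(size=n)        # detector A: signal + its own noise
b = common + rng.normal(size=n)        # detector B: signal + its own noise

cross = np.mean(a * b)                 # approx. variance of common part (~1)
auto = np.mean(a * a)                  # approx. signal + noise power (~2)
print(cross, auto)
```

The independent noise averages out of the cross term, which is the reason why, after cross-correlation, only unadulterated primary signals contribute to the result.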
Read more…