
Open Access 30-10-2019 | 5th World Congress on Integrated Computational Materials Engineering

Exploring Correlations Between Properties Using Artificial Neural Networks

Authors: Yiming Zhang, Julian R. G. Evans, Shoufeng Yang

Published in: Metallurgical and Materials Transactions A | Issue 1/2020


Abstract

The traditional aim of materials science is to establish the causal relationships between composition, processing, structure, and properties, with the intention that, eventually, these relationships will make it possible to design materials to meet specifications. This paper explores another approach. If properties are related to structure at different scales, there may be relationships between properties that can be discerned and used to make predictions, so that knowledge of some properties in a compositional field can be used to predict others. We use the physical properties of the elements as a dataset because it is expected to be both extensive and reliable, and we explore this method by showing how it can be applied to predict the polarizability of the elements from other properties.
Notes
Manuscript submitted April 25, 2019.


1 Introduction

The discovery of correlations between datasets has led to many important findings historically[1–3] but there are two essential prerequisites: reliable data and an inspired guess at where to look for correlations. The increase in data handling capacity and advances in intelligent search methods could, it is claimed,[4] change the way in which some sectors of science proceed. Large databases in materials science could make it possible to search for correlations between properties that would not normally be sought. At present, researchers tend to focus on one set of properties in which they are expert rather than connecting one property with another.
The traditional methodological framework for materials science is the identification of the composition-processing-structure-properties causal pathways from which many of the successes in materials science have emerged. Once these relationships are in place, it is thought, it will be possible both to understand why existing materials behave as they do and to predict how materials can be chosen and modified to behave as we want. However, the quantitative prediction of properties from the structure is very complex partly because many different scales must be considered and partly because intrinsic and extrinsic imperfections must be taken into account as well.
The “high throughput” or “combinatorial” methods are an attempt to increase the pace of materials development in increasingly complex compositional spaces.[5] Combinatorial libraries can be regarded as a capital asset upon which a multitude of properties can be measured to determine structure–property relations of materials behavior.[6] Potentially, these could reveal relationships between different properties of each material experimentally, but this strategy has rarely been adopted, primarily because each investigator tends to be an expert in a given property regime and to have limited goals in terms of applications, which are often set by the research funding source.
Computational chemistry provides a means to model the structure and functional properties of real materials quantitatively and consequently to design and predict novel materials and devices with improved performance.[7] However, the large number of atoms and many-body interactions place considerable demands on computer resources[8] at higher structural scales.
As pointed out by Ashby,[9] all the properties of materials can be derived ultimately from the structure and bonding, or can be considered to have their ultimate origin in Schrödinger’s equation, so the properties of a material are, to varying degrees, interrelated (Figure 1). Binary correlations among materials properties abound and there is a clear mechanistic, causal interpretation. (i) Specific heat is related to atomic or molecular mass (Dulong and Petit’s law); the heat energy arises partly from the number of atoms or molecules that are vibrating and if a substance has a lower molar mass, then each unit mass has more atoms or molecules available to store heat energy. (ii) The electrical and thermal conductivities in metals were related by the Franz–Wiedemann rule in 1853,[2] which was developed in the electronic theory of Drude[10,11] since both heat and electrical fluxes in metals are strongly influenced by the motion of their electrons. (iii) Melting and boiling temperature can be correlated with the depth of the potential energy well.[12]
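For example, relation (ii) has a familiar compact form; this is the standard statement of the law, quoted here for completeness rather than taken from the paper:

$$ \frac{\kappa}{\sigma} = LT, \qquad L = \frac{\pi^{2}}{3}\left( \frac{k_{\text{B}}}{e} \right)^{2} \approx 2.44 \times 10^{-8}\ \text{W}\ \Omega\ \text{K}^{-2} $$

where κ is the thermal conductivity, σ the electrical conductivity, T the absolute temperature, and L the Lorenz number.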
Examples of indirect correlations include the following. (i) The specific heat and density of solids are related because the density of a solid is determined mainly by its atomic weight and, to a lesser degree, by the atomic size and the way the atoms are packed.[13] Because of the correlation between density and atomic weight, and between atomic weight and specific heat capacity, there is a strong inverse correlation between solid density and constant-pressure specific heat capacity. (ii) The thermal expansion coefficient and melting point of materials with comparable atomic packing vary inversely because the higher melting-point materials have deeper and more symmetrical energy wells. (iii) Hardness and melting point are indirectly related because hardness depends on the stress required to separate atoms and initiate dislocation motion. Higher inter-atomic forces imply deeper energy wells, so materials with high melting points such as diamond, Al2O3, and TiC are the harder materials. Exceptions occur where more than one type of bond is present, as in graphite and polyethylene. For similar reasons, melting point and bulk modulus are related through bond energy.
Many other examples exist with varying degrees of correlation: an inverse correlation between toughness and hardness; an inverse correlation between dielectric loss and dielectric strength; a correlation between mechanical strength and dielectric strength in porous materials; in functional ceramics, a dielectric with high loss may show ionic conduction at higher temperature; in oxides, a change of color may be associated with electrical conduction, both being influenced by point defects.
Ashby points out that some correlations have a simple theoretical basis; others can be found by search routines and empirical methods.[9] Generally, the correlations derived in a direct way from the nature of the atomic bond and structure are strong, such as modulus and melting point, or specific heat and density, while those derived from properties which depend on defects in the structure are less strong, such as strength and toughness, and are further weakened when interaction with the environment is involved, such as corrosion and wear.
A journey into materials science that explores correlations of properties is rather unconventional but the considerable success of Ashby’s property mapping[14–16] suggests that it could provide a way of identifying compositional zones that are worthy of more detailed exploration and therefore narrow the hugely complex space that confronts the discovery of new materials. Actually, the idea of exploring property–property relationships rather than structure–property relationships seems less unconventional when it is noticed that examples of binary correlations among materials properties abound. In most cases, there is a sound mechanistic connection and the scientific practitioner uses a well-trenched radial path in Figure 1 while being barely conscious of the circumferential relationships. The work described in this paper participates in Ashby’s scientific journey.

2 Methodological Choices for Exploring Property Correlations

The exploration of correlations can be classified into three different types: (I) purely empirical, (II) partly empirical but based on some theoretical concept, and (III) purely theoretical.[17] Data mining, which is defined as a process for extracting useful hidden information directly from data rather than from basic laws of physics, is regarded as a useful tool that could help to probe implicit correlations between different properties empirically. In this work, we employ artificial neural networks, one of several data mining methods, to explore cross-property correlations. It is ideally suited not just for binary, but also for ternary and more complex correlations.
There are several precedents for applying neural networks to explore cross-property correlations. Egolf and Jurs[18] used both regression and neural network techniques to predict the boiling points of organic heterocyclic compounds from molecular weight, dipole moment, first-order molecular connectivity, and other structure descriptors. Michon and Hanquet[19] used quantitative structure–property relationship methods and neural networks to find non-linear relations between chemical and rheological properties. Homer et al.[20] developed ANNs with equilibrium physical properties and structural indicators for the prediction of viscosity, density, heat of vaporization, boiling point, and Pitzer’s acentric factor for pure organic liquid hydrocarbons. Boozarjomehry et al.[21] developed a set of ANNs to predict properties such as critical temperature, acentric factor, and molecular weight of pure compounds and petroleum fractions based on their normal boiling point and liquid density at 293 K. Strechan et al.[22] obtained correlations between the enthalpy of vaporization, the surface tension, the molar volume, and the molar mass of a substance using ANNs. Mohammadi and Richon[23] used ANNs to predict the enthalpy of vaporization of hydrocarbons, especially heavy hydrocarbons and petroleum fractions, from the specific gravity and normal boiling temperature. Karabulut and Koyuncu[24] developed neural network models to establish correlations of thermal conductivity with temperature and density for propane. Giordani et al.[25] used ANNs to correlate a wider range of properties, principally the mechanical properties of modified natural rubber. However, all these investigators used prior knowledge to select the properties for causal significance in the prediction.
In this work, the aim embraces a wider and more flexible principle of machine learning, made increasingly possible by the remarkable expansion in computer information-processing capacity: to find correlations between specific properties within a large portfolio of different properties and to reflect on the underlying physical principles post facto.

3 Experimental Details

The two main tasks are data collection and neural network construction. The collected data are used to construct the neural network, and the neural network is then used to find cross-property correlations. Believing the elements to have the most reliable data, whole datasets of the physical properties of the solid elements were collected from different handbooks: the Chemistry Data Book,[26] Lange’s Handbook of Chemistry,[27] The Elements,[28] Tables of Physical and Chemical Constants,[29] and the CRC Handbook of Chemistry and Physics.[30] The properties used in this work are those recorded at 0.1 MPa, in a small temperature range (293 K to 298 K), and in the solid state, in order to minimize the effects of phase, temperature, and pressure. Sixteen properties were collected: (i) normal melting point, (ii) normal boiling point, (iii) heat of fusion at the normal melting point, (iv) heat of vaporization at the normal boiling point, (v) molar heat capacity, (vi) specific heat capacity, (vii) thermal conductivity, (viii) electrical conductivity, (ix) photoelectric work function, (x) linear thermal expansion coefficient, (xi) atomic weight, (xii) density, (xiii) electronegativity (Pauling scale), (xiv) first ionization potential, (xv) polarizability, and (xvi) atomic volume. The elements included were those that satisfied the phase, temperature, and pressure criteria and had full records of all sixteen properties in the five handbooks. The main criterion for the selection of the 16 properties was that reliable data must be available for all the listed elements if the aim is to show a general, systematic method for exploring correlations between different properties. For this reason refractive index, for example, which should correlate well with polarizability, was excluded, a complete dataset being unavailable. In total, 75 elements were included.
It is worth noting that in our previous work[31,32] the use of ANN revealed some surprising incorrect data in handbooks. In this work, we treat the properties which have close recorded values from the five different sources as reliable. The outliers, treated as incorrect, may have incorrect unit conversions, different reference conditions, or decimal point misplacements.[32] The median values of each property were used here. Provided most of the property data are correct, the general trend of the correlations can be treated as reliable. Certainly, there may be some predictions that do not follow the correlation and we can look back at these data to find the reasons: it may be that the correlation hypothesis is violated for these special elements or the recorded values in handbooks were incorrect, in which case we can use the method developed previously[31] to select the correct ones.
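As a concrete illustration of this consolidation (not the authors' code), the sketch below takes the median of the handbook values for each entry and flags values that deviate strongly from it; the 30 pct threshold is an assumption:

```python
# Consolidate up to five handbook values per element-property entry into a
# median, flagging suspect outliers (possible unit-conversion or
# decimal-point errors, as discussed in the text).
import numpy as np

def consolidate(records: dict, tol: float = 0.3):
    """records maps an element symbol to its list of handbook values."""
    medians, suspects = {}, []
    for element, values in records.items():
        v = np.asarray(values, dtype=float)
        med = float(np.median(v))
        medians[element] = med
        suspects += [(element, x) for x in v
                     if med != 0 and abs(x - med) / abs(med) > tol]
    return medians, suspects

# Example: a decimal-point misplacement is flagged as a suspect entry.
meds, sus = consolidate({"Cu": [13.1, 13.4, 1.34, 13.2, 13.3]})
```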

4 Pre-treatment of the Data

It is well known that materials properties vary over a great range and are generally logarithmically distributed.[9,13,33] Sha[34] points out that, when training a neural network with skewed data, it can be misled by a few data points far from the average because, unlike linear regression, neural network training is not based on a definitive starting formula. Properties that are logarithmically distributed therefore need logarithmic pre-treatment. The original property data distributions are shown in Figures 2(i) through (xvi). Inspection of these figures shows that (iii) heat of fusion, (vi) specific heat capacity, (vii) thermal conductivity, (viii) electrical conductivity, (x) linear thermal expansion coefficient, (xv) polarizability, and (xvi) atomic volume are skewed, and these were logarithmically pre-treated. However, from the data shown in Figure 2(viii), electrical conductivity is distributed over such a great range that even logarithmic pre-treatment cannot normalize the distribution; this introduces major uncertainties for extracting general correlations, and so electrical conductivity was excluded from the trial. Figures 3(i) through (vi) show the distributions of the six properties given logarithmic pre-treatment. The values for atomic volume, polarizability, linear thermal expansion coefficient, and heat of fusion become uniformly distributed, while for thermal conductivity and specific heat capacity the distributions are not totally uniform, but improved. Double or even triple logarithms could be used, but it is undesirable to compress the whole range of values into so narrow a region that most values become nearly the same.
All these property values, appropriately pre-treated, constituted the neural network inputs, and each in turn was used as an output. When a property was used as the output, its original values were adopted, because neural network training is based on minimizing the difference between predicted and experimental values: small differences in logarithmic value correspond to large differences in original value, so predictions that appear satisfactory on the logarithmic scale may show large differences between predicted and experimental original values.
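A minimal sketch of this pre-treatment, assuming the dataset is held as named columns; the skewed set follows the list above, with electrical conductivity already excluded:

```python
# Log-transform the skewed input properties; the output property is kept on
# its original scale so that training minimizes errors in original units.
import numpy as np

SKEWED = {"heat_of_fusion", "specific_heat_capacity", "thermal_conductivity",
          "linear_thermal_expansion", "polarizability", "atomic_volume"}

def pretreat_inputs(columns: dict) -> dict:
    """columns maps property names to positive-valued 1-D numpy arrays."""
    return {name: (np.log10(values) if name in SKEWED else values)
            for name, values in columns.items()}
```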

5 Neural Network Construction

Back-propagation ANNs were constructed, trained, and simulated in MATLAB 7.4.0.287 (R2007a). For most function approximation problems, one hidden layer is sufficient to approximate continuous functions[35,36]; two hidden layers are generally necessary for learning functions with discontinuities.[37] The neural network user’s guide (MATLAB R2007a) also suggests that a two-hidden-layer sigmoid/linear network can represent any input/output relationship.[38] As a result, a two-hidden-layer network with a tan-sigmoid transfer function in the first hidden layer and a linear transfer function in the second hidden layer was adopted. Bayesian regularization, implemented as the trainbr function in MATLAB R2007a, was employed to improve generalization during training; it updates the weight and bias values according to Levenberg–Marquardt optimization.[38] Strictly, the Levenberg–Marquardt algorithm searches for local minima; however, in each iteration it selects a new parameter value (damping factor). The MATLAB manual[38] recommends it as generally the best in terms of performance, memory requirement, and computing efficiency. A loop program was used to redistribute the database so that the training set covers the problem domain, as recommended by Malinov and Sha.[39] Employing Bayesian regularization and database redistribution together alleviates overfitting. More detail can be found in the authors’ previous paper.[40]
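As an illustration only, the following sketch reproduces this architecture in PyTorch (an assumed stand-in for the MATLAB toolbox); Adam with an L2 penalty (weight_decay) is used as a crude surrogate for trainbr's Bayesian regularization, which is not replicated exactly here, and the layer widths are illustrative assumptions:

```python
# A minimal sketch of the two-hidden-layer network described above.
import torch
from torch import nn

def build_net(n_inputs: int, h1: int = 10, h2: int = 5) -> nn.Sequential:
    """Tan-sigmoid (tansig) first hidden layer, linear (purelin) second."""
    return nn.Sequential(
        nn.Linear(n_inputs, h1),
        nn.Tanh(),               # tan-sigmoid transfer function
        nn.Linear(h1, h2),       # linear transfer function, second hidden layer
        nn.Linear(h2, 1),        # single predicted property as the output
    )

def fit(net: nn.Sequential, X: torch.Tensor, y: torch.Tensor,
        epochs: int = 2000) -> nn.Sequential:
    # weight_decay stands in (crudely) for Bayesian regularization (trainbr)
    opt = torch.optim.Adam(net.parameters(), lr=1e-2, weight_decay=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(net(X), y)  # error in original (untransformed) units
        loss.backward()
        opt.step()
    return net
```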
Taking one property at a time as the output of the neural network and all other properties as inputs, the process was repeated for each property. When property values can be reasonably predicted from groups of other properties, we can say that some or all of those properties are correlated. The criterion for the best performance over both training and testing sets was the lowest value of \( \omega = \left| {\varphi_{\text{training}}^2 - \varphi_{\text{testing}}^2} \right| \), where \( \varphi = \left| {M - 1} \right| + (1 - R) \), M being the slope and R the correlation coefficient of the regression of predicted on experimental values.
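Expressed as code, the criterion might read as follows; slope_and_r is a small helper computing M and R by least squares:

```python
# Selection criterion omega = |phi_train^2 - phi_test^2|, with
# phi = |M - 1| + (1 - R); phi = 0 for a perfect prediction.
import numpy as np

def slope_and_r(predicted, experimental):
    m = np.polyfit(experimental, predicted, 1)[0]   # regression slope M
    r = np.corrcoef(experimental, predicted)[0, 1]  # correlation coefficient R
    return m, r

def omega(pred_train, exp_train, pred_test, exp_test) -> float:
    def phi(pred, exp):
        m, r = slope_and_r(pred, exp)
        return abs(m - 1.0) + (1.0 - r)
    return abs(phi(pred_train, exp_train) ** 2 - phi(pred_test, exp_test) ** 2)
```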

6 Results

As the range of applications for materials which depend on electric polarizability and hyper-polarizability has expanded dramatically,[41,42] we take the example of the prediction of polarizability from the other 14 properties to illustrate how the method behaves in exploring correlations between properties that appear to stem from different physical principles.
First, polarizability was predicted from each of the 14 properties individually, and those providing strong predictability for polarizability were selected. The remaining properties were treated as having weak or no direct correlation with polarizability. However, care is needed in deciding on exclusions: a combination of properties excluded in this way might have delivered enhanced predictions had it been retained.
The square of the correlation coefficient, R², is the proportion of the variation in the values of y that is explained by the least-squares regression of y on x; it ignores the distinction between explanatory and response variables. The correlation between input and output property values was expressed by R², in this case the proportion of variation in the experimental values accounted for by a linear relation between predicted and experimental values. Here we use the criterion R ≥ 0.9, meaning that about 80 pct of the variation is accounted for, and designate correlations with R ≥ 0.9 as significant. Figures 4(a) through (d) show the predictions of polarizability with R values greater than 0.9, and Table I lists the statistical analysis for the results shown in Figure 4.
Table I. Statistical Analysis for the Results Shown in Fig. 4

| Condition | M (Test) | R (Test) | MME (Test) | SDME (Test) | MPME (Test) | SDPME (Test) | M (Whole) | R (Whole) | MME (Whole) | SDME (Whole) | MPME (Whole) | SDPME (Whole) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Atomic Weight (AW) | 1.01 | 0.980 | 1.15 | 1.29 | 13.7 | 19.1 | 0.970 | 0.980 | 1.49 | 1.80 | 15.1 | 19.6 |
| First Ionization Potential (EI) | 1.04 | 0.860 | 5.40 | 5.22 | 36.0 | 39.5 | 0.925 | 0.920 | 3.21 | 3.53 | 27.3 | 35.0 |
| Electronegativity (χ) | 0.912 | 0.937 | 2.64 | 1.90 | 28.9 | 21.2 | 0.883 | 0.940 | 2.90 | 2.80 | 25.4 | 24.7 |
| Work Function (Φ) | 0.852 | 0.920 | 3.78 | 3.12 | 42.8 | 76.9 | 0.830 | 0.913 | 3.45 | 3.39 | 38.2 | 65.7 |

MME: mean of error modulus (10⁻³⁰ m³); SDME: standard deviation of error modulus (10⁻³⁰ m³); MPME: mean of percentage error modulus (pct); SDPME: standard deviation of percentage error modulus (pct).
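In code, this single-input screening step might be sketched as follows; scikit-learn's MLPRegressor and its settings are assumed stand-ins for the MATLAB networks used in the paper:

```python
# Train one small network per candidate input property and rank the inputs
# by the correlation coefficient R between predicted and experimental
# polarizability, as in the single-property screening described above.
import numpy as np
from sklearn.neural_network import MLPRegressor

def screen_single_inputs(table: dict, target: str) -> dict:
    """table maps property names to 1-D arrays over the 75 elements."""
    y = table[target]
    scores = {}
    for name, x in table.items():
        if name == target:
            continue
        net = MLPRegressor(hidden_layer_sizes=(10, 5), activation='tanh',
                           solver='lbfgs', alpha=1e-3, max_iter=5000)
        net.fit(x.reshape(-1, 1), y)
        scores[name] = np.corrcoef(net.predict(x.reshape(-1, 1)), y)[0, 1]
    # properties with R >= 0.9 are designated significantly correlated
    return dict(sorted(scores.items(), key=lambda kv: -kv[1]))
```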
Now that we have located four properties that have relatively strong correlations with polarizability and have relegated ten properties with weak or even no correlation, the next step is systematically to introduce other properties to assess improvements in the predictions and hence reveal the ‘effect’ of each property on the prediction of polarizability. Here, it needs to be noted that if the effects of different properties on the prediction of polarizability are combined, the influence of one property cannot be distinguished from the influence of others and it cannot be said how strong the effect of one property on polarizability is. It also means that some properties, which cannot make a strong prediction alone, may have effects or even strong effects on the prediction when combined with other properties.
So this step focuses on results that show a high degree of correlation between polarizability and different combinations of other properties (here we take R² = 0.99, corresponding to R ≈ 0.995, with the slope M equal to or greater than 0.99); from these results we then note the underlying physical principles in an attempt to assess why different combinations of properties can have similar predictive performance. It was found that the prediction of polarizability using the minimum number of other properties involves melting point, heat of vaporization, specific heat capacity, and first ionization potential. The result is shown in Figure 5, and the statistical analysis is given in Table II.
Table II. Statistical Analysis for the Results Shown in Fig. 5 (abbreviations and units as in Table I)

| Input Properties | M (Test) | R (Test) | MME (Test) | SDME (Test) | MPME (Test) | SDPME (Test) | M (Whole) | R (Whole) | MME (Whole) | SDME (Whole) | MPME (Whole) | SDPME (Whole) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Melting Point (Tm), Heat of Vaporization (ΔHV), Specific Heat Capacity (CP), and First Ionization Potential (EI) | 1.01 | 0.97 | 1.66 | 1.35 | 15.3 | 15.2 | 0.994 | 0.995 | 0.808 | 0.893 | 6.80 | 8.95 |
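The combination search just described can be sketched as an exhaustive search over subsets of the candidate inputs; fit_and_score is an assumed helper that trains a network on the chosen inputs (as sketched earlier) and returns the whole-set M and R:

```python
# Find the smallest input subsets whose whole-set fit reaches the thresholds
# quoted above (R >= 0.995 and slope M >= 0.99).
from itertools import combinations

def minimal_subsets(properties, fit_and_score):
    hits = []
    for k in range(1, len(properties) + 1):
        for combo in combinations(properties, k):
            m, r = fit_and_score(combo)   # assumed: returns whole-set (M, R)
            if m >= 0.99 and r >= 0.995:
                hits.append(combo)
        if hits:  # stop at the minimum subset size that succeeds
            return hits
    return hits
```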
The discussion of these five results (Figures 4 and 5) that follows comprises (1) exploring the underlying physical principles for the results shown in Figure 4, in order to justify this method of exploring cross-property relationships, (2) analyzing the results shown in Figure 5 and comparing them with other results to explore the possible confounding effects of different properties, and (3) exploring possible mathematical equations that can express these correlations.

7 Discussion

7.1 Exploring Underlying Physical Principles

The polarizability used here is the average static electric dipole polarizability, with units C m² V⁻¹, rendered as a volume by dividing by 4πε₀, where ε₀ is the permittivity of free space. The polarizability of an atom or molecule is the average induced dipole moment resulting from distortion of the electron cloud, divided by the microscopic electric field applied to the molecule, and is a measure of the ease with which the electron cloud can be pulled away from the nucleus. For dielectrics, the polarizability α is related to the dielectric constant and atomic weight by the Clausius–Mossotti relation.[43]
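In its standard SI form (as given in Kittel,[43] not reproduced in the paper itself), the relation reads

$$ \frac{\epsilon_{\text{r}} - 1}{\epsilon_{\text{r}} + 2} = \frac{N\alpha}{3\epsilon_{0}} $$

where ε_r is the dielectric constant and N the number density of atoms, which brings in the atomic weight through N = ρN_A/A_W for density ρ.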
The correlation between polarizability and atomic weight (Figure 4(a)) is the strongest compared with other combinations and it is well known that polarizability increases with atomic weight for elements in the same family as atomic size increases, as shown by many including Debye,[44] Clark,[45] Denbigh,[46] Atoji,[47] Pauling,[48] and Ghanty and Ghosh[49] and decreases with increasing atomic weight for elements in the same row of the periodic table as the outer-shell orbitals are increasingly filled.[50] Drawing these two properties in Cartesian coordinates (Figure 6) demonstrates the periodic trend and the neural network immediately finds this strong correlation.
While polarizability measures the response of an electronic system to an external electric field, the first ionization potential measures the energy needed to extract the outermost electron of the atom. Dmitrieva and Plindov[51] pointed out that the correlation between first ionization potential (IP) and polarizability (α) follows α^(1/3) = 1.09/IP. Fricke[52] also argued that an increasing first ionization potential implies a decreasing polarizability, the two obeying a direct IP ~ 1/α correlation when plotted on a double-logarithmic scale. Schwerdtfeger[53] stated that the relationship has the form α ~ 1/IP². In all three cases, however, the trends are visible but the two quantities are not correlated perfectly in a general way across all the elements. This is explained by the fact that the structure of the valence electrons of each element is very different and relativistic effects change the trend in polarizability within a group of the periodic table. The neural network also finds this correlation easily, as shown in Figure 4(b).
The correlation between polarizability and electronegativity (shown in Figure 4(c)) has been explored by Komorowski,[54] who applied an electrodynamical equation to the chemical potential by analogy and obtained an inverse relationship between polarizability and electronegativity. Van Genechten et al.[55] applied the electronegativity equalization method to calculate values of average electronegativity and related these values to the polarizability: large electronegativity is consistent with low polarizability. In these two works, however, the correlations are not explored in detail. Nagle[56] employed the concept of valence electron density[57,58] and formed the ratio of the number of valence electrons to the polarizability, n/α. The cube root of this ratio, (n/α)^(1/3), can be used to calculate the electronegativity χ: χ = 1.66(n/α)^(1/3) + 0.37 for s- and p-block elements, and it can also be applied to d- and f-block elements if the number of “valence” electrons can be determined from a careful analysis of their atomic spectra. Further support comes from the correlations between atomic radii and polarizability and between atomic radii and electronegativity, such as the work of Ghanty and Ghosh.[49] The discussion in these cases describes the relationship between polarizability and electronegativity from a physical perspective but points out that there is no universal quantitative relationship between them. To make a comprehensive and general prediction, other parameters need to be introduced. The ANN finds this correlation with R = 0.94.
The correlation found by the ANN between work function and polarizability is shown in Figure 4(d). The electron work function Φ is a measure of the minimum energy required to extract an electron from the surface of a solid.[59] It can be measured by thermionic, photoelectric, or contact-potential methods. Michaelson[60] observes that the thermionic method cannot give an absolute value for polycrystalline or other patchy surfaces, while the photoelectric method does not yield the true work function for semiconductors because the emission contains contributions of both volume and surface origin. A critical review of the different measurement methods and the rationale for selecting preferred values are given by Rivière.[61] Like most of the chemical properties of the elements, the work function is a periodic function of atomic number when the values are carefully selected.[62–66] As a result, the work function has an established correlation with atomic number, the same trend as the variation in polarizability. Furthermore, an empirical correlation between work function and atomic weight was derived by Rother and Bomke.[67] Bedreag[68] pointed out that a correlation between work function and first ionization potential exists within the alkali metals. Since there is a periodic correlation between polarizability and atomic weight, there is indeed some correlation between polarizability and the work function.
However, from Figure 4(d) this correlation is not very strong, with R = 0.91. The reasons are as follows. (1) We use a single value (polycrystalline or unweighted mean values over all facets) taken from handbooks, whereas the choice of a preferred single value is complicated by variations produced by the purity of the specimen, the measurement method, and the surface distribution of crystal facets.[60] (2) Measurements of the work function are extremely sensitive to the presence of surface impurities, such as oxides and gases.[66] When the measurement is not carried out under ultrahigh vacuum, it is affected by trace impurities.[60] (3) Anisotropy,[69,70] allotropy,[71,72] and temperature dependence[73–76] complicate the values of work functions, and although the differences are not great, the data recorded in handbooks carry these uncertainties. (4) For semiconductor elements, variations, although not great, exist among the values obtained from different methods of measurement.[77,78] The data for the semiconductors As,[79] Te,[80] and Se[80] are derived from photoelectric methods, and it was stated above that the photoelectric method cannot yield the true work function for semiconductors. These values cannot be confirmed by measurements made with ultrahigh-vacuum techniques and so, as suggested by Michaelson,[60] they can only be treated as possibly valid but of unknown reliability, accepted as the best available rather than as absolute physical quantities. (5) The periodic trend found by Michaelson,[60,66] as shown in Figure 7, is obvious but not rigorous. In each period the work function tends to rise with increasing atomic number as electron shells and sub-shells gradually become filled; however, the relation becomes complex in the intervals occupied by the transition metals.

7.2 Exploring Confounding Effects of Different Properties

The results shown in Figure 5 indicate the confounding effect of melting point, heat of vaporization, specific heat capacity, and first ionization potential on polarizability, and it is desirable to see the relative importance of each input property; before that, however, we wish to find the correlations among these four properties themselves. The neural network was run to predict each of the four properties from one of the others; in total there are six pairs (⁴C₂), each examined in both directions. It was found that there is a strong correlation between melting point and heat of vaporization but no correlation within any of the other five pairs: as shown in Table III, only melting point and heat of vaporization have high R and M values. As a result, it can be said that the prediction of polarizability emerges from three distinct parts: first ionization potential, specific heat capacity, and melting point/heat of vaporization taken together.
Table III. Correlations Between Input Properties

| Predicted Property | Input Property | M | R |
|---|---|---|---|
| Melting Point | heat of vaporization | 0.886 | 0.912 |
| Heat of Vaporization | melting point | 0.854 | 0.914 |
| Melting Point | specific heat capacity | 0.480 | 0.472 |
| Specific Heat Capacity | melting point | 0.275 | 0.541 |
| Melting Point | first ionization potential | 0.400 | 0.665 |
| First Ionization Potential | melting point | 0.111 | 0.326 |
| Heat of Vaporization | specific heat capacity | 0.0606 | 0.251 |
| Specific Heat Capacity | heat of vaporization | 0.592 | 0.665 |
| Heat of Vaporization | first ionization potential | 0.562 | 0.669 |
| First Ionization Potential | heat of vaporization | 0.697 | 0.724 |
| Specific Heat Capacity | first ionization potential | 0.00108 | 0.132 |
| First Ionization Potential | specific heat capacity | 0.463 | 0.638 |

A strong correlation exists only between melting point and heat of vaporization.
In the next stage, the relative importance of each property was explored by running the network with one input property omitted at a time; the results are shown in Table IV. The relative importance of each property for the prediction of polarizability follows the descending order: first ionization potential, melting point, heat of vaporization, and specific heat capacity. That is, the predictability of polarizability comes mostly from the first ionization potential, then smaller parts from melting point and heat of vaporization (melting point contributing more than heat of vaporization), and the smallest part from specific heat capacity.
Table IV. Comparison of Criteria for Predicting Polarizability Using Different Combinations of Three Parameters (abbreviations and units as in Table I)

| Conditions | M (Test) | R (Test) | MME (Test) | SDME (Test) | MPME (Test) | SDPME (Test) | M (Whole) | R (Whole) | MME (Whole) | SDME (Whole) | MPME (Whole) | SDPME (Whole) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Tm, ΔHV, and CP | 0.0674 | 0.314 | 9.80 | 7.17 | 84.6 | 86.2 | 0.0619 | 0.291 | 9.16 | 6.73 | 94.3 | 96.3 |
| ΔHV, CP, and EI | 0.913 | 0.932 | 3.28 | 3.18 | 25.7 | 29.6 | 0.902 | 0.947 | 2.59 | 2.82 | 23.6 | 31.4 |
| Tm, CP, and EI | 0.995 | 0.948 | 2.24 | 1.68 | 24.4 | 23.1 | 0.961 | 0.978 | 1.71 | 1.79 | 15.4 | 20.2 |
| Tm, ΔHV, and EI | 1.01 | 0.963 | 2.21 | 2.32 | 17.8 | 25.0 | 0.98 | 0.983 | 1.61 | 1.50 | 16.0 | 18.6 |
The strong correlation between polarizability and first ionization potential was discussed above. The correlations between polarizability and the other three properties are compared in Table V, from which it can be seen that these correlations are weak. The R² values (whole set) are 0.27, 0.54, and 0.17 for melting point, heat of vaporization, and specific heat capacity, respectively, yet it cannot be concluded that these three properties follow the descending order of importance heat of vaporization, melting point, specific heat capacity, because when the value of M is far from 1, less reliability attends the value of R. From the above analysis we can say that although these properties individually have a weak correlation with polarizability, they can improve the prediction when combined, and their strength depends on how they are combined. Here we begin to see how a much larger ANN analysis could be structured to accommodate large numbers of properties, with the intention of predicting properties not yet known and, if the scope included compounds rather than elements, even of suggesting compositions not yet made.
Table V. Comparison of Correlations Between Polarizability and Melting Point, Heat of Vaporization, and Specific Heat Capacity (abbreviations and units as in Table I)

| Conditions | M (Test) | R (Test) | MME (Test) | SDME (Test) | MPME (Test) | SDPME (Test) | M (Whole) | R (Whole) | MME (Whole) | SDME (Whole) | MPME (Whole) | SDPME (Whole) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Tm | 0.210 | 0.482 | 6.80 | 5.31 | 84.6 | 145 | 0.247 | 0.523 | 7.86 | 6.35 | 90.0 | 132 |
| ΔHV | 0.682 | 0.695 | 9.71 | 7.05 | 85.9 | 78.4 | 0.615 | 0.735 | 6.37 | 5.09 | 65.1 | 76.9 |
| CP | 0.158 | 0.279 | 7.62 | 5.36 | 158 | 192 | 0.155 | 0.415 | 8.00 | 7.31 | 93.1 | 118 |
In the next stage, pairs of parameters were selected to predict polarizability; the results are shown in Table VI. The combination of melting point and heat of vaporization has the weakest predictability because, as noted above, melting point and heat of vaporization are already correlated, so this effectively single input does not make a strong prediction. In the second and third rows, specific heat capacity is introduced and the performance improves little, remaining very weak. Rows 4 to 6 show the effect of introducing the first ionization potential, which has the single strongest correlation with polarizability; these rows show the ascending effect of specific heat capacity, heat of vaporization, and melting point on the prediction of polarizability when each is combined with first ionization potential.
Table VI. Comparison of Criteria for Predicting Polarizability Using Different Combinations of Two Parameters (abbreviations and units as in Table I)

| Conditions | M (Test) | R (Test) | MME (Test) | SDME (Test) | MPME (Test) | SDPME (Test) | M (Whole) | R (Whole) | MME (Whole) | SDME (Whole) | MPME (Whole) | SDPME (Whole) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Tm, ΔHV | 0.0253 | 0.206 | 9.66 | 9.78 | 80.6 | 69.7 | 0.0304 | 0.203 | 9.56 | 6.57 | 103 | 113 |
| Tm, CP | 0.0277 | 0.319 | 11.6 | 10.4 | 103 | 106 | 0.0511 | 0.277 | 9.28 | 6.67 | 94.7 | 91.3 |
| ΔHV, CP | 0.0919 | 0.273 | 8.04 | 4.89 | 102 | 125 | 0.0693 | 0.285 | 9.28 | 6.53 | 100 | 107 |
| CP, EI | 0.901 | 0.913 | 2.65 | 4.26 | 28.4 | 53.2 | 0.887 | 0.936 | 2.70 | 3.18 | 25.8 | 36.5 |
| ΔHV, EI | 0.909 | 0.874 | 3.24 | 3.98 | 32.7 | 55.1 | 0.874 | 0.927 | 3.04 | 3.26 | 28.9 | 42.1 |
| Tm, EI | 0.958 | 0.966 | 2.12 | 2.23 | 25.2 | 39.8 | 0.900 | 0.948 | 2.59 | 2.75 | 24.4 | 34.5 |
From the discussion above, it can be concluded that the first ionization potential plays the most important part, melting point and heat of vaporization the second most important, and specific heat capacity the least. Of melting point and heat of vaporization, melting point gives the higher performance. In all cases without the first ionization potential, the correlations with polarizability are very weak; however, when the other properties are combined with first ionization potential, the performance improves considerably compared with employing first ionization potential alone (from M = 0.925, R = 0.92 to M = 0.994, R = 0.995).

7.3 Exploring Possible Mathematical Equations that can Formulate Correlations

It would be useful to have mathematical functions that describe the correlations found by the neural network. Recently, Schmidt and Lipson[81] demonstrated this possibility by using genetic programming to extract Hamiltonians and other laws automatically from motion-tracking data captured from chaotic double pendula, so it should be possible to find mathematical equations for these correlations in the future. However, in the method proposed by Schmidt and Lipson,[81] it is still necessary to identify mathematical building blocks such as algebraic operators and analytical functions. It is therefore reasonable to speculate on such building blocks by visualizing the functional relationship captured by the neural network, in order to see the variation in polarizability in terms of the input properties. For the results shown in Figure 4, however, the neural network captures correlations between polarizability and four other properties, and the functional correlation lies within a five-dimensional space.
In order to visualize the functional relationship that the neural network captured, we analyzed the prediction of polarizability from two other properties, taking atomic weight and electronegativity as an example, which has M = 0.994 and R = 0.994, as shown in Figure 8. In this case it is possible to interpret the results visually by drawing a 3D diagram. The interpretation is shown in Figure 9, which is constructed as follows (a code sketch of step 3 appears after the list):
1. The atomic weight AW is placed on the x-axis, electronegativity χ on the y-axis, and polarizability α on the z-axis.

2. The property data for the 75 elements are plotted directly. The training set and testing set are shown as red and green dots, respectively. For these data, the atomic weight values lie in the range 0.0069 to 0.238 kg mol⁻¹, and the electronegativity values in the range 0.7 to 2.5.

3. The ANN, constructed from the training set (red dots), was fed with artificial atomic weights from 0.0069 to 0.238 as 50 equally spaced points, and electronegativities from 0.7 to 2.5, also as 50 equally spaced points, to predict the corresponding polarizability. These data were then used to draw the surface shown in Figure 9 as a semi-transparent net. It is important to realize that the net represents both atomic weights that exist and those that do not.
From Figure 9 it can be seen that the neural network has captured a functional surface from the training set and that nearly all the testing points lie on this surface. This means that the choice of training set covers the problem domain and that the neural network has captured the complex functional relationship.
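A minimal sketch of step 3 in the list above, assuming net is a trained two-input regressor with a scikit-learn-style .predict method (the grid limits follow the data ranges given in step 2):

```python
# Evaluate a trained (AW, chi) -> alpha regressor on a 50 x 50 grid spanning
# the data ranges in the text, producing the surface drawn in Figure 9.
import numpy as np

def prediction_surface(net, n: int = 50):
    """Return the grid and the predicted polarizability surface."""
    aw = np.linspace(0.0069, 0.238, n)   # atomic weight, kg mol^-1
    chi = np.linspace(0.7, 2.5, n)       # electronegativity, Pauling scale
    AW, CHI = np.meshgrid(aw, chi)
    grid = np.column_stack([AW.ravel(), CHI.ravel()])
    alpha = net.predict(grid).reshape(AW.shape)  # the semi-transparent net
    return AW, CHI, alpha
```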
Since the correlation between polarizability and atomic weight follows a periodic trend and the correlation between polarizability and electronegativity follows an inverse relation, it can be speculated that the polarizability surface observed in Figure 9 is the sum of a function of atomic weight, f(AW), and a function of electronegativity, g(χ): α = f(AW) + g(χ). The type of periodic function needed here corresponds to free vibration with damping, as shown in Figure 10(a), and the equation found is
$$ f\left( A_{\text{W}} \right) = 3.5 \times e^{5A_{\text{W}}} \times \sin \left( 80A_{\text{W}} + 30 \right) + 10 $$
(1)
and the inverse function can be simulated as a power function with exponent −3, such as the one shown in Figure 10(b),
$$ g\left( \chi \right) = 15 \times \chi^{-3}. $$
(2)
The sum of the functions, shown in Figure 11, is
$$ \alpha = f\left( A_{\text{W}} \right) + g\left( \chi \right) = 3.5 \times e^{5A_{\text{W}}} \times \sin \left( 80A_{\text{W}} + 30 \right) + 10 + 15 \times \chi^{-3} $$
(3)
which is very similar to Figure 12 (redrawn from Figure 9 from the same viewpoint as Figure 11). So it is reasonable to present the correlation using mathematical building blocks based on the discrete parts of the multiple correlations located by the ANN, as in Eq. [3].
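For comparison with the ANN surface, Eq. [3] translates directly into code; as in Eq. [1], AW is in kg mol⁻¹ and the sine argument is taken in radians:

```python
# Eq. [3] as a function, for checking the fitted building-block surface
# against the surface captured by the neural network (Figures 11 and 12).
import numpy as np

def alpha_eq3(aw, chi):
    """Periodic term in atomic weight plus inverse-cube term in chi."""
    return (3.5 * np.exp(5.0 * aw) * np.sin(80.0 * aw + 30.0) + 10.0
            + 15.0 * chi ** -3.0)
```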
It is arguable that this visualization method is only workable for one-to-one or two-to-one correlations; for higher dimensions, it may be difficult to visualize the equation in 3D pictures. However, it is possible to fix the values of some properties and show only two or three properties in a series of lower-dimensional pictures, equivalent to projections of the high-dimensional relationship onto two or three dimensions.

8 The Validity of Exploring Cross-Properties Relationship by Using ANNs

The prediction of properties from structures by computational methods is widespread, but the interactions between different levels of structure can make these problems very complex. In this work, we apply the principle that all the properties of a material are determined by, or are a common response to, composition and structure, and we use artificial neural networks to explore the correlations between different properties that should therefore exist. However, interactions between input properties still exist. In neural networks, the nature of the interactions is implicit in the values of the weights. In cases like the one studied here, there are more than just pairwise interactions and, as a result, it is difficult to visualize them by examining the weights. As suggested by Bhadeshia,[82] the better method is to use the network to make predictions and to see how these depend on various combinations of inputs. In this work, we used underlying physical principles to explain the different results and found that employing neural networks to explore cross-property relationships is both reasonable and feasible.

9 Summary

The correlations that exist between different properties were explored by employing artificial neural network methods, using the prediction of polarizability from combinations of other properties as an example. Through this example, we provide a general, systematic method for exploring correlations between different properties for different types of materials under specified conditions of phase, temperature, and pressure. The method applied in this work depends strongly on the availability of correct data. It is the restricted availability of such data for compounds that presently limits this novel methodological step.
The advent of e-science has meant that scientific communication can employ media not previously recognized and that data can be made accessible globally, so that many geographically dispersed groups can analyze raw data according to their own skills. The sharing of data in raw form, rather than through the highly processed medium of refereed journals, means that the construction of global shared databases is a reality, and it follows that multi-property data can be put up, shared, and processed in novel ways.
Once data sharing is in place, computational processes for mining the relationships are needed. Methods are required for identifying how values of properties p1, p2, p3 … can be used to estimate the likely magnitude of property pn. This will considerably narrow the sample space for experimental high-throughput searches for materials with a desired range of pn and reduce the computational cost of predicting such properties. As the global database grows, this cross-correlation could produce a new type of materials science that allows the scientific community to home in on new materials at a rate previously thought impossible. In the same way that high-throughput methods have compressed laboratory time, multi-property mapping might compress the time taken for new materials discovery. Linked in this way, the mapping would define the compositional space for combinatorial discovery.
The results show how the predictive power of some parameters depends on those with which they are combined and so we begin to see how a much larger ANN analysis could be structured to accommodate large numbers of properties both to predict properties not yet known and to point the direction of compositions not yet made.
However, a prerequisite for all such methods is that the shared databases be cleansed of unreliable data; this is the basis for extracting meaningful and useful information from them with certainty.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Literature
1. J.R. Voelkel: Johannes Kepler and the New Astronomy, Oxford University Press, Oxford, 1999, pp. 47–93.
2. R. Franz and G. Wiedemann: Ann. Phys. Chem., 1853, vol. 165, pp. 497–531.
3. P. Langley, H.A. Simon, G.L. Bradshaw, and J.M. Zytkow: Scientific Discovery: Computational Explorations of the Creative Processes, MIT Press, Cambridge, 1987 (second printing, 1992), pp. 3–62.
4. T. Hey, S. Tansley, and K. Tolle, eds.: The Fourth Paradigm: Data-Intensive Scientific Discovery, Microsoft Corporation (second printing, version 1.1), 2009, pp. xi–xxxi.
5. J.N. Cawse: in Experimental Design for Combinatorial and High Throughput Materials Development, J.N. Cawse, ed., Wiley, New York, 2003, pp. 1–26.
6. E.J. Amis, X.D. Xiang, and J.C. Zhao: MRS Bull., 2002, vol. 27, pp. 295–97.
7. Ch. Elsässer, C.A.J. Fisher, A. Howe, M. Parrinello, M. Scheffler, and H. Gao: in European White Book on Fundamental Research in Materials Science, Max-Planck-Institut für Metallforschung, Stuttgart, 2001, pp. 126–28.
8.
9. M.F. Ashby: Proc. R. Soc. Lond. Ser. A, 1998, vol. 454, pp. 1301–21.
10.
11.
12. L.H. Van Vlack: Elements of Materials Science and Engineering, 6th ed., Addison-Wesley, Reading, MA, 1989, pp. 51–52.
13. M.F. Ashby, H. Shercliff, and D. Cebon: Materials: Engineering, Science, Processing and Design, Butterworth-Heinemann, Oxford, 2007, pp. 22–24, 58–59.
14. M.F. Ashby: Acta Metall., 1989, vol. 37, pp. 1273–93.
15. M.F. Ashby: Materials Selection in Mechanical Design, 4th ed., Elsevier, Amsterdam, 2011, pp. 57–96.
16.
17. R.C. Reid and T.K. Sherwood: The Properties of Gases and Liquids: Their Estimation and Correlation, McGraw-Hill, New York, 1958, p. 2.
18. L.M. Egolf and P.C. Jurs: J. Chem. Inf. Comput. Sci., 1993, vol. 33, pp. 616–25.
19. L. Michon and B. Hanquet: Energy Fuels, 1997, vol. 11, pp. 1188–93.
20. J. Homer, S.C. Generalis, and J.H. Robson: Phys. Chem. Chem. Phys., 1999, vol. 1, pp. 4075–81.
21. R.B. Boozarjomehry, F. Abdolahi, and M.A. Moosavian: Fluid Phase Equilib., 2005, vol. 231, pp. 188–96.
22. A.A. Strechan, G.J. Kabo, and Y.U. Paulechka: Fluid Phase Equilib., 2006, vol. 250, pp. 125–30.
23. A.H. Mohammadi and D. Richon: Ind. Eng. Chem. Res., 2007, vol. 46, pp. 2665–71.
24. E.Ö. Karabulut and M. Koyuncu: Fluid Phase Equilib., 2007, vol. 257, pp. 6–17.
25. D.S. Giordani, P.C. Oliveira, A. Guimarães, and R.C.O. Guimarães: Polym. Eng. Sci., 2009, vol. 49, pp. 499–505.
26. J.G. Stark and H.G. Wallace: Chemistry Data Book, 2nd ed., John Murray, London, 1982 (reprinted 1984), pp. 8–11, 24, 27–29, 50–51.
27. J.G. Speight, ed.: Lange's Handbook of Chemistry, 16th ed. (70th anniversary ed.), McGraw-Hill, New York, 2005, pp. 1.18–1.62, 1.124–1.127, 1.280–1.298.
28. J. Emsley: The Elements, 3rd ed., Oxford University Press, Oxford, 1998.
29. G.W.C. Kaye and T.H. Laby: Tables of Physical and Chemical Constants, 16th ed., Longman, Harlow, 1995, pp. 212–14, 338–42.
30. D.R. Lide, ed.: CRC Handbook of Chemistry and Physics, 81st ed., CRC Press, Boca Raton, 2000, pp. 4-124, 6-105–6-106.
31. Y.M. Zhang, J.R.G. Evans, and S.F. Yang: Philos. Mag., 2010, vol. 90, pp. 4453–74.
32. Y.M. Zhang, J.R.G. Evans, and S.F. Yang: J. Chem. Eng. Data, 2011, vol. 56, pp. 328–37.
33. D. Bassetti, Y. Brechet, and M.F. Ashby: Proc. R. Soc. Lond. Ser. A, 1998, vol. 454, pp. 1323–36.
34. W. Sha: private communication via email, 27 May 2008.
35. R. Hecht-Nielsen: Neurocomputing, Addison-Wesley, Reading, MA, 1990.
36. I.A. Basheer: Comput. Aided Civil Infrastruct. Eng., 2000, vol. 15, pp. 440–58.
37. T. Masters: Practical Neural Network Recipes in C++, Academic Press, Boston, MA, 1994, pp. 174–76.
38. MathWorks: Neural Network Toolbox 6 User's Guide, 2007.
39. S. Malinov and W. Sha: Comput. Mater. Sci., 2003, vol. 28, pp. 179–98.
40. Y.M. Zhang, S. Yang, and J.R.G. Evans: Acta Mater., 2008, vol. 56, pp. 1094–105.
41. K.D. Bonin and V.V. Kresin: Electric-Dipole Polarizabilities of Atoms, Molecules and Clusters, World Scientific, Singapore, 1997, pp. vii–viii.
42. G. Maroulis, ed.: Atoms, Molecules and Clusters in Electric Fields: Theoretical Approaches to the Calculation of Electric Polarizability, Imperial College Press, London, 2006, pp. v–viii.
43. C. Kittel: Introduction to Solid State Physics, 8th ed., Wiley, 2005, pp. 463–66.
44. P. Debye: Polar Molecules, Chemical Catalog Company, New York, 1929, pp. 15–35.
45. C.H.D. Clark: Proc. Leeds Philos. Lit. Soc. Sci. Sect., 1934, vol. 2, pp. 502–12.
46. K.G. Denbigh: Trans. Faraday Soc., 1940, vol. 36, pp. 936–48.
47.
48. L. Pauling: The Nature of the Chemical Bond and the Structure of Molecules and Crystals: An Introduction to Modern Structural Chemistry, Cornell University Press, 1960, pp. 505–62.
49. T.K. Ghanty and S.K. Ghosh: J. Phys. Chem., 1996, vol. 100, pp. 17429–33.
50. R.T. Yang: Adsorbents: Fundamentals and Applications, Wiley, Hoboken, NJ, 2003, p. 12.
51. I.K. Dmitrieva and G.I. Plindov: Phys. Scr., 1983, vol. 27, pp. 402–06.
52. B. Fricke: J. Chem. Phys., 1986, vol. 84, pp. 862–66.
53. P. Schwerdtfeger: in Atoms, Molecules and Clusters in Electric Fields, G. Maroulis, ed., Imperial College Press, London, 2006, pp. 1–32.
54. L. Komorowski: Chem. Phys., 1987, vol. 114, pp. 55–71.
55. K.A. van Genechten, W.J. Mortier, and P. Geerlings: J. Chem. Phys., 1987, vol. 86, pp. 5063–71.
56.
57. A.I. Gorbunov and D.S. Kaganyuk: Russ. J. Phys. Chem., 1986, vol. 60, pp. 1406–07.
58. A.I. Gorbunov and G.G. Filippov: Russ. J. Phys. Chem., 1988, vol. 62, pp. 974–76.
59. H.H. Lester: Philos. Mag., 1916, vol. 31, pp. 197–221.
60. H.B. Michaelson: J. Appl. Phys., 1977, vol. 48, pp. 4729–33.
61. J.C. Rivière: in Solid State Surface Science, M. Green, ed., Marcel Dekker, New York, 1969, pp. 179–289.
62. J.H. Morecroft: Electron Tubes and Their Applications, Wiley, New York, 1936, p. 39.
63. O. Klein and E. Lange: Z. Elektrochem., 1938, vol. 44, pp. 542–62.
64.
65. O. Scarpa: Atti (Rendiconti) della Reale Accademia Nazionale dei Lincei, Classe di scienze fisiche, matematiche e naturali, 1941, vol. 2, pp. 1062–69.
66. H.B. Michaelson: J. Appl. Phys., 1950, vol. 21, pp. 536–40.
67. F. Rother and H. Bomke: Z. Phys. A, 1933, vol. 86, pp. 231–40.
68. C.G. Bedreag: Comptes Rendus, 1946, vol. 223, p. 354.
69.
70. R. Smoluchowski: Phys. Rev., 1941, vol. 60, pp. 661–74.
71.
72.
73.
74.
75.
76. J.J. Markham and P.H. Miller, Jr.: Phys. Rev., 1949, vol. 75, pp. 959–67.
77. E.U. Condon: Phys. Rev., 1938, vol. 54, pp. 1089–91.
78. L. Apker, E. Taft, and J. Dickey: Phys. Rev., 1948, vol. 74, pp. 1462–74.
79. C. Raisin and R. Pinchaux: Solid State Commun., 1975, vol. 16, pp. 941–44.
80. R.H. Williams and J.I. Polanco: J. Phys. C: Solid State Phys., 1974, vol. 7, pp. 2745–59.
81. M. Schmidt and H. Lipson: Science, 2009, vol. 324, pp. 81–85.
82. H.K.D.H. Bhadeshia: ISIJ Int., 1999, vol. 39, pp. 966–79.
Metadata

Title: Exploring Correlations Between Properties Using Artificial Neural Networks
Authors: Yiming Zhang, Julian R. G. Evans, Shoufeng Yang
Publication date: 30-10-2019
Publisher: Springer US
Published in: Metallurgical and Materials Transactions A, Issue 1/2020
Print ISSN: 1073-5623; Electronic ISSN: 1543-1940
DOI: https://doi.org/10.1007/s11661-019-05502-8
