This approximation yields a power law only for nodes with weights
\(w \le \frac{\theta }{w_0}\). However, the expected number
\(\mathrm {E}m\) of nodes whose weights violate this inequality is vanishingly small:
$$\begin{aligned} \mathrm {E} m = n P\left(w > \frac{\theta }{w_0}\right) = n\left( \frac{w_0^2}{\theta }\right) ^a = o(1). \end{aligned}$$
(10)
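The probability in (10) is just the Pareto tail function \(P(w > x) = \left( \frac{w_0}{x}\right) ^a\) evaluated at \(x = \theta /w_0\) (a one-line verification):
$$\begin{aligned} P\left( w > \frac{\theta }{w_0}\right) = \left( \frac{w_0}{\theta /w_0}\right) ^a = \left( \frac{w_0^2}{\theta }\right) ^a, \end{aligned}$$
so \(\mathrm {E}m = o(1)\) as soon as \(n w_0^{2a} = o(\theta ^a)\), which holds under the condition \(\frac{n}{\theta ^a} = o(1)\) invoked later in the proof (here \(w_0\) is a constant).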
As shown in Lemma 1, the probability that the node
\(\vec {v_i} = w_i \vec {x_i}\) with weight
\(w_i = w \le \frac{\theta }{w_0}\) has an edge to another random node is
$$\begin{aligned} P_{e}(w) = \frac{w_{0}^a}{2 \theta ^a (a + 1)} w^a. \end{aligned}$$
Let
\(k_i(w)\) be the degree of the node
\(v_i\). Then
$$\begin{aligned} k_i(w) = \sum _{j \ne i} I[v_i \text { is connected to } v_j], \end{aligned}$$
where
\(I\) denotes the indicator function.
As all nodes are independent, we get
$$\begin{aligned} E k_i(w) = (n-1) P_{e}(w). \end{aligned}$$
In the mean-field approximation, we assume that
\(k_i(w)\) is concentrated near its expectation, so we can substitute
\({(n-1) P_{e}(w)}\) for it in the following expression for the degree distribution
\(P(k) = f(w) \frac{\mathrm {d}w}{\mathrm {d}k},\) where
\(f(w)\) is the density of the weights. Thus,
$$\begin{aligned} P(k) = \frac{2 a w_0^a\theta ^a(a+1)}{(n-1) w^{2a}} \propto k^{-2} \end{aligned}$$
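The exponent can be checked by tracking only the powers of \(w\) and \(k\) (a sketch with all constants suppressed):
$$\begin{aligned} k \propto w^a \;\Rightarrow \; w \propto k^{1/a}, \qquad f(w) \propto w^{-(a+1)} \propto k^{-\frac{a+1}{a}}, \qquad \frac{\mathrm {d}w}{\mathrm {d}k} \propto k^{\frac{1}{a}-1}, \end{aligned}$$
so \(P(k) \propto k^{-\frac{a+1}{a} + \frac{1}{a} - 1} = k^{-2}\) regardless of the shape parameter \(a\).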
\(\square \)
Proof Conditioned on its weight, the degree
\(k_i\) of the node
\(v_i\) is a binomial random variable. Using the probability
\(P_{e}(w)\) that the node
\(v_i\) with weight
\({w_i = w}\) has an edge to another random node, we obtain the probability that
\(k_i\) equals
\(k\):
$$\begin{aligned} P(k_i = k | w_i = w) = \binom{n-1}{k} \left( P_{e}(w)\right) ^k (1-P_{e}(w))^{n-k-1}. \end{aligned}$$
To obtain the unconditional probability, we integrate this expression against the weight density \(\frac{aw_0^a}{w^{a+1}}\):
$$\begin{aligned} P(k_i = k) = \binom{n-1}{k} \int \limits _{w_0}^{\infty } \left( P_{e}(w)\right) ^k (1-P_{e}(w))^{n-k-1} \frac{aw_0^a}{w^{a+1}} \mathrm {d}w. \end{aligned}$$
Since
\(P_{e}(w)\) is defined piecewise, the integral breaks up into two parts:
$$\begin{aligned} I_1 = \int\limits _{w_0}^{\theta /w_0} \left( P_{e}(w)\right) ^k (1-P_{e}(w))^{n-k-1} \frac{aw_0^a}{w^{a+1}} \mathrm {d}w,\\ I_2 = \int\limits _{\theta /w_0}^{\infty } \left( P_{e}(w)\right) ^k (1 - P_{e}(w))^{n-k-1} \frac{aw_0^a}{w^{a+1}}\mathrm {d}w. \end{aligned}$$
Thus,
$$\begin{aligned} P(k_i = k) = \binom{n-1}{k} (I_1 + I_2). \end{aligned}$$
To estimate
\(I_1\), we use the formula
\(P_{e}(w) = \frac{1}{2}\frac{w_{0}^a}{\theta ^a (a + 1)} w^a\) from Lemma
1. After substituting
\(t = P_{e}(w)\) as the integration variable and expressing the result through the incomplete beta function, we get
$$\begin{aligned} I_1 = \frac{w_0^{2a}}{2\theta ^a a(a+1)} \cdot \left( B \left( \frac{1}{2(a+1)}; k-1, n-k \right) - B \left( \frac{w_0^{2a}}{2\theta ^a(a+1)}; k-1, n-k \right) \right) . \end{aligned}$$
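The arguments of the two incomplete beta functions are the images of the integration limits \(w_0\) and \(\theta /w_0\) under the substitution \(t = P_{e}(w)\) (a direct check):
$$\begin{aligned} t(w_0) = \frac{w_0^{2a}}{2\theta ^a(a+1)}, \qquad t\left( \frac{\theta }{w_0}\right) = \frac{w_0^a}{2\theta ^a(a+1)} \left( \frac{\theta }{w_0}\right) ^a = \frac{1}{2(a+1)}. \end{aligned}$$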
For
\(I_2\) we can derive an upper bound. Note that for
\(w \ge \theta /w_0\) we have
$$\begin{aligned} P_{e}(w) = \frac{1}{2}\left( 1 - \frac{a\theta }{w(a+1)w_0}\right) < \frac{1}{2} \end{aligned}$$
and
$$\begin{aligned} 1 - P_{e}(w) \le 1 - P_{e}(\theta / w_0) = \frac{1}{2}\left( 1 + \frac{a}{a+1} \right) = \varepsilon _0 < 1. \end{aligned}$$
Therefore, we obtain the following upper bound:
$$\begin{aligned} I_2 = O\left( \frac{ (\varepsilon _0)^{n-k-1} }{2^k} \int \limits _{\theta /w_0}^{\infty } \frac{aw_0^a}{w^{a+1}}\mathrm {d}w \right) = O\left( \frac{ (\varepsilon _0)^{n-k-1}}{\theta ^a 2^k} \right) . \end{aligned}$$
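Each factor in this bound comes from one piece of the integrand: \((P_{e}(w))^k \le 2^{-k}\) since \(P_{e}(w) < \frac{1}{2}\), and \((1-P_{e}(w))^{n-k-1} \le \varepsilon _0^{n-k-1}\). The remaining tail integral of the weight density is elementary (recall that \(w_0\) is a constant):
$$\begin{aligned} \int \limits _{\theta /w_0}^{\infty } \frac{aw_0^a}{w^{a+1}}\mathrm {d}w = w_0^a \left( \frac{\theta }{w_0}\right) ^{-a} = \frac{w_0^{2a}}{\theta ^a} = O\left( \frac{1}{\theta ^a}\right) . \end{aligned}$$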
We now combine the estimates for
\(I_1\) and
\(I_2\) with the following estimates for the incomplete beta function:
$$\begin{aligned} B(x; a, b) = O\Big (\frac{x^a}{a}\Big ),\\ B(x; a, b) = B(a, b) + O\Big (\frac{(1-x)^b}{b}\Big ),\\ \frac{1}{B(k-1, n-k)} = \frac{\Gamma (n-1)}{\Gamma (k-1)\Gamma (n-k)} = O\Big (\frac{n^{k-1}}{\Gamma (k-1)}\Big ). \end{aligned}$$
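The first two estimates follow by bounding one factor of the integrand by 1 (a sketch, valid for parameters \(\ge 1\), which holds here for \(2 \le k \le n-1\)):
$$\begin{aligned} B(x; a, b) = \int \limits _0^x t^{a-1}(1-t)^{b-1} \mathrm {d}t \le \int \limits _0^x t^{a-1} \mathrm {d}t = \frac{x^a}{a}, \\ B(a, b) - B(x; a, b) = \int \limits _x^1 t^{a-1}(1-t)^{b-1} \mathrm {d}t \le \int \limits _x^1 (1-t)^{b-1} \mathrm {d}t = \frac{(1-x)^b}{b}. \end{aligned}$$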
This gives us
$$\begin{aligned} P(k_i = k) &= \binom{n-1}{k} \frac{w_0^{2a}}{2 \theta ^a a(a+1)} \left[ B(k-1, n-k) + O\left( \frac{\left( 1-\frac{1}{2(a+1)}\right) ^{n-k}}{n - k}\right) \right. \\ & \qquad \qquad \qquad \qquad \qquad \left. - O\left( \frac{\left( \frac{w_0^{2a}}{2\theta ^a(a+1)}\right) ^{k-1}}{k-1} \right) + O\left( \frac{ (\varepsilon _0)^{n-k-1}}{\theta ^a 2^k} \right) \right] \\ &= \binom{n-1}{k} \frac{w_0^{2a}}{2 \theta ^a a(a+1)} B(k-1, n-k) \\ & \quad \left[ 1+ O\left( \frac{\left( \varepsilon _1\right) ^{n-k} n^{k-1}}{(n - k)\Gamma (k-1)}\right) + \ O\left( \frac{\left( \frac{w_0^{2a}}{2\theta ^a(a+1)}\right) ^{k-1}n^{k-1}}{(k-1)\Gamma (k-1)} \right) \right. \\& \qquad \left. + \ O\left( \frac{ (\varepsilon _0)^{n-k-1}}{\theta ^a 2^k} \frac{n^{k-1}}{\Gamma (k-1)} \right) \right] . \end{aligned}$$
Let us introduce the following notation:
$$\begin{aligned} A &= O\left( \frac{\left( \varepsilon _1\right) ^{n-k} n^{k-1}}{(n - k)\Gamma (k-1)}\right) , \text { where } \varepsilon _1 = 1 - \frac{1}{2(a+1)}, \\ B &= O\left( \frac{\left( \frac{w_0^{2a}}{2\theta ^a(a+1)}\right) ^{k-1}n^{k-1}}{(k-1)\Gamma (k-1)} \right) ,\\ C &= O\left( \frac{ (\varepsilon _0)^{n-k-1}}{\theta ^a 2^k} \frac{n^{k-1}}{\Gamma (k-1)} \right) , \text { where } \varepsilon _0 =\frac{1}{2}\left( 1 + \frac{a}{a+1} \right) . \end{aligned}$$
Using the condition
\(\frac{n}{\theta ^a} = o(1)\) (recall that \(\theta = \theta (n)\)), for
\(k(n) < C_0n\) we get
$$\begin{aligned} B = O\left( \frac{\left( \frac{w_0^{2a}}{2(a+1)}\right) ^{k-1}(\frac{n}{\theta ^a})^{k-1}}{\Gamma (k)} \right) = o(1). \end{aligned}$$
If
\(k(n)\) is a bounded function, then since
\(\varepsilon _0 < 1\) and
\(\varepsilon _1 < 1\) we have
$$\begin{aligned} A = O\left( \left( \varepsilon _1\right) ^\frac{n-k}{k-1} n^{k-1}\right) = o(1),\\ C = O\left( \left( \varepsilon _0\right) ^{n-k} n^{k-1} \right) = o(1). \end{aligned}$$
If
\(k(n) \rightarrow \infty \) as
\(n \rightarrow \infty \), using Stirling’s approximation
\(\Gamma (k-1) \sim \sqrt{2 \pi (k-2)} \left( \frac{k-2}{e}\right) ^{k-2}\) we get
$$\begin{aligned} A &= O \left( \frac{k - 2}{(n-k) \sqrt{k-2}} \left( (\varepsilon _1)^\frac{n-k}{k-1} \frac{n}{k - 2}\right) ^{k-1}\right) ,\\ C &= O \left( \frac{\sqrt{k-2}}{\theta ^a} \left( (\varepsilon _0)^{\frac{n-k-1}{k-1}} \frac{n}{k-2} \right) ^{k-1} \right) . \end{aligned}$$
Since
\( \varepsilon ^x x \rightarrow 0\) for
\(\varepsilon < 1\) as
\(x \rightarrow \infty \), there exist constants
\(C_0\) and
\(N_0\) such that for
\(n > N_0\) and
\(k(n) < C_0n\) we have
\((\varepsilon _1)^\frac{n-k}{k-1} \frac{n}{k - 2} < 1\) and
\((\varepsilon _0)^{\frac{n-k-1}{k-1}} \frac{n}{k-2} < 1\). This implies that
\(A=o(1)\) and
\(C=o(1)\).
Note that, regardless of the shape parameter of the Pareto distribution of weights, we always generate networks whose degree distribution follows a power law with exponent equal to 2. In the next section, we modify our model to change the exponent of the degree distribution and some other properties of the resulting networks.