Our model of a social network is inspired by question–answer networks [38], in which knowledge is shared by expert agents (with respect to a certain topic) answering questions received from less skilled ones. We assume a set of agents and a set of topics to be given. Each agent has a certain level of interest and skill in each topic, and both parameters can change when interacting with other agents.
Together, Friend states define the time-varying adjacency matrix
\(\fancyscript{A}(k)\) of the directed and weighted graph
\(\fancyscript{G}(k) = \{\fancyscript{V},\fancyscript{E}^k\}\) describing the network at time step
k, where
\(\fancyscript{V} := N\) is the set of nodes and
\(\fancyscript{E}^k = \bigcup _{i \in N}FS^k_{i}\) is the set of weighted edges recorded by the nodes’ Friend states. Each element of the adjacency matrix
\(\fancyscript{A}(k)\) is computed as follows:
$$\begin{aligned} a_{ij}(k)=\left\{ \begin{array}{ll} x^k_{ij} &{} \text {if } (i,j) \in \fancyscript{E}^{k} \\ 0 &{} \text {otherwise} \end{array} \right. \end{aligned}$$
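As a minimal sketch (function and variable names are ours, not the paper's), assuming each node's Friend state is stored as a dictionary of outgoing edge weights \(x^k_{ij}\), the adjacency matrix can be assembled as:

```python
def adjacency_matrix(friend_states, n):
    """Build the weighted adjacency matrix A(k) from Friend states.

    friend_states: dict mapping node i -> {j: x_ij}, where x_ij is the
    edge weight recorded in i's Friend state (number of successful
    communications from i to j).
    n: total number of agents (nodes).
    """
    A = [[0.0] * n for _ in range(n)]
    for i, neighbours in friend_states.items():
        for j, weight in neighbours.items():
            A[i][j] = weight  # a_ij = x_ij if (i, j) in E^k, else 0
    return A

# Illustrative example: agent 0 has communicated twice with agent 1
# and once with agent 2; agent 1 has communicated twice with agent 0.
fs = {0: {1: 2, 2: 1}, 1: {0: 2}}
A = adjacency_matrix(fs, 3)
```

Note that the matrix is in general asymmetric, since the graph is directed.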
Network construction
A network is dynamically created according to the following steps:
1.
Setup At setup (time step 0), for each agent i, the Personal state
\({\text {PS}}^0_{i}\) has a number of topics \(T^0_i\) and corresponding qualities \(s^0_{it}\) selected randomly, while the level of interest \(l^0_{it}\) is the same for all topics \(T^0_i\). The Friend state
\({\text {FS}}^0_{i}\), instead, is empty, because we do not assume any preset network structure.
2.
Topic selection At each time step, an agent \(i'\) is randomly selected, and for that agent a certain topic (\(t^* \in T^k_{i'}\)) is selected from its Personal state. The choice of the topic is a weighted random selection with the values of the associated interests \(l^k_{i't}\) as weights; this way, topics with higher interest are more likely to be selected.
3.
Peer selection Among agent \(i'\)'s friends and friends-of-friends holding topic \(t^*\), the agent \(i''\) with the highest skill in topic \(t^*\) is selected, i.e., \(i'' = \arg \max _{j \in {\text {FS}}^k_{i'} \cup {\text {FS}}^k_{{\text {FS}}^k_{i'}}} s^k_{jt^*}\).
4.
Successful communication The communication between the requester agent \(i'\) and the selected peer \(i''\) succeeds if the peer is more skilled than the requester in topic \(t^*\). The condition for establishing a communication involves the skill parameter: \(s^k_{i''t^*} > s^k_{i't^*}\).
5.
Failed communication, randomly selected peer Otherwise, if either Step 3 or Step 4 fails (i.e., no peer holds topic \(t^*\), or the selected peer is less skilled than the requester), an agent \(i'''\) is selected at random.
6.
Communication with randomly selected peer If the randomly selected agent holds the selected topic and has a skill greater than that of the requester (i.e., \(t^* \in T^k_{i'''}\) and \(s^k_{i'''t^*} > s^k_{i't^*}\)), the communication succeeds.
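The steps above (Steps 2–6, omitting the state updates that follow a successful communication) can be sketched as follows; the data layout and all names are our illustrative choices, not the paper's:

```python
import random

def simulation_step(agents, friends, rng=random):
    """One interaction attempt. agents: id -> {"skill": {topic: s},
    "interest": {topic: l}}; friends: id -> set of friend ids.
    Returns (requester, peer, topic) on success, else None."""
    # Step 2: pick a requester at random, then a topic weighted by interest.
    i = rng.choice(list(agents))
    topics = list(agents[i]["interest"])
    weights = [agents[i]["interest"][t] for t in topics]
    t = rng.choices(topics, weights=weights, k=1)[0]

    # Step 3: candidate peers are friends and friends-of-friends holding t.
    pool = set(friends[i]) | {g for f in friends[i] for g in friends[f]}
    pool.discard(i)
    holders = [j for j in pool if t in agents[j]["skill"]]

    # Step 4: select the most skilled holder; succeed if more skilled than i.
    if holders:
        j = max(holders, key=lambda j: agents[j]["skill"][t])
        if agents[j]["skill"][t] > agents[i]["skill"].get(t, 0.0):
            return (i, j, t)

    # Steps 5-6: fall back to a randomly selected peer.
    j = rng.choice([a for a in agents if a != i])
    if t in agents[j]["skill"] and agents[j]["skill"][t] > agents[i]["skill"].get(t, 0.0):
        return (i, j, t)
    return None

# Deterministic demo with a stub RNG that always picks the first option.
class _FirstPick:
    def choice(self, seq):
        return seq[0]
    def choices(self, seq, weights=None, k=1):
        return [seq[0]]

agents = {
    0: {"skill": {"t": 1.0}, "interest": {"t": 1.0}},
    1: {"skill": {"t": 2.0}, "interest": {"t": 1.0}},
}
friends = {0: {1}, 1: {0}}
result = simulation_step(agents, friends, rng=_FirstPick())
```

In the demo, agent 0 asks its only friend, agent 1, about topic `t`; since agent 1 is more skilled, the communication succeeds at Step 4.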
For the sake of clarity, it is worth noting that the inclusion of friend-of-friend agents in Step 3 (Peer selection) is key for network transitivity and the closure of triangles, two peculiar characteristics of social networks.
The network formation mechanism at start-up, instead, with agents having no friends, is purely random. In particular, considering the six steps just described, a communication attempt always fails at start-up because there is no peer to select; Step 5 (Failed communication, randomly selected peer) and Step 6 (Communication with randomly selected peer) are then executed, and the initial ties are created by those interactions with randomly selected peers that succeed.
Skill and interest update
After a successful interaction, the state of the agent that started the communication is updated. For simplicity, no change in the respondent's state is produced: we assume that knowledge, being an intangible good, does not decrease when shared, and that there is no cost of processing and transmission.
Skill and interest are always non-negative quantities. For the skill parameter associated with each topic an agent owns, we assume that it simply increases in chunks calculated as a fraction of the knowledge difference between the two interacting agents. This implies that, in subsequent interactions between two agents, the less skilled one accumulates knowledge in chunks of diminishing size. In this simple model of knowledge transmission, skill is a monotonic positive function with diminishing marginal increments. For completeness, although not specifically relevant for this work, the model also considers the effect of trust as an enabling factor for sharing knowledge: the more trust between the two agents, the better the diffusion of knowledge. Here the assumption is that trust between two interacting agents is built through successful communication. In this case, when agents interact for the first time, the chunk of knowledge transferred is reduced by a discount factor representing the absence of trust. This discount factor progressively vanishes with subsequent communications. This is a simplified form of trust (and distrust) modeling, but motivation can be found in the literature about collective behavior [39, 40], which refers both to the prevalence of egocentrism in assimilating new information and to trust dynamics.
The function modeling how agent \(i'\) improves the skill associated with a certain topic \(t^*\) by interacting with agent \(i''\) is as follows:
$$\begin{aligned} \delta s_{i',t^*} =\frac{s_{i'',t^*} - s_{i',t^*}}{\gamma + \rho e^{-\frac{\nu }{\theta }}}, \end{aligned}$$
(1)
where
-
\(s_{i'',t^*} - s_{i',t^*}\) is the difference in skill level associated with topic \(t^*\) between agents \(i''\) and \(i'\);
-
\(\gamma \ge 1\) is the control factor for the size of the chunk of knowledge \(\delta s_{i',t^*}\) that agent \(i'\) could learn from agent \(i''\);
-
\(\rho e^{-\frac{\nu }{\theta }}\) is the discount factor representing the absence of trust;
-
\(\nu = x^k_{i'i''}\) is the number of successful communications between agents \(i'\) and \(i''\) up to time step k;
-
\(-\frac{\nu }{\theta }\) controls the slope of the trust function between the two agents with \(\theta\) being the control factor; and
-
\(\rho\) is the control factor for the actual value of the trust component.
In short, the skill function says that an agent improves its skill with respect to a certain topic by learning from someone more skilled. The improvement is always a fraction of the difference in skills between the two, and this transfer of knowledge can be influenced by trust relations. From this derives the monotonicity with decreasing marginal gains.
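Function (1) can be sketched in Python as follows; the default values for \(\gamma\), \(\rho\), and \(\theta\) are our illustrative choices, not values from the paper:

```python
import math

def skill_gain(s_peer, s_req, nu, gamma=2.0, rho=1.0, theta=5.0):
    """Chunk of skill gained by the requester, per function (1).

    s_peer, s_req: skill of peer i'' and requester i' on topic t*.
    nu: number of past successful communications between the pair (x_i'i'').
    gamma >= 1 controls the chunk size; rho * exp(-nu / theta) is the
    trust discount, which vanishes as nu grows.
    """
    return (s_peer - s_req) / (gamma + rho * math.exp(-nu / theta))
```

With these illustrative values, a first contact (\(\nu = 0\)) divides the skill gap by \(\gamma + \rho\), while a well-established tie (large \(\nu\)) divides it by \(\gamma\) alone, so trusted pairs transfer larger chunks of the same gap.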
The dynamics we have assumed for the interest associated with the topic for which the interaction takes place differs slightly from that of skill. The difference is the assumption of a bounded total interest an agent could have. In other words, one cannot extend indefinitely the number of topics one is interested in, or the amount of interest in a certain topic, for the simple reason that time and effort are finite quantities. Beyond common sense, motivation for this assumption can be found in cognitive science studies, which have shown the tendency of people to shift their attention and interest rather than behave incrementally [40], and in associating the interest in a topic with the time spent dealing with that topic (studying, experimenting, etc.).
With respect to the model and how the agents' state parameters are managed, this assumption means that either interests in topics are simply assumed to be fixed quantities (a grossly unrealistic assumption) or, when the interest associated with a certain topic increases, the interests associated with all other topics should decrease. We have modeled this second case by letting interests dynamically change, and for simplicity we assume that the increase of interest in one topic is compensated by uniformly decreasing all other interests. A full discussion of these model assumptions has been presented in [36].
The function modeling the interest update, for agent \(i'\) and associated with topic \(t^*,\) depends on the gain in skill defined by function (1), meaning that the more one learns, the more interested one becomes in learning about that topic:
$$\begin{aligned} \delta l_{i',t^*} = \alpha \left(1 - e^{-\frac{\delta s_{i',t^*}}{\beta }}\right) \end{aligned}$$
(2)
with \(\alpha > 1\) and \(\beta > 1\) being the two parameters that control, respectively, the scale and the slope of the interest function, which, again, presents diminishing marginal increments. A specific analysis of self-organization strategies based on manipulating these two parameters can be found in previous work [36].
Finally, as mentioned before, the increase of a specific interest is compensated by an equal total decrease spread uniformly among the other interests.
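Function (2) together with the uniform redistribution can be sketched as follows; the default values for \(\alpha\) and \(\beta\) and all names are our illustrative choices:

```python
import math

def update_interests(interests, topic, ds, alpha=2.0, beta=2.0):
    """Increase interest in `topic` per function (2), compensating with
    a uniform decrease of all other interests so that the total interest
    is conserved (bounded total interest).

    interests: dict mapping topic -> current interest level.
    ds: skill gain delta-s from function (1).
    """
    dl = alpha * (1.0 - math.exp(-ds / beta))
    others = [t for t in interests if t != topic]
    updated = dict(interests)
    updated[topic] += dl
    if others:
        share = dl / len(others)
        for t in others:
            # Clamp at zero since interest is non-negative; in that edge
            # case exact conservation is only approximate.
            updated[t] = max(0.0, updated[t] - share)
    return updated

# Illustrative example: three equally interesting topics, a learns.
interests = {"a": 1.0, "b": 1.0, "c": 1.0}
new = update_interests(interests, "a", ds=2.0)
```

In the example, interest in `a` grows by \(\delta l = \alpha (1 - e^{-\delta s / \beta})\), while `b` and `c` each lose half of that amount, leaving the total unchanged.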
Together, these skill and interest functions produce rich dynamics with regard to knowledge diffusion on the network. In particular, agents acquire different roles with respect to communication, with a core of tightly connected agents and hubs, and a large periphery. The network tends to form a giant component, but specific setups may break it into several independent ones. In general, the resulting diffusion of knowledge is typically uneven and locally skewed toward just a few topics. These are the basic characteristics often debated for real case studies, with populations showing a strong tendency toward polarization of interests, echo chambers, and forms of cultural isolation, even in the presence of a social network with a high degree of connectivity, as the online space now provides.
These are also the key observations that have driven this work, which is focused on studying how, realistically, a social network with the characteristics just presented could be controlled in order to govern the process of knowledge diffusion, if the goal is to achieve a less skewed distribution of skills, a larger pool of interests shared by agents, and a more effective knowledge diffusion process.