The objective of this section is twofold: first, to offer a formal, detailed description of the model, so as to be as transparent as possible about our assumptions; and second, to discuss these assumptions in light of our main results. Before turning to the description, it is useful to group the parameters that govern our simulations into two categories: fundamental and auxiliary. Fundamental parameters are those that are critical for our arguments, for example the parameters that describe exploitation and worker mobility. Auxiliary parameters are those we believe we should be explicit about, both to keep the model more general, which is useful for subsequent research, and because doing so facilitates presentation. This categorization notwithstanding, some auxiliary parameters are in fact critical for our results; discussing them in the main text, however, would unnecessarily complicate the exposition. Such discussion is therefore one of the purposes of this section, and it helps further delineate our scope conditions.
The Organization, its actors, and the performance function
In our agent-based model there are $M$ projects, indexed by $m$; $N$ workers, indexed by $i$, each with a default assignment; and one manager who controls how financial capital is allocated. All projects are symmetric, in the sense that a similar number of workers is assigned to each project initially and the statistical properties of project quality are the same for all $m$. As described in the main text, most of our analyses refer to an organization with $N=25$ and $M=5$.
Total performance: The total performance of our organization at each point in time, denoted $Q_t$, is the sum of project-level performances, $Q_t = \sum_m q_{m,t}$, where $q_{m,t}$ is the performance of project $m$ at time $t$:
$$ {q}_{m,t}={k}_{m,t}\times {v}_{m,t}\times \left[1+\phi \times \sum \limits_{i=1}^Na\left(i,m\right)\right] $$
(1)
$k_{m,t}$ is the amount of capital the manager invests in project $m$ for time period $t$. The investment decision is made at the end of time period $t-1$. Our manager has a fixed budget, which we set equal to 1, and invests the full amount of capital, so $\sum_m k_{m,t} = 1$.
$v_{m,t}$ refers to the quality of project $m$ at time period $t$. In our organization, at any point in time $t$, the quality of project $m$ can take on one of two values, a low value $v_L$ or a high value $v_H$. To capture the idea of investment under uncertainty, we allow the quality of project $m$ to change. The probability of project $m$ remaining at the same level of quality between time periods $t-1$ and $t$ is given by $\gamma_v \in [0.5, 1]$. All of our projects are parameterized in the same way, and the changes in their quality are statistically independent from one another. Technically, project quality follows a two-state Markov chain (for a similar model, see Anjos and Reagans 2013). To illustrate how this process works, consider a simple example where the organization has two projects and $\gamma_v = 0.7$. Further assume that at time $t$ both projects are in the high-economic-potential regime, meaning $v_{1,t} = v_{2,t} = v_H$. Then in time period $t+1$ both projects remain in the same regime with probability $0.7^2$; both projects become low-potential with probability $0.3^2$; and exactly one of the projects changes potential with probability $1 - 0.7^2 - 0.3^2 = 0.42$.
The closer $\gamma_v$ is to 1, the more stable a project's quality. In our examples we always employ a calibration where $\gamma_v = 0.7$, but our results would go through as long as $\gamma_v$ is distant enough from both 0.5 and 1. If $\gamma_v$ is too close to 1, there is no uncertainty and learning is trivial (done after a few rounds); on the other hand, if $\gamma_v$ is too close to 0.5, there is too much uncertainty and knowledge becomes stale so quickly that it is hard for the manager to use it in a timely and useful way.
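To make the quality process concrete, the following is a minimal sketch of the two-state Markov chain, together with a check of the two-project example above. This is our own illustration, not the authors' code; the function name `step_quality` and the $\pm 1$ coding of $v_L$/$v_H$ (used in the calibration below) are our choices.

```python
import random

def step_quality(v, gamma_v=0.7, rng=random):
    """Two-state Markov step: quality persists with probability gamma_v,
    and flips to the other level otherwise (v_L = -1, v_H = +1)."""
    return v if rng.random() < gamma_v else -v

# Transition probabilities for the two-project example with gamma_v = 0.7:
gamma_v = 0.7
p_both_stay = gamma_v ** 2                    # both projects keep their regime
p_both_flip = (1 - gamma_v) ** 2              # both projects switch regime
p_one_flips = 1 - p_both_stay - p_both_flip   # exactly one project switches
```

With $\gamma_v = 0.7$ this reproduces the probabilities in the text: 0.49, 0.09, and 0.42, respectively.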
In our calibration we set $v_L = -1$ and $v_H = 1$. This means that without any information, the long-run average performance of the organization would be 0 (see equation (1) and set the average $v_{m,t}$ to 0). This is mostly a useful normalization, although the wedge between $v_L$ and $v_H$ does affect results. In particular, this wedge influences both the volatility that the organization is exposed to and the returns to information (consider the extreme case where $v_L = v_H$; then there is no need to learn).
Finally, the last term in equation (1) captures labor's contribution to performance. The term $a(i,m)$ is an indicator function taking the value 1 if worker $i$ is working on project $m$ (and 0 otherwise), and $\phi$ scales the contribution of labor to performance. In our calibration we set this parameter to 0.1, although our results obtain for a range of values. The role played by $\phi$ is as follows. As $\phi$ becomes close to zero, the only channel through which worker movements affect performance is learning: in this extreme case, worker movement has no direct impact on performance, yet it still affects whether and how networks form. As $\phi$ increases, worker movements also impact performance directly; the more workers a project has, the higher its performance, as shown in equation (1). Moreover, if the manager is well informed and frequently allocates much financial capital to high-quality projects, then labor amplifies the performance returns to managerial knowledge, as workers flock precisely to these projects. In other words, a better-informed manager also fosters a better matching of workers and projects.
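Equation (1) can be sketched directly. The snippet below is our own illustrative rendering of the project-level performance function, with the hypothetical name `project_performance` and the calibrated $\phi = 0.1$ as a default:

```python
def project_performance(k, v, n_workers, phi=0.1):
    """Equation (1): q_{m,t} = k_{m,t} * v_{m,t} * (1 + phi * number of workers on m)."""
    return k * v * (1 + phi * n_workers)
```

For example, a high-quality project ($v = 1$) with capital 0.2 and five workers yields $0.2 \times 1 \times 1.5 = 0.3$; with $v = -1$ the same staffing makes the loss larger in magnitude, which is why worker movements amplify both good and bad allocations.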
Manager beliefs and capital allocation: At the beginning of every time period $t$, the manager holds a belief $b_{m,t}$ about the quality of project $m$: the subjective probability that project $m$ has high quality. As the manager gathers information, this subjective belief is updated; we describe the process in greater detail in the Information Environment section below. Here we are concerned with how the manager's subjective belief about a project's quality translates into an amount of allocated capital. Our manager uses a simple heuristic:
$$ {k}_{m,t}=\left\{\begin{array}{c}\left(1-{\sigma}_{fin}\right)\times \frac{1}{M}\kern9em if\kern0.5em {b}_{m,t}\ne \underset{j}{\max}\left\{{b}_{j,t}\right\}\\ {}1-\left(M-1\right)\times \left(1-{\sigma}_{fin}\right)\times \frac{1}{M}\kern1.75em if\kern0.5em {b}_{m,t}=\underset{j}{\max}\left\{{b}_{j,t}\right\}\end{array}\right. $$
(2)
where parameter $\sigma_{fin} \in [0, 1]$ captures the manager's tendency to exploit what he or she knows, which we interpret as a stable feature of the organization. The project with the maximum subjective belief is always allocated the most financial capital; how much more than the remaining projects depends on $\sigma_{fin}$. If $\sigma_{fin}$ is 1, there is a "tight" link between allocated resources and quality. This approach corresponds to an exploitation strategy, where the manager maximizes the immediate payoff of his or her current knowledge. If $\sigma_{fin}$ is 0, capital is allocated across projects equally, independent of quality. This approach corresponds to complete exploration.
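The allocation heuristic in equation (2) can be sketched as follows. This is our illustrative implementation (the name `allocate_capital` and the first-max tie-breaking rule are our assumptions; equation (2) does not specify how belief ties are resolved):

```python
def allocate_capital(beliefs, sigma_fin):
    """Equation (2): every project receives (1 - sigma_fin)/M, and the project
    with the highest belief receives the remainder of the unit budget."""
    M = len(beliefs)
    base = (1 - sigma_fin) / M
    k = [base] * M
    top = max(range(M), key=lambda m: beliefs[m])  # ties: first maximum wins (our choice)
    k[top] = 1 - (M - 1) * base
    return k
```

Note that the shares always sum to 1: with $\sigma_{fin} = 0$ every project gets $1/M$ (pure exploration), and with $\sigma_{fin} = 1$ the highest-belief project gets the entire budget (pure exploitation).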
Worker mobility and network formation
Worker mobility: We assume that there is one expert per project, and experts always remain in their default assignment. In contrast, individual workers can move from their default project to one they prefer more. In particular, the individual workers in our organization prefer to work on projects which have been allocated a larger amount of capital; the allocated amounts of capital are common knowledge in our organization. The details of our modeling are as follows. The worker starts in his or her default assignment and decides whether he or she would prefer to work on a different project. Specifically, at every time period $t$ each worker is presented with an alternative project assignment (randomly drawn from the other $M-1$ projects); if the budget of the default assignment is smaller than $\sigma_{lab}$ times the budget of the alternative assignment, the worker switches (for that time period). Parameter $\sigma_{lab} \in [0, 1]$ therefore measures how strongly workers prefer to work on projects with more funding. If $\sigma_{lab} = 0$, budget-size motives are completely turned off (slowest possible labor); if $\sigma_{lab} = 1$, the smallest difference in budget will make the focal worker switch projects (fastest possible labor).
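The switching rule above can be sketched as follows; this is our own illustration (the name `preferred_assignment` is hypothetical), keeping the random draw of one alternative project per period:

```python
import random

def preferred_assignment(default_m, budgets, sigma_lab, rng=random):
    """A worker draws one random alternative project and switches (for this
    period) if the default budget is below sigma_lab times the alternative's."""
    alternatives = [m for m in range(len(budgets)) if m != default_m]
    alt = rng.choice(alternatives)
    return alt if budgets[default_m] < sigma_lab * budgets[alt] else default_m
```

With two projects the draw is deterministic, which makes the two extremes easy to see: $\sigma_{lab} = 0$ means the comparison can never succeed (the worker stays), while $\sigma_{lab} = 1$ means any budget gap triggers a switch.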
Network formation and decay: We model network formation and decay as a function of the amount of time individuals spend working on the same projects. Reagans and McEvily (2003, pp. 252–253) describe the positive effect that working on the same projects can have on network formation. Time spent working together is an organizational equivalent of physical proximity: proximate individuals have more opportunities to develop and maintain a network connection. Our network connections form and decay as time spent working together changes. At the beginning of time period $t$, if two individuals were connected at $t-1$ and continue to work on the same project, their network connection is maintained. If two individuals were not connected at $t-1$ but are working on the same project at time period $t$, a tie forms between them with probability $\alpha \times \mathit{overlap}$, where "overlap" measures the number of consecutive time periods the workers have been together on the same project.

For example, suppose $\alpha = 0.2$ and two workers who join the same assignment are disconnected. After the first time period, there is a 20% probability that a network connection develops between them. If the tie is not formed at the end of the first time period together, then there is a 40% probability that a network connection develops at the end of the second time period, and so on. In our calibration we set $\alpha = 0.01$. This parameter naturally plays an important role in the model. If $\alpha$ is too close to 1, ties form immediately and, in a sense, it does not matter much how workers move; a minimal amount of overlap makes the learning network emerge. On the other hand, if $\alpha$ is too close to zero, the social network is always poor (the initial network is completely disconnected).
If two individuals who were connected at $t-1$ are no longer working on the same project at time period $t$, their network connection decays with hazard rate $\sigma_{net}$. This is the parameter which in the main text we referred to as network fragility, since a higher $\sigma_{net}$ makes network connections more responsive to worker movement.
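The formation and decay rules for a single pair of workers can be sketched in one update step. This is our illustrative code, not the authors'; the name `update_tie` is hypothetical, and the `sigma_net` default of 0.5 is purely a placeholder, not the paper's calibration:

```python
import random

def update_tie(connected, same_project, overlap, alpha=0.01, sigma_net=0.5, rng=random):
    """One-period tie update for a pair of workers."""
    if same_project:
        if connected:
            return True                        # co-located ties are maintained
        return rng.random() < alpha * overlap  # tie forms with prob alpha * overlap
    if connected:
        return rng.random() >= sigma_net       # tie survives decay with prob 1 - sigma_net
    return False                               # separated and disconnected: nothing happens
```

The two extremes of $\sigma_{net}$ are visible here: at 0 ties never decay regardless of worker movement, and at 1 any separation immediately dissolves the tie.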
The information environment
Learning: The mechanisms through which information is disseminated and eventually affects managerial allocations are summarized as follows.
At the beginning of every time period, the manager gathers information from a worker $i$ about project $m$ with probability $\lambda$. In our main calibration this parameter is set to 0.05. For the results to be interesting, $\lambda$ can be neither too close to zero (no managerial knowledge possible) nor too close to 1 (all managerial knowledge is obtained directly from experts, and there is no role for the social network).
We assume that both experts and generalist workers truthfully convey their beliefs to the manager. Given a set of worker beliefs, the manager adopts the most informative one, which in our setting is the most extreme belief. This amounts to rational decision-making in our case.
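The "most informative belief" rule, used both by the manager here and by workers in step 3 below, can be sketched as follows (our illustration; the name `most_informative` is hypothetical):

```python
def most_informative(beliefs):
    """Adopt the belief farthest from the 1/2 prior, i.e., the most extreme
    (and hence, in this setting, most informative) report."""
    return max(beliefs, key=lambda b: abs(b - 0.5))
```

For example, among reports of 0.6, 0.1, and 0.55, the rule selects 0.1, since it is farthest from the unconditional probability of 1/2.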
At every time period, both experts and individual workers also collect their contacts' opinions about all projects, and adopt the most informative belief about each project (potentially their own). If an actor does not obtain new relevant information about project $m$ in a given round, his or her belief about its quality mean-reverts in accordance with the true stochastic process governing $v_{m,t}$ (assumed to be common knowledge in the organization).
Belief dynamics: Let us denote by $b_{i,m,t}$ the subjective probability that actor $i$ (expert or individual worker) assigns to project $m$ being high-potential (i.e., to $v_{m,t} = v_H$). Our mechanism of information transmission is as follows:
1. Experts are always informed about the true $v_{m,t}$ of their own project. This is the informational seed.
2. Given the social network at time period $t$, every actor communicates his or her existing beliefs $b_{i,m,t}$ to all of his or her contacts. This is done before updating beliefs with others' opinions. One way to conceive of the mechanism is that everyone first writes and sends out all "recommendation letters," and everyone opens the letters only at the end. In this way, there is no need to specify the timeline of communication.
3. Given the set of beliefs received by $i$ from his or her contacts, $i$ adopts the one that is most extreme, i.e., farthest from 1/2 (the unconditional probability that a project is high-potential). This makes sense in our setting, where beliefs are taken to be rational; intuitively, extreme beliefs are the beliefs of actors who have seen the true state more recently.
4. If no information is transmitted during time period $t$, then beliefs evolve according to the following law of motion:
$$ {b}_{i,m,t}=\delta {b}_{i,m,t-1}+\left(1-\delta \right)\frac{1}{2} $$
(3)
where
$$ \delta =2{\gamma}_v-1. $$
(4)
$\delta$ gauges how much weight actor $i$ places on past information versus simply taking an agnostic/unconditional view (probability of 1/2). Note that equation (3) implies that, as time goes by, the subjective belief held by $i$ about $m$ converges gradually to 1/2 (assuming no information is observed in the meantime). A "rational" (Bayesian) actor would want to update beliefs according to (3), since that is consistent with our probabilistic assumptions: without the arrival of new information, it must be the case that
$$ \Pr \left\{{v}_{m,t}={v}_H\right\}=\underset{\mathrm{remains}\ \mathrm{high}}{\underbrace{\Pr \left\{{v}_{m,t-1}={v}_H\right\}{\gamma}_v}}+\underset{\mathrm{becomes}\ \mathrm{high}}{\underbrace{\left(1-\Pr \left\{{v}_{m,t-1}={v}_H\right\}\right)\left(1-{\gamma}_v\right)}}= $$
$$ =\Pr \left\{{v}_{m,t-1}={v}_H\right\}\underset{=\delta }{\underbrace{\left(2{\gamma}_v-1\right)}}+\underset{=\left(1-\delta \right)/2}{\underbrace{1-{\gamma}_v.}} $$
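The law of motion in equations (3)–(4) can be sketched directly; this is our illustration (the name `decay_belief` is hypothetical), using the calibrated $\gamma_v = 0.7$ as a default:

```python
def decay_belief(b_prev, gamma_v=0.7):
    """Equations (3)-(4): with no news, beliefs shrink toward the 1/2 prior
    at rate delta = 2 * gamma_v - 1."""
    delta = 2 * gamma_v - 1
    return delta * b_prev + (1 - delta) * 0.5
```

As a sanity check on the derivation above, a belief of 1 (the project was just observed to be high) decays after one period to exactly $\gamma_v = 0.7$, the Markov chain's persistence probability, while a belief of exactly 1/2 is a fixed point.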
Our assumptions regarding belief dynamics, although in the spirit of the rational-actor framework, were not dictated by strict adherence to it. Instead, we adopted a deliberately simple assumption about how individuals construct stocks of subjective knowledge, and this simplicity facilitates our analysis. In further work it would certainly be interesting and relevant to understand how the various organizational dynamics interact with alternative mechanisms of belief formation, with potential implications for learning and performance outcomes.