
Open Access 23-03-2022 | Original Paper

Analysing paradoxes in design decisions: the case of “multiple-district” paradox

Authors: Fiorenzo Franceschini, Domenico A. Maisano

Published in: International Journal on Interactive Design and Manufacturing (IJIDeM) | Issue 2/2022

Abstract

In early design stages, a team of designers may often express conflicting preferences on a set of design alternatives, formulating individual rankings that must then be aggregated into a collective one. The scientific literature encompasses a variety of models to perform this aggregation, each showing strengths and weaknesses. In particular situations, some of these models can lead to paradoxical results, i.e., results contrary to logic and common sense. This article focuses on one of these paradoxes, known as the multiple-district paradox, providing a new methodology aimed at identifying the reasons for its potential triggering. This methodology can be a valid support for several decision problems. Some examples accompany the description.
Notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

Several decision-making problems in design concern the formulation of rankings amongst alternative design solutions [1, 2]. A very popular problem in the early design stage is that in which m engineering designers (or more simply experts: D1 to Dm) formulate their individual rankings of n design alternatives (or more simply alternatives: O1 to On) [3–9]. This problem may concern design activities devoted to both incremental and disruptive forms of innovation [54, 55].
For the sake of simplicity, this paper will consider complete rankings where: (i) each expert is able to rank all the alternatives of interest, and (ii) each ranking can be decomposed into paired-comparison relationships of strict preference (e.g., O1 ≻ O2 or O1 ≺ O2) and/or indifference (e.g., O1 ~ O2) [10].
Since designers often have conflicting opinions about the possible design alternatives, their rankings – which form the so-called preference profile – can be characterized by a certain degree of variability or discordance [11–13]. The objective of the problem of interest is to aggregate the expert rankings into a collective one, which is supposed to reflect them as much as possible, even in the presence of diverging preferences [14–25]. For this reason, the collective ranking is also defined as social, consensus or compromise ranking [2, 16, 26].
The scientific literature includes a variety of possible models to perform this aggregation. Different aggregation models often lead to different collective rankings [13, 22] and – paraphrasing what was theorized by Arrow – any aggregation model, in specific situations, is by its nature imperfect [23, 57].
In general, the choice of the most suitable model may depend on (i) the specific objective(s) of the expert group and/or (ii) the characteristics of the preference profile [27–32]. In addition, some aggregation models can occasionally cause paradoxical results that are (at least apparently) logically unreasonable or self-contradictory [33, 34].
This paper focuses on a specific paradox, known as the "multiple-district paradox", which can be summarized as follows: although an alternative may be the most preferred one in two (or more) sub-groups (districts) of rankings, it is not necessarily the most preferred one when the sub-groups of rankings are merged into a single combined group [35, 36]. In other words, this paradox occurs when one alternative wins in every district but loses when the districts are merged [37]. The expression "multiple-district" derives from the Voting Theory context, in which this paradox was originally studied.
This paradox is of potential interest even today, as it can occur in design problems involving distributed teams, whose local decisions should be merged into a single global decision [38]. Some practical examples concerning the Quality and Reliability field can be found in [56, 58].
This paper analyses the multiple-district paradox, providing a new "diagnostic" methodology aimed at identifying the reasons for its potential triggering.
The remainder of this paper is organized into three sections. Section 2 conducts a qualitative analysis of the paradox, highlighting the typical conditions behind its occurrence, such as the characteristics of the preference profile and/or of the aggregation model. Section 3 illustrates the new diagnostic methodology, which is based on (i) some indicators representing the degree of concordance between the expert rankings and (ii) other indicators representing the consistency between the expert rankings and the collective ranking. The new methodology makes it possible to investigate the causes of the paradox on a case-by-case basis. Finally, Sect. 4 summarizes the original contributions of this paper and discusses its practical implications, limitations and suggestions for future research. The Appendix provides further details on the indicators used in the analysis.

2 The multiple-district paradox

This section illustrates the multiple-district paradox, with the support of several examples in the context of product design. The rest of the section is organized in three sub-sections, respectively dedicated to:
1. Exemplifying the occurrence of the paradox and raising some research questions through a preliminary case study;
2. Showing that the paradox may concern different aggregation models, depending on the preference profile of interest;
3. Identifying the typical conditions that favor the occurrence of the paradox.

2.1 Case study

Let us consider the interior design of a luxury car. It is assumed that three alternative interior-design concepts (i.e. O1, O2, O3) should be assessed by two sub-groups of experts, with the aim of identifying the best concept in terms of aesthetics. Sub-group A is composed of seventeen engineering-design experts (i.e. \(e_{{{\text{A}}_{1} }}\) to \(e_{{{\text{A}}_{17} }}\)) from a specific headquarters of a major design company, while sub-group B is composed of fifteen engineering-design experts (i.e. \(e_{{{\text{B}}_{1} }}\) to \(e_{{{\text{B}}_{15} }}\)) from another headquarters of the same company.
The notion of aesthetics is defined from a triple perspective: (i) colour matching; (ii) harmonious design; and (iii) comfort and practicality. Since the aesthetics assessment is intrinsically subjective, each expert is asked to formulate his/her individual ranking of O1, O2 and O3, as summarized in Table 1a and b, for sub-group A and B respectively.
Table 1
Rankings of three interior-design concepts (i.e., alternatives O1, O2, O3), formulated by two sub-groups of engineering designers (experts): (a) sub-group A (\(e_{{\text{A}}_{1}}\) to \(e_{{\text{A}}_{17}}\)) and (b) sub-group B (\(e_{{\text{B}}_{1}}\) to \(e_{{\text{B}}_{15}}\)). (c) These sub-groups are then merged into a single combined group (A + B). (d) Synthesis of the experts' rankings

(a) Sub-group A
\(e_{{\text{A}}_{1}}\) to \(e_{{\text{A}}_{4}}\): \(O_{1} \succ O_{2} \succ O_{3}\)
\(e_{{\text{A}}_{5}}\): \(O_{2} \succ O_{1} \succ O_{3}\)
\(e_{{\text{A}}_{6}}\) to \(e_{{\text{A}}_{10}}\): \(O_{2} \succ O_{3} \succ O_{1}\)
\(e_{{\text{A}}_{11}}\) to \(e_{{\text{A}}_{16}}\): \(O_{3} \succ O_{1} \succ O_{2}\)
\(e_{{\text{A}}_{17}}\): \(O_{3} \succ O_{2} \succ O_{1}\)
(b) Sub-group B
\(e_{{\text{B}}_{1}}\) to \(e_{{\text{B}}_{6}}\): \(O_{1} \succ O_{3} \succ O_{2}\)
\(e_{{\text{B}}_{7}}\) to \(e_{{\text{B}}_{14}}\): \(O_{2} \succ O_{3} \succ O_{1}\)
\(e_{{\text{B}}_{15}}\): \(O_{3} \succ O_{1} \succ O_{2}\)
(c) Combined group (A + B)
The thirty-two rankings of sub-groups A and B, merged together.
(d) Synthesis of the experts' rankings (number of experts per ranking)
Sub-group A: 4 × \(O_{1} \succ O_{2} \succ O_{3}\); 1 × \(O_{2} \succ O_{1} \succ O_{3}\); 5 × \(O_{2} \succ O_{3} \succ O_{1}\); 6 × \(O_{3} \succ O_{1} \succ O_{2}\); 1 × \(O_{3} \succ O_{2} \succ O_{1}\) (total: 17)
Sub-group B: 6 × \(O_{1} \succ O_{3} \succ O_{2}\); 8 × \(O_{2} \succ O_{3} \succ O_{1}\); 1 × \(O_{3} \succ O_{1} \succ O_{2}\) (total: 15)
Combined group (A + B): 4 × \(O_{1} \succ O_{2} \succ O_{3}\); 6 × \(O_{1} \succ O_{3} \succ O_{2}\); 1 × \(O_{2} \succ O_{1} \succ O_{3}\); 13 × \(O_{2} \succ O_{3} \succ O_{1}\); 7 × \(O_{3} \succ O_{1} \succ O_{2}\); 1 × \(O_{3} \succ O_{2} \succ O_{1}\) (total: 32)
The team leader decides to aggregate the expert rankings into a collective one through an aggregation model called Instant-Runoff Voting (IRV), sometimes referred to as Alternative Vote [22, 36]. The IRV was originally conceived within Voting Theory for single-seat elections with more than two candidates [37]. Instead of expressing support for only one candidate, voters in IRV elections rank the candidates in order of preference. Ballots are initially counted according to each voter's first choice. If a candidate obtains more than half of the votes based on first choices, that candidate wins. If not, the candidate with the fewest votes is eliminated. The voters who selected the defeated candidate as first choice then have their votes added to the totals of their next choice. This process continues until a candidate obtains more than half of the votes. Of course, the application of the IRV model can be extended to other contexts, such as that of product design, where candidates are replaced with alternative design concepts and voters are replaced with design experts.
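To make the mechanics of the model concrete, here is a minimal Python sketch (our own illustration, not taken from the cited literature) that computes the IRV winner of a set of ranked ballots; the function name irv_winner and the tie-breaking choice of eliminating all alternatives tied for the fewest first choices are illustrative assumptions.

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-Runoff Voting winner (minimal sketch).

    Each ballot is a tuple of alternatives ordered from most to least
    preferred, e.g. ("O1", "O2", "O3"). Alternatives tied for the fewest
    first choices are all eliminated (a simplifying assumption).
    """
    remaining = {alt for ballot in ballots for alt in ballot}
    while remaining:
        # Count first choices among the alternatives still in the running.
        firsts = Counter(next(a for a in ballot if a in remaining)
                         for ballot in ballots)
        leader, votes = firsts.most_common(1)[0]
        if 2 * votes > len(ballots) or len(remaining) == 1:
            return leader
        fewest = min(firsts.get(a, 0) for a in remaining)
        remaining -= {a for a in remaining if firsts.get(a, 0) == fewest}
    return None  # degenerate case: all remaining alternatives eliminated

# Preference profiles synthesized in Table 1d.
profile_A = (4 * [("O1", "O2", "O3")] + 1 * [("O2", "O1", "O3")] +
             5 * [("O2", "O3", "O1")] + 6 * [("O3", "O1", "O2")] +
             1 * [("O3", "O2", "O1")])
profile_B = (6 * [("O1", "O3", "O2")] + 8 * [("O2", "O3", "O1")] +
             1 * [("O3", "O1", "O2")])

print(irv_winner(profile_A))              # O2
print(irv_winner(profile_B))              # O2
print(irv_winner(profile_A + profile_B))  # O1
```

Fed with the preference profiles synthesized in Table 1d, this sketch returns O2 for each sub-group and O1 for the combined group, anticipating the paradoxical outcome discussed below.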
Returning to the case study, the IRV model can be applied separately to the two previous expert sub-groups (districts), obtaining the results below.
Sub-group A
  • In the first round (see Table 1d), the design concept O1 obtains 4 first-choices, O2 obtains 6 first-choices, and O3 obtains 7 first-choices. Since no alternative has obtained more than half of the preferences based on first-choices, O1 – i.e., the alternative with the fewest first-choices – is eliminated.
  • In the head-to-head comparison between O2 and O3, O2 obtains 10 first-choices while O3 obtains 7 first-choices. The winner is then O2.
  • The resulting collective ranking for sub-group A is: \(O_{2} \succ O_{3} \succ O_{1}\).
Sub-group B
  • In the first round, the design concept O1 obtains 6 first-choices, O2 obtains 8 first-choices, and O3 obtains 1 first-choice. O2 obtains more than half of the preferences based on first-choices, while O3 is the alternative with the fewest first-choices.
  • The resulting collective ranking for sub-group B is: \(O_{2} \succ O_{1} \succ O_{3}\).
Combined group
Assuming that, ceteris paribus, the two expert sub-groups A and B are merged into a combined group (A + B) of thirty-two experts (see Table 1c), the IRV can be applied to all their merged rankings as follows.
  • In the first round, the design concept O1 obtains 10 first-choices, O2 obtains 14 first-choices, and O3 obtains 8 first-choices. Since no alternative has obtained more than half of the preferences based on first-choices, O3 – i.e., the alternative with the fewest first-choices – is eliminated.
  • In the head-to-head comparison between O1 and O2, O1 obtains 17 first-choices while O2 obtains 15 first-choices. The winner is then O1.
  • The resulting collective ranking for the combined group (A + B) is: \(O_{1} \succ O_{2} \succ O_{3}\).
The above results are paradoxical: when the two sub-groups A and B are considered separately, the most preferred alternative is O2, while when they are combined, the most preferred alternative becomes O1. This result is difficult to justify since it is (at least apparently) contradictory and against logic: how could the team leader (or anyone else) accept that, although O2 is the best design concept according to each individual sub-group, O1 becomes the best one when the two sub-groups are combined?
Table 2a summarizes the results obtained from the three previous applications of the IRV aggregation model.
The aforementioned example of the paradox raises some research questions, which will be addressed in the remainder of the paper:
(1) Is the multiple-district paradox caused by a specific aggregation model, by a specific preference profile, or by both?
(2) Can we develop an operational procedure to quantitatively analyse the reasons behind the occurrence of this paradox?
Table 2
Collective rankings obtained by applying the (a) IRV, (b) Coombs' and (c) BC models to the sub-groups of experts (A and B) and to the combined group (A + B). The corresponding expert rankings are reported in Table 1

(Sub-)group (no. of experts): (a) IRV | (b) Coombs' | (c) BC
Sub-group A (17): \({\varvec{O}}_{2} \succ O_{3} \succ O_{1}\) | \(O_{3} \succ O_{1} \sim O_{2}\) | \(O_{3} \succ O_{2} \succ O_{1}\)
Sub-group B (15): \({\varvec{O}}_{2} \succ O_{1} \succ O_{3}\) | \(O_{2} \succ O_{3} \succ O_{1}\) | \(O_{2} \sim O_{3} \succ O_{1}\)
Combined group (A + B) (32): \({\varvec{O}}_{1} \succ O_{2} \succ O_{3}\) | \(O_{2} \succ O_{3} \succ O_{1}\) | \(O_{3} \succ O_{2} \succ O_{1}\)

2.2 Changing aggregation model and preference profile

The previous example showed the occurrence of the multiple-district paradox when applying the IRV aggregation model to a certain preference profile. However, what happens if the aggregation model changes? And what happens if the preference profile changes?
Let us consider two further aggregation models, respectively (i) the one proposed by Coombs [39, 40] and (ii) the so-called Borda Count model [22, 41, 42], applying them to each of the same three (sub-)groups of rankings (i.e., A, B and A + B). The following sub-sections illustrate the results obtained through the application of these other aggregation models.

2.2.1 Coombs’ aggregation model

This model is very similar to the IRV, except that the alternative eliminated in a certain round is the one ranked last by the largest number of experts, not the one ranked first by the smallest number of experts [36].
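A minimal sketch of this elimination rule, in the same style as the IRV sketch of Sect. 2.1 (the function name coombs_winner is ours; as in the sub-group A example below, alternatives tied for the largest number of last-choices are all eliminated, and no early majority stop is applied):

```python
from collections import Counter

def coombs_winner(ballots):
    """Coombs' rule (minimal sketch): repeatedly eliminate the alternative
    ranked last by the largest number of experts, until one remains.
    Returns the set of surviving alternatives (possibly more than one if
    everything ends up tied)."""
    remaining = {alt for ballot in ballots for alt in ballot}
    while len(remaining) > 1:
        # Count last choices among the alternatives still in the running.
        lasts = Counter(next(a for a in reversed(ballot) if a in remaining)
                        for ballot in ballots)
        most = max(lasts.get(a, 0) for a in remaining)
        eliminated = {a for a in remaining if lasts.get(a, 0) == most}
        if eliminated == remaining:  # all tied: stop instead of emptying the set
            break
        remaining -= eliminated
    return remaining
```

Applied to the profiles of Table 1d (as encoded in the IRV sketch), this routine returns {"O3"} for sub-group A, {"O2"} for sub-group B and {"O2"} for the combined group, in line with the results described next.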
By applying the Coombs’ model to the individual sub-groups (A and B) of rankings in Table 1, the following results can be obtained.
Sub-group A
In the first round for sub-group A, the design concept O1 obtains 6 last-choices, O2 obtains 6 last-choices, and O3 obtains 5 last-choices. Being the alternatives with the largest number of last-choices, O1 and O2 are then eliminated and the winner is O3.
The collective ranking for sub-group A is then: \(O_{3} \succ O_{1} \sim { }O_{2}\).
Sub-group B
In the first round for sub-group B, the design concept O1 obtains 8 last-choices, O2 obtains 7 last-choices, and O3 obtains no last-choice. Being the alternative with the largest number of last-choices, O1 is then eliminated.
In the head-to-head comparison between O2 and O3, O2 obtains 7 last-choices while O3 obtains 8 last-choices. O3 is then eliminated and the winner is O2.
The collective ranking for sub-group B is then: \(O_{2} \succ O_{3} \succ O_{1}\).
Combined group
The Coombs’ model can then be applied to the combined group (A + B) of thirty-two rankings, as follows.
In the first round, the design concept O1 obtains 14 last-choices, O2 obtains 13 last-choices, and O3 obtains 5 last-choices. Being the alternative with the largest number of last-choices, O1 is then eliminated.
In the head-to-head comparison between O2 and O3, O2 obtains 14 last-choices while O3 obtains 18 last-choices. O3 is then eliminated and the winner is O2.
The collective ranking for the combined group (A + B) is then: \(O_{2} \succ O_{3} \succ O_{1}\).
It can be noticed that the collective ranking related to the combined group coincides with that of the sub-group B. Therefore, the multiple-district paradox does not occur in this case.
Table 2b summarizes the afore-described results.

2.2.2 Borda count model

The Borda Count (BC) model works as follows. For each expert ranking, the first alternative obtains one point, the second two points, and so on [22, 41, 42]. The cumulative score of an alternative is thus calculated by summing the scores it obtains in each ranking. Applying this model to the three (sub-)groups of rankings – A, B and (A + B) – in Table 1, the following results are obtained (see also Table 2c).
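A minimal sketch of this scoring scheme is shown below (the function name borda_counts is ours); with the convention adopted here, lower totals denote more preferred alternatives.

```python
from collections import Counter

def borda_counts(ballots):
    """Borda Count as defined above: the alternative ranked first in a ballot
    receives 1 point, the second 2 points, and so on; points are summed over
    all ballots (lower total = more preferred)."""
    scores = Counter()
    for ballot in ballots:
        for position, alternative in enumerate(ballot, start=1):
            scores[alternative] += position
    return dict(scores)
```

Applied to the seventeen rankings of sub-group A in Table 1, the sketch returns {'O1': 36, 'O2': 34, 'O3': 32}, matching Eq. (1) below.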
Sub-group A
With reference to the rankings in sub-group A, the so-called Borda Counts related to the three alternatives (i.e. O1, O2 and O3) can be calculated as:
$$\begin{gathered} {\text{BC}}_{{\text{A}}} (O_{1} ) = 4 \cdot 1 + 1 \cdot 2 + 5 \cdot 3 + 6 \cdot 2 + 1 \cdot 3 = 36 \hfill \\ {\text{BC}}_{{\text{A}}} \left( {O_{2} } \right) = 4 \cdot 2 + 1 \cdot 1 + 5 \cdot 1 + 6 \cdot 3 + 1 \cdot 2 = 34 \hfill \\ {\text{BC}}_{{\text{A}}} \left( {O_{3} } \right) = 4 \cdot 3 + 1 \cdot 3 + 5 \cdot 2 + 6 \cdot 1 + 1 \cdot 1 = 32 \hfill \\ \end{gathered}$$
(1)
Of course, the degree of preference of an i-th alternative decreases as the corresponding \({\text{BC}}_{{\text{A}}} \left( {O_{i} } \right)\) value increases. The collective ranking for sub-group A is then: \(O_{3} \succ O_{2} \succ O_{1}\).
Sub-group B
With reference to the rankings in sub-group B, the Borda Counts (\({\text{BC}}_{{\text{B}}}\)) are:
$$\begin{gathered} {\text{BC}}_{{\text{B}}} (O_{1} ) = 6 \cdot 1 + 8 \cdot 3 + 1 \cdot 2 = 32 \hfill \\ {\text{BC}}_{{\text{B}}} \left( {O_{2} } \right) = 6 \cdot 3 + 8 \cdot 1 + 1 \cdot 3 = 29 \hfill \\ {\text{BC}}_{{\text{B}}} \left( {O_{3} } \right) = 6 \cdot 2 + 8 \cdot 2 + 1 \cdot 1 = 29 \hfill \\ \end{gathered}$$
(2)
The collective ranking for sub-group B is then: \(O_{2} \sim O_{3} \succ O_{1}\).
Combined group
The Borda Counts related to the alternatives in the rankings of the combined-group \({\text{BC}}_{{{\text{A}} + {\text{B}}}} \left( {O_{i} } \right)\) are:
$$\begin{gathered} {\text{BC}}_{{{\text{A}} + {\text{B}}}} (O_{1} ) = 4 \cdot 1 + 6 \cdot 1 + 1 \cdot 2 + 13 \cdot 3 + 7 \cdot 2 + 1 \cdot 3 = 68 \hfill \\ {\text{BC}}_{{{\text{A}} + {\text{B}}}} \left( {O_{2} } \right) = 4 \cdot 2 + 6 \cdot 3 + 1 \cdot 1 + 13 \cdot 1 + 7 \cdot 3 + 1 \cdot 2 = 63 \hfill \\ {\text{BC}}_{{{\text{A}} + {\text{B}}}} \left( {O_{3} } \right) = 4 \cdot 3 + 6 \cdot 2 + 1 \cdot 3 + 13 \cdot 2 + 7 \cdot 1 + 1 \cdot 1 = 61 \hfill \\ \end{gathered}$$
(3)
The collective ranking for the combined group (A + B) is then \(O_{3} \succ O_{2} \succ O_{1}\), which coincides with that of the sub-group A. Again, the paradox observed when applying the IRV model (see Sect. 2.1) does not occur.
It is worth remarking that the BC aggregation model guarantees a sort of “overlapping of effects”, which results in the following additive relationship:
$${\text{BC}}_{{{\text{A}} + {\text{B}}}} (O_{i} ) = {\text{BC}}_{{\text{A}}} (O_{i} ) + {\text{BC}}_{{\text{B}}} (O_{i} ){ }\forall i \in \left[ {1,n} \right].$$
(4)
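Assuming the borda_counts sketch of Sect. 2.2.2 and the profiles profile_A and profile_B from the IRV sketch of Sect. 2.1, the additivity in Eq. (4) can be checked directly:

```python
bc_A = borda_counts(profile_A)               # O1: 36, O2: 34, O3: 32, cf. Eq. (1)
bc_B = borda_counts(profile_B)               # O1: 32, O2: 29, O3: 29, cf. Eq. (2)
bc_AB = borda_counts(profile_A + profile_B)  # O1: 68, O2: 63, O3: 61, cf. Eq. (3)

assert all(bc_AB[o] == bc_A[o] + bc_B[o] for o in ("O1", "O2", "O3"))
```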
In addition, the BC aggregation model can be classified as a positional scoring procedure (PSP), since the scores assigned to the alternatives are based on their respective positions in the rankings [36, 43]. On the other hand, the IRV and Coombs' models are not PSPs, since all positions other than the first choice (IRV) or the last choice (Coombs') are effectively given the same weight.
With reference to the preference profile in Table 1, the IRV seems more prone to the multiple-district paradox than the Coombs' or BC model. Even though this observation is not necessarily general, what happens when the preference profile changes?

2.2.3 Further case study

Let us consider a second case study, which is similar to the previous one but characterized by a different repartition of the (new) expert rankings into two (new) sub-groups (A' and B'). Table 3 shows a first sub-group (A') consisting of thirty-four experts (i.e., \(e_{{\text{A}}_{1}^{\prime}}\) to \(e_{{\text{A}}_{34}^{\prime}}\)) and the corresponding rankings, and a second sub-group (B') consisting of seven experts and the corresponding rankings.
Table 3
Rankings of three interior-design concepts (i.e., O1, O2, O3) formulated by two sub-groups of engineering designers: (a) sub-group A' (\(e_{{\text{A}}_{1}^{\prime}}\) to \(e_{{\text{A}}_{34}^{\prime}}\)) and (b) sub-group B' (\(e_{{\text{B}}_{1}^{\prime}}\) to \(e_{{\text{B}}_{7}^{\prime}}\)). (c) These sub-groups are then merged into a single combined group (A' + B'). (d) Synthesis of the experts' rankings

(a) Sub-group A'
\(e_{{\text{A}}_{1}^{\prime}}\) to \(e_{{\text{A}}_{9}^{\prime}}\): \(O_{1} \succ O_{2} \succ O_{3}\)
\(e_{{\text{A}}_{10}^{\prime}}\) to \(e_{{\text{A}}_{18}^{\prime}}\): \(O_{2} \succ O_{3} \succ O_{1}\)
\(e_{{\text{A}}_{19}^{\prime}}\) to \(e_{{\text{A}}_{29}^{\prime}}\): \(O_{3} \succ O_{1} \succ O_{2}\)
\(e_{{\text{A}}_{30}^{\prime}}\) to \(e_{{\text{A}}_{34}^{\prime}}\): \(O_{3} \succ O_{2} \succ O_{1}\)
(b) Sub-group B'
\(e_{{\text{B}}_{1}^{\prime}}\): \(O_{1} \succ O_{2} \succ O_{3}\)
\(e_{{\text{B}}_{2}^{\prime}}\) to \(e_{{\text{B}}_{7}^{\prime}}\): \(O_{2} \succ O_{1} \succ O_{3}\)
(c) Combined group (A' + B')
The forty-one rankings of sub-groups A' and B', merged together.
(d) Synthesis of the experts' rankings (number of experts per ranking)
Sub-group A': 9 × \(O_{1} \succ O_{2} \succ O_{3}\); 9 × \(O_{2} \succ O_{3} \succ O_{1}\); 11 × \(O_{3} \succ O_{1} \succ O_{2}\); 5 × \(O_{3} \succ O_{2} \succ O_{1}\) (total: 34)
Sub-group B': 1 × \(O_{1} \succ O_{2} \succ O_{3}\); 6 × \(O_{2} \succ O_{1} \succ O_{3}\) (total: 7)
Combined group (A' + B'): 10 × \(O_{1} \succ O_{2} \succ O_{3}\); 6 × \(O_{2} \succ O_{1} \succ O_{3}\); 9 × \(O_{2} \succ O_{3} \succ O_{1}\); 11 × \(O_{3} \succ O_{1} \succ O_{2}\); 5 × \(O_{3} \succ O_{2} \succ O_{1}\) (total: 41)
The application of the three aggregation models (IRV, Coombs' and BC) to the new (sub-)groups of rankings (A', B' and A' + B' in Table 3) results in the nine collective rankings in Table 4. Interestingly, the multiple-district paradox occurs when applying the Coombs' model, while it does not occur when applying the IRV or BC models. It can be noticed that when the expert rankings are very "polarized", as for sub-group B', the three different aggregation models tend to converge towards the same collective ranking (e.g., \(O_{2} \succ O_{1} \succ O_{3}\) in this case) (Table 4).
Table 4
Collective rankings obtained by applying the IRV, Coombs' and BC models to the sub-groups of experts (A' and B') and to the combined group (A' + B'). For the Coombs' model, the effect of the multiple-district paradox on the top alternatives is highlighted in bold. The corresponding expert rankings are reported in Table 3

(Sub-)group (no. of experts): (a) IRV | (b) Coombs' | (c) BC
Sub-group A' (34): \(O_{3} \succ O_{1} \sim O_{2}\) | \({\varvec{O}}_{2} \succ O_{3} \succ O_{1}\) | \(O_{3} \succ O_{2} \succ O_{1}\)
Sub-group B' (7): \(O_{2} \succ O_{1} \succ O_{3}\) | \({\varvec{O}}_{2} \succ O_{1} \succ O_{3}\) | \(O_{2} \succ O_{1} \succ O_{3}\)
Combined group (A' + B') (41): \(O_{2} \succ O_{3} \succ O_{1}\) | \({\varvec{O}}_{1} \succ O_{2} \succ O_{3}\) | \(O_{2} \succ O_{3} \succ O_{1}\)
The previous examples show that the occurrence of paradoxes is not easily predictable. In general, paradoxes may arise from a difficult-to-predict combination of the characteristics of (i) the aggregation model, (ii) the expert rankings and (iii) their repartition into sub-groups.
Predicting a paradox is a very complex issue and still an open problem [44]. However, it has been proven that the so-called PSPs (see the definition in Sect. 2.2.2), like the BC model, are "immune" from the multiple-district paradox, due to their structural features [36, 43].

2.3 Triggering factors of the paradox

Besides providing some examples of occurrence of the multiple-district paradox, the previous sub-sections showed that this paradox can affect different aggregation models, depending on the specific preference profile. Let us go deeper into the issue, trying to identify the main "triggers" of the paradox, as explained in the following points.
1. This paradox can be seen as a manifestation of incoherence in the positioning of the top alternatives within the collective rankings, namely: (i) the winning alternative of the sub-groups' collective rankings and (ii) the winning alternative of the combined group's collective ranking. Of course, similar manifestations of incoherence could also affect other alternatives in intermediate or bottom positions – especially for rankings characterized by a relatively large number of alternatives – without producing the paradox.
2. The paradox is probably more likely to occur for decision-making problems characterized by a relatively high degree of discordance among the expert rankings, with particular reference to the alternatives in the top positions. For example, returning to the seventeen rankings of sub-group A in Table 1, O1 would prevail over O2 for ten rankings, while O2 would prevail over O1 for the remaining seven rankings. For sub-group B, the overall result of the comparison between O1 and O2 would be 7 versus 8 (see the counting sketch after this list).
3. The paradox seems not to affect the so-called PSPs, confirming what has been rigorously demonstrated by some authors [36, 43].
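The pairwise counts mentioned in point 2 can be reproduced with a small counting sketch (the function name pairwise_count is ours; profile_A and profile_B are the sub-group profiles of Table 1d, as encoded in the IRV sketch of Sect. 2.1):

```python
def pairwise_count(ballots, a, b):
    """Number of ballots ranking alternative `a` before alternative `b`."""
    return sum(1 for ballot in ballots if ballot.index(a) < ballot.index(b))

print(pairwise_count(profile_A, "O1", "O2"), pairwise_count(profile_A, "O2", "O1"))  # 10 7
print(pairwise_count(profile_B, "O1", "O2"), pairwise_count(profile_B, "O2", "O1"))  # 7 8
```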
 

3 Methodology

This section proposes a new methodology for “diagnosing” the multiple-district paradox, based on the use of some indicators of concordance and coherence. The description is organized in three sub-sections:
  • The first one briefly recalls the aforementioned indicators;
  • The second one illustrates the use of these indicators for decision-making problems involving rankings with a relatively limited number of alternatives (such as those previously exemplified);
  • The third one shows a step-by-step technique – denominated the technique of partialized rankings – which can identify the potential reasons for triggering the paradox.

3.1 Concordance and coherence indicators

The proposed methodology is based on the use of three indicators:
1. The first one is the Kendall's concordance coefficient, \(W^{\left( m \right)}\), which expresses the so-called degree of concordance (or agreement) between a set of m rankings as a single number [14, 45, 46, 54]. The range of \(W^{\left( m \right)}\) is \(\left[ {0,1} \right]\); it has unit value in the case of perfect agreement (i.e., all rankings coincide), while it is null in the case of total disagreement (i.e., all rankings are completely unrelated). For more detailed information on the construction and meaning of \(W^{\left( m \right)}\), the reader is referred to Sect. A.1 (in the Appendix).
2. The second indicator, \(W_{k}^{{\left( {m + 1} \right)}}\), was recently proposed by the authors to depict the coherence between the expert rankings and the collective ranking resulting from the application of a generic (k-th) aggregation model [47]. This indicator is nothing more than the Kendall's concordance coefficient applied to the (m + 1) rankings consisting of:
  • the m expert rankings, denoting the preference profile;
  • the collective ranking obtained by applying the (k-th) aggregation model to the previous expert rankings.
The coherence between the collective ranking and the expert rankings is evaluated in relative terms, comparing \(W_{k}^{{\left( {m + 1} \right)}}\) with \(W^{\left( m \right)}\). \(W_{k}^{{\left( {m + 1} \right)}} \ge { }W^{\left( m \right)}\) denotes coherence (or positive coherence) between the collective ranking and m-rankings, while \(W_{k}^{{\left( {m + 1} \right)}} < W^{\left( m \right)}\) denotes incoherence (or negative coherence) [47]. The latter situation can occur when a collective ranking is somehow conflicting with the m-rankings.
To make the coherence assessment easier, a third synthetic indicator can be used:
$$b_{k}^{\left( m \right)} = \frac{{W_{k}^{{\left( {m + 1} \right)}} }}{{W^{\left( m \right)} }}$$
(5)
It can be proven that \(b_{k}^{\left( m \right)} \in \left] {0, + \infty } \right]\) [47]. For a specific set of m rankings, \(b_{k}^{\left( m \right)} \ge 1\) indicates that the (k-th) aggregation model provides a somehow coherent collective ranking (positive coherence), while \(b_{k}^{\left( m \right)} < 1\) indicates that it provides a somehow incoherent collective ranking (negative coherence).
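As a concrete reference, the following minimal sketch (function names are ours) computes \(W^{\left( m \right)}\) and \(b_{k}^{\left( m \right)}\) for complete rankings without ties, using the standard formula \(W = 12S/[m^{2}(n^{3} - n)]\); collective rankings containing ties (e.g., \(O_{3} \succ O_{1} \sim O_{2}\)) would additionally require mid-ranks and the usual tie correction, which are omitted here for brevity.

```python
def kendall_w(rankings):
    """Kendall's concordance coefficient W for m complete rankings of n
    alternatives without ties: W = 12*S / (m^2 * (n^3 - n)), where S is the
    sum of squared deviations of the rank sums from their mean."""
    alternatives = rankings[0]
    n, m = len(alternatives), len(rankings)
    rank_sums = [sum(r.index(a) + 1 for r in rankings) for a in alternatives]
    mean = m * (n + 1) / 2
    s = sum((t - mean) ** 2 for t in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

def coherence_ratio(expert_rankings, collective_ranking):
    """b_k^(m) = W_k^(m+1) / W^(m), i.e. the concordance of the profile
    extended with the collective ranking, relative to that of the profile."""
    return (kendall_w(expert_rankings + [collective_ranking])
            / kendall_w(expert_rankings))
```

For the seventeen rankings of sub-group A in Table 1, kendall_w returns about 0.0138 (1.38%) and, with the IRV collective ranking \(O_{2} \succ O_{3} \succ O_{1}\), coherence_ratio returns about 1.56, consistently with Table 5a.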

3.2 Interpretation of the paradox

Table 5 exemplifies the application of the three indicators, \(W^{\left( m \right)}\), \(W_{k}^{{\left( {m + 1} \right)}}\) and \(b_{k}^{\left( m \right)}\), to the decision-making problem in Table 1, when considering the collective rankings resulting from the application of the (a) IRV, (b) Coombs' and (c) BC models respectively (cf. Table 2). Regardless of the aggregation model in use, the preference profile is characterized by a very low degree of concordance among experts, as evidenced by the very low \(W^{\left( m \right)}\) values, both for sub-groups A and B and for their combination (A + B).
Table 5
\(W^{\left( m \right)}\), \(W_{k}^{{\left( {m + 1} \right)}}\) and \(b_{k}^{\left( m \right)}\) values related to sub-groups A, B and the corresponding combined group (A + B), for each of the three aggregation models: k = {IRV, Coombs', BC}. Expert rankings are reported in Table 1

(a) IRV
Sub-group A (17 experts): collective ranking \({\varvec{O}}_{2} \succ O_{3} \succ O_{1}\); \(W^{\left( m \right)}\) = 1.38%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 2.16%; \(b_{k}^{\left( m \right)}\) = 1.56
Sub-group B (15 experts): collective ranking \({\varvec{O}}_{2} \succ O_{1} \succ O_{3}\); \(W^{\left( m \right)}\) = 1.35%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 1.56%; \(b_{k}^{\left( m \right)}\) = 1.15
Combined group (A + B) (32 experts): collective ranking \({\varvec{O}}_{1} \succ O_{2} \succ O_{3}\); \(W^{\left( m \right)}\) = 1.27%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 0.64%; \(b_{k}^{\left( m \right)}\) = 0.51
(b) Coombs'
Sub-group A (17 experts): collective ranking \(O_{3} \succ O_{1} \sim O_{2}\); \(W^{\left( m \right)}\) = 1.38%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 2.42%; \(b_{k}^{\left( m \right)}\) = 1.75
Sub-group B (15 experts): collective ranking \(O_{2} \succ O_{3} \succ O_{1}\); \(W^{\left( m \right)}\) = 1.35%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 2.73%; \(b_{k}^{\left( m \right)}\) = 2.02
Combined group (A + B) (32 experts): collective ranking \(O_{2} \succ O_{3} \succ O_{1}\); \(W^{\left( m \right)}\) = 1.27%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 1.70%; \(b_{k}^{\left( m \right)}\) = 1.34
(c) BC
Sub-group A (17 experts): collective ranking \(O_{3} \succ O_{2} \succ O_{1}\); \(W^{\left( m \right)}\) = 1.38%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 2.77%; \(b_{k}^{\left( m \right)}\) = 2.00
Sub-group B (15 experts): collective ranking \(O_{2} \sim O_{3} \succ O_{1}\); \(W^{\left( m \right)}\) = 1.35%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 2.68%; \(b_{k}^{\left( m \right)}\) = 1.98
Combined group (A + B) (32 experts): collective ranking \(O_{3} \succ O_{2} \succ O_{1}\); \(W^{\left( m \right)}\) = 1.27%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 1.93%; \(b_{k}^{\left( m \right)}\) = 1.52
The coherence of the collective rankings with the corresponding preference profiles can be assessed by comparing the \(W_{k}^{{\left( {m + 1} \right)}}\) values with the relevant \(W^{\left( m \right)}\) values and/or by observing the synthetic indicator \(b_{k}^{\left( m \right)}\). The Coombs' and BC models do not trigger the paradox; \(W_{k}^{{\left( {m + 1} \right)}}\) and \(b_{k}^{\left( m \right)}\) denote positive coherence for A, B and A + B alike (see Table 5b, c). On the other hand, the IRV model triggers the paradox; we notice positive coherence (\(b_{k}^{\left( m \right)} \ge 1\)) for sub-groups A and B, but negative coherence (\(b_{k}^{\left( m \right)} < 1\)) for A + B (see Table 5a).
Moving our attention to the second application example in Table 3, something similar happens: all the \(b_{k}^{\left( m \right)}\) values related to the IRV and BC models denote positive coherence, while the one related to the Coombs' model denotes negative coherence for the combined group. Again, the multiple-district paradox results in an incoherence between the collective ranking and the expert rankings at the combined-group level (see Table 6).
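The diagnostic reading of Tables 5 and 6 can be automated with a short loop; the sketch below reuses the kendall_w and coherence_ratio sketches of Sect. 3.1 and the IRV profiles and collective rankings of Table 5a (tie-free rankings only), flagging negative coherence as the signal associated with the paradox.

```python
# IRV collective rankings from Table 5a, paired with the corresponding profiles.
cases = {
    "Sub-group A": (profile_A, ("O2", "O3", "O1")),
    "Sub-group B": (profile_B, ("O2", "O1", "O3")),
    "Combined group (A + B)": (profile_A + profile_B, ("O1", "O2", "O3")),
}

for name, (profile, collective) in cases.items():
    b = coherence_ratio(profile, collective)
    verdict = "positive coherence" if b >= 1 else "negative coherence (paradox signal)"
    print(f"{name}: b_k = {b:.2f} -> {verdict}")
# Only the combined group is flagged with negative coherence (cf. Table 5a).
```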
Table 6
\(W^{\left( m \right)}\), \(W_{k}^{{\left( {m + 1} \right)}}\) and \(b_{k}^{\left( m \right)}\) values related to sub-groups A', B' and the corresponding combined group (A' + B'), for each of the three aggregation models: k = {IRV, Coombs', BC}. Expert rankings are reported in Table 3

(a) IRV
Sub-group A' (34 experts): collective ranking \(O_{3} \succ O_{1} \sim O_{2}\); \(W^{\left( m \right)}\) = 3.37%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 4.13%; \(b_{k}^{\left( m \right)}\) = 1.22
Sub-group B' (7 experts): collective ranking \(O_{2} \succ O_{1} \succ O_{3}\); \(W^{\left( m \right)}\) = 87.7%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 89.0%; \(b_{k}^{\left( m \right)}\) = 1.01
Combined group (A' + B') (41 experts): collective ranking \(O_{2} \succ O_{3} \succ O_{1}\); \(W^{\left( m \right)}\) = 0.95%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 1.42%; \(b_{k}^{\left( m \right)}\) = 1.49
(b) Coombs'
Sub-group A' (34 experts): collective ranking \({\varvec{O}}_{2} \succ O_{3} \succ O_{1}\); \(W^{\left( m \right)}\) = 3.37%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 3.51%; \(b_{k}^{\left( m \right)}\) = 1.04
Sub-group B' (7 experts): collective ranking \({\varvec{O}}_{2} \succ O_{1} \succ O_{3}\); \(W^{\left( m \right)}\) = 87.7%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 89.0%; \(b_{k}^{\left( m \right)}\) = 1.01
Combined group (A' + B') (41 experts): collective ranking \({\varvec{O}}_{1} \succ O_{2} \succ O_{3}\); \(W^{\left( m \right)}\) = 0.95%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 0.74%; \(b_{k}^{\left( m \right)}\) = 0.77
(c) BC
Sub-group A' (34 experts): collective ranking \(O_{3} \succ O_{2} \succ O_{1}\); \(W^{\left( m \right)}\) = 3.37%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 4.24%; \(b_{k}^{\left( m \right)}\) = 1.26
Sub-group B' (7 experts): collective ranking \(O_{2} \succ O_{1} \succ O_{3}\); \(W^{\left( m \right)}\) = 87.7%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 89.0%; \(b_{k}^{\left( m \right)}\) = 1.01
Combined group (A' + B') (41 experts): collective ranking \(O_{2} \succ O_{3} \succ O_{1}\); \(W^{\left( m \right)}\) = 0.95%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 1.42%; \(b_{k}^{\left( m \right)}\) = 1.49
The indicators \(W^{\left( m \right)}\), \(W_{k}^{{\left( {m + 1} \right)}}\) and \(b_{k}^{\left( m \right)}\) are therefore useful for explaining the reasons for the occurrence of the multiple-district paradox. However, the examples proposed so far have two distinctive, but not necessarily general, features:
1. Rankings have a relatively limited number of alternatives (i.e., just three) and the \(W_{k}^{{\left( {m + 1} \right)}}\) and \(b_{k}^{\left( m \right)}\) indicators respond well to the incoherence that characterizes the multiple-district paradox. However, it cannot be excluded that – for rankings with a larger number of alternatives – the above indicators would not be equally responsive. Section 3.3 exemplifies a situation in which – in the presence of the paradox – the (local) incoherences concerning a small number (e.g., 2 or 3) of alternatives at the top of the rankings can be "masked" by other (local) incoherences concerning the alternatives in the middle and/or at the bottom of the rankings.
2. In the presence of the paradox, the previous examples show incoherences at the combined-group level but never at the level of single sub-groups. However, it cannot be excluded that the paradox could be triggered by incoherence in one of the sub-groups and not in the combined group.

3.3 The technique of partialized rankings

Let us consider a new decision-making problem involving a plurality of expert rankings of four alternatives (O1 to O4), organized into two sub-groups, A'' and B'', including 17 and 15 rankings respectively (see Table 7).
Table 7
Rankings of four interior-design concepts (i.e., O1 to O4) formulated by two sub-groups of engineering designers: A'' (\(e_{{\text{A}}_{1}^{\prime\prime}}\) to \(e_{{\text{A}}_{17}^{\prime\prime}}\)) and B'' (\(e_{{\text{B}}_{1}^{\prime\prime}}\) to \(e_{{\text{B}}_{15}^{\prime\prime}}\)). These sub-groups are then merged into a single combined group (A'' + B''). It can be noticed that these rankings are "compatible" with those in the example in Table 1: if we eliminate the alternative O4 from each of these rankings, we obtain those in Table 1

(a) Sub-group A''
\(e_{{\text{A}}_{1}^{\prime\prime}}\) to \(e_{{\text{A}}_{3}^{\prime\prime}}\): \(O_{1} \succ O_{2} \succ O_{3} \succ O_{4}\)
\(e_{{\text{A}}_{4}^{\prime\prime}}\): \(O_{4} \succ O_{1} \succ O_{2} \succ O_{3}\)
\(e_{{\text{A}}_{5}^{\prime\prime}}\): \(O_{2} \succ O_{1} \succ O_{3} \succ O_{4}\)
\(e_{{\text{A}}_{6}^{\prime\prime}}\): \(O_{2} \succ O_{3} \succ O_{1} \succ O_{4}\)
\(e_{{\text{A}}_{7}^{\prime\prime}}\): \(O_{2} \succ O_{3} \succ O_{4} \succ O_{1}\)
\(e_{{\text{A}}_{8}^{\prime\prime}}\) to \(e_{{\text{A}}_{10}^{\prime\prime}}\): \(O_{2} \succ O_{3} \succ O_{1} \succ O_{4}\)
\(e_{{\text{A}}_{11}^{\prime\prime}}\) and \(e_{{\text{A}}_{12}^{\prime\prime}}\): \(O_{3} \succ O_{1} \succ O_{2} \succ O_{4}\)
\(e_{{\text{A}}_{13}^{\prime\prime}}\): \(O_{3} \succ O_{1} \succ O_{4} \succ O_{2}\)
\(e_{{\text{A}}_{14}^{\prime\prime}}\) to \(e_{{\text{A}}_{16}^{\prime\prime}}\): \(O_{3} \succ O_{1} \succ O_{2} \succ O_{4}\)
\(e_{{\text{A}}_{17}^{\prime\prime}}\): \(O_{3} \succ O_{2} \succ O_{1} \succ O_{4}\)
(b) Sub-group B''
\(e_{{\text{B}}_{1}^{\prime\prime}}\) to \(e_{{\text{B}}_{6}^{\prime\prime}}\): \(O_{1} \succ O_{3} \succ O_{2} \succ O_{4}\)
\(e_{{\text{B}}_{7}^{\prime\prime}}\) to \(e_{{\text{B}}_{14}^{\prime\prime}}\): \(O_{2} \succ O_{3} \succ O_{1} \succ O_{4}\)
\(e_{{\text{B}}_{15}^{\prime\prime}}\): \(O_{3} \succ O_{1} \succ O_{2} \succ O_{4}\)
(c) Combined group (A'' + B'')
The thirty-two rankings of sub-groups A'' and B'', merged together.
It can be noticed that these rankings are "compatible" with those in Table 1: eliminating the alternative O4 from each of the rankings in Table 7, the rankings in Table 1 are obtained [24, 31]; for example, the ranking by \(e_{{\text{A}}_{7}^{\prime\prime}}\) (\(O_{2} \succ O_{3} \succ O_{4} \succ O_{1}\)) in Table 7 turns into the ranking by \(e_{{\text{A}}_{7}}\) (\(O_{2} \succ O_{3} \succ O_{1}\)) in Table 1. It can also be noticed that the alternative O4 is generally placed in the bottom positions of the rankings.
Applying the IRV model to the various (sub-)groups of rankings, the same paradox seen for the example in Table 1 occurs: the winner of sub-groups A'' and B'' is O2, while that for the combined-group (A'' + B'') is O1. Not surprisingly, the alternative O4 is placed at the bottom of all three respective collective rankings (see Table 8-Step 3).
Table 8
Results of the application of the step-by-step procedure to the problem in Table 7. We observe that the reasons for the paradox are already visible when only the O4 alternative is excluded from the initial complete rankings. The effect of the multiple-district paradox on the top alternatives is highlighted in bold

(Step 1) "Partialized" rankings excluding O3 and O4
Sub-group A'' (17 experts): (partialized) collective ranking \(O_{1} \succ O_{2}\); \(W^{\left( m \right)}\) = 3.11%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 4.94%; \(b_{k}^{\left( m \right)}\) = 1.586
Sub-group B'' (15 experts): (partialized) collective ranking \(O_{2} \succ O_{1}\); \(W^{\left( m \right)}\) = 0.44%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 1.56%; \(b_{k}^{\left( m \right)}\) = 3.516
Combined group (32 experts): (partialized) collective ranking \(O_{1} \succ O_{2}\); \(W^{\left( m \right)}\) = 0.39%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 0.83%; \(b_{k}^{\left( m \right)}\) = 2.116
(Step 2) "Partialized" rankings excluding O4
Sub-group A'' (17 experts): (partialized) collective ranking \({\varvec{O}}_{2} \succ O_{3} \succ O_{1}\); \(W^{\left( m \right)}\) = 1.38%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 2.16%; \(b_{k}^{\left( m \right)}\) = 1.561
Sub-group B'' (15 experts): (partialized) collective ranking \({\varvec{O}}_{2} \succ O_{1} \succ O_{3}\); \(W^{\left( m \right)}\) = 1.33%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 1.56%; \(b_{k}^{\left( m \right)}\) = 1.172
Combined group (32 experts): (partialized) collective ranking \({\varvec{O}}_{1} \succ O_{2} \succ O_{3}\); \(W^{\left( m \right)}\) = 1.27%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 0.64%; \(b_{k}^{\left( m \right)}\) = 0.506
(Step 3) Complete rankings
Sub-group A'' (17 experts): collective ranking \({\varvec{O}}_{2} \succ O_{3} \succ O_{1} \succ O_{4}\); \(W^{\left( m \right)}\) = 39.65%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 40.99%; \(b_{k}^{\left( m \right)}\) = 1.034
Sub-group B'' (15 experts): collective ranking \({\varvec{O}}_{2} \succ O_{1} \succ O_{3} \succ O_{4}\); \(W^{\left( m \right)}\) = 60.53%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 60.63%; \(b_{k}^{\left( m \right)}\) = 1.002
Combined group (32 experts): collective ranking \({\varvec{O}}_{1} \succ O_{2} \succ O_{3} \succ O_{4}\); \(W^{\left( m \right)}\) = 48.79%; \(W_{k}^{{\left( {m + 1} \right)}}\) = 48.83%; \(b_{k}^{\left( m \right)}\) = 1.001
Applying the indicators \(W^{\left( m \right)}\), \(W_{k}^{{\left( {m + 1} \right)}}\) and \(b_{k}^{\left( m \right)}\) to the rankings in Table 7, somehow unexpected results are obtained (see Table 8-Step 3).
  • The degree of concordance between the expert rankings is not as dramatically low as in the previous examples. The introduction of the new alternative O4, which is typically placed by the experts in the bottom positions, contributes to increasing the \(W^{\left( m \right)}\) indicator compared to the example in Table 1, "masking" the discordance related to the positioning of O1, O2 and O3. The indicators are sensitive to the presence of all alternatives, including the so-called "irrelevant alternatives" [22, 23].
  • Despite the occurrence of the paradox, \(W_{k}^{{\left( {m + 1} \right)}} \ge W^{\left( m \right)}\) and \(b_{k}^{\left( m \right)} \ge 1\), denoting positive coherence between (the three) collective rankings and the relevant expert rankings; again, the (local) incoherence due to the presence of the paradox seems to be compensated by a relative coherence of the alternatives in non-top positions. In this case, \(W^{\left( m \right)}\), \(W_{k}^{{\left( {m + 1} \right)}}\) and \(b_{k}^{\left( m \right)}\) do not “respond” to the paradox, which only concerns the top alternatives (O1 and O2).
The example shows that, for problems including expert rankings with a relatively large number of alternatives, \(W^{\left( m \right)}\), \(W_{k}^{{\left( {m + 1} \right)}}\) and \(b_{k}^{\left( m \right)}\) can lose effectiveness in identifying the incoherence behind the occurrence of the multiple-district paradox. This weakness can be overcome with a simple contrivance, as illustrated below.
The basic idea is to "partialize" the initial rankings, excluding the alternatives with lower impact on the top positions, and to recalculate the three indicators of interest; a minimal sketch of this partialization is given after the list below. This process can be implemented iteratively, initially considering only the top alternatives (i.e., excluding the remaining ones) and then gradually adding the alternatives in the non-top positions. Precisely:
1. The starting point of the procedure is the collective ranking generated when applying the aggregation model to the combined group, which is conventionally considered as the one that best reflects the global positioning of the alternatives. Observing this collective ranking (e.g., \(O_{1} \succ O_{2} \succ O_{3} \succ O_{4}\) for the problem in Table 7), it is possible to discriminate roughly between the two top alternatives (O1 and O2) and the remaining ones (O3 and O4).
2. The first iteration considers the partialized rankings, related to the sub-groups and the combined group, with the two top alternatives only (e.g., O1 and O2 in the problem in Table 7). The indicators \(W^{\left( m \right)}\), \(W_{k}^{{\left( {m + 1} \right)}}\) and \(b_{k}^{\left( m \right)}\) are then calculated and analysed.
3. In the i-th iteration, the procedure is repeated considering the partialized rankings, related to the sub-groups and the combined group, restricted to the first (i + 1) top alternatives. Again, the indicators \(W^{\left( m \right)}\), \(W_{k}^{{\left( {m + 1} \right)}}\) and \(b_{k}^{\left( m \right)}\) are calculated and analysed.
4. The procedure is repeated until the (n – 1)-th iteration, which considers the complete rankings with all n alternatives.
5. Analysing the indicators determined in each iteration, it is possible to identify the underlying reasons for the occurrence of the paradox.
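A minimal sketch of the partialization step and of the resulting iteration is shown below (function names are ours; kendall_w and coherence_ratio are the sketches of Sect. 3.1, and aggregate is assumed to be any routine, e.g. an IRV implementation, returning a complete tie-free collective ranking for a given profile):

```python
def partialize(ranking, kept):
    """Restrict a ranking to the alternatives in `kept`, preserving their relative order."""
    return tuple(a for a in ranking if a in kept)

def partialization_steps(expert_rankings, global_collective, aggregate):
    """Iterate the diagnostic procedure: top-2 alternatives of the combined group's
    collective ranking, then top-3, ..., up to all n alternatives."""
    for i in range(2, len(global_collective) + 1):
        kept = set(global_collective[:i])
        partial_profile = [partialize(r, kept) for r in expert_rankings]
        partial_collective = aggregate(partial_profile)
        yield (i, partial_collective,
               kendall_w(partial_profile),
               coherence_ratio(partial_profile, partial_collective))
```

Run on the three (sub-)groups of Table 7 with an IRV routine, the iterations should reproduce the rows of Table 8 (Steps 1, 2 and 3), with the negative coherence of the combined group emerging at Step 2.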
 
Returning to the example in Table 7, let us exemplify the technique of partialized rankings when applying the IRV aggregation model. For the (sub-)groups of rankings, the collective rankings in Table 8 (Step 3) are obtained. Leaving aside the multiple-district paradox – which concerns only the two top alternatives (O1 and O2) – the (collective) ranking that conventionally best reflects the global positioning of the totality of the alternatives, based on the expert rankings, is the one related to the combined group: \(O_{1} \succ O_{2} \succ O_{3} \succ O_{4}\).
Next, both the experts' and the collective rankings are “partialized”, omitting the non-top alternatives. The first iteration considers the partialized rankings with the two top alternatives only (\(O_{1} {\text{ and }}O_{2}\)), omitting the remaining ones (\(O_{3}\) and \(O_{4}\)). The application of the IRV to the (sub-)groups of rankings in Table 7 leads to the collective rankings in Table 8-Step 1. Although (i) the paradox does not occur and (ii) the indicators \(W_{k}^{{\left( {m + 1} \right)}}\) and \(b_{k}^{\left( m \right)}\) denote positive coherence for sub-groups A'', B'' and for the combined group, the \(W^{\left( m \right)}\) values related to sub-group B'' and the combined group denote a significant degree of discordance among the corresponding partialized expert rankings.
The second iteration includes the three top alternatives: \(O_{1} ,O_{2} ,O_{3}\) (see Table 8-Step 2). In this case, the multiple-district paradox occurs and the underlying incoherence is detected by the \(W_{k}^{{\left( {m + 1} \right)}}\) and \(b_{k}^{\left( m \right)}\) values related to the combined group (A'' + B'') of partialized rankings.
The procedure can be further iterated considering the complete rankings (see Table 8, Step 3). In this case the paradox occurs but is not detected by the indicators in use. As noted earlier, the "irrelevant" alternative \(O_{4}\) undermines the effectiveness of the indicators in identifying the incoherence due to the paradox. In other words, the irrelevant alternative \(O_{4}\) attenuates the sensitivity of \(W_{k}^{{\left( {m + 1} \right)}}\) and \(b_{k}^{\left( m \right)}\): the concordance indicators \(W^{\left( m \right)}\) and \(W_{k}^{{\left( {m + 1} \right)}}\) both grow, denoting an increase in concordance among the experts (in this case, for example, almost all the experts agree in locating the alternative \(O_{4}\) in the last position), and this growth of concordance masks the incoherence due to the paradox.

4 Conclusions

The paper focused on the reasons behind the occurrence of the multiple-district paradox in ranking-aggregation problems. Summarizing, it was found that:
  • The occurrence of the paradox is typically associated with a very low degree of concordance among the expert rankings, with particular reference to the alternatives in the top positions.
  • The occurrence of the paradox may concern different aggregation models, depending on the specific (i) preference profile and (ii) repartition of the rankings into sub-groups.
  • The choice of the method used to aggregate the expert rankings into a collective one may affect the results even more than the preference profile itself.
  • Some aggregation models, classified as PSPs, are “immune” from the multiple-district paradox [43].
A methodology based on the use of three indicators was proposed:
  • \(W^{\left( m \right)}\), which measures the concordance between expert rankings;
  • \(W_{k}^{{\left( {m + 1} \right)}}\) and \(b_{k}^{\left( m \right)}\), which measure the consistency between the expert rankings and the collective ranking obtained through a certain (k-th) aggregation model.
The proposed methodology makes it possible to highlight the incoherence characterizing the occurrence of the paradox, distinguishing whether it arises at the level of the sub-groups (districts) or of the combined group (multiple districts).
For rankings with a relatively large number of alternatives, the above indicators can lose responsiveness. To overcome this obstacle, a step-by-step procedure based on the progressive "partialization" of rankings was proposed. This procedure is a valid support tool for design problems involving distributed teams with (partly) conflicting opinions [38]. Additionally, the proposed methodology can be used to assess the robustness of the collective ranking obtained through a certain aggregation model [59].
Some limitations of this research are as follows:
  • The proposed methodology is based on the application of specific (concordance and consistency) indicators. The choice of other indicators could lead to (at least partially) different outcomes [54].
  • Although the multiple-district paradox is especially interesting for design decision-making problems in which the best alternative should be determined, it is only one of the many paradoxes documented in the scientific literature (e.g., the so-called no-show paradox, preference inversion, absolute-majority loser, etc. [34]).
In future research, we plan (i) to extend the proposed methodology to new concordance and coherence indicators and (ii) to investigate further paradoxes.

Acknowledgements

This research was partially supported by the award “TESUN-83486178370409 Finanziamento dipartimenti di eccellenza CAP. 1694 TIT. 232 ART. 6”, which was conferred by “Ministero dell'Istruzione, dell'Università e della Ricerca—ITALY”.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendix

Appendix: Theoretical remarks on \(W^{\left( m \right)}\) and \(W_{k}^{{\left( {m + 1} \right)}}\)

The scientific literature includes a popular indicator to evaluate the overall concordance or association for more than two expert rankings, i.e., the so-called Kendall’s coefficient of concordance, which is defined as [14, 32, 45, 46, 4851, 53]:
$$W^{\left( m \right)} = \frac{{12\cdot(\mathop \sum \nolimits_{i = 1}^{n} R_{i}^{2} ) - 3\cdot m^{2} \cdot n \cdot \left( {n + 1} \right)^{2} }}{{m^{2} \cdot n \cdot \left( {n^{2} - 1} \right) - m\cdot(\mathop \sum \nolimits_{j = 1}^{m} T_{j} )}}$$
(6)
where:
  • \(R_{i} = \mathop \sum \nolimits_{j = 1}^{m} r_{ij}\) is the sum of the rank positions for the i-th object, rij being the rank position of the object Oi according to the j-th expert;
  • n is the total number of objects;
  • m is the total number of rankings;
  • \(T_{j} = \mathop \sum \nolimits_{i = 1}^{{g_{j} }} \left( {t_{i}^{3} - t_{i} } \right), \forall j = 1, \ldots ,m\), where \(t_{i}\) is the number of objects in the i-th group of ties (a group being a set of tied objects) and \(g_{j}\) is the number of groups of ties in the ranking by the j-th expert. If there are no ties in the j-th ranking, then \(T_{j} = 0\).
Regarding the rank positions of the tied objects (rij), a convention is adopted whereby they should be the average rank positions that each set of tied objects would occupy if a strict dominance relationship could be expressed [52]. This convention guarantees that – for a certain j-th ranking and regardless of the presence of ties – the sum of the objects’ rank positions is an invariant:
$$\mathop \sum \limits_{i = 1}^{n} r_{ij} = \frac{{n \cdot \left( {n + 1} \right)}}{2}$$
(7)
In terms of range, \(W^{\left( m \right)} \in \left[ {0,1} \right]\): \(W^{\left( m \right)} = 0\) indicates the absence of concordance, while \(W^{\left( m \right)} = 1\) indicates complete concordance (i.e., unanimity). The superscript “(m)” is added by the authors to underline that the coefficient of concordance is applied to the m expert rankings and to distinguish it from another indicator, referred to as \(W_{k}^{{\left( {m + 1} \right)}}\), which is applied to m + 1 rankings.
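For readers who wish to compute \(W^{\left( m \right)}\) directly, the following Python function is a minimal transcription of Eq. (6), together with the mid-rank convention of Eq. (7) and the tie correction \(T_{j}\). The encoding of each ranking as an ordered list of tie groups is an assumption made here purely for illustration; it is not prescribed by the paper.

```python
def average_ranks(ranking):
    """Mid-rank convention of Eq. (7): a ranking is given as an ordered list of
    tie groups, e.g. [["O1"], ["O2", "O3"], ["O4"]] means O1 > (O2 ~ O3) > O4;
    tied objects receive the average of the positions they would occupy."""
    ranks, position = {}, 1
    for group in ranking:
        t = len(group)
        mid = position + (t - 1) / 2.0
        for obj in group:
            ranks[obj] = mid
        position += t
    return ranks

def tie_correction(ranking):
    """T_j = sum over the tie groups of (t_i^3 - t_i) for a single ranking."""
    return sum(len(g) ** 3 - len(g) for g in ranking)

def kendall_w(rankings):
    """Kendall's coefficient of concordance W^(m), Eq. (6), for m complete
    rankings of the same n objects, with the correction for ties."""
    m = len(rankings)
    objects = [o for g in rankings[0] for o in g]          # all rankings cover the same objects
    n = len(objects)
    tables = [average_ranks(r) for r in rankings]
    R = {o: sum(t[o] for t in tables) for o in objects}    # R_i = sum_j r_ij
    T = sum(tie_correction(r) for r in rankings)           # sum_j T_j
    num = 12 * sum(Ri ** 2 for Ri in R.values()) - 3 * m ** 2 * n * (n + 1) ** 2
    den = m ** 2 * n * (n ** 2 - 1) - m * T
    return num / den

# Hypothetical check: two experts, O1 > O2 > O3 and O1 > (O2 ~ O3)
# kendall_w([[["O1"], ["O2"], ["O3"]], [["O1"], ["O2", "O3"]]])  ->  13/14 ≈ 0.93
```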
The basic idea of \(W_{k}^{{\left( {m + 1} \right)}}\), recently proposed by the authors, is to analyse the level of coherence between the expert rankings and the collective ranking resulting from the application of the (k-th) aggregation model [47]. The test is based on an indicator that is simply Kendall's concordance coefficient (see Eq. 6) applied to the (m + 1) rankings consisting of (i) the m expert rankings involved in an engineering design decision-making problem and (ii) the collective ranking obtained by applying a generic (k-th) aggregation model to the previous m rankings. The collective ranking is thus treated as an additional, (m + 1)-th ranking.
The formula of the indicator \(W_{k}^{{\left( {m + 1} \right)}}\) follows:
$$W_{k}^{{\left( {m + 1} \right)}} = \frac{{12\cdot \left[ {\mathop \sum \nolimits_{i = 1}^{n} \left( {R_{i} + r_{i} } \right)^{2} } \right] - 3\cdot \left( {m + 1} \right)^{2} \cdot n \cdot \left( {n + 1} \right)^{2} }}{{\left( {m + 1} \right)^{2} \cdot n\cdot \left( {n^{2} - 1} \right) - \left( {m + 1} \right)\cdot (\mathop \sum \nolimits_{j = 1}^{m} T_{j} ) - \left( {m + 1} \right)\cdot T_{m + 1} }}$$
(8)
where ri is the rank position of the i-th object in the collective ranking (\(r_{i} \in \left[ {1,n} \right]\)). In case of tied objects, the same convention described above is adopted.
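Since Eq. (8) is simply Kendall's coefficient of concordance recomputed on the (m + 1) rankings, it can be obtained, under the same hypothetical tie-group encoding, by appending the collective ranking to the expert rankings and reusing the function sketched above:

```python
def w_k(expert_rankings, collective_ranking):
    """W_k^(m+1), Eq. (8): Kendall's W applied to the m expert rankings plus the
    collective ranking produced by the k-th aggregation model, which is treated
    as an additional (m + 1)-th ranking (same tie-group encoding as kendall_w)."""
    return kendall_w(expert_rankings + [collective_ranking])

# Hypothetical usage, appending the collective ranking O1 > O2 > O3 to the two
# expert rankings of the previous example:
# w_k([[["O1"], ["O2"], ["O3"]], [["O1"], ["O2", "O3"]]], [["O1"], ["O2"], ["O3"]])
```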
Footnotes
1. The term “partialize” indicates that the initially complete rankings are modified, excluding some of the alternatives and obtaining new incomplete rankings, which can also be classified as partial [10, 54].
 
Literature
1. Fortunet, C., Durieux, S., Chanal, H., Duc, E.: Multicriteria decision optimization for the design and manufacture of structural aircraft parts. Int. J. Interact. Des. Manuf. 14, 1015–1030 (2020)
2. Herrera-Viedma, E., Cabrerizo, F.J., Kacprzyk, J., Pedrycz, W.: A review of soft consensus models in a fuzzy environment. Information Fusion 17, 4–13 (2014)
3. Dwarakanath, S., Wallace, K.M.: Decision-making in engineering design – observations from design experiments. J. Eng. Des. 6(3), 191–206 (1995)
4. Fu, K., Cagan, J., Kotovsky, K.: Design team convergence: the influence of example solution quality. J. Mech. Des. 132(11), 111005 (2010)
5. Frey, D.D., et al.: Research in engineering design: the role of mathematical theory and empirical evidence. Res. Eng. Design 21(3), 145–151 (2010)
6. Hoyle, C., Chen, W.: Understanding and modelling heterogeneity of human preferences for engineering design. J. Eng. Des. 22(8), 583–601 (2011)
7. Sebastian, P., Ledoux, Y.: Decision support systems in preliminary design. Int. J. Interact. Des. Manuf. 3, 223–226 (2009)
8. Yeo, S.H., Mak, M.W., Balon, S.A.P.: Analysis of decision-making methodologies for desirability score of conceptual design. J. Eng. Des. 15(2), 195–208 (2004)
9. Keeney, R.L.: The foundations of collaborative group decisions. Int. J. Collabor. Eng. 1, 4 (2009)
10. Gierz, G., Hofmann, K.H., Keimel, K., Mislove, M., Scott, D.S.: Continuous Lattices and Domains. Encyclopedia of Mathematics and its Applications, Vol. 93. Cambridge University Press (2003). ISBN 978-0-521-80338-0
11. Weingart, L.R., et al.: Functional diversity and conflict in cross-functional product development teams. In: Neider, L.L., Schriesheim, C.A. (eds.) Understanding Teams, pp. 89–110. Information Age Publishing (2005)
12. See, T.K., Lewis, K.: A formal approach to handling conflicts in multiattribute group decision making. J. Mech. Des. 128(4), 678 (2006)
13. McComb, C., Goucher-Lambert, K., Cagan, J.: Impossible by design? Fairness, strategy and Arrow's impossibility theorem. Des. Sci. 3, 1–26 (2017)
14. Fishburn, P.C.: Voter concordance, simple majorities, and group decision methods. Behav. Sci. 18, 364–376 (1973)
15. Franssen, M.: Arrow's theorem, multi-criteria decision problems and multi-attribute preferences in engineering design. Res. Eng. Design 16(1–2), 42–56 (2005)
16.
17. Hazelrigg, G.A.: An axiomatic framework for engineering design. J. Mech. Des. 121(3), 342 (1999)
18. Jacobs, J.F., van de Poel, I., Osseweijer, P.: Clarifying the debate on selection methods for engineering: Arrow's impossibility theorem, design performances, and information basis. Res. Eng. Design 25(1), 3–10 (2014)
19. Katsikopoulos, K.: Coherence and correspondence in engineering design: informing the conversation and connecting with judgment and decision-making research. Judgm. Decis. Mak. 4(2), 147–153 (2009)
21.
22. Saari, D.G.: Decision and Elections. Cambridge University Press (2011)
23. Arrow, K.J.: Social Choice and Individual Values, 3rd edn. Yale University Press, New Haven (2012)
24. Chen, S., Liu, J., Wang, H., Augusto, J.C.: Ordering based decision making – a survey. Information Fusion 14(4), 521–531 (2012)
25. Fu, Y., Lai, K.K., Leung, J.W.K., Liang, L.: A distance-based decision making method to improve multiple criteria supplier selection. Proc. Instit. Mech. Eng. Part B J. Eng. Manuf. 230(7), 1351–1355 (2016)
26. Franceschini, F., Maisano, D., Mastrogiacomo, L.: A new proposal for fusing individual preference orderings by rank-ordered agents: A generalization of the Yager's algorithm. Eur. J. Oper. Res. 249(1), 209–223 (2016)
27. Dong, A., Hill, A.W., Agogino, A.M.: A document analysis method for characterizing design team performance. J. Mech. Des. 126(3), 378–385 (2004)
28. Paulus, P.B., Dzindolet, M.T., Kohn, N.: Collaborative creativity, group creativity and team innovation. In: Mumford, M.D. (ed.) Handbook of Organizational Creativity, pp. 327–357. Elsevier (2011)
29. Cagan, J., Vogel, C.M.: Creating Breakthrough Products: Innovation from Product Planning to Program Approval, 2nd edn. FT Press (2012)
30. Franceschini, F., Galetto, M., Maisano, D.: Designing Performance Measurement Systems: Theory and Practice of Key Performance Indicators. Springer International Publishing, Cham, Switzerland (2019)
31. Franceschini, F., Maisano, D.: Fusing incomplete preference rankings in manufacturing decision-making contexts through the ZMII-technique. Int. J. Adv. Manuf. Technol. 103(9–12), 3307–3322 (2019)
33. Oxford Dictionary: The Concise Oxford Dictionary of Mathematics, 5th edn. Oxford University Press (2014)
34. Felsenthal, D.S., Nurmi, H.: Voting Procedures for Electing a Single Candidate. Springer, Cham, Switzerland (2018)
36. Felsenthal, D.S.: Review of paradoxes afflicting procedures for electing a single candidate. In: Felsenthal, D.S., Machover, M. (eds.) Electoral Systems: Paradoxes, Assumptions, and Procedures, Chap. 3. Springer, Berlin-Heidelberg (2012)
38. Cash, P., Dekoninck, E.A., Ahmed-Kristensen, S.: Supporting the development of shared understanding in distributed design teams. J. Eng. Des. 28(3), 147–170 (2017)
39. Coombs, C.H.: A Theory of Data. Wiley, New York (1964)
40. Coombs, C.H., Cohen, J.L., Chamberlin, J.R.: An empirical study of some election systems. Am. Psychologist 39, 140–157 (1984)
41. Borda, J.C.: Mémoire sur les élections au scrutin. Comptes Rendus de l'Académie des Sciences (1781). Translated by Alfred de Grazia as “Mathematical derivation of an election system”, Isis 44, 42–51
42. Dym, C.L., Wood, W.H., Scott, M.J.: Rank ordering engineering designs: pairwise comparison charts and Borda counts. Res. Eng. Design 13, 236–242 (2002)
43. Arrow, K.J., Sen, A., Suzumura, K.: Handbook of Social Choice and Welfare. North Holland, Elsevier (2010)
44. Saari, D.G.: Disposing Dictators, Demystifying Voting Paradoxes. Cambridge University Press (2009)
45. Kendall, M.G.: Rank Correlation Methods. Griffin & C, London (1962)
46. Legendre, P.: Coefficient of concordance. In: Salkind, N.J. (ed.) Encyclopedia of Research Design, Vol. 1, pp. 164–169. SAGE Publications, Los Angeles (2010)
47. Franceschini, F., Maisano, D.: Design decisions: concordance of designers and effects of the Arrow's theorem on the collective preference ranking. Res. Eng. Design 30(3), 425–434 (2019)
48. Chiclana, F., Herrera, F., Herrera-Viedma, E.: A note on the internal consistency of various preference representations. Fuzzy Sets Syst. 131(1), 75–78 (2002)
49. Franceschini, F., Maisano, D.: Checking the consistency of the solution in ordinal semi-democratic decision-making problems. Omega 57(1), 188–195 (2015)
50. Franceschini, F., Maisano, D.: Consistency analysis in quality classification problems with multiple rank-ordered agents. Qual. Eng. 29(4), 672–689 (2017)
51. Franceschini, F., Garcia-Lapresta, J.L.: Decision-making in semi-democratic contexts. Information Fusion 52(1), 281–289 (2019)
52. Gibbons, J.D., Chakraborti, S.: Nonparametric Statistical Inference, 5th edn. CRC Press, Boca Raton (2010)
53. Franceschini, F., Maisano, D.: Decisions concordance with incomplete rankings in manufacturing applications. Res. Eng. Design 31(4), 471–490 (2020)
54. Franceschini, F., Maisano, D., Mastrogiacomo, L.: Rankings and Decisions in Engineering: Conceptual and Practical Insights. International Series in Operations Research & Management Science, Vol. 319. Springer International Publishing, Cham, Switzerland (2022). ISSN 0884-8289
55. Schilling, M.A., Shankar, R.: Strategic Management of Technological Innovation, 6th edn. McGraw-Hill Education, Chennai, India (2019)
57. Suh, N.P.: Axiomatic Design: Advances and Applications. Oxford University Press, Oxford, UK (2001)
58. Maisano, D.A., Franceschini, F., Antonelli, D.: dP-FMEA: An innovative failure mode and effects analysis for distributed manufacturing processes. Qual. Eng. 32(3), 267–285 (2020)
59. Saltelli, A.: Sensitivity analysis for importance assessment. Risk Anal. 22(3), 579–590 (2002)